
TotalStorage™ NAS Gateway 300 Model G27
User’s Reference

GA27-4321-00
Before using this information and the product it supports, be sure to read the general information in Appendix A, “Notices” on page 115.
First Edition (October 2002)
This edition applies to the IBM 5196 TotalStorage NAS Gateway 300 (Model G27, product number 5196-G27) and to all subsequent releases and modifications until otherwise indicated in new editions.
Order publications through your IBM representative or the IBM branch office servicing your locality. Publications are not stocked at the address below.
IBM welcomes your comments. A form for reader’s comments is provided at the back of this publication. If the form has been removed, you can address your comments to:
International Business Machines Corporation
Design & Information Development
Department CGFA
PO Box 12195
Research Triangle Park, NC 27709-9990
U.S.A.
You can also submit comments on the Web at www.ibm.com/storage/support.
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.
© Copyright International Business Machines Corporation 2002. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Figures ...........................vii
Tables ............................ix
About this book ........................xi
Who should read this book .....................xi
Frequently used terms ......................xi
Publications ..........................xi
Descriptions of the NAS Gateway 300 publications ...........xi
Hardcopy publications shipped with the NAS Gateway 300 ........xii
Related publications ......................xii
Accessibility ..........................xii
Web sites ...........................xii
Chapter 1. Introduction ......................1
Roadmap for setting up and configuring the NAS Gateway 300 .......3
Cluster setup requirements .....................6
Chapter 2. Getting started.....................9
Methods for setting up the NAS Gateway 300 ..............9
Accessing Universal Manageability Services .............9
Initial setup and configuration ...................10
Setting the date and time ....................10
Setting up the network .....................11
Chapter 3. Configuration and administration tools...........13
Using a keyboard, monitor, and mouse for setup and configuration ......13
Summary of configuration and administration tools ............13
Terminal Services and the IBM NAS Administration console ........15
Installing Terminal Services ...................15
Connecting to the desktop through Terminal Services..........15
IBM NAS Administration console .................16
Determining who is using the network-attached storage .........16
IBM Advanced Appliance Configuration Utility..............16
Installing the IBM Advanced Appliance Configuration Utility........17
Initial network adapter selection and connection to the IAACU ......18
IAACU agent ........................18
IAACU console ........................18
Universal Manageability Services ..................20
System requirements .....................20
Starting UM Services .....................21
Windows 2000 for Network Attached Storage..............23
Determining the tool to use ....................24
Telnet Server support ......................25
SNMP support .........................25
Chapter 4. Setting up storage ...................27
Configuring arrays and logical drives on the fibre-attached storage ......27
Expanding the LUN .......................28
Using DiskPart with clustering ..................29
Formatting the logical drives ....................30
Chapter 5. Completing networking, clustering, and storage access setup 33
Networking setup ........................33
Configuring the interconnect (private) network adapter .........33
Configuring the public local area connection .............34
Verifying network connectivity and names resolution ..........35
Checking or changing the network binding order ...........36
Joining a node to a domain ....................36
Creating an Active Directory Domain .................37
Cluster setup .........................39
Configuring clusters .......................41
Configuring cluster state and properties ...............41
Setting up cluster resource balancing ...............41
Setting up failover .......................42
Creating users ........................42
Creating shares .......................49
Creating clustered file shares (CIFS and NFS) ............50
Recovering from a corrupted Quorum drive .............52
Before you add software .....................53
Chapter 6. Managing and protecting the network and storage ......55
IBM Director ..........................55
Dependencies ........................56
Hardware requirements .....................56
Director extensions ......................57
Naming conventions ......................57
Web-based access ......................57
Disaster recovery .......................58
Software distribution ......................58
Rack Manager and inventory enhancements .............59
Dynamic NAS groups .....................59
NAS Web UI task .......................60
Predictive Failure Analysis ....................60
For more information......................60
NAS Backup Assistant ......................60
Restoring using the NT Backup panel ...............61
Persistent Images ........................62
Global Settings ........................63
Volume Settings .......................63
Persistent Images .......................64
Schedules .........................65
Restore Persistent Images ...................65
Disaster Recovery.......................66
Granting user access to persistent image files ............69
PSM notes .........................69
Storage Manager for SAK .....................74
Uninterruptible power supply support .................74
Tivoli SANergy .........................75
Antivirus protection .......................76
Chapter 7. Managing adapters and controllers ............77
Managing Fibre Channel host bus adapters ..............77
Enabling communication between system management adapters ......78
Enabling ISMP to RSA communication on a single machine .......79
Using the RSA ........................80
Enabling Ethernet adapter teaming .................80
Alacritech Ethernet adapter teaming ................80
Intel Ethernet adapter teaming ..................83
RAID-1 mirroring ........................85
Memory notes .........................86
Adding more engine memory to increase performance .........86
Using the Recovery CD-ROM if you have added more processor memory 86
Chapter 8. Troubleshooting....................87
Shutting down and powering on the NAS Gateway 300 ..........87
Shutting down the NAS Gateway 300 when clustering is active ......87
Powering on the NAS Gateway 300 when clustering is active .......88
Diagnostic tools overview .....................88
Identifying problems using LEDs .................89
POST ...........................91
SCSI messages .......................92
Diagnostic programs ......................93
Troubleshooting the Ethernet controller ................95
Network connection problems ..................95
Gigabit Ethernet controller troubleshooting chart............96
Troubleshooting adapters .....................97
Ethernet adapters .......................97
Running adapter diagnostics..................103
Testing the connection between two NAS Gateway 300s .........105
Power checkout ........................105
Replacing the battery ......................106
Temperature checkout ......................109
Recovering BIOS .......................109
Chapter 9. Using the Recovery and Supplementary CD-ROMs ......111
Using the Recovery Enablement Diskette and Recovery CD-ROM......111
Using the Supplementary CD-ROM .................114
Appendix A. Notices ......................115
Trademarks..........................116
Appendix B. Getting help, service, and information ..........117
Service support ........................117
Before you call for service ....................118
Getting customer support and service ................118
Getting help online: www.ibm.com/storage/support ..........118
Getting help by telephone ...................119
Appendix C. Purchasing additional services ............121
Warranty and repair services ...................121
Appendix D. Symptom-to-part index ................123
Beep symptoms ........................123
No beep symptoms .......................126
Information-panel system error LED .................126
Diagnostic error codes .....................131
Error symptoms ........................135
POST error codes .......................140
Fan error messages ......................146
Power-supply LED errors.....................146
Power error messages .....................148
SCSI error codes .......................148
Bus fault messages.......................149
DASD checkout ........................149
Engine shutdown .......................149
Voltage-related appliance engine shutdown .............149
Temperature-related appliance engine shutdown ...........150
Temperature error messages ...................150
Host Built-In Self Test ......................151
Undetermined problems .....................151
Problem determination tips ....................152
Appendix E. Fast!UTIL options ..................155
Configuration settings ......................155
Host adapter settings .....................155
Selectable boot settings ....................156
Restore default settings ....................156
Raw NVRAM data ......................156
Advanced adapter settings ...................156
Extended Firmware Settings ..................158
Scan Fibre Channel Devices ...................159
Fibre Disk Utility ........................159
Loopback Data Test ......................160
Select Host Adapter ......................160
Appendix F. Communication adapters ...............161
PCI adapter placement .....................161
Adapter placement rules ....................161
Adapter placement charts ...................162
No options .........................164
RSA only options ......................164
Tape only options ......................164
Network only options .....................165
Tape and network options ...................166
Appendix G. Fibre Channel adapter event logs ...........169
Glossary of terms and abbreviations ...............173
Index ............................181
Figures
1. Opening screen of the NAS Setup Navigator .....................3
2. UM Services default page ...........................22
3. Advanced Settings for binding order ........................36
4. Cluster Information panel ............................40
5. File share dependencies ............................50
6. Operator information panel ...........................89
7. Location of the power-supply LEDs ........................90
8. LED diagnostics panel .............................91
9. Replacing the battery .............................107
10. Releasing the battery .............................108
11. Inserting the new battery ...........................108
12. System-board LED locations ..........................129
13. Diagnostics panel LEDs (viewed with the cover off)..................130
14. System-board switches and jumpers .......................147
15. PRO/1000 XT Server Adapter by Intel ......................163
16. Alacritech 1000x1 Single-Port Server and Storage Accelerated adapter ..........163
17. IBM PCI Ultra160 SCSI adapter (LVD/SE) .....................163
18. Qlogic 2340 1-port Fibre Channel adapter .....................163
19. Qlogic 2342 2-port Fibre Channel adapter .....................164
20. Alacritech 100x4 Quad-Port Server Accelerated Adapter ................164
21. Remote Supervisor Adapter ..........................164
22. IBM Gigabit Ethernet SX Server Adapter .....................164
Tables
1. Networking information worksheet for the public connection ...............8
2. Summary of configuration and administration tools for the NAS Gateway 300 ........24
3. Example of local area connection names and network adapter IP addresses .........35
4. Persistent image global settings .........................63
5. Persistent image volume settings .........................63
6. ISMP compared to the RSA ...........................78
7. Troubleshooting index .............................87
8. Power supply LED errors ............................90
9. Ethernet controller troubleshooting chart ......................96
10. IBM Gigabit Ethernet SX Server Adapter troubleshooting chart ..............97
11. PRO/1000 XT Server Adapter by Intel troubleshooting chart ...............98
12. Alacritech 1000x1 Single-Port Server and Storage Accelerated adapter troubleshooting chart 100
13. Alacritech 100x4 Quad-Port Server Accelerated Adapter LED definitions ..........103
14. Supplementary CD-ROM 1 directories ......................114
15. Supplementary CD-ROM 2 directories ......................114
16. IBM Web sites for help, services, and information ..................117
17. Error symptoms index ............................123
18. Examples of beep symptoms ..........................124
19. Beep symptoms ...............................124
20. No beep symptoms .............................126
21. Errors diagnosed by the diagnostic panel LEDs ...................127
22. Diagnostics-panel LED descriptions .......................130
23. Diagnostic error codes ............................131
24. Error symptoms and suggested actions ......................136
25. POST error codes ..............................140
26. Fan error messages .............................146
27. Power-supply LED errors ...........................146
28. Power error messages ............................148
29. SCSI error codes and actions..........................148
30. Bus fault messages .............................149
31. DASD checkout messages...........................149
32. Voltage-related shutdown ...........................149
33. Temperature related shutdown .........................150
34. Temperature error messages ..........................150
35. Host built-in self test messages .........................151
36. Host adapter settings .............................155
37. Advanced adapter settings ...........................156
38. Extended firmware settings ..........................158
39. RIO operation modes.............................158
40. Connection options .............................158
41. Adapter installation rules ...........................161
42. No options.................................164
43. RSA only options ..............................164
44. Tape only options ..............................164
45. Ethernet network options ...........................165
46. Tape backup with Ethernet network option .....................166
47. Fibre Channel adapter error codes ........................169
About this book
This book provides information necessary to configure and administer the IBM 5196 TotalStorage NAS Gateway 300, hereafter referred to as the NAS Gateway 300.
Who should read this book
This book is for NAS Gateway 300 administrators.
The NAS Gateway 300 administrator should have experience in at least the following skills, or have access to personnel with experience in these skills:
v Microsoft® Windows® and Windows Advanced Server
v Networking and network management
v Disk management
v SAN management
v General technologies of the product (such as Microsoft Cluster Service, Services for UNIX®, storage, RAID, and so on)
v Critical business issues (such as backup, disaster recovery, security)
Frequently used terms
This document contains certain notices that relate to a specific topic. The caution and danger notices also appear in the multilingual Safety Information on the Documentation CD-ROM that came with the appliance. Each notice is numbered for easy reference to the corresponding notices in the Safety Information.
The following terms, used within this document or within the Safety Information, have these specific meanings:
Term Definition in this document
Notes These notices provide important tips, guidance, or advice.
Attention These notices indicate possible damage to programs, devices, or data. An attention notice is placed just before the instruction or situation in which damage could occur.
Caution These notices indicate situations that can be potentially hazardous to you. A caution notice is placed just before descriptions of potentially hazardous procedure steps or situations.
Danger These notices indicate situations that can be potentially lethal or extremely hazardous to you. A danger notice is placed just before descriptions of potentially lethal or extremely hazardous procedure steps or situations.
Publications
The latest versions of the following product publications are available in softcopy at:
www.ibm.com/storage/support/nas
Descriptions of the NAS Gateway 300 publications
The NAS Gateway 300 library consists of the following publications:
v Hardware Installation Guide GA27-4320
This book describes hardware physical specifications, electrical specifications, cabling, environmental specifications, and networking specifications for installing the NAS Gateway 300.
v User’s Reference GA27-4321
This book describes such operational and administrative activities as:
– Using the configuration utilities
– Administering the NAS Gateway 300
– Troubleshooting
– Using the Recovery and Supplementary CD-ROMs
Hardcopy publications shipped with the NAS Gateway 300
The following publications are shipped in hardcopy and are also provided in softcopy (PDF) form at:
www.ibm.com/storage/support/nas
v NAS Gateway 300 Hardware Installation Guide GA27-4320
v Release Notes
This document provides any changes that were not available at the time this book was produced.
Note that the User’s Reference is provided in softcopy only.
Related publications
The following publications contain additional information about the NAS Gateway 300:
v NAS Gateway 300 Hardware Installation Guide GA27-4320
v NAS Gateway 300 Service Guide GY27-0414
v NAS Gateway 300, NAS 200, and NAS 100 Planning Guide GA27-4319
v UM Services User’s Guide (on the Documentation CD-ROM that came with the appliance)
Additional information on Universal Manageability Services, IBM Director, and Advanced System Management is located on the Documentation CD-ROM that came with the appliance.
Accessibility
The softcopy version of this manual and other related publications are accessibility-enabled for the IBM Home Page Reader.
Web sites
The following Web site has additional and up-to-date information about the NAS Gateway 300:
www.ibm.com/storage/nas/
Highly recommended: for the latest troubleshooting guidance and symptom-fix tips, go to the IBM support Web site at:
www.ibm.com/storage/support/nas
This site contains additional information, gathered from field experience, not available when this document was developed.
Chapter 1. Introduction
The NAS Gateway 300 connects clients and servers on an IP network to Fibre Channel storage, efficiently bridging the gap between LAN storage needs and SAN storage capacities.
This appliance offers a storage solution for Windows, UNIX®, and UNIX-like environments, including mixed Windows-UNIX environments that enable Windows and UNIX clients and servers to share the same Fibre Channel storage.
Model G27 replaces Models G01 and G26. Enhancements provided by the new model include:
v More options in configuring Ethernet connections
v More options in configuring Fibre Channel connections
v More options for tape backup
v Faster processor
v Gigabit Ethernet connection
v Faster adapters
The dual-node Model G27 features:
v Two engines (IBM 5187 NAS Model 7RY), each with:
– Dual 2.4-GHz processors
– 512 MB of ECC memory standard (plus one upgrade); up to 4.5 GB available
– Two redundant hot-swap 270 watt power supplies
– Qlogic 2340 1-port Fibre Channel adapter for storage area network (SAN) connection
– Four PCI adapter slots for plugging in optional adapters, including three high-performance slots. (Communication between the two engines takes place through an integrated 10/100/1000 Mbps Ethernet port on each engine’s planar board.)
v Optional adapters:
– Alacritech 1000x1 Single-Port Server and Storage Accelerated adapter
– IBM Gigabit Ethernet SX Server Adapter
– IBM PCI Ultra160 SCSI adapter (LVD/SE)
– PRO/1000 XT Server Adapter by Intel
– Qlogic 2340 1-port or Qlogic 2342 2-port Fibre Channel adapter (to replace single-port Fibre Channel SAN adapter)
– Qlogic 2340 1-port Fibre Channel adapter for tape backup
– Remote Supervisor Adapter
In addition, the Model G27 provides clustering and failover protection. This high-availability design helps protect against appliance failure and provides continuous access to data.
Note: Throughout this book, information about the Model G27 node and engine applies to both its nodes and engines.
The preloaded software stack is based on the Windows Powered OS operating system, which is very similar to Microsoft® Windows® 2000 Advanced Server. Preloaded software includes:
Microsoft Windows 2000 for Network Attached Storage
Enables remote administration of the appliance using a Web-based graphical user interface (GUI).
Microsoft Windows Terminal Services
Enables remote administration of the appliance using its Windows desktop.
Microsoft Cluster Service
Provides clustering support and failover protection.
Microsoft Services for UNIX
Provides file access to UNIX and UNIX-based clients and servers through the Network File System (NFS) protocol. Note that the NAS Gateway 300 supports Linux and other platforms that employ NFS.
IBM Director Agent and Universal Manageability Server Extensions
Provides system management support based on industry standards (in conjunction with the IBM Director console application as well as other management software).
IBM Advanced Appliance Configuration Utility agent
Supports management through the IBM Advanced Appliance Configuration Utility console application (supports aggregate Web-based management of all of your IBM appliances).
IBM FAStT Management Suite Java (MSJ)
Provides diagnostics for the Fibre Channel adapters.
Intel® PROSet II
Provides diagnostics for the Intel Ethernet adapters.
Alacritech® SLICuser
Provides diagnostics for the quad-port and accelerated Ethernet adapters.
Columbia Data Products® Persistent Storage Manager (PSM)
Provides 250 persistent images of customer data and enables full online backup of the system with Microsoft backup applications.
Tivoli® Storage Manager Client
Provides data backup and archive support (in conjunction with Tivoli Storage Manager Server).
Tivoli SANergy
Provides shared data access to the SAN storage at Fibre Channel speed.
Services for NetWare
Provides interoperability within the Novell environment and a complete set of new interoperability services and tools for integrating the NAS Gateway 300 into existing NetWare environments. Only NetWare V5.0 Print and File services are included in the preloaded code; they are required to support the NetWare file system protocol. Clustering is not supported by SFN5 (Services for NetWare V5.0).
Storage Manager for SAK
A storage management tool that includes storage reports, directory quotas, and file screening functions.
Roadmap for setting up and configuring the NAS Gateway 300
A suggestion for first-time users...
Your understanding of the NAS Gateway 300 and your ability to use it will be greatly enhanced if you first proceed to the NAS Setup Navigator tutorial.
The NAS Setup Navigator maps out the initial configuration tasks and leads you through the tasks in the proper order. The tool detects which NAS appliance it is running on and adjusts the menu and content appropriately. You can follow links to more in-depth information and to the configuration panels used to perform the steps. You can also tailor the instructions to fit your needs by selecting optional topics. The Navigator not only presents information on functions and features–such as clustering–but also allows you to enable the functions and features. To start the NAS Setup Navigator, click on the NAS Setup Navigator icon on the desktop.
After you have become familiar with the NAS Gateway 300, you can refer to this book for more details.
Figure 1. Opening screen of the NAS Setup Navigator
The following roadmap presents the requirements and instructions for setting up and configuring the NAS Gateway 300. Following these directions and referring to the appropriate sections of this book will help you in this task.
Prerequisites
v A domain controller must exist on the network, and a login ID must be defined for each node to log on. Each node must join the same domain.
v All Windows shared disks must be defined as basic. Windows 2000 dynamic disks are not supported.
v A Quorum drive must be available to both nodes and have the same drive letter on each node.
v All disks shared between the two cluster nodes must have the same drive letter.
v All shared storage must be defined as NTFS and be on primary partitions.
v Compression cannot be enabled on any disk partition.
v Each node must have one private and one public adapter.
Cluster setup requirements
See “Cluster setup requirements” on page 6.
Configuration and administration tools
The NAS Gateway 300 is a network-attached storage appliance that has several different methods of configuration depending on your environment.
First, determine how you will manage the device. You can manage the NAS Gateway 300 in “headless” mode or with a keyboard, display, and mouse directly attached to each node. See “Using a keyboard, monitor, and mouse for setup and configuration” on page 13 for information on managing this device using a keyboard, display, and mouse. For “headless” management of the NAS Gateway 300, you can use one of the following tools:
v Terminal Services, for remote configuration and management from another device on the network
v Universal Manageability Services (UMS), for management through a Web browser
v Windows 2000 for NAS, a Web-based GUI for those not familiar with the Windows desktop
v IBM Advanced Appliance Configuration Utility (IAACU), for setting up and configuring multiple devices or other appliances on a single network
After you determine how you will manage the NAS Gateway 300, you can begin setup and configuration of the device.
For more information on configuration and administration tools, see Chapter 3, “Configuration and administration tools” on page 13.
Step 1 - Initial network setup
Configure both nodes to enable access over the network. The general steps to do this are given below. More details are given in Chapter 2, “Getting started” on page 9.
1. Use Dynamic Host Configuration Protocol (DHCP) or static addressing to set up one public network connection in each node.
a. If you are operating with a keyboard, display, and mouse, set up a public network connection to access the device.
b. If you are operating in a headless environment, use one of the following methods:
v If DHCP is installed and the IP address requested can be determined, you can use DHCP for initial setup, but you should change this address to static later in the configuration.
v If you have multiple appliances or cannot determine the DHCP address, you can install the IAACU utility to identify appliances and define IP addresses. The tool will also allow you to set static addresses.
2. Complete the steps in “Setting the date and time” on page 10 and “Setting up the network” on page 11.
Step 2 - Define storage and setup partitions
The NAS Gateway 300 attaches to your SAN-attached storage device, through the Fibre Channel, and provides your Ethernet LAN-attached clients access to that storage. You must define storage arrays and logical drives on the SAN-attached storage device and set up Windows partitions on the logical drives as defined in Chapter 4, “Setting up storage” on page 27.
For more information on defining storage and setting up partitions, see Chapter 4, “Setting up storage” on page 27.
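As a rough illustration only of the kind of partition setup described in Chapter 4, the following command sequence creates and formats an NTFS primary partition on a fibre-attached logical drive from a command prompt. The disk number, drive letter, and volume label are hypothetical, and the exact, supported procedure (including DiskPart considerations for clustering) is the one given in Chapter 4, “Setting up storage” on page 27.

    diskpart                            (start the DiskPart utility)
    DISKPART> select disk 1             (hypothetical fibre-attached logical drive)
    DISKPART> create partition primary
    DISKPART> assign letter=G           (use the same drive letter on both nodes)
    DISKPART> exit
    format G: /FS:NTFS /V:SHARE1        (format the new partition as NTFS)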
Step 3 - Complete network setup and cluster installation
1. Power on either node. (This becomes the first node.)
2. Set up the first node:
a. Networking setup
See “Networking setup” on page 33. Note the cautionary statement at the beginning of that section.
b. Domain setup
See “Joining a node to a domain” on page 36.
3. Shut down the first node (see “Shutting down and powering on the NAS Gateway 300” on page 87 for more information on shutting down the NAS Gateway 300).
4. Power on the other node (the joining node).
5. Set up the joining node:
a. Networking setup
See “Networking setup” on page 33.
b. Shared storage setup
For the joining node, the only part of this step that you must complete is assigning drive letters on the shared storage; make sure that the drive letters are the same as those on the first node.
Also, if you have trouble with the Fibre Channel connection, you can use the steps in “Fibre Channel adapter” on page 104 to diagnose the problem.
c. Domain setup
See “Joining a node to a domain” on page 36.
d. Shut down the joining node (see “Shutting down and powering on the NAS Gateway 300” on page 87 for more information on shutting down the NAS Gateway 300).
6. Power on the first node and complete “Cluster setup” on page 39.
7. Power on the joining node and complete “Cluster setup” on page 39.
For more information on network setup and cluster installation, see Chapter 5, “Completing networking, clustering, and storage access setup” on page 33.
Step 4 - Cluster administration
At this point you can add users, file shares, and complete other configuration tasks to improve operations of the NAS Gateway 300 in a cluster environment.
1. Add users (see “Creating users” on page 42).
2. Add file shares (see “Creating clustered file shares (CIFS and NFS)” on page 50). Note that you must configure Server for NFS before NFS file sharing can be used.
For more information on cluster administration, see “Configuring clusters” on page 41.
Step 5 - Additional functions
Additional functions are available for backup, persistent images, and adding more storage areas. It is recommended that after you complete the setup and configuration procedures, you use the Persistent Storage Manager Disaster Recovery option (“Disaster Recovery” on page 66) or other method to back up the system configuration in the event of a failure.
Also, it is imperative to use the system shutdown procedure described in “Shutting down and powering on the NAS Gateway 300” on page 87 to ensure system integrity.
For more information, see Chapter 6, “Managing and protecting the network and storage” on page 55.
Cluster setup requirements
Before you configure the NAS Gateway 300 nodes for clustering, ensure that the following requirements are met:
Network requirements
v A unique NetBIOS cluster name.
v You will need at least seven static IP addresses: five for the node and cluster setup, and two for each file share served by the cluster. A formula for the number of static IP addresses is: 5 + (2 x number_of_file_shares); a worked example follows the notes below. The IP addresses required for node and cluster setup are:
– At least three unique, static IP addresses for the public network: one for each node (for client access through the PCI NIC adapter) and one for the cluster itself (the administration IP address).
Table 1 on page 8 shows a summary of the networking information necessary for the public connection.
– Two static IP addresses for the cluster interconnect on a private network or crossover, through the onboard Ethernet adapter. The default IP addresses for the private network adapters are 10.1.1.1 for the first node in the cluster, and 10.1.1.2 for the node that joins the cluster. (The top node in the NAS Gateway 300 is considered the first node, and the bottom node is considered the joining node.)
Notes:
1. If you are not the system administrator, contact that person for the applicable IP addresses.
2. Each node in a cluster must join the same domain and be able to access a Primary Domain Controller (PDC) and DNS server, but it is not required that the nodes log into the domain.
3. Each node in the cluster must have at least two network adapters: at least one for the public network and the other for the private interconnect.
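As a worked example of the address formula above (the share count is illustrative only): a cluster that will serve three file shares needs 5 + (2 x 3) = 11 static IP addresses; that is, one public address for each node, one cluster administration address, two private interconnect addresses, and two addresses for each of the three file shares.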
Shared disk requirements
v All shared disk arrays and devices, including the quorum disk, must be physically attached to a shared storage bus.
v All shared disks must be configured as basic (not dynamic) disks.
v All shared disks must have the same drive letter on each node.
v All partitions on these disks must be formatted with NTFS.
v All partitions on these disks must also be Primary Partitions.
v Compression must not be enabled.
Shutting down and powering on the NAS Gateway 300
The clustering function requires special considerations when you need to shut down and power on the NAS Gateway 300. See “Shutting down and powering on the NAS Gateway 300” on page 87 for details.
Table 1. Networking information worksheet for the public connection
Cluster component    Information needed
Cluster              Cluster name:
                     IP address:
                     Subnet mask:
First node           Computer name (example: IBM5196-23H1234):
                     IP address:
                     Subnet mask:
                     Gateway:
                     Preferred DNS:
                     WINS server (optional):
Joining node         Computer name:
                     IP address:
                     Subnet mask:
                     Gateway:
                     Preferred DNS:
                     WINS server (optional):
Domain to join       Domain name:
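The following is a hypothetical, completed worksheet; all names and addresses are examples only and must be replaced with values supplied by your network administrator:

    Cluster          Cluster name: NASCLUSTER1
                     IP address: 192.168.1.20      Subnet mask: 255.255.255.0
    First node       Computer name: IBM5196-23H1234
                     IP address: 192.168.1.21      Gateway: 192.168.1.1     Preferred DNS: 192.168.1.5
    Joining node     Computer name: IBM5196-23H5678
                     IP address: 192.168.1.22      Gateway: 192.168.1.1     Preferred DNS: 192.168.1.5
    Domain to join   Domain name: nasdomain.example.com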
Chapter 2. Getting started
This chapter gives details to set up the initial communication to the NAS Gateway 300 to enable setup and configuration. These instructions refer specifically to a base configuration as shipped and do not cover the setup of additional storage units, which can be purchased separately.
Note: You must follow these procedures for both nodes.
Methods for setting up the NAS Gateway 300
The following sections detail how to set up the NAS Gateway 300. You must first ensure that the network recognizes the new appliance. Which method you should use depends on several conditions:
v In “headless” mode (without a keyboard, monitor, and mouse directly attached to the unit), use one of the following methods:
IBM Advanced Appliance Configuration Utility
If you have multiple appliances or cannot determine the DHCP address, install the IAACU to identify appliances and define IP addresses. The tool also allows you to set static addresses.
If you are using this method, proceed with “Installing the IBM Advanced Appliance Configuration Utility” on page 17.
Windows Terminal Services
If DHCP is installed and the IP address requested can be determined, use this method for initial setup, but you should change the address to static later in the configuration. This condition is most appropriate when using Windows Terminal Services for operation of the NAS Gateway 300.
If you are using this method, proceed with “Initial setup and configuration” on page 10.
v The use of a keyboard, display, and mouse is most appropriate when there are only one or a few appliances in the network and you use static setup and definition.
If you are using this method, proceed with “Initial setup and configuration” on page 10.
Accessing Universal Manageability Services
1. You will be prompted to authenticate with the administrative user name (“Administrator”) and password (initially “password,” but you can change it later; note that the password is case-sensitive, but the user name is not).
If this is the first time you have accessed the UM Services browser (on any appliance) from this workstation, you will also be prompted to install the Swing and XML Java libraries in your Web browser. You can download these libraries from the NAS Gateway 300 through the network link.
2. The UM Services browser starts. In the left pane, Microsoft Windows 2000 for Network Attached Storage is automatically selected on the Appliance tab. In the right pane, Windows 2000 for Network Attached Storage starts.
3. Again, you are prompted to authenticate with the administrative user name and password.
4. Click Administer this server appliance to bring up the Microsoft Windows 2000 for Network Attached Storage GUI.
You are now ready to begin administering the appliance. Details for this task are described in “Initial setup and configuration”.
Initial setup and configuration
This section provides details on the initial setup and configuration of the NAS Gateway 300.
Note that if you are administering the NAS Gateway 300 without a keyboard, monitor, and mouse (“headless” mode), you can use one of two methods:
v Terminal Services, which provides full administrative function. (See “Terminal Services and the IBM NAS Administration console” on page 15.)
v Windows 2000 for Network Attached Storage, which provides a subset of the full administrative function in Terminal Services. (See “Windows 2000 for Network Attached Storage” on page 23.)
In general, you administer the appliance by adjusting information contained in the following task groups:
Note: In this example, you access the task groups through the Windows 2000 for Network Attached Storage Web-based GUI.
v “Setting the date and time”
v “Setting up the network” on page 11
Although you can modify multiple appliance and network attributes in each task group, the information given here is the minimum you need to know to administer the appliance and network.
You can find more information on administration elsewhere in this book and in the online help.
You can access these task groups in one of three ways:
1. Click the Home tab and then select the task group link.
2. Click the top tab associated with that task group.
3. Click the Back button on the browser until you arrive at Home, and then select the task group link.
Setting the date and time
To change the date and time, click Date and Time. (Remember that you can also access all of these task groups by clicking the titled tabs at the top of the page.) The Set Date and Time page appears, allowing you to adjust information as necessary.
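If you are working at a Terminal Services command prompt instead of the Web GUI, the standard Windows 2000 date and time commands can also be used; the values shown are placeholders, and the accepted date format depends on the regional settings of the appliance:

    date 10-15-2002      (sets the system date)
    time 14:30           (sets the system time)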
Setting up the network
Note: All appliances have an initial default user name of “Administrator” and password of “password.”
As part of the Network task group, you can change the administrator password and (optionally) you can configure the properties of each network interface that resides on the appliance.
To change the administrator password, click Change Administrator Password. The Change Administrator Password page appears, allowing you to change the password. Note the warning on the page that any information that you enter can be viewed by others on the network. To prevent others from seeing your information, set up a secure administration Web site as described in the online help.
To change IP addresses, click Interfaces. The Network Adapters on Server Appliance page appears. Use this page primarily to change IP addresses from dynamic (DHCP, which is the system default) to static.
Note: During the initial setup, you should configure the nonplanar Ethernet adapter only. The NAS Gateway 300 engine uses the Ethernet adapter that is integrated on the planar board as the interconnect private network for clustering.
If you want to use an Ethernet adapter other than the default Ethernet adapter (in slot 2) as the network interface to be attached to the subnet, then you can change the order of precedence later with the Windows Networking Properties option. The order of precedence for the initial configuration is: PCI slot 2, then PCI slot 3.
Note that you might need to enable some of the NAS Gateway 300 NIC connections, because the NICs in slots 1, 3, and 4 are not enabled. During initial setup, the IAACU first looks for a 10/100 adapter in slot 2, which is enabled by default. If there is no adapter in slot 2, the IAACU looks for a Gigabit adapter card in slot 3 and it should be enabled. If the Gigabit adapter card is not enabled, right-click the adapter icon to enable it. After the initial setup, you can then enable all other NIC interfaces installed.
You must modify the adapter by completing the IP task (to modify IP configurations) and then choosing one or more of the following tasks, as appropriate:
v DNS (to modify DNS configurations)
v WINS (to modify WINS configurations)
v HOSTS (to modify host configurations)
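If you prefer a command prompt (through Terminal Services) to the Web GUI for assigning a static address, the Windows 2000 netsh commands below show the general form. This is only a sketch: the connection name “Local Area Connection 2” and all addresses are placeholders for your own values, and the Interfaces page described above remains the documented method.

    netsh interface ip set address "Local Area Connection 2" static 192.168.1.21 255.255.255.0 192.168.1.1 1
    netsh interface ip set dns "Local Area Connection 2" static 192.168.1.5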
Chapter 3. Configuration and administration tools
Attention
Changing the preloaded software configuration of this product, including applying or installing unauthorized service packs or updates to preinstalled software, or installing additional software products that are not included in either the preloaded image or on the Supplementary CD-ROM, might not be supported and could cause unpredictable results. For updated compatibility information, refer to the IBM Web site:
www.ibm.com/storage/nas
To correct problems with a preloaded software component, back up your user and system data. Then, use the Recovery CD-ROM to restore the preloaded software image.
The NAS Gateway 300 appliance comes with the following configuration programs that you can use to configure and administer the appliance:
v Terminal Services Client (page 15)
This tool enables you to remotely administer the appliance.
v IBM Advanced Appliance Configuration Utility (IAACU, page 16)
You can use the IAACU to set up and configure the network configuration on the appliance.
v Universal Manageability Services (page 20)
This tool allows you to remotely manage your appliance using a Web browser.
v Windows 2000 for Network Attached Storage (page 23)
This is a Web-based GUI for administrators who are not familiar with Windows.
This chapter describes these tools in general and then in detail.
Using a keyboard, monitor, and mouse for setup and configuration
It is recommended that you directly attach a keyboard, monitor, and mouse to the NAS Gateway 300 when:
v Initially setting up and configuring the device
v Changing or adding to RAID arrays defined on the fibre-attached storage
v Troubleshooting the device
Summary of configuration and administration tools
There are several ways to set up and administer the NAS Gateway 300.
Terminal Services Client
The Terminal Services Client, when installed on a workstation that is attached to the same network as the NAS Gateway 300, gives you remote access to the NAS Gateway 300 desktop. If you are familiar with administrative tasks using a Windows desktop, you can use Terminal Services.
See “Terminal Services and the IBM NAS Administration console” on page 15 for more information.
IBM Advanced Appliance Configuration Utility (IAACU)
The IBM Advanced Appliance Configuration Utility (IAACU) aids in setting up and reconfiguring the network configuration on your appliances. The IAACU agent works with the IAACU console to automatically detect the presence of appliances on the network.
After the appliance is detected by the IAACU console, you can use the IAACU to:
v Set up and manage the network configuration for the appliance, including assigning the IP address, default gateway, network mask, and DNS server to be used by the appliance. (See the note in “Setting up the network” on page 11, regarding the Ethernet adapter that is integrated on the planar board.)
v Start Universal Manageability Services on the appliance, enabling you to perform advanced systems-management tasks.
See “IBM Advanced Appliance Configuration Utility” on page 16 for more information.
Universal Manageability Services
Universal Manageability Services (UM Services) provides point-to-point remote management of client systems using a Web browser. Use UM Services to:
v Learn detailed inventory information about your computers, including operating system, memory, network cards, and hardware.
v Track your computers with features such as power management, event log, and system monitor capabilities.
v Integrate with Tivoli Enterprise, Tivoli NetView®, Computer Associates Unicenter, Microsoft SMS, and Intel® LANDesk Management Suite.
In addition, you can link to Windows 2000 for Network Attached Storage and Terminal Services from UM Services.
See “Universal Manageability Services” on page 20 for more information.
Windows 2000 for Network Attached Storage
The NAS Gateway 300 provides a Web-based GUI, Microsoft Windows 2000 for Network Attached Storage (Windows 2000 for NAS). Using Windows 2000 for NAS, you navigate through administrative task categories by clicking the appropriate tabs and then selecting a task from that category.
See “Windows 2000 for Network Attached Storage” on page 23 for more information.
Terminal Services and the IBM NAS Administration console
If you are familiar with Windows operating systems, you can use Terminal Services. In some cases, you must use Terminal Services to complete administrative tasks.
You can access Terminal Services in two ways:
1. Through the UM Services browser, as described in “Starting UM Services” on page 21.
2. By using the Terminal Services Client software.
Installing Terminal Services
To use the Terminal Services Client, complete the following steps to install it on the remote workstation and connect to the NAS Gateway 300 appliance:
1. Insert the Supplementary CD-ROM into the workstation CD-ROM drive.
2. Select Start → Run.
3. In the Open field, type (with quotation marks)
"x:\Terminal Services Client\Disk 1\setup.exe"
where x is the drive letter assigned to the CD-ROM drive.
4. Click OK to begin the Terminal Services Client Setup program.
5. Accept the defaults in each window that opens or refer to the Microsoft Windows documentation for more instructions.
6. When the Terminal Services Client Setup program completes, ensure that the workstation has network-connectivity to the NAS appliance so that you can administer the appliance.
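For example, you can verify connectivity from a command prompt on the workstation with a ping to the appliance’s computer name or IP address; the serial number shown here is only an example:

    ping IBM5196-23H1234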
Connecting to the desktop through Terminal Services
To connect to Terminal Services from your workstation, do the following:
1. Click Start → Programs → Terminal Services → Terminal Services Client.
2. In the Server field, select the computer name of the appropriate NAS Gateway 300. If that NAS Gateway 300 is not listed, type the IP address or the computer name of the NAS Gateway 300. The computer name is predefined as IBM5196-xxxxxxx, where xxxxxxx is the serial number located in the lower right corner of the bezel on the front of the appliance. If you have changed the computer name from the predefined value, use that name instead.
Note: Although you can do so, it is recommended that you not change the default computer name, to avoid the chance of propagating misidentification through the system. Also, if you are using IBM Director to manage your appliance and you change the default name, the default name continues to appear in IBM Director.
3. For Size, select a screen size in which the NAS Gateway 300 desktop will appear. It is recommended that you choose a size other than full screen.
4. Click Connect to start the Terminal Services Client session. A user login window opens.
5. Log in. Type Administrator in the Username field, type password in the Password field, and then click OK to log in. After you log in, you can begin using Terminal Services Client to configure and manage the NAS Gateway 300, as if a keyboard, mouse, and monitor were directly attached to it. The NAS Gateway 300 desktop contains a shortcut, titled IBM NAS Admin, to a special console, the IBM NAS Administration console.
IBM NAS Administration console
The IBM NAS Administration console includes all the standard functions provided by the standard Computer Management console available on any Windows 2000 desktop, plus the following functions specific to the NAS Gateway 300:
v Cluster Administration (see “Configuring clusters” on page 41)
v These advanced functions (see Chapter 6, “Managing and protecting the network and storage” on page 55):
– FAStT MSJ
– NAS Backup Assistant
– Persistent Storage Manager
– Tivoli SANergy
Determining who is using the network-attached storage
Occasionally, you might want to know who is using the network-attached storage. To determine this information:
1. Start a Windows Terminal Services session from the administrator’s console to the NAS Gateway 300.
2. Click the IBM NAS Admin icon on the desktop.
3. In the left pane, click File Systems → Shared Folders → Sessions.
4. The users currently using the storage are displayed. To close one of those sessions, right-click it. Before you close a session, notify the user that you are going to close it: click Start → Programs → Accessories → Command Prompt, and then issue the net send hostname messagetext command (an example follows).
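For example, the following command warns a user on a client workstation named WKSTN01 (a hypothetical name) before the session is closed:

    net send WKSTN01 Your connection to the NAS Gateway 300 will be closed in 5 minutes. Please save your work.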
IBM Advanced Appliance Configuration Utility
Note: Although you can do so, it is recommended that you not change the default computer name of your NAS appliance, to avoid the chance of propagating misidentification through the system. Also, the IBM Advanced Appliance Configuration Utility (IAACU) depends on the original name to function.
The IBM Advanced Appliance Configuration Utility helps you to set up and reconfigure the network configuration on the NAS Gateway 300 appliance, as well as other IBM appliances.
The IAACU agent, preinstalled on the NAS Gateway 300 appliance, works with the IAACU console, a Java-based application that is installed on a network-attached system. You can use the IAACU as a systems-management console to automatically detect the presence of NAS Gateway 300 appliances on the network. After the NAS Gateway 300 appliance is detected by the IAACU console, use the IAACU to set up and manage the appliance’s network configuration, including assigning the IP address, default gateway, network mask, and DNS server to be used by the appliance. You can also use the IAACU to start Universal Manageability Services (UM Services) on the appliance, enabling you to perform more advanced systems-management tasks.
For networks that are not currently running DHCP servers, the IAACU is useful for automatically configuring network settings for newly added appliances, such as the NAS Gateway 300.
However, networks with DHCP servers will also benefit from using the IAACU because it enables you to reserve and assign the appliance IP address in an orderly, automated fashion. Even when you use DHCP and do not reserve an IP address for the appliance, you can still use the IAACU to discover appliances and to start UM Services Web-based systems management.
Notes:
1. The IAACU configures and reports the TCP/IP settings of the first adapter (excluding the integrated Ethernet controller that is used for the interconnection of the two engines) on each appliance. The “first” adapter is defined by its position: if there is an adapter in slot 2, it is the first adapter; otherwise, if there is an adapter in slot 3, it is the first adapter.
Be sure to connect the first adapter to the same physical network as your systems-management console. You can do this by manually configuring the network adapter to be on the same subnetwork as the systems-management console.
2. The IAACU must be running to configure newly installed appliances automatically.
3. The system running the IAACU console automatically maintains a copy of its database (ServerConfiguration.dat) in the Advanced Appliance Configuration Station installation directory (Program files\IBM\iaaconfig). To remove previous configuration data, close the IAACU, delete this file, and then restart the utility. This deletes all previously configured Families. However, the IAACU will automatically discover connected appliances and their network settings.
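For example, assuming the console was installed to the default directory on drive C (both the drive and the path are assumptions), close the IAACU and then issue:

    del "C:\Program Files\IBM\iaaconfig\ServerConfiguration.dat"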
Installing the IBM Advanced Appliance Configuration Utility
These instructions assume that you have installed and powered on the appliance according to the installation guide procedures. You are now ready to install the IAACU console application from the Supplementary CD-ROM.
Install the IAACU console application from the Supplementary CD-ROM onto a Windows NT 4.0 or Windows 2000 workstation that is attached to the same IP subnetwork to which the appliance is attached.
Note: The IAACU creates a private database that is specific to the IP subnetwork to which it is attached. Therefore, do not install it on more than one systems-management console residing on the same IP subnetwork.
For information on how to install the IAACU console, see “Installing the IBM Advanced Appliance Configuration Utility”.
After you install the IAACU console application, the following steps will take you to the point where you can administer the appliance.
1. Start the IAACU console application by clicking its icon.
2. On the left pane of the Advanced Appliance Configuration console, select the appliance to administer. Initially, the appliance name is IBM5196-serial number; the serial number is located in the lower right corner of the bezel on the front of the appliance.
3. Click Start Web Management to start the UM Services browser. This will open a separate Web browser window.
4. Proceed to “Accessing Universal Manageability Services” on page 9.
For more information on the IAACU, see “IAACU console” on page 18.
Initial network adapter selection and connection to the IAACU
Unlike the previous release, which had a limited number of network adapter placement options, this release provides more network adapter types and locations from which you can connect. Assuming you have a keyboard and monitor attached, perform the following steps to take the new adapter placement options into account:
1. Decide which adapter will be used to connect to the IAACU, and connect the appropriate cable type.
2. Open the Network and Dial-up Connections panel. (From the desktop, right-click My Network Places, and select Properties.)
3. Determine the connection name of the adapter that you have selected to use. Move the mouse cursor over the adapter name, and a description of the adapter type will appear. If this is inconclusive, right-click the adapter, and select Properties. Under the General tab, click Configure. The line that contains the location information will provide the adapter’s slot location. For example, Location 1 means the adapter is in PCI slot number 1. Close the adapter properties panel.
4. On the Network and Dial-up Connections menu bar, select Advanced and then Advanced Settings. From the Connections menu, select the adapter’s connection name. Then using the down arrow, move the selection down to the next-to-last position in the list. (The last entry in the list should be the remote access connections, shown as the telephone icon.) Save your changes by clicking OK.
5. The IAACU will now detect the appliance using the adapter that you have just enabled.
IAACU agent
The IAACU agent is preinstalled on the NAS Gateway 300 appliance.
After you connect the NAS Gateway 300 to your network, the IAACU agent automatically reports the appliance serial number and type, the MAC address of its onboard Ethernet controller, and whether DHCP is in use by the appliance. Furthermore, it reports the host name, primary IP address, subnet mask, primary DNS server address, and primary gateway address if these are configured on the system.
Note: The IAACU agent periodically broadcasts the appliance IP settings. To prevent the service from broadcasting this data periodically, stop the iaaconfig service.
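For example, from a command prompt on the appliance (through Terminal Services), you can stop and later restart the service; this assumes iaaconfig is the registered service name, as stated above:

    net stop iaaconfig
    net start iaaconfig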
IAACU console
The IAACU console is a Java application that you install on one system in your network for use as a systems-management console. For information on how to install the IAACU console, see “Installing the IBM Advanced Appliance Configuration Utility” on page 17.
Note: The IAACU creates a private database that is specific to the IP subnetwork to which it is attached. Therefore, do not install it on more than one systems-management console residing on the same IP subnetwork.
The IAACU console enables you to:
v Automatically discover NAS Gateway 300 appliances, as well as other IBM
appliances that run the IAACU agent and are attached to the same physical subnet as the IAACU console.
v Use a GUI-based application to configure the appliance network settings.
Use the IAACU to assign network parameters such as IP addresses, DNS and gateway server addresses, subnet masks, and host names.
v Start UM Services Web-based systems-management console.
Launch UM Services on your appliances and perform advanced systems-management tasks on a selected appliance with a single mouse click.
The IAACU console is divided into two panes:
v Tree View Pane
The Tree View Pane, located on the left side of the IAACU console window, presents a list of all discovered NAS Gateway 300 appliances. The Tree View Pane also includes groups for appliances that were not configured using the IAACU or that have IP addresses that conflict with other devices on your network. When you click any item in the Tree View, information about that item (and any items that are nested below that item in the tree view) appears in the Information Pane.
v Information Pane
The Information Pane, located on the right side of the IAACU console, displays information about the item that is currently selected in the Tree View Pane. The information that appears in the Information Pane varies depending on the item that is selected. For example, if you select the All Appliances item from the Tree View Pane, the Information Pane displays configuration information (IP settings, host name, serial number, and so on) about each of the NAS Gateway 300 appliances that have been discovered by the IAACU console.
The IAACU console also features the following menus:
File Use the File menu to import or export the IAACU console configuration
data, to scan the network, or to exit the program.
Appliance
Use the Appliance menu to remove a previously discovered appliance from a group.
Help Use the Help menu to display product information.
Discovering NAS Gateway 300 appliances
Any NAS Gateway 300 appliance, or other IBM appliance, that is running and is connected to the same subnet as the system running the IAACU console is automatically discovered when you start the IAACU console. Discovered appliances appear in the IAACU console tree view (in the left pane of the IAACU console window). Every discovered appliance is listed in the tree view under All Appliances.
Universal Manageability Services
Universal Manageability Services (UM Services) is a Windows application that functions as both a stand-alone management tool for the system it is installed on and a client to IBM Director.
As a Director Client, it receives and sends information to the Director Server as controlled from the IBM Director Console.
As a stand-alone tool, it provides a Web-browser based interface and a Microsoft Management Console (MMC) interface, where you can view the system status, perform certain management tasks and configure alerts.
The UM Services GUI enhances the local or remote administration, monitoring, and maintenance of IBM systems. UM Services is a lightweight client that resides on each managed computer system. With UM Services, you can use a Web browser and UM Services Web console support to inventory, monitor, and troubleshoot IBM systems on which UM Services is installed.
This “point-to-point” systems-management approach, in which you use a Web browser to connect directly to a remote-client system, enables you to effectively maintain IBM systems without requiring the installation of additional systems-management software on your administrator console.
In addition to point-to-point systems-management support, UM Services also includes support for UM Services Upward Integration Modules. These modules enable systems-management professionals who use any supported systems-management platform (including Tivoli Enterprise, CA Unicenter TNG Framework, and Microsoft Systems Management Server [SMS]) to integrate portions of UM Services into their systems-management console. Because it was designed to use industry-standard information-gathering technologies and messaging protocols, including Common Information Model (CIM), Desktop Management Interface (DMI), and Simple Network Management Protocol (SNMP), UM Services adds value to any of these supported workgroup or enterprise systems-management platforms.
You can use UM Services to perform the following tasks:
v View detailed information about your computers, including operating system,
memory, network cards, and hardware.
v Track your computers with features such as power management, event log, and
system monitor capabilities.
v Upwardly integrate with Tivoli Enterprise, Tivoli Netview, Computer Associates
Unicenter, Microsoft SMS, and Intel LANDesk Management Suite.
Complete documentation on how to use UM Services is included on the Documentation CD-ROM that came with the appliance.
System requirements
The UM Services client is preinstalled on the NAS Gateway 300 appliance. However, you must have a Web browser installed on your systems-management console. It is recommended that you set Microsoft Internet Explorer 5.x (or later) as the default browser.
Notes:
1. You must install the optional Java Virtual Machine (VM) support to access a client system running UM Services.
2. If you reinstall Internet Explorer after installing UM Services, you must reapply the Microsoft VM update. The UM Services client requires Microsoft VM Build 3165 or later. Download the latest Microsoft VM from www.microsoft.com/java
3. If you install UM Services before you install MMC 1.1 (or a later version), you will not have an icon for MMC in the IBM Universal Manageability Services section of the Start menu.
Starting UM Services
You can use IAACU or Terminal Services Client to configure the network setting remotely, or you can attach a keyboard and mouse to your appliance and configure the Network settings using the Windows Control Panel. After you have configured the network settings for your appliance, you are ready to use UM Services.
To start UM Services:
1. Start a Web browser and then, in the Address or Location field of the browser, type:
http://ip_address:1411
where ip_address is the IP address of the NAS Gateway 300, and then press Enter.
Or, type:
http://computer_name:1411
where computer_name is the computer name of the NAS Gateway 300. The computer name is predefined as: IBM5196-xxxxxxx, where xxxxxxx is the serial number located in the lower right corner of the bezel on the front of the appliance.
If you have changed the computer name from the predefined value, use that name instead. A user log in window opens.
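For example, if the appliance serial number were 1234567 (a placeholder; substitute your appliance's actual serial number or IP address), you would type:
http://IBM5196-1234567:1411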
Figure 2. UM Services default page
2. Type Administrator in the User Name field, and type password in the Password field. You can leave the Domain field blank. Make sure the Save this password
in your password list check box is not selected, and then click OK.
Note: To ensure system security, change the Administrator password from
“password” to something else. After you do, or if you create another user in the Administrator group in the future, use your new username/password combination instead of the default username/password combination.
The first time you connect, you might be prompted to install XML and Swing components. Follow the on-screen instructions to install these components and then close and restart Internet Explorer before you proceed.
You are now connected to the NAS Gateway 300 through UM Services. In addition to the standard UM Services functionality, the appliance includes functionality for administering the appliance, available from the Appliances tab in the left pane of the UM Services browser. The default view (in the right pane of the UM Services browser) when you connect to the appliance is Windows 2000 for NAS. The other selectable view in the Appliance tab is Windows 2000 Terminal Services, which displays a Terminal Services Web Connection page.
3. To start Windows 2000 for NAS, click Administer this server appliance in the right pane of the UM Services browser. To connect to the NAS Gateway 300 and manage it as though you were running Terminal Services Client from the desktop, select Terminal Services in the Appliance tab of the UM Services browser, and then follow the instructions for connecting to the NAS Gateway 300 using Terminal Services described in “Terminal Services and the IBM NAS Administration console” on page 15.
Launching UM Services from the configuration utility
You can use the IAACU to launch UM Services on the NAS Gateway 300 appliances.
Note: The selected appliance must be running UM Services as a UM Services
client. Also, the systems-management console (the system that is running the IAACU console) must use a Web browser that is supported for use with UM Services. If you have not used UM Services from this system, you must install several plug-ins before proceeding.
To use the IAACU console to start UM Services on an appliance:
1. Click the appliance in the IAACU console Tree View Pane.
When you select the appliance from the tree view, information about the selected appliance appears in the Information Pane.
2. Click Start Web-Based Management.
Your default Web browser starts, loading the UM Services browser automatically.
3. Log in to the UM Services browser. Refer to Step 2 on page 22 for login instructions.
For more information on using UM Services to manage your appliances, see the Universal Manageability Services User’s Guide, included on the Documentation CD-ROM that came with the appliance.
Windows 2000 for Network Attached Storage
While you can perform most administrative tasks using Windows 2000 for NAS, you must use Terminal Services Client for some advanced tasks. See “Terminal Services and the IBM NAS Administration console” on page 15 for more information.
Task categories available to you through Windows 2000 for NAS include:
v Status
v Network
v Disks
v Users
v Shares
v Maintenance
v Controller
To start Windows 2000 for NAS, use one of these methods:
v UM Services, described in Step 3 on page 22
v Web browser, by entering http://ip_address:8099 or http://computer_name:8099 and then logging on to the NAS Gateway 300 (an example follows this list)
v NAS Gateway 300 desktop while using Terminal Services Client and starting a
browser
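For example, using a placeholder address of 192.168.1.12 (substitute your appliance's actual IP address or computer name), you would enter:
http://192.168.1.12:8099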
You can access online help for Windows 2000 for NAS in two ways:
1. Click the Help button at the top of any Web page. This displays a table of contents that you can navigate to find help for any Windows 2000 for NAS task.
2. Click the question mark (?) button at the top of any Web page. This displays context-sensitive help for the task you are currently performing.
Determining the tool to use
Table 2 suggests which tool to use for specific functions, but does not list all options or combinations. The administrator’s training level or preferences might determine an alternate approach from that suggested in the table.
Table 2. Summary of configuration and administration tools for the NAS Gateway 300

Administration tool: Windows Domain Controller (not NAS appliance)
Main functions: Users and user groups can be defined and authenticated by the Windows Domain Controller, although this is not required.

Administration tool: IBM Advanced Appliance Configuration Utility (IAACU)
Main functions: Access a headless NAS Gateway 300 node, particularly for the initial setup of the network connectivity. (Alternatively, you can attach a keyboard, mouse, and display to each node of the NAS Gateway 300.) The IAACU enables you to:
v Set time, date, and initial network connectivity parameters
v Access the Windows 2000 for NAS GUI, Terminal Services (NAS Desktop), and Universal Manageability Services

Administration tool: Windows 2000 for NAS GUI
Main functions: Provides ease-of-use administration, but not all the capabilities of Terminal Services and IBM NAS Administration. The GUI enables you to:
v Configure networking connectivity, private (for clustering) and public LAN connections
v Create and format logical drives
v Join domains
v Set up access permissions and disk quotas for CIFS, NFS, HTTP, FTP, and Novell NetWare shares
v Use Persistent Storage Manager

Administration tool: IBM NAS desktop and IBM NAS Admin program, through a Terminal Services session or a directly connected keyboard and monitor
Main functions: Provides in-depth administration of all aspects of the NAS Gateway 300. Provides all of the Windows 2000 for NAS GUI functions above, plus the ability to:
v Use NAS Backup Assistant, or the NT Backup and Restore wizard
v Learn detailed inventory information about hardware, OS, and so on, using Universal Manageability Services
v Cluster administration:
– Set up cluster
– Define failover for each volume
– Manually fail over cluster resources
– Set up cluster resource balancing by assigning a preferred node
v Diagnose system problems:
– Check Ethernet adapters using PROSet II, and the 10/100 Quad-Port Ethernet adapter using SLICuser
– Check the Fibre Channel card using FAStT MSJ

Administration tool: Disaster Recovery
Main functions: Restores a previously saved PSM image of the system partition to a failed machine. This restores all configuration information on the failed node. You create the recovery boot diskette from the PSM tools in the Windows 2000 for NAS GUI.

Administration tool: Recovery CD-ROM Set
Main functions: Reinstalls the software to the original state as shipped on the machine; however, it does not restore configuration information (configuration changes you applied to the original shipped configuration are lost). You must first boot with the Recovery Enablement Diskette, and then reboot with the Recovery CD-ROM. To create the Recovery Enablement Diskette, run enablement_disk_x.y.exe (where x.y is the version number of the disk), located on the Supplementary CD-ROM. You will be prompted to insert a blank disk into drive a:.

Administration tool: Integrated System Management Processor (ISMP) configuration program
Main functions: Configures the ISMP that is integrated on the engine planar board.

Administration tool: Remote Supervisor Adapter (RSA) configuration program
Main functions: Configures the optional RSA.
Telnet Server support
Attention: When you Telnet to another machine, your user name and password are sent over the network in plain, unencrypted text.
The NAS Gateway 300 includes Telnet server capability. The Telnet server provides limited administrative capability. This can be useful in cases where you need to remotely administer the NAS Gateway 300, but do not have access to a Windows-based workstation (from which you could remotely administer the appliance through a supported Web browser or Terminal Services Client).
To access the NAS Gateway 300 from any Telnet client, specify the IP address or host name of the NAS Gateway 300, then log in using an ID and password (defined on the NAS Gateway 300) with administrative authority. From the command line, you can issue DOS-like commands (such as dir and cd), and some UNIX-like commands (such as grep and vi). You can launch some applications, but only character-mode applications are supported.
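For example, after the Telnet server has been enabled (see below), from any Telnet client (the address 192.168.1.12 is only a placeholder for your appliance's IP address or host name):
telnet 192.168.1.12
After logging in with an ID that has administrative authority, you can issue commands such as dir and cd, and type exit to end the session.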
By default, the Telnet server is disabled. To enable the Telnet server, from the Windows 2000 for NAS user interface, go to the Network task group, and then select Telnet. On the Telnet Administration Configuration page, select the Enable Telnet access to this appliance check box. If you do not require Telnet access to the NAS Gateway 300, it is recommended that you leave the Telnet server disabled.
SNMP support
Support for the Simple Network Management Protocol (SNMP) is enabled. To manage the NAS Gateway 300 from an SNMP-capable management application, you must install the management information base (MIB) files for various components of the NAS Gateway 300 on the management application workstation, so that the application can recognize those SNMP elements (values, alerts, and so on) supported by the components.
Chapter 4. Setting up storage
This chapter gives details for setting up and configuring the fibre-attached storage for the NAS Gateway 300.
Note: You need to configure the storage on one node only. For the other node (the
joining node), the only part of shared storage setup that you will need to complete is assigning drive letters on the shared storage, making sure that the drive letters are the same as those on the first node.
Configuring arrays and logical drives on the fibre-attached storage
You will need to configure the RAID arrays and logical drives (LUNs) on the fibre-attached storage, or contact your disk administrator to do the configuration. The specific procedures for configuring arrays and LUNs depend on the fibre-attached storage product, so consult its documentation for those procedures.
You will need to get the World Wide Name (WWN) of the Fibre Channel host adapter to set up an association between the NAS Gateway 300 node and each LUN that you create on the fibre-attached storage. To get the WWN:
1. Click the IBM NAS Admin icon.
2. Select NAS Management.
3. Select Storage.
4. Select NAS Utilities.
5. Select IBM Fibre WWN.
The IBM Fibre WWN panel will display information for each Fibre Channel adapter installed in your NAS Gateway 300, including PCI slot number and World Wide Name. The slot number given is not the physical PCI slot location of the adapter within the system but rather a reference slot to the PCI bridge of the PCI system. If you have only one Fibre Channel adapter, the information for that adapter will appear immediately in the fields on the right side of the panel.
If you have multiple Fibre Channel adapters, select the radio button next to each adapter listing until you find the one whose displayed PCI slot number matches the PCI slot number of the actual adapter. Make a note of the World Wide Name that is displayed so that you can provide this when configuring your fibre-attached storage to be accessed through the NAS Gateway 300.
There is an additional requirement imposed on the configuration of the fibre-attached storage: one LUN must be defined for the Quorum drive. The Quorum drive is used by Microsoft Cluster Service to manage clustered resources, including the fibre-attached storage. The requirements for the Quorum drive LUN are the following:
v The array in which you create the Quorum drive LUN should be RAID 5. This is
recommended for performance and redundancy.
v The Quorum drive LUN should be at least 500 MB in size, but no larger than 1
GB.
v The Quorum drive LUN should be completely dedicated for use by Microsoft
Cluster Service; no other data should be stored on this LUN. However, it is acceptable for the array in which this LUN is created to have other LUNs.
v See “Recovering from a corrupted Quorum drive” on page 52 in the event of a
power loss to both nodes or a hardware failure that corrupts Quorum data.
You can configure other arrays and LUNs for user data as required. However, do not create any arrays that are RAID 0, as this is not supported. It is recommended that all arrays be RAID 5 arrays.
Expanding the LUN
LUN expansion is enabled by the DiskPart command line utility. Using DiskPart and array/LUN management software, you can dynamically expand an existing logical drive into unallocated space that exists in a LUN.
Note that you cannot use DiskPart to dynamically expand an existing LUN in an array. You can do this only with array/LUN management software such as Storage Manager Application. DiskPart cannot change the size of the drive that the external storage has configured; it can only change how much of the drive that Windows can use.
Attention: It is highly recommended that you always perform a backup of your data before using the DiskPart utility.
To perform LUN expansion, use the following two DiskPart commands:
select
This command focuses on (selects) the volume that you want to expand. The format of the command and its options are
select volume[=n/l]
You can specify the volume by index, drive letter, or mount point path. On a basic disk, if you select a volume, the corresponding partition is put in focus. If you do not specify a volume, the command displays the current in-focus volume.
extend
This command extends the current in-focus volume into contiguous unallocated space. The unallocated space must begin where the in-focus partition ends. The format of the command and its options are
extend [size=n]
where size is the size of the extension in MB.
Note that if the partition had been formatted with the NTFS file system, the file system is automatically extended to occupy the larger partition, and data loss does not occur. However, if the partition had been formatted with a file system format other than NTFS, the command is unsuccessful and does not change the partition.
DiskPart blocks the extension only of the current system or boot partition; all other volumes can be extended.
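For example, a minimal DiskPart session might look like the following (the volume number 3 and the 1024-MB extension size are placeholders; substitute the values for your own configuration):
diskpart
DISKPART> select volume 3
DISKPART> extend size=1024
DISKPART> exit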
Several other commands are useful when you expand the LUN:
assign
Use this command to assign a letter or mount point to the current selected (in-focus) partition. If you do not specify a drive letter, the next available drive letter is assigned. If the letter or mount point is already in use, an error is generated.
You can use this command to change the drive letter that is associated with a removable drive. The drive letter assignment is blocked on the system, boot, or paging volumes. You cannot use this command to assign a drive letter to an OEM partition or any globally unique identifier (GUID) partition table (GPT) partition, other than the Msdata partition.
The format of the command and its options are:
assign [letter=l] or [mount=path]
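For example, using the syntax shown above (the drive letter H and the mount path are placeholders):
DISKPART> assign letter=H
DISKPART> assign mount=D:\mountpoint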
convert
You can use several commands to convert disks. The format and options for each of the commands are:
convert mbr
convert gpt
convert dynamic
convert basic
convert mbr sets the partitioning style of the current disk to master boot record (MBR). The disk can be a basic disk or a dynamic disk, but it must not contain any valid data partitions or volumes.
convert gpt sets the partitioning style of the current disk to GPT. The disk can be a basic or a dynamic disk, but it must not contain any valid data partitions or volumes. This command is valid only on Itanium-based computers; it can be unsuccessful on x86-based computers.
convert dynamic changes a basic disk into a dynamic disk. The disk can contain valid data partitions.
convert basic changes an empty dynamic disk to basic.
list You can use several commands to display summaries of disk configuration.
The format for each of the commands is:
list disk
list partition
list volume
list disk displays summary information about each disk in the computer. The disk with the asterisk (*) has the current focus. Only fixed disks (for example, IDE or SCSI) and removable disks (for example, 1394 or USB) are listed; removable-media drives are not displayed.
list partition displays information about each partition on the in-focus disk.
list volume displays information about each volume in the computer.
Using DiskPart with clustering
To expand a volume, you first need to add free space at the end of the volume that you want to expand. This free space is now directly behind the existing volume that is to be extended.
Verify which computer is the owner of the volume that you want to expand:
1. Open the IBM NAS Admin utility and select Cluster Tools → Cluster Administration.
2. Select the group where the disk is located and see what computer is the owner of the disks.
3. Shut down the other node. You can move all cluster resources to the other server prior to this shutdown by right-clicking the group in the Cluster Administrator and selecting Move Group.
Important
Stop all I/O to the disk while performing this procedure by setting offline all the resources in the cluster group that contains the disks. One way to do this is by bringing offline the cluster group that contains the disk in the cluster administration utility, and then bringing online only the physical disk. This should close all open handles to the disk.
4. Open a command prompt window and issue the diskpart command. Or, in the IBM NAS Admin utility, select Storage → DiskPart.
5. In the DiskPart utility, you can list the volumes in the computer by issuing the List Volume command. Check the number for your volume by comparing the label.
6. Select the volume you want to extend by entering:
select volume X
where X is the number of the volume you want to extend.
7. Issue the command extend to extend the selected volume.
8. Rescan the disks with the rescan command, and then list all volumes to verify the changed capacity.
9. Exit the utility by issuing the exit command.
10. Bring online all resources in the cluster group containing the disk by right-clicking the group and selecting Bring Online.
11. Power on the node that is down and move the group that owns the disk to ensure proper operation. When the node is up, perform these steps:
a. Open the IBM NAS Admin utility and select Cluster Tools → Cluster Administration.
b. Right-click the group in the Cluster Administrator and select Move Group.
12. When the group has been moved, check that the volume size has been increased by opening the IBM NAS Admin tool and selecting Storage → Disk Management (Local).
Formatting the logical drives
Note the following restrictions when formatting logical drives:
1. Disk 0 (the internal hard disk drive) is an 18-GB drive, preformatted into two
partitions: a 6-GB partition (label System, drive letter C:) and a 12-GB partition (label MAINTENANCE, drive letter D:). Do not reformat or repartition these partitions. Doing so could wipe out important data and seriously impair the functioning of your system.
2. Do not upgrade any disks to dynamic. Only basic disks are supported for
clustering. In addition, all partitions used for clustering must be primary partitions.
3. Do not use drive letter F: as a volume drive letter. This drive letter is reserved
for Persistent Storage Manager-based backup using NAS Backup Assistant.
Follow this procedure to format logical drives.
1. Open IBM NAS Admin and select Disk Management (Local) in the Storage
folder.
2. At the Write Signature and Upgrade Disk Wizard, click Cancel.
3. Right-click Disk 1 and select Write Signature.
4. Write Signature to all disks that will be accessed by the NOS (all disks in view).
5. On each disk:
a. Right-click and select Create Partition and click Next.
b. Select Primary Partition and click Next.
c. Select the entire disk size and click Next.
d. Specify NTFS as the file system. If this is the Quorum disk, specify Quorum
disk as the Volume Label; otherwise, specify whatever name you want to assign to the partition.
e. Do not enable disk compression, and then click Finish.
6. Format all other drives, but do not enable compression. Use all of the available space on each drive for its logical drive. Assign a drive letter of G for the first drive (the Quorum drive), H for the second drive (the first user volume), and so on.
7. Shut down the first node and make sure the drives are available on the joining node. Change the drive letters to match those on the first node. Rescan the disks if the LUNs do not show up.
At this point, you have completed shared storage setup. You can now continue with Chapter 5, “Completing networking, clustering, and storage access setup” on page 33.
Chapter 5. Completing networking, clustering, and storage access setup
The NAS Gateway 300 uses Microsoft Cluster Server (MSCS) software to provide clustering technology for your storage. Clustering ensures availability of the storage, regardless of individual component failure.
After installing the clustering function, you can use Cluster Administration to set up the failover function. Then, if a node or a node’s component were to fail, the NAS Gateway 300 detects the failure and begins a failover process in less than 10 seconds, and completes the process within 60 seconds. Failover/Failback includes Active/Active support for the CIFS and NFS protocols.
Active/Active support is available for HTTP and FTP. See the online cluster administration guide for this procedure.
Novell NetWare and Apple Macintosh shares are available on both nodes, but not through clustering services. If either node fails, the shares become unavailable until the node is brought back up.
This chapter gives the details for installing and initially configuring MSCS on the NAS Gateway 300. Administrative concepts and procedures are provided in the online help and at the following Web sites:
v www.microsoft.com/windows2000/library/technologies/cluster/default.asp
v www.microsoft.com/ntserver/support/faqs/clustering_faq.asp
v http://support.microsoft.com/default.aspx?scid=kb;EN-US;q248025
Networking setup
Attention: Before you install cluster services on the first node, make sure that the
joining node is shut down (see “Shutting down and powering on the NAS Gateway 300” on page 87 for more information on shutting down the NAS Gateway 300). This is required to prevent corruption of data on the shared storage devices. Corruption can occur if both nodes simultaneously write to the same shared disk that is not yet protected by the clustering software.
Note: After you complete this procedure on the first node, you must complete it on
the joining node with the first node shut down.
Configuring the interconnect (private) network adapter
To configure the interconnect (private) network adapter, perform the following steps on both nodes. The Private connection is the “heartbeat” interconnect for the cluster.
1. Right-click My Network Places and then select Properties.
2. Select the network connection that uses the integrated Ethernet controller.
3. Right-click the adapter icon and click Properties.
4. Click Configure, select the Advanced tab, and verify that the following characteristics are set:
Link speed and Duplex: 100 Mbps / Full Duplex, or 1000 Mbps / Full Duplex, as appropriate for the controller
5. Click OK.
6. If prompted to restart the node, select No.
7. In the Properties panel for the integrated Ethernet controller connection, select
Internet Protocol (TCP/IP) from the components section, and click Properties.
8. The default IP addresses should be:
v 10.1.1.1 for the first node
v 10.1.1.2 for the joining node
If they are not, it is recommended that you set them to those values.
9. Ensure a Subnet Mask of 255.255.255.0.
10. Click Advanced, and select the WINS tab.
11. Select the Disable NetBIOS over TCP/IP radio button.
12. Click OK.
13. Select Yes at the prompt to continue using an empty Primary WINS address.
14. Click OK on the Internet Protocol (TCP/IP) Properties panel.
15. Click OK on the Local Area Connection Properties (Private) panel.
16. Rename the connection to Private.
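As an optional command-line sketch (assuming the connection has already been renamed to Private and that the first-node address above is used), the same IP address and subnet mask from Steps 8 and 9 can be applied with the netsh tool included in Windows 2000:
netsh interface ip set address name="Private" source=static addr=10.1.1.1 mask=255.255.255.0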
Configuring the public local area connection
Note: While the public network adapter’s IP address can be automatically obtained
if a DHCP server is available, this is not recommended for cluster nodes. It is strongly recommended that you set static IP addresses for all network adapters in the cluster. If IP addresses are obtained through DHCP, access to cluster nodes could become unavailable if the DHCP server goes down.
To configure each public local area connection, perform the following steps on each node:
1. Right-click My Network Places, then click Properties.
2. Select a Local Area Connection.
When you perform this step, the connection that uses the integrated Ethernet controller is the private connection. The other active connection is the public connection. Use that other active connection for this step and the next step.
3. To rename the connection, click Rename, and then type (for example) Public 1, and press Enter. Ensure that local area connection names are unique.
When you perform these renaming steps for the joining node, ensure that the local area connection name for each physically connected network is identical on each server. See Table 3 on page 35 for a further example.
4. Use the networking information in Table 1 on page 8 to enter the networking addresses:
a. Right-click My Network Places.
b. Click Properties.
c. Right-click the Public icon, and then click Properties.
d. Select Internet Protocol (TCP/IP).
e. Click Properties, select Use the following IP address:, and enter the
addresses for the IP, subnet mask, default gateway, and preferred DNS server.
5. If needed, configure the DNS, WINS, HOSTS, or whichever method you will be using for names resolution. To view this information, click Advanced on the Properties window.
Note: NetBIOS should be disabled.
6. Click OK on each panel to return to the Properties window.
Do not place paired adapters on the same IP network unless you are going to use adapter teaming or adapter load balancing.
Verifying network connectivity and names resolution
Verify network connectivity and names resolution after you have installed clustering on the joining node.
To verify that the private and public networks are communicating properly:
1. Click Start → Run, type cmd in the text box, and click OK to bring up an MS-DOS prompt.
2. Type ping ipaddress where ipaddress is the IP address for the corresponding network adapter in the other node, and press Enter.
For example, assume that the IP addresses are set as follows:
Table 3. Example of local area connection names and network adapter IP addresses
Node Local area connection name Network adapter IP address
1 Private 10.1.1.1
1 Public 1 192.168.1.12
1 Public 2 192.168.2.12
2 Private 10.1.1.2
2 Public 1 192.168.1.13
2 Public 2 192.168.2.13
In this example, you would type ping 192.168.1.12 and ping 192.168.2.12 for the first node, and you would type ping 192.168.1.13 and ping 192.168.2.13 for the joining node. You can do this from any machine that is physically connected to the network of each node.
To view the addresses, use the ipconfig command on each node:
1. Click Start → Run, type cmd in the text box, and click OK to bring up an MS-DOS prompt.
2. Type ipconfig /all and press Enter. IP information should appear for all network adapters in the machine.
Checking or changing the network binding order
The clustering function requires the following binding order:
v Private
v Public 1
v Public 2
v . . .
The top-most connection is first in the binding order. Typically, this is the most frequently used network adapter.
To check the binding order and change it:
1. From the desktop, right-click My Network Places and then select Properties.
2. Select Advanced Settings from the Advanced menu.
3. Reorder the position of the adapters by selecting them, then pressing the up or down arrow keys, then clicking OK.
Figure 3. Advanced Settings for binding order
If prompted to restart, click No. If you change the binding order, you do not have to reboot until after you join the node to the domain.
Joining a node to a domain
For the Windows Cluster service to form a cluster on a given node, the service must authenticate with a Windows domain. If a Windows domain controller is available on a public network to which both nodes will be physically connected, follow the instructions below. Otherwise, follow the instructions in “Creating an Active Directory Domain” on page 37 to create a new domain that will encompass just the cluster itself. All nodes in the cluster must be members of the same domain and be able to access a Primary Domain Controller (PDC) and a DNS server.
1. Right-click My Computer, and click Properties.
2. Click Network Identification. The System Properties dialog box displays the full computer name and workgroup or domain.
3. Click Properties and perform these steps to join a domain:
a. Select the Domain radio button.
b. Type the name of your domain and click OK.
c. When prompted, enter the Administrator user ID and password and click
OK.
4. Close the System Properties window.
5. Restart the node, and proceed with “Cluster setup” on page 39.
After the computer restarts, it is recommended that you do not log on to the domain. If you do, you will see the Windows 2000 Configure Your Server window. Click the I will configure this server later radio button, and then click the Next button. On the next window, clear the Show this screen at startup check box and click Finish.
Creating an Active Directory Domain
The Windows 2000 Cluster service runs in the context of a Windows-based domain security policy, typically created specifically for the Cluster service to use. For the Cluster service to form a cluster on a given node, the service must first authenticate itself using the credentials of this policy. A domain controller must be available for the domain that issued the policy for authentication to occur. If the Cluster service does not have access to a domain controller, it cannot form a cluster.
Note: For Active Directory to function properly, DNS servers must provide support for Service Location (SRV) resource records, described in RFC 2052, A DNS RR for specifying the location of services (DNS SRV). SRV resource records map the name of a service to the name of a server offering that service. Active Directory clients and domain controllers use SRV records to determine the IP addresses of domain controllers. Although not a technical requirement of Active Directory, it is highly recommended that DNS servers provide support for DNS dynamic updates, described in RFC 2136, Dynamic Updates in the Domain Name System (DNS UPDATE).
The Windows 2000 DNS service provides support for both SRV records and dynamic updates. If a non-Windows 2000 DNS server is being used, verify that it at least supports the SRV resource record. If not, it must be upgraded to a version that does support the use of the SRV resource record. A DNS server that supports SRV records but does not support dynamic update must be updated with the contents of the Netlogon.dns file created by the Active Directory Installation wizard while promoting a Windows 2000 Server to a domain controller.
By default, the Active Directory Installation wizard attempts to locate an authoritative DNS server for the domain being configured from its list of configured DNS servers that will accept a dynamic update of an SRV resource record. If found, all the appropriate records for the domain controller are automatically registered with the DNS server after the domain controller is restarted.
If a DNS server that can accept dynamic updates is not found, either because the DNS server does not support dynamic updates or because dynamic updates are
not enabled for the domain, the following steps are taken to ensure that the installation process is completed with the necessary registration of the SRV resource records:
1. The DNS service is installed on the domain controller and is automatically configured with a zone based on the Active Directory domain.
For example, if the Active Directory domain that you chose for your first domain in the forest was example.microsoft.com, a zone rooted at the DNS domain name of example.microsoft.com is added and configured to use the DNS service on the new domain controller.
2. A text file containing the appropriate DNS resource records for the domain controller is created.
The file called Netlogon.dns is created in the %systemroot%\System32\config folder and contains all the records needed to register the resource records of the domain controller. Netlogon.dns is used by the Windows 2000 Netlogon service and can also be used to support Active Directory with non-Windows 2000 DNS servers.
If you are using a DNS server that supports the SRV resource record but does not support dynamic updates (such as a UNIX-based DNS server or a Windows NT Server 4.0 DNS server), you can import the records in Netlogon.dns into the appropriate primary zone file to manually configure the primary zone on that server to support Active Directory.
If you are configuring the first node, complete these steps to create the Active Directory Domain Controller:
1. Start the Active Directory Installation Wizard from the IBM NAS Admin console by selecting Local Domain Controller Setup in the Cluster Tools folder.
2. Read the first page and click Next.
3. On the Domain Controller Type page, select Domain controller for a new domain; then click Next.
4. On the Create Tree or Child Domain page, select Create a new domain tree; then click Next.
5. On the Create or Join Forest page, select Create a new forest of domain trees; then click Next.
6. On the New Domain Name page, type the full DNS name for the new domain. Write down this value now; it will be needed later on. Click Next.
7. On the NetBIOS Domain Name page, click Next.
8. On the Database and Log Locations page, click Next to accept the default values.
9. On the Shared System Volume page, click Next to accept the default value.
10. On the Permissions page, select Permissions compatible only with Windows 2000 servers; then click Next.
11. On the Directory Services Restore Mode Administrator Password page, type your password. Write down this value now; it will be needed later on. Click Next.
12. On the Summary page, review the values; then, click Next and wait for the Configuring Active Directory process to complete.
13. On the Active Directory Installation Wizard page, click Finish. If prompted, defer the system reboot.
14. Restart the node.
If you are configuring the joining node, perform these tasks to join this node to the existing Active Directory Domain Controller (previously created on the first node):
1. Start the Active Directory Installation Wizard from the IBM NAS Admin console by selecting Local Domain Controller Setup in the Cluster Tools folder.
2. Read the first page and click Next.
3. On the Domain Controller Type page, select Additional domain controller for an existing domain; then click Next.
4. On the Network Credentials page, type the user name (Administrator), password, and domain name (enter the Active Directory Domain name created on the other node); then click Next.
5. On the Additional Domain Controller page, type the full DNS name of the existing domain; then click Next.
6. On the Database and Log Locations page, click Next to accept the default values.
7. On the Shared System Volume page, click Next to accept the default value.
8. On the Directory Services Restore Mode Administrator password page, type your password. Write down this value now; it will be needed later on. Click Next.
9. On the Summary page, review the values; then click Next. Wait for the Configuring Active Directory process to complete.
10. On the Active Directory Installation Wizard page, click Finish. If prompted, reboot the system now.
Cluster setup
At this point, you have completed the cluster installation steps on each node and are ready to set up the cluster.
Perform the following steps:
1. Power on the first node. The joining node should be shut down (see “Shutting down and powering on the NAS Gateway 300” on page 87 for more information on shutting down the NAS Gateway 300).
2. To begin setting up the cluster on the node, open IBM NAS Admin, then the Cluster Tools folder, and click the Cluster Setup icon.
3. At the prompt, verify that you have completed the steps that precede this cluster setup step. If you have, click Continue.
4. If this is the first node, click First Node. If this is the joining node, go to Step 12 on page 40 and continue from there.
5. The Cluster Information panel appears. Enter the data for the following fields (some of this data comes from Table 1 on page 8):
v Administrator ID and password
Note: The ID and password are any valid user ID and password with administrator privileges on the domain.
v Domain name
v Cluster name
v Cluster IP address
v Subnet mask
v Quorum drive (select from the pulldown menu)
Figure 4. Cluster Information panel
6. After you enter the data, click Continue.
7. Verify the information. If it is correct, click Yes to start the configuration. Configuration takes a few minutes.
8. If you are prompted to select a user account, enter the user name and password for the domain account that you want the cluster service to use.
9. If you are prompted to select a disk on which to store cluster checkpoint and log files, do the following:
a. Select the disk on which the Quorum is located (for instance, G, if this is
what you specified earlier) and click Next.
b. Click Finish at the Cluster Information panel.
10. Cluster configuration completes for the first node.
11. Power on the joining node. (You will join this node to the cluster.)
12. In the Cluster Setup wizard, click Joining Node.
13. In the First Node Information panel, enter the name of the first node.
14. At the prompt, specify the domain.
15. If prompted to confirm the Administrator name and password, enter that information and click Finish.
You will see a message that configuration takes a few minutes. When configuration completes, the Cluster Administration function starts.
Go to “Verifying network connectivity and names resolution” on page 35 and complete the procedure to verify network connectivity and names resolution.
You have now completed cluster setup.
Configuring clusters
This section contains procedures to assist you in configuring basic cluster functions. It is assumed that the cluster installation procedures in “Cluster setup” on page 39 have completed without errors, and both cluster nodes are running.
It is recommended that you review the Cluster Administration Guide, located in the IBM NAS Admin in the Cluster Tools folder, before continuing with the following steps.
Configuring cluster state and properties
You must complete the following steps on the first node to reset the size of the logfile and set the priority and purpose of the private network.
1. Select Cluster Administration, located in IBM NAS Admin, in the Cluster Tools folder.
If prompted for a cluster name, enter the name of the cluster, and then click Open.
2. The cluster name appears in the left panel. Click the cluster name to see the status of the cluster nodes in the right pane. The state of both nodes should be “Up”.
3. Right-click the cluster name and select Properties.
a. Select Quorum Disk, and change the Reset quorum log at: field from 64 KB
to 4096 KB.
b. Select Network Priority to view all networks acknowledged by the cluster
server, and then select the private network connection and move it to the top for cluster communication priority by clicking Move Up.
This provides internal communication to the private network before attempts are made to communicate over any public networks that are installed. Do not change the communication options for the public network adapters as they should support both network and cluster traffic.
4. Open the properties for the private network and select Internal cluster communication only (private network) to ensure that no client traffic will be placed on the private network.
5. Click Apply, OK, and then OK.
Setting up cluster resource balancing
When you configure cluster resources, you should manually balance them on the disk groups to distribute the cluster resource functions between the two nodes. This allows for a more efficient response time for the clients and users accessing these resources.
To set up cluster resource balancing:
1. Select a disk group and bring up its Properties panel by right-clicking it.
2. Click the General tab.
3. Click the Modify button to the right of the Preferred owners: field.
4. In the Available nodes pane, select a node and click the button to move the node to the Preferred Owners pane.
5. Complete Steps 1 through 4 for each disk group.
Each disk group has a preferred owner so that, when both nodes are running, all resources contained within each disk group have a node defined as the owner of
those resources. Even though a disk group has a preferred owner, its resources can run on the other node in the cluster following a failover. If you restart a cluster node, resources that are preferentially owned by the restarted node switch to the standby system when the cluster service detects that the node is operational, and provided that the defined failover policy allows this to occur. If you have not defined the node as the preferred owner for the resources, then they do not switch to the standby system.
Note: You must reboot before you can see changes made to the cluster resource balancing.

Setting up failover
The failover of resources under a disk group on a node enables users to continue accessing the resources if the node goes down. Individual resources contained in a group cannot be moved to the other node; rather, the group in which they are contained is moved. If a disk group contains a large number of resources and any one of those resources fails, the whole group performs a failover operation according to the group’s failover policy.
The setup of the failover policies is critical to data availability.
To set up the failover function:
1. Open the Properties panel for the disk group.
2. Select the Failover tab to set the Threshold for Disk Group Failure.
For example, if a network name fails, clustering services attempts to perform a failover operation for the group 10 times within six hours, but if the resource fails an eleventh time, the resource remains in a failed state and administrator action is required to correct the failure.
3. Select the Failback tab to allow, or prevent, failback of the disk group to the preferred owner, if defined.
In allowing failback of groups, there is a slight delay in the resources moving from one node to the other. The group can also be instructed to allow failback when the preferred node becomes available or to perform a failover operation during specific off-peak usage hours.
Each resource under each disk group has individual resource properties. The properties range from restart properties and polling intervals used to check whether a resource is operational, to a timeout to return to an online state. The default settings for these properties are selected from average conditions and moderate daily use.

Creating users
The creation of users is performed through normal procedures. Users do not need to be created exclusively for use on the cluster resources. You must define properties of the resources for users to access the resources within the domain policies. All user-accessible cluster resources have the same properties as standard Microsoft Windows resources, and should be set up following the same policies.
Note: If your storage will be accessed by UNIX or UNIX-based clients and servers,
continue with “Defining UNIX users and groups” on page 44. The NAS Gateway 300 is on a Windows domain and inherits those Windows users, eliminating the need to define local Windows users and groups. Also, shares are created in the clustering setup.
v If Windows clients and servers will access your storage, follow the steps in
“Defining Windows users and groups”.
v If UNIX and UNIX-based clients and servers will access your storage, follow the
steps in “Defining UNIX users and groups” on page 44.
v If both Windows and UNIX clients and servers will access your storage, follow
the steps in “Defining Windows users and groups” and then follow the steps in “Defining UNIX users and groups” on page 44.
Defining Windows users and groups
This section describes how to set up Windows users and groups that will access the NAS Gateway 300 storage.
You can define new local users and groups on the NAS Gateway 300 and also allow existing users and groups to access the NAS Gateway 300 storage. You can also add the NAS Gateway 300 to an existing Windows domain that is controlled by a PDC and define new users and groups on the PDC who can access the NAS Gateway 300.
If you are defining local Windows users and groups, follow the steps in “Defining local Windows users and groups”. If you are giving access to the NAS Gateway 300 storage to users and groups in an existing Windows domain, follow the steps in “Giving storage access to Windows domain users and groups” on page 44.
Defining local Windows users and groups: If you are defining local Windows users and groups, you can use the Windows 2000 for NAS user interface. In the Users task group, you create and manage local users and groups on the NAS Gateway 300. To go to the users page, click Users. From this page you can create, edit, and delete local users and groups on the NAS Gateway 300 by clicking either Local Users or Local Groups.
To create new local users:
1. Click Local Users.
2. Click New....
3. Type user name, password, and description (optional).
4. Click OK. The new user name should appear in the list of user names.
5. Repeat Steps 1 through 4 for each new local user that you want to add.
6. When you finish adding new users, click Back to return to the Users and Groups page.
To create new local groups:
1. Click Local Groups.
2. Click New....
3. Type group name and description (optional).
4. Click Members.
5. For each user that you want to add to the group, select the user name from the list of users, and then click Add.
6. Click OK. The new group name should appear in the list of group names.
7. Repeat Steps 1 through 6 for each new local group that you want to add. If your storage is also going to be accessed by UNIX or UNIX-based clients and servers, continue with “Defining UNIX users and groups” on page 44. Otherwise, continue with “Creating shares” on page 49.
Giving storage access to Windows domain users and groups: You must first join the NAS Gateway 300 to the Windows domain. You can use the Windows 2000 for NAS user interface to do this. Start the Windows 2000 for NAS user interface, and then do the following:
1. Click Network.
2. Click Identification.
3. Select the radio button labeled Domain, and specify the name of the domain being joined.
4. Specify a user name and password that can be used to log on to the domain.
5. Click OK.
6. Shut down and restart the NAS Gateway 300.
Users and groups already defined in the domain can now be given access to any file shares that you create on the NAS Gateway 300. If you need to add new users and groups to the domain, consult the online documentation on the PDC for information on performing this procedure, or if you are not the administrator of the domain (PDC), contact the domain administrator to have the users and groups defined.
If your storage is also going to be accessed by UNIX or UNIX-based clients and servers, continue with “Defining UNIX users and groups”. Otherwise, continue with “Creating shares” on page 49.
Defining UNIX users and groups
This section describes how to set up UNIX users and groups to access the NAS Gateway 300 storage using the Network File System (NFS) protocol.
Support for NFS is provided in the NAS Gateway 300 by a preloaded and preconfigured software component, Microsoft Services for UNIX. The levels of NFS supported by Services for UNIX, and in turn the NAS Gateway 300, are NFS Versions 2 and 3. Any client or server that is using an NFS software stack supporting NFS Version 2 or NFS Version 3, regardless of the operating system, should be able to connect to the NAS Gateway 300 and access its storage through NFS.
You administer NFS file shares and other attributes with standard Windows administration tools, including those provided as part of the IBM NAS desktop and the Microsoft Windows 2000 for NAS user interface. Additional configuration of the User Name Mapping component of Services for UNIX, which maps the UNIX user name space to the Windows user name space, is required to support NFS security.
Consult the online documentation for Services for UNIX for more information on configuring User Name Mapping. To view the online documentation for Services for UNIX on the NAS Gateway 300:
1. From the NAS Gateway 300 desktop, click the IBM NAS Admin icon.
2. On the left pane of the IBM NAS Admin console, expand File Systems.
3. Expand Services for UNIX.
4. Select any of the items that appear under Services for UNIX.
5. Click anywhere on the right pane of the IBM NAS Admin console, and then press the F1 key to bring up the online documentation for Services for UNIX in a separate window.
You can define a local UNIX name space on the NAS Gateway 300 by configuring the Server for PCNFS component of Services for UNIX. Alternately, you can point Services for UNIX to an existing Network Information Service (NIS) domain that defines the UNIX name space. In both cases, you must configure the User Name Mapping component to map the UNIX name space that you select to the Windows name space, because file shares and individual file and directory permissions on the NAS Gateway 300 are defined in the context of the Windows name space.
To define a local UNIX name space, continue with “Using a local UNIX name space”. To use a UNIX name space defined on a NIS domain, continue with “Using the UNIX name space on an NIS domain” on page 47.
Using a local UNIX name space: This procedure should be performed only once. You might have to add more groups and users in the Server for PCNFS page if you add more users and groups to your UNIX environment and NAS Gateway 300 or Windows domain at a later time.
1. Open the IBM NAS Administration console by double-clicking the IBM NAS Admin icon on the NAS desktop.
2. In the left pane, select File Systems; then select Services for UNIX.
3. In the left pane, click Server for NFS.
4. In the right pane, in the Computer name: field, type localhost.
5. In the left pane, click Server for PCNFS.
6. In the right pane, click Groups.
7. On the Groups page, you must add the groups from your UNIX host to which all of your UNIX users belong. You need to know both the group name and the group ID (GID) number. This information can be found in the /etc/group file on most UNIX systems; this file can also be copied to the c:\winnt\system32\drivers\etc directory.
As an example, on an AIX system, in the following line from an /etc/group file, the fields are separated by a colon (:). The first field (“staff”) is the group name; the third field (“1”) is the GID:
staff:!:1:pemodem,ipsec,netinst,protcs
To add a group, type the group name and GID number in the Group name and Group number (GID) fields, and then click New.
8. When you finish adding groups, click Apply.
9. Click Users.
10. On the Users page, you can add all of the UNIX users who will be accessing and storing files on the NAS Gateway 300 through an NFS share. For each user you will need to know the Windows user name, the UNIX user name, the primary group, and the user ID (UID) number. This information can be found in the /etc/passwd and /etc/group files on most UNIX systems or these files can be copied to the c:\winnt\system32\drivers\etc directory.
As an example, on an AIX system, in the following line from an /etc/passwd file, the fields are separated by a colon (:). The first field (“user1”) is the user name; the third field (“3135”) is the UID, and the fourth field (“1”) is the GID of the user’s primary group. This will correspond to a line in the /etc/group file, where you can find the primary group name corresponding to the GID.
user1:!:3135:1:User 1:/home/user1:/bin/ksh
To add a user, click New, type the required information, and then click OK.
Services for UNIX supports a limited syntax in the passwd file. In particular, it seems to work best when the second field of each line—the password field—is filled in with a random 13-character string. This need not have anything to do with the user’s password, so a string such as 0123456789012 is acceptable. Some UNIX systems use shadow passwords and fill in this field with a meaningless token value such as ! or x, and you will need to change this.
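For example, if the line shown earlier came from a system that uses shadow passwords, you might edit it as shown below before placing it in the passwd file used by Services for UNIX (the 13-character string is arbitrary; user1 and its numbers are the illustrative values from the earlier example):
user1:0123456789012:3135:1:User 1:/home/user1:/bin/ksh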
11. When you finish adding users, click Apply.
12. In the left pane, click User Name Mapping.
13. In the right pane, select Personal Computer Network File System (PCNFS).
14. In the Password file path and name field, type
c:\winnt\system32\drivers\etc\passwd
15. In the Group file path and name field, type
c:\winnt\system32\drivers\etc\group
16. Next, delete all special users and groups, leaving just the actual users and groups that will be used in accessing NFS resources. An example of a special user is root; UID numbers from 0 to 99 are generally reserved for system accounts and should not be mapped.
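As an example, system-account entries such as the following, taken from a typical AIX /etc/passwd file (the exact entries vary by UNIX system), are the kind of special users that should not be carried into the mapping:
root:!:0:0::/:/usr/bin/ksh
daemon:!:1:1::/etc:
bin:!:2:2::/bin: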
17. Click Apply.
18. Click Maps.
On the Maps page, you can configure simple maps or advanced maps. Configure simple maps if the Windows user name and UNIX user name are the same for each UNIX user to be mapped, and the Windows group name and UNIX group name are the same for each UNIX group to be mapped. Otherwise, you should configure advanced maps.
19. To configure simple maps, select the Simple maps check box and continue with Step 20.
To configure advanced maps, clear the Simple maps check box and continue with Step 21.
20. Under Simple maps, select the Windows domain name from the drop-down list, and then continue with Step 22 on page 47. (If your Windows users are defined locally on the NAS Gateway 300, select the entry containing the computer name of the NAS Gateway 300, preceded by two backslash characters (“\\”). Otherwise, select the name of the Windows domain where the users are defined from the list.)
21. Under Advanced maps, perform the following steps.
a. Define user mappings:
1) Click Show user maps.
2) Select the Windows domain name from the drop-down list. (If your Windows users are defined locally on the NAS Gateway 300, select the entry containing the computer name of the NAS Gateway 300, preceded by two backslash characters (“\\”). Otherwise, select the name of the Windows domain where the users are defined from the list.)
3) Click Show Windows Users to display all of the Windows user names in the Windows domain that you selected.
4) Click Show UNIX Users to display all of the UNIX user names in the NIS domain that you selected.
5) Type a Windows user name, or select one from the list of Windows user names.
6) Type a UNIX user name to be mapped to the Windows user name you specified, or select one from the list of UNIX user names.
7) Click Add to add the mapping between the UNIX user name and Windows user name to the list of maps.
8) If multiple Windows user names are mapped to one UNIX user name, select one Windows user name to be the primary user name. Select the mapping corresponding to the primary user name from the list of maps, and then click Set Primary.
b. Define group mappings:
1) Click Show group maps.
2) Select the Windows domain name from the drop-down list. (If your Windows users are defined locally on the NAS Gateway 300, select the entry containing the computer name of the NAS Gateway 300, preceded by two backslash characters (“\\”). Otherwise, select the name of the Windows domain where the users are defined from the list.)
3) Click Show Windows Groups to display all of the Windows group names in the Windows domain you selected.
4) Click Show UNIX Groups to display all of the UNIX group names in the NIS domain you selected.
5) Type a Windows group name, or select one from the list of Windows group names.
6) Type a UNIX group name to be mapped to the Windows group name that you specified, or select one from the list of UNIX group names.
7) Click Add to add the mapping between the UNIX group name and Windows group name to the list of maps.
8) If multiple Windows group names are mapped to one UNIX group name, you must select one Windows group name to be the primary group name. Select the mapping corresponding to the primary group name from the list of maps, and then click Set Primary.
22. Click Apply.
User Name Mapping rereads its enumeration source on a schedule. By default, this occurs once a day. You can reset the refresh period. To force User Name Mapping to reread the enumeration source, you can click Synchronize Now on the Configuration panel.
Note: If maps do not seem to synchronize, you might need to stop and restart User
Name Mapping. You can do this through the GUI, or by the commands:
net stop mapsvc
net start mapsvc
You can now continue with “Creating shares” on page 49.
Using the UNIX name space on an NIS domain: The following procedure applies whether your NIS server is UNIX-based or Windows-based (implemented as a Windows domain controller running a Microsoft Server for NIS).
1. To open the IBM NAS Administration console, double-click the IBM NAS Admin icon on the NAS desktop.
2. In the left pane, expand File Systems; then expand Services for UNIX.
3. In the left pane, click Server for NFS.
4. In the right pane, in the Computer name: field, type localhost
5. In the left pane, click User Name Mapping.
6. In the right pane, select Network Information Services (NIS); then click Maps.
On the Maps page, you can configure simple maps or advanced maps. Configure simple maps if the Windows user name and UNIX user name are the same for each UNIX user to be mapped, and the Windows group name and UNIX group name are the same for each UNIX group to be mapped. Otherwise, you should configure advanced maps.
7. To configure simple maps, select the Simple maps check box and continue with Step 8.
To configure advanced maps, clear the Simple maps check box and continue with Step 9.
8. Under Simple maps, perform the following steps:
a. Select the Windows domain name from the drop-down list. (If your
Windows users are defined locally on the NAS Gateway 300, select the entry containing the computer name of the NAS Gateway 300, preceded by two backslash characters (“\\”). Otherwise, select the name of the Windows domain where the users are defined from the list.)
b. In the NIS domain box, type the NIS domain name. You can also type the
name of a specific NIS server in the NIS server box.
c. Continue with Step 10 on page 49.
9. Under Advanced maps, perform the following steps:
a. Define user mappings as follows:
1) Click Show user maps.
2) Select the Windows domain name from the drop-down list. (If your Windows users are defined locally on the NAS Gateway 300, select the entry containing the computer name of the NAS Gateway 300, preceded by two backslash characters (“\\”). Otherwise, select the name of the Windows domain where the users are defined from the list.)
3) In the NIS domain field, type the NIS domain name. You can also type the name of a specific NIS server in the NIS server field.
4) Click Show Windows Users to display all of the Windows user names in the Windows domain you selected.
5) Click Show UNIX Users to display all of the UNIX user names in the NIS domain you selected.
6) Select a Windows user name from the list of Windows user names.
7) Select a UNIX user name to be mapped to the Windows user name that you specified.
8) Click Add to add the mapping between the UNIX user name and Windows user name to the list of maps.
9) If multiple Windows user names are mapped to one UNIX user name, you must select one Windows user name to be the primary user name. Select the mapping corresponding to the primary user name from the list of maps, and then click Set Primary.
b. Define group mappings as follows:
1) Click Show group maps.
2) Select the Windows domain name from the drop-down list. (If your Windows users are defined locally on the NAS Gateway 300, select the entry containing the computer name of the NAS Gateway 300, preceded by two backslash characters (“\\”). Otherwise, select the name of the Windows domain where the users are defined from the list.)
3) In the NIS domain field, type the NIS domain name. You can also type the name of a specific NIS server in the NIS server field.
4) Click Show Windows Groups to display all of the Windows group names in the Windows domain that you selected.
5) Click Show UNIX Groups to display all of the UNIX group names in the NIS domain that you selected.
6) Select a Windows group name from the list of Windows group names.
7) Select a UNIX group name to be mapped to the Windows group name that you specified.
8) Click Add to add the mapping between the UNIX group name and Windows group name to the list of maps.
9) If multiple Windows group names are mapped to one UNIX group name, you must select one Windows group name to be the primary group name. Select the mapping corresponding to the primary group name from the list of maps, and then click Set Primary.
10. Click Apply.
You can now continue with “Creating shares”.
Creating shares
To create new file shares on the NAS Gateway 300, do the following:
1. Start the Windows 2000 for NAS user interface.
2. Click the Shares tab.
3. Click the Shares task.
4. Click New....
5. Specify the share name (the name that clients and servers will use to access the share).
6. Specify the share path and select the Create folder if it does not already exist check box.
7. By default, the Microsoft Windows (CIFS) and UNIX (NFS) check boxes are selected (enabled). If this share is not to be accessed by Windows clients and servers, clear (disable) the Microsoft Windows (CIFS) check box. If this share is not to be accessed by UNIX clients and servers, clear the UNIX (NFS) check box.
8. If this share is to be accessed by:
v Windows clients and servers, then click CIFS Sharing and specify the access permissions that you want. (Note that, by default, every user has full access to all files and directories under the shared folder.)
v UNIX clients and servers, then click NFS Sharing and specify the access permissions that you want. (Note that, by default, every user has full access to all files and directories under the shared folder.)
9. Click OK. The new share should appear in the list of shares.
10. Repeat Steps 4 through 9 for each additional share that you want to create.
A note on anonymous access: It is strongly recommended that you not disable anonymous access. If a client presents a UID that is not recognized, Server for NFS can still grant that client a very limited form of access as a special nobody user. This is known as anonymous access, and you can enable or disable it on a per-share basis. This anonymous user will have very limited access to resources on the NAS: it has only the permissions that are granted to the Everybody group in Windows, which corresponds to the other (or world) bits in a POSIX permissions mode.
Allowing anonymous access is not a security risk, so disabling it might provide a false sense of security. (The real security risk is to grant everyone access to resources that should be protected.) And disabling anonymous access has one severe consequence: it is so unexpected by NFS clients that they might not be able to connect as NFS V3 clients at all, and might instead downgrade the connection to use the NFS V2 protocol.
Creating clustered file shares (CIFS and NFS)
Note: For HTTP and FTP clustering setup and file sharing, refer to the information
at the following URL:
http://support.microsoft.com/default.aspx?scid=kb;EN-US;q248025
The creation of file shares on a cluster involves dependencies on a physical disk, a static IP address, and a network name. These dependencies allow resources that are defined to the same disk group to move as a group. The dependencies also assure necessary access for the given resource.
Note: You must configure Server for NFS before NFS file sharing can be used.
See “Enabling Server for NFS” on page 52 for details.
Figure 5 illustrates the file share dependencies; descriptions of the diagram components follow. In the figure, the file share depends on a network name (the virtual server name) and on a physical disk, and the network name in turn depends on an IP address (a physical LAN connection).
Figure 5. File share dependencies
Physical disk
The base resource in which to store user data. It is not dependent on any other resources except the physical disk that it defines. The disk resource must also have the same drive letters on both nodes so that the definitions of resources that depend on it do not change if the resource is moved to the other node.
Static IP address
A virtual address that binds onto an existing IP address on one of the cluster’s public networks. This IP address provides access for clients and is not dependent on a particular node, but rather on a subnet that both nodes can access. Because this address is not the physical adapter’s permanent address, it can bind and unbind to its paired adapter on the same network on the other node in the cluster. You can create multiple IP addresses on the same physical network using the Cluster Administrator. A unique static IP address is required for each virtual server.
Note: The cluster IP address should not be used for file shares. That address is reserved to connect to and manage the cluster through the network that it is defined on.
Network name
An alternate computer name for an existing named computer. It is physically dependent on an IP address of one of the public networks. When a disk group contains an IP address resource and a network name, it is a virtual server and provides identity to the group, which is not associated with a specific node and can be failed over to another node in the cluster. Users access the groups using this virtual server. A virtual server can have multiple file shares.
When you create a basic file share that is publicized to the network under a single name, you must set it up to be dependent on the physical disk and the network name in the same disk group in which you are creating the file share. The network name is dependent on the IP address, so do not add the IP address to the dependency list. You can also set the share permissions and advanced share resources.
Users will access the cluster resources using \\<network_name>\<fileshare_name>.
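For example, using the virtual server and share names from the example that follows (NN2 and FS2), a Windows client could map the clustered share with a command such as:
net use z: \\NN2\FS2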
Clustered file share creation example
An example of how to create a clustered file share follows. For this example, assume that you are creating a file share in Disk Group 2.
1. Create the IP address resource:
a. Right-click Disk Group 2, and select New Resource.
b. Enter an IP address name, for example ipaddr2, and change the resource
type to IP Address.
c. Select Run this resource in a separate Resource Monitor and click Next.
d. A list of possible owners appears, and both nodes should remain as
assigned. Click Next.
e. There are no resource dependencies on this panel, so click Next.
f. Enter your TCP/IP parameters. This will be the first virtual IP address. The
value in the Network field identifies to the system the network on which the address is located. Click Finish to create the resource.
g. Right-click the resource and select Bring online.
2. Create the network-name resource:
a. Right-click Disk Group 2, and select New Resource.
b. Enter the virtual server name to use (for example, NN2), select Network
Name as the resource type, and click Next.
c. Both nodes are possible owners. Click Next.
d. Add the IP address you created as a resource dependency in Step 1 and
click Next.
e. Type the virtual server name, NN2, into the Network Name Parameters field
and click Finish.
f. It takes a few moments to register the virtual server name with your name
server. After this completes, bring the resource online.
3. Create the CIFS or NFS file share resource:
a. Right-click Disk Group 2 and select New Resource.
b. Enter a file share name (for example, FS2) and select either File Share or
NFS Share.
c. Both nodes are possible owners. Click Next.
d. Add the resource dependencies for the physical disk and network name that
the file share will use and click Next.
e. Enter the share name of FS2 and the path to the disk in this group, either a drive or a subdirectory. You can then set:
v For CIFS shares, these properties:
– User Limit
– Permissions
– Advanced File Share
v For NFS shares, these properties:
– Permissions
– Share
A note on anonymous access: When you create an NFS share, it is strongly recommended that you not disable anonymous access to avoid client-connection problems. See “Enabling Server for NFS” for more details.
f. Click Finish to create the resource.
g. Right-click the resource and select Bring online.
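If you prefer the command line, resources like these can also be created with the cluster.exe utility provided with Windows 2000 clustering. The following lines are only a sketch for the IP address resource in this example; the address, subnet mask, and network name (“Public”) are placeholder values that you must replace with values for your own configuration, and you should verify the resulting properties in Cluster Administrator:
cluster resource "ipaddr2" /create /group:"Disk Group 2" /type:"IP Address"
cluster resource "ipaddr2" /priv Address=192.168.1.10
cluster resource "ipaddr2" /priv SubnetMask=255.255.255.0
cluster resource "ipaddr2" /priv Network="Public"
cluster resource "ipaddr2" /online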
Enabling Server for NFS
To enable Server for NFS, you need to specify where User Name Mapping is running.
To specify where User Name Mapping is running, follow the path Services for UNIX, then User Name Mapping, and enter the server name that is running User Name Mapping in the Computer Name field. For a cluster, this entry must be the clustered name or IP address, not that of an individual node.
When planning an NFS installation, consider which machines you want to have particular access-levels to NFS shares. Each class of access should be captured by defining a separate client group.
v To define a client group, click Services for UNIX, then Client Groups; type the group name in the Group Name field, and then click New.
v To add members to a client group, select a group name from the current groups
list; then click Advanced and type the name of a client (a valid computer name).
v A note on anonymous access: It is strongly recommended that you not disable anonymous access. If a client presents a UID that is not recognized, Server for NFS can still grant that client a very limited form of access as a special nobody user. This is known as anonymous access, and you can enable or disable it on a per-share basis. This anonymous user will have very limited access to resources on the NAS: it has only the permissions that are granted to the Everybody group in Windows, which corresponds to the other (or world) bits in a POSIX permissions mode.
Allowing anonymous access is not a security risk, so disabling it might provide a false sense of security. (The real security risk is to grant everyone access to resources that should be protected.) And disabling anonymous access has one severe consequence: it is so unexpected by NFS clients that they might not be able to connect as NFS V3 clients at all, and might instead downgrade the connection to use the NFS V2 protocol.
Recovering from a corrupted Quorum drive
Attention: Restoring a Quorum rolls the cluster back in time to the backup date. Performing this operation can result in loss of data. You should perform this operation only when it is absolutely necessary.
Clustering relies on data stored on the Quorum disk to maintain resource synchronization between the two nodes in the cluster. In the event of a power loss to both nodes or a hardware failure that corrupts the Quorum data, the cluster service might not start, leading to the following event log error:
Event ID: 1147
Source: ClusSvc
Description: The Microsoft Clustering Service encountered a fatal error.
The Quorum drive data must be available so that the cluster service can confirm that the cluster configuration on the local node is up to date. If it cannot read the log, the cluster service does not start to prevent the loading of old configuration data.
To restore the Quorum disk, a Microsoft Windows Backup utility backup of the system state of the boot drive (C:) of one node must be available. Backing up the entire boot drive also saves the system state. Backing up the system state automatically saves the Quorum log and other cluster files.
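For example, a system-state backup can be taken from a command prompt with the Windows 2000 ntbackup utility; the job name and target file shown here (a file on the maintenance partition, D:) are only illustrative values:
ntbackup backup systemstate /J "Node1 system state" /F "d:\backups\node1_systemstate.bkf"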
A Microsoft tool is needed as part of the Quorum restore procedure. This tool is called Clusrest.exe and can be downloaded from the Microsoft Web site at the following URL:
http://download.microsoft.com/download/win2000platform/clusrest/1.0/NT5/EN-US/clusrest.exe
The Quorum restore procedure involves restoring the system state and cluster state to the node followed by execution of the Clusrest.exe tool. Upon completion of the restore, the node should rejoin the cluster and return to normal operation.
1. Restore the entire boot drive of the node if needed. Otherwise, restore the system state to the node.
2. Ensure that the cluster service is stopped on the other node.
3. Restore the Quorum/cluster information to that node by selecting to restore at least the system state. This creates a temporary folder under the Winnt\Cluster folder called Cluster_backup.
4. Run the Clusrest.exe tool to rebuild the Quorum drive. The tool moves the cluster information from the node’s boot drive to the Quorum drive.
5. After you complete the process and the cluster service has started successfully on the newly restored node, restart the cluster service on the other node.
Note: If you do not follow this process, and another node with a more current
database takes ownership of the Quorum before you update the database from the restored node, the restore does not work.
Before you add software
You have now completed networking and clustering setup and administration, and the NAS Gateway 300 is at a point where you can install software on it. But before you do, it is recommended that you take advantage of the Persistent Storage Manager (PSM) disaster recovery function, detailed in “Disaster Recovery” on page 66.
The PSM disaster recovery function enables you to restore the system drive from a single image, without having to go through the entire recovery procedure and then additionally having to restore a system drive backup. So, if any software you install creates unresolvable problems for your system, you can regain the stable system you had before you installed the software.
Chapter 6. Managing and protecting the network and storage
This chapter describes the additional administrative functions that you can use to manage and protect the network and storage on the NAS Gateway 300.
The following functions are available:
v “IBM Director”, accessed through Start Programs
v “NAS Backup Assistant” on page 60, accessed through IBM NAS Admin
v “Persistent Images” on page 62, accessed through the Windows 2000 for
Network Attached Storage user interface
v “Tivoli SANergy” on page 75
IBM Director
Note: This section presents an overview of IBM Director functions. For more
detailed information, consult the IBM Director User’s Guide on the Documentation CD-ROM.
IBM Director is a systems-management solution that helps administrators manage single or large groups of IBM and non-IBM devices, NAS appliances, and workstations.
All of the functionality of IBM Director is contained in a simple GUI that enables single-click and drag-and-drop commands. IBM Director can manage up to 5,000 clients depending on configuration density. Powerful remote management functions include:
v Sophisticated discovery of network components
v Scheduled asset (hardware and software) inventories with persistent storage of
data
v Proactive problem notification and tools for problem resolution
v Hardware system component monitors and thresholds to trigger alerts of
impending problems
v Alert management with automated actions, manual intervention, or both
v Process scheduling to automate wide-scale client software maintenance (clean
up temp files, restart tasks, backups, and so on) according to any timetable
v Help desk and routine maintenance functions such as remote control and file
transfer
v Extensive security and authentication
IBM Director consists of three main components:
v Management Server
v Agent
v Console
The Management Server is a centralized systems manager and is the core of the IBM Director product. Management data, the server engine, and the management application logic reside there. Install the IBM Director Management Server on a dedicated server that has high-availability features. When installed on a Windows 2000 server or Windows NT 4.0 server system in the managed environment, the Management Server provides the management application logic and persistent data
storage of management information using an SQL database. The Management Server maintains a database of all Director Agents and their inventory. All alerts from the agents flow to the management server, which also acts as a central point of configuration for Event Action Plans and System Tasks.
The Agent resides on the NAS Appliances and other systems that IBM Director manages. IBM Director recognizes two types of managed systems: native agents (IBM Director Agent installed) and nonnative agents (SNMP agent installed). The Agent comes preinstalled on all IBM NAS appliances. It runs as a service that is automatically started at boot time. IBM Director Agent provides valuable information to IBM Director management server and other supported management applications. In addition to its native interface with the Director Management Console, it provides point-to-point remote management of client systems through a Web browser window.
You perform administrative tasks at the Console. It is a Java application that serves as the user interface to the Director-managed environment. The console provides comprehensive hardware management using a single click or drag-and-drop operation. You can install the console on a machine at a remote location from the server. Consoles are not licensed, so you can distribute them freely among an unlimited number of machines. In addition, there is no limit to the number of IBM Director Consoles that can connect into the Management Server.
Dependencies
The IBM Director 3.1 Agent (the version included in this release) must be managed by an IBM Director 3.1 Management Server. If your Management Server is running an earlier version of IBM Director (V2.2 or earlier), you must upgrade it to ensure proper operation; this includes Director Consoles as well. The IBM Director 3.1 Management Server contains an Agent software distribution package that you can use to upgrade pre-version 3.1 Agents. This allows easy and automated upgrading of the entire system to version 3.1. You can check the version of IBM Director Agent running on a NAS appliance by entering http://<system_name>:411/ in a local Web browser.
Hardware requirements
It is highly recommended that you install the IBM Director Server on a server separate from the IBM NAS appliance. Running the IBM Director Server on an IBM NAS appliance will significantly reduce the appliance's performance. The server must meet these minimum requirements:
Hardware vendor
Must be IBM. The management tools of IBM Director and Director Extensions require IBM equipment.
CPU
A 733 MHz PIII processor is recommended. Standard PII processors can be functional, but these processors might not be sufficient during heavy usage.
Memory
512 MB RAM is recommended. During idle times, while using the standard JET database, the Management Console can consume 300+ MB RAM. The number of managed agents, active consoles, and amount of alerts being processed increases the amount of memory needed.
Disk
Because the Management Server software requires only 250 MB, and the JET database has a maximum size of 1 GB, 9 GB of disk space is sufficient. Use a 4 GB partition for the operating system (including the swap file).
All IBM NAS products exceed the minimum hardware requirements for operating an IBM Director Agent.
Director extensions
A portfolio of advanced management tools for IBM-specific hardware is provided by IBM Director as a set of optional enhancements. These tools integrate into IBM Director and provide management capabilities from a single console with a consistent look and feel. These extensions are provided as part of the preinstalled IBM Director Agent on the IBM NAS appliances:
v Management Processor Assistant
v Capacity Manager
v Cluster Systems Management
v Rack Manager
v Software Rejuvenation
v Systems Availability
To use these extensions, you must load them on the IBM Director Management Server during installation.
Naming conventions
All IBM Director Agents have a Director system name by which they are known to the Management Server and Consoles. This Director system name is defaulted to the computer name during the NAS appliance preinstallation process. (Although you can do so, it is recommended that you not change the default computer name, to avoid the chance of propagating misidentification through the system. And, if you are using IBM Director to manage your appliance and you change the default name, the default name continues to appear in IBM Director.) The Director system name does not have to be the same as the computer name. The Director system name is displayed on the IBM Director Console to identify the NAS appliance under the Group Contents column. You can optionally change the Director system name on an agent using the following procedure:
1. Open a command prompt window and enter the following IBM Director Agent command to open the GUI interface:
twgipccf.exe
2. Type the new Director System Name and click OK.
The change takes place immediately.
Note: You might need to delete the NAS appliance from the Group Contents and have it rediscover the appliance by its new name.
Web-based access
IBM Director Agent uses an Apache Web Server for Web-based access. All traffic, even logon, is certificate-based encrypted. The Web server requires two ports. One port (411) accepts non-SSL HTTP requests and automatically redirects to the second port (423), which handles SSL requests.
Disaster recovery
It is important to provide adequate backup for key IBM Director Management Server files for restoration purposes. It is recommended that you regularly back up the IBM Director Management Server so that you can recover it in the event of a server disaster. You need to save customizations that you make to the IBM Director, including event action-plans, schedules, thresholds, and so on. Several commands are provided with IBM Director to accomplish this task:
twgsave
This command saves the complete settings to a directory named Director.save.#, where # shows the number of backups (for example, the third backup of the server will be saved in directory Director.save.3). You must stop the IBM Director Management Server service to execute this command. The command supports the following options:
twgsave -s
where the optional parameter -s specifies that software distribution packages not be saved. This helps reduce the size of the backup files.
twgrestore
This command restores the saved data from an IBM Director Management Server. Do not attempt to use this restore feature to replicate an IBM Director Server. The command supports the following options:
twgrestore -t directory
where the optional parameter -t specifies that the data is restored, but the server ID and system name are not restored, and directory is where the saved data resides. The IBM Director Management Server cannot be running when this command is issued.
twgreset
This command resets the Director Server system to the status after installation. You can use it if you want to clear all tables in the database and erase the system ID files. This command can be helpful to make sure that after a restore, only the data from the saved directory will be in the Director system. The command supports the following options:
twgreset -d -i
where -d means to clear the tables in the database, and -i means to erase the unique identification files for the system.
You can save and restore data only when the Director Support Program and service are stopped. Agents running on IBM NAS appliances do not need to be explicitly backed up because the NAS Recovery CD-ROM provides this feature. Applying the Recovery CD-ROM will reinstall the IBM Director Agent.
Software distribution
The Software Distribution task enables you to import and silently distribute predefined software distribution packages to an IBM Director Client system. These packages are prepared by IBM for IBM NAS products and include software fixes and release updates only. This includes upgrading the IBM Director client itself.
The basic delivery is a single file package that is signed with a unique IBM NAS key. Only IBM can create the signed packages that can be used by the IBM Director Software Distribution tool.
Software distribution using IBM Director can be deployed to a single IBM Director client, all IBM Director clients, or some combination in between. The administrator has complete control over which IBM Director clients receive any given package. By default, software distribution packages automatically install themselves immediately following delivery to the IBM client. Delivery of the package can be done manually or scheduled for a later, more convenient time.
Rack Manager and inventory enhancements
The Rack Manager task has been updated to include all of the IBM NAS components. A new component category, NAS, includes all of the IBM NAS appliance engines. All IBM NAS appliances are automatically discovered by the Rack Manager task for drag-and-drop rack construction. This enhancement is part of the IBM Director Server Service Pack 3.1.1; the service pack must be loaded on the IBM Director server before you can take advantage of this new category. The following component categories have been updated to include the new IBM NAS appliance components:
Racks Includes the new component, NAS Rack Model 36U
Storage
Includes these new components:
v NAS Storage Expansion Unit Model 0RU
v NAS Storage Expansion Unit Model 1RU
Fibre Channel
Includes these new components:
v NAS 8-port Fibre Channel Hub Model 1RU
v NAS Raid Storage Controller Model EXP
v NAS Raid Storage Controller Model 0RU
v NAS Raid Storage Controller Model 2RU
v NAS Raid Storage Controller Model EXU
NAS Is a new component category that includes these components:
v NAS 100 Engine Model R12
v NAS 100 Engine Model R18
v NAS 200 Engine Model 200
v NAS 200 Engine Model 201
v NAS 200 Engine Model 225
v NAS 200 Engine Model 226
v NAS 200 Engine Model 25T
v NAS 200i Engine Model 100
v NAS 200i Engine Model 110
v NAS 300 Engine Model 5RZ
v NAS 300 Engine Model 6RZ
v NAS 300G Engine Model 5RY
v NAS 300G Engine Model 6RY
v NAS Gateway 300 Engine Model 7RY
Dynamic NAS groups
Dynamic NAS groups are an IBM Director Management Server enhancement made specifically for IBM NAS appliances. You must install this enhancement on the IBM Director Management Server as well as on all IBM Director Consoles. You can add dynamic NAS groups to the IBM Director Server and Consoles by downloading the InstallShield extension from the IBM Web site and invoking the executable file. This creates a new Group on all consoles that represents IBM NAS appliances in the managed network.
Dynamic groups are automatically populated and maintained based on queries to the database. These dynamic NAS groups must be added after the IBM Director Management Server has been installed on a dedicated server. IBM NAS appliances appear under the Groups column in the IBM Director Management Server. The Group Contents column will then contain all the IBM NAS devices that have been discovered on the network.
NAS Web UI task
NAS Web UI is an IBM Director Management Server enhancement made specifically for managed networks containing IBM NAS appliances. Install NAS Web UI on the IBM Director Management Server and all IBM Director Consoles to create a new task called IBM NAS Appliances with a subtask named Launch UI Web. You can apply this new console task to a NAS machine, causing a Web browser to be automatically launched with a URL pointing to the Web UI on the target NAS machine. The port specified in the URL is port 8099, which invokes Windows 2000 for NAS.
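For example, applying the Launch UI Web task to a NAS machine named nasgw1 (a hypothetical computer name) causes a browser to be launched with a URL of the form:
http://nasgw1:8099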
Predictive Failure Analysis
Predictive Failure Analysis (PFA) provides advanced notification of a pending failure so that corrective action can be taken to avoid unplanned downtime. The PFA alerts are sent to IBM Director, where a wide variety of Event Action Plans can be established, such as automatically notifying the administrator through e-mail, or executing tasks in response to the alert. When used in conjunction with the IBM electronic service agent, the PFA alerts are routed to an IBM support person, who responds to the customer about the alert. The alerts can also be forwarded to other management packages.
For more information
For more information on IBM Director, consult its user’s manual contained on the Documentation CD-ROM.
NAS Backup Assistant
The NAS Backup Assistant is a preloaded utility that helps you create and schedule backup batch files, and maintain log files. It can be used for backing up either the NAS Gateway 300 operating system or user data.
If you want to back up only selected folders, you can use NT Backup without the NAS Backup Assistant (the NAS Backup Assistant backs up entire volumes). However, if you use NT Backup, it is recommended that you select and back up the copy of the files in a previous persistent image, rather than the original data itself. When selecting the files for the NT Backup operation, you must select the specific folders in the persistent image. If you select the entire group of persistent images, the files in those images will not be selected for backup. For more information about persistent images see “Persistent Images” on page 62.
Because NAS Backup Assistant only creates and launches scripts, and is not a comprehensive backup application, it does not support interactive error messages. To check status of jobs, you must either view the Backup Logs or view the Windows Event Viewer.
You invoke the NAS Backup Assistant by clicking the IBM NAS Admin desktop icon to open the IBM NAS Administration console. Select Backup and Restore to expand that tree, then select IBM NAS Backup Assistant. When you select this
option, a logon prompt appears. Log on as a user who has backup operator privileges (an administrator or backup administrator). If a logon prompt does not appear, right-click the IBM NAS Backup Assistant link, and select refresh. When you log on, the main panel appears.
The four tabs on the main panel are:
Backup Operations
The main window where you create and schedule backup batch jobs.
Two backup methods you can select in the Backup Operations window are the standard NT Backup method and the Persistent Storage Manager (PSM) Persistent Image method. A standard NT Backup operation backs up only those files on the drive that are not in use. To guarantee a complete backup image using this method, you must ensure that no users are accessing any files on the drive, so this method is useful only for offline backup.
To do a complete online backup that includes files that are in use, choose the PSM Persistent Image backup method. This method creates a persistent image (mapped as an unused drive letter on the system), backs up a copy of that persistent image, and then deletes the original persistent image (drive letter). For more information about persistent images, see “Persistent Images” on page 62.
Scheduled Jobs
Displays a list of backup batch jobs that you scheduled.
Backup Logs
Displays a list of log files for each backup that has run.
Displayed Logs
Displays the text contained in the log files that you can select from the Backup Logs tab.
All of the options on each tab are described in detail in the online help. To access the online help:
1. Click the IBM NAS Admin icon.
2. Expand the Backup and Restore directory.
3. Select IBM NAS Backup Assistant Help.
4. Log in.
Restoring using the NT Backup panel
Note: If you are restoring a backup that you created using Persistent Images in the
NAS Backup Assistant, the NT Backup file (*.BKF) was created for the persistent image virtual drive letter instead of the original drive letter. For example, if you selected drive C for backup, a persistent image was created on the next available drive letter in the system, and that drive was backed up instead of drive C. If you do not remember the original drive letter, you can view the backup log files in NAS Backup Assistant. The top section of the log file gives you the original drive letter, and the bottom section gives you the persistent image drive letter. When you have the original drive letter, perform the procedure below.
To restore backups, use the following procedure:
1. Click the Restore using NT Backup link in the Backup and Restore section of the IBM NAS Admin console to open the backup GUI.
2. Click Restore Wizard; then click Next. You are asked what you want to restore.
3. Select the appropriate media that you are restoring from.
4. If you are restoring from tape, expand the backup media pool name, and then double-click the media (this will normally be named Media created on {date - time}). This action will read the set list from the tape.
If you are restoring from a file, select Tools, then Catalog a backup file; then click Browse and find the backup file (.BKF) created for this backup.
Note: If you do not know the .BKF file name, refer to the backup log in NAS
Backup Assistant.
5. Click OK. You will now have a Media created on {date - time} listed under file.
6. Click the plus sign (+) to the left of this media to see the set list. You might be prompted to enter the path to the file that you want to catalog; if so, select the same file that you just imported. This will build a set list.
7. Select the files and directories to restore.
8. Select Alternate Location from the Restore files to: pull-down.
9. In the alternate location window, select the root directory of the original backup drive letter that you determined (see the note on page 61).
10. To change restore options, select Tools from the menu bar at the top of the window, and then select Options. Refer to NT Backup online help (see Restore files from a file or a tape) for use of these options.
11. After you select the files or directories for restore, the alternate location, and
options, click Start Restore.
12. At the prompt, confirm that you want to begin the restore. Click Advanced to select advanced options (see the NT Backup online help for details); then click
OK to begin the restore.
Persistent Images
A persistent image is a copy that you make of one or more file system volumes at a specific time. You can use the Persistent Images function to restore a file or volume to the state it was in at the time that you created the persistent image. Persistent images are maintained in a way that minimizes the storage required to keep multiple copies of the volume. This is done by using a copy-on-write technique that uses, for each volume, an area of pre-allocated storage (the PSM cache file) that keeps only those data blocks that have been written since the time you made a persistent image of the volume.
Persistent Storage Manager (PSM) allows you to create and preserve images of the NAS Gateway 300 drives. You can take a persistent image immediately or schedule persistent images as one-time events or regularly repeated events.
You can access the PSM tasks in the Disks/Persistent Storage Manager task group within the Windows 2000 for Network Attached Storage user interface in one of two ways:
v Open the IBM NAS Admin console on the appliance desktop and select Persistent Storage Manager. This automatically launches the Windows 2000 for Network Attached Storage user interface and brings up the Disks/Persistent Storage Manager page containing the PSM tasks.
v Start the Windows 2000 for Network Attached Storage user interface directly.
When you create a persistent image, it appears as a directory on the original drive. Access rights and permissions from the original drive are inherited by the persistent image. Persistent images are used in the same way as conventional drives. However, unlike conventional drives, persistent images are records of the content of the original drive at the time you created the persistent image. Persistent images are retained following shutdown and reboot.
There are six PSM tasks in the Disks/Persistent Storage Manager group:
v Global Settings
v Volume Settings
v Persistent Images
v Schedules
v Restore Persistent Images
v Disaster Recovery
Each of these tasks is described in the following sections. More detailed descriptions and instructions for each of the control panels and topics are covered in the online help.
Global Settings
On this panel, you can configure the persistent image system attributes shown in Table 4.
Table 4. Persistent image global settings
Attribute Default value
Maximum number of persistent images 250
Inactive period 5 seconds
Inactive period wait timeout 15 minutes
Volume Settings
This panel displays statistics for each volume, such as total volume capacity, free space, and cache file size and usage. You can also select any volume and configure volume-specific PSM attributes for that volume, as shown in Table 5.
Table 5. Persistent image volume settings
Attribute Default value
Cache-full warning threshold 80 percent full
Cache-full persistent image deletion threshold 90 percent full
Cache size 15 percent (of the total volume capacity)
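As a rough illustration of these defaults (the volume size here is hypothetical): for a 200 GB volume, the default cache size of 15 percent pre-allocates about 30 GB of cache; a cache-full warning is issued when about 24 GB (80 percent) of the cache is in use, and persistent images begin to be deleted when about 27 GB (90 percent) is in use.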
Notes:
1. You cannot change the cache size for a volume while there are persistent images on that volume (the Cache size combination box will be disabled). You must delete all persistent images on the volume before changing the cache size for that volume.
2. Cache size (as a percent of volume size) and deletion threshold must be tuned to meet the heaviest load placed on the system. NAS Gateway 300 appliances that receive heavy write traffic for sustained periods will correspondingly generate more cached data per persistent image, as the system preserves old data from being overwritten. Very high traffic systems can devote as much as 40% of a production volume to PSM cache, although 15% (the default) or 20% will meet the needs of most users. The cache-full persistent image deletion threshold must also be tuned to automatically delete persistent images in time to free cache space before the cache fills up. Management of the cache must be tuned carefully to avoid filling the cache completely, as any missed and uncached old data renders all the persistent images for a volume inconsistent, and PSM will automatically delete them.
Persistent Images
This panel lists all of the persistent images that exist on all volumes. On this panel you can:
v Create a new persistent image immediately (without scheduling it through the
Schedules panel). When you create the persistent image, you can specify properties for the persistent image, including:
Volume(s) The persistent image can contain a single
volume or multiple volumes. To select multiple volumes, hold down the Ctrl key while clicking the volumes. For multi-volume persistent images, a virtual directory containing data for a volume appears under the persistent image directory in the top level of each volume in the persistent image (the name of the persistent image directory is configured in the Global Settings panel).
Name You can name the persistent image. This
becomes the name of the virtual directory containing the persistent image, underneath the persistent image directory in the top level of the volume (the name of the persistent image directory is configured in the Global Settings panel).
Read-only or read-write A persistent image is read-only by default, so no
modifications can be made to it. However, you can set the persistent image to read-write, which permits you to modify it. When a persistent image is written, the modifications made are also persistent (they survive a reboot of the system). Changing a persistent image from read-write to read-only resets the persistent image to its state at the time you took the persistent image, as does selecting Undo Writes for a read-write persistent image from the Persistent Images panel.
Retention value A persistent image can be given a relative retention value or weight. This is important when PSM needs to delete some persistent images for a volume because the capacity of the cache file for that volume has reached a certain threshold, as described later in this section. If the volume cache file completely fills, then all persistent images for that volume are deleted regardless of the retention values. By default, a new persistent image is assigned a “Normal” retention value (other higher and lower values can be selected).
v Delete an existing persistent image.
v Modify properties of an existing persistent image, including read-only or read-write, and retention value.
Schedules
Use this panel to schedule persistent images to be taken at specific times (this is independent of the scheduled backup function through NAS Backup Assistant described earlier). Each PSM schedule entry defines a set of persistent images to be taken starting at a specified time and at a specified interval, with each image having the set of properties defined in the entry. This allows you to customize scheduled persistent images on a per-volume basis. For instance, you could set a persistent image for one volume to occur every hour, and for another volume to occur only once a day.
The properties that you define are the same properties described above for the Persistent Images panel; when you define these properties, all persistent images created according to this schedule entry will be given those properties. After a scheduled persistent image is created, certain properties of that persistent image can be modified through the Persistent Images panel, independently of other persistent images created according to the schedule.
After you create a schedule entry, it appears in the list of scheduled persistent images. Subsequently, you can modify the properties of an existing entry, such as start time, repetition rate, the volumes, and so on. For a schedule, you can name the persistent images based on a pattern that you configure. The following format specifiers allow you to customize variable portions of the name:
%M 3-letter month
%D Day
%Y Year
%h Hour in 12-hour format
%H Hour in 24-hour format
%m Minute
%s Second
%i Instance
%a AM/PM
%W Day of week (M, T, W ...)
%w 3-letter day of week (Mon, Tue, Wed ...)
%% Percent sign
As an example, the name pattern %w_%M_%D_%Y_%h_%m_%a would produce the persistent image name Mon_Apr_1_2002_10_47_AM.
Restore Persistent Images
On this panel, you can select an existing persistent image and quickly restore the volume contained in the image back to the state it was in when the selected persistent image was taken. This is useful if you need to recover an entire volume, as opposed to just a few files. This volume restore function is available for the data volumes, but not the system volume.
Disaster Recovery
PSM provides a disaster recovery solution for the system drive. This extends the volume restore function of PSM to provide disaster recovery in the event that the system drive is corrupted to the point where the file system is corrupt, or the operating system is unbootable. Note that while disaster recovery is also supported through the Recovery CD-ROM and backup and restore capability, it is a two-step process. In contrast, the method supported by PSM allows you to restore the system drive from a single image, without having to go through the entire recovery procedure and then additionally having to restore a system drive backup.
Use the Disaster Recovery panel to schedule and create backup images of the system drive, and to create a bootable diskette that will allow you to restore the system drive from a backup image (located on the maintenance partition, or network drive). The remainder of this section provides additional information on how to perform backup and recovery operations for the NAS Gateway 300.
Note: Restoration of a PSM backup image over the network is not supported for
the Gigabit Ethernet adapter. If you have only Gigabit Ethernet adapters installed, it is recommended that you perform PSM backup of each node to its maintenance partition (D: drive), which would allow you to recover if the system volume is corrupt or unbootable. Should the hard disk drive fail completely, you would need to use the Recovery CD-ROM as described in Chapter 9, “Using the Recovery and Supplementary CD-ROMs” on page 111 to restore the node to its original (factory) configuration.
Backing up the system drive
The Disaster Recovery panel lists status information for backup operations, both scheduled and immediate, as well as buttons for starting and stopping a backup operation, for configuring backup, and for creating a recovery diskette.
Click Modify Settings to open the Disaster Recovery Settings page. Modify the settings that you want for backup. Do not include spaces in the Backup name field. When you have modified the settings, click OK to save the changes.
On the Disaster Recovery page, click Start Backup to begin the backup. The backup process will first create a persistent image of the system drive (C:), named System Backup. Then, it will create the backup images from that persistent image, and then delete that persistent image when the backup operation is complete.
Creating a PSM recovery diskette
You will now create a bootable PSM recovery diskette which, when used to boot up the node, will use the backup location settings that you configured on the Disaster Recovery Settings page to locate the backup image and restore it to the system drive of the node.
1. Insert a blank, formatted diskette in the diskette drive of the node.
2. On the Disaster Recovery page, click Create Disk.
3. Click OK on the Create Recovery Disk page. The diskette drive LED will turn off when the creation is complete. The diskette creation should take no more than two minutes.
4. The utility makes the disk DOS-bootable. From a command prompt, either through the desktop of the node itself (with the diskette still in the diskette drive of the node), or on another system with the diskette in its diskette drive, type
a:\fixboot.exe and answer the prompts.
Note: When you run fixboot.exe on the diskette, the diskette remains bootable
unless you reformat it; if you later erase files on the diskette, you do not need to run fixboot.exe again.
5. Remove the diskette from the appropriate diskette drive. Label the diskette appropriately and keep it in a safe place.
You can create additional copies of the diskette using the above procedure for each new copy.
Note: If you change the backup location or logon settings using the Disaster
Recovery Settings page, you must rebuild the PSM recovery diskettes for that node to reflect the new settings for that node.
Static IP addressing
If you do not have a DHCP server on your network, and you must access a backup image that is accessible only through the network (for example, no backup image is located on the maintenance partition [D: drive] of the node to be recovered), then you must configure the recovery diskette so that it will use a static IP address and subnet mask when accessing the network.
On the PSM recovery diskette, edit the file a:\net_sets.bat. Set the IPAddress and SubnetMask environment variables as follows:
1. Uncomment the two lines that begin with rem (comment lines) by removing the rem from the beginning of both lines.
2. For each line, what follows the equals sign (=) is an IP address expressed as a set of four space-separated numbers (an IP address without the dots [.]). Change the SubnetMask value to match the subnet mask that your network uses. Change the IPAddress value to match the IP address that you want to assign to the node, during the recovery operation. Do not insert dots between the numbers (octets) in either value.
As an example, here is how the lines would look for a node using IP address 192.168.1.200 and subnet mask 255.255.255.0:
set SubnetMask=255 255 255 0
set IPAddress=192 168 1 200
If you later want to reconfigure the recovery diskette to use DHCP to obtain an IP address instead of static IP addressing, you must reinsert rem in front of the SubnetMask and IPAddress lines to disable static IP addressing, as follows (based on the previous example):
REM set SubnetMask=255 255 255 0
REM set IPAddress=192 168 1 200
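If it is helpful, the space-separated form can be produced from a normal dotted-decimal address simply by replacing the dots with spaces. The following minimal Python fragment is an illustration only (it is not part of the recovery diskette) and shows that conversion:

# Illustration only: convert dotted-decimal values to the space-separated
# form that net_sets.bat expects.
def to_net_sets_format(dotted):
    return dotted.replace(".", " ")

print("set IPAddress=" + to_net_sets_format("192.168.1.200"))    # set IPAddress=192 168 1 200
print("set SubnetMask=" + to_net_sets_format("255.255.255.0"))   # set SubnetMask=255 255 255 0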
Restoring the system drive using the PSM recovery diskette
To restore the system drive from a backup image created through the PSM Disaster Recovery panel as described above, you must use a PSM recovery diskette created through the Disaster Recovery panel. If you did not create a PSM recovery diskette, you must use the Recovery CD-ROM as described in Chapter 9, “Using the Recovery and Supplementary CD-ROMs” on page 111 to restore the system drive to its original (factory) configuration.
To restore the system drive:
1. Set the write-protection tab of the PSM recovery diskette to the write-protect position. This prevents accidental initiation of the recovery process (by booting the node with the PSM recovery diskette in the diskette drive).
2. Insert the PSM recovery diskette in the diskette drive of the node, and restart the node.
3. The recovery process begins. The PSM recovery diskette software locates the first backup image it can find, based on the backup locations specified when the diskette was created. When it locates a backup image, it begins restoring the system drive from the image. During the restore operation, the hard disk drive LED (on the front right of the node’s hard disk drive) will flash green or stay nearly solid green; this indicates write activity to the system volume.
Note: If the hard-disk drive LED stays off for at least 10 minutes since you
restarted the node, then there is a problem with the recovery procedure and it will not be able to restore the system volume from a backup image. Should this occur, you will need to restore the system drive as described in Chapter 9, “Using the Recovery and Supplementary CD-ROMs” on page 111.
4. When the restore operation completes, the hard disk drive LED turns off, and a short song will play periodically (every 15 seconds). Remove the diskette, set the write-protection tab back to the write-enabled position, and reinsert the diskette. The log file RESULTS.HTM will be written to the diskette; this log file can be viewed with any Web browser to examine the results of the restore operation.
5. When the log file is written, another song will play (continuously). Remove the diskette and restart the node. If the restore was successful, the node will come back up in the state it was in at the time when you created the backup image used for the recovery operation.
Note: The persistent image that was created on the system drive (named
System Backup) by the backup process is restored by the restore process as it is preserved in the backup image. It is recommended that you now delete that persistent image as it is no longer needed. On the Persistent Images panel, select the persistent image named System Backup on drive C: from the list of persistent images, then click Delete, then click OK on the Delete Persistent Image panel that appears.
If the restore was unsuccessful, then you must use the Recovery CD-ROM as described in Chapter 9, “Using the Recovery and Supplementary CD-ROMs” on page 111.
Rebuilding the maintenance partition
If this is a new hard drive or if the Maintenance (D:) partition is unusable, you must rebuild the Maintenance partition by performing the following steps:
1. Start Disk Management on the node. You can do this in one of two ways:
v Start a Terminal Services session to the node, then click the IBM NAS
Admin icon, and then from the IBM NAS Administration console that appears, select Computer Management, then Disk Management.
v Start a Windows 2000 for NAS user interface session to the node, then select
Disks and Volumes, then select Disks, and then provide your administrator user name and password when prompted.
2. In the Disk Management window, right-click the unallocated area of Disk 0, and then click Create Partition.
3. In the Create Partition wizard, click Next and select Primary Partition.
4. Click Next and select D: as the drive letter.
5. Click Next and select FAT32 as the file system and change the drive label to Maintenance.
6. Click Finish to close the wizard.
The partition will then be formatted. When formatting is complete, the status of the partition should appear as Healthy, and the other properties should appear as:
v Name: Maintenance
v Drive letter: D:
v File system: FAT32
Granting user access to persistent image files
You can give end-users access to files in the persistent images. For example, this would be helpful to a user who has accidentally corrupted a file and needs to get an uncorrupted copy of that file.
To enable end-user access to persistent image files:
1. Go into Terminal Services.
2. Click the My Computer icon.
3. Select the volume on which you want to enable persistent image access.
4. Go into the persistent images directory and right-click the selected persistent image mount point, select Sharing, and then specify sharing as appropriate. If you want to enable the same access to all persistent images on the volume, right-click the persistent images directory (from the top level of the volume), select Sharing, and then specify sharing as appropriate.
PSM notes
Note: The share settings are maintained in a persistent image. Therefore, granting
access to all end-users only permits those users to access files and directories within the persistent image that they had permission to access originally on the actual drive.
v You can take and keep a maximum of 250 persistent images at one time. These
can be taken on local drives, or drives on the external storage that are logically local.
On various panels, such as the New Persistent Image Schedule panel, the Keep the last: field indicates the number of persistent images. The total number of persistent images that you enter in these fields does not override the maximum number of persistent images that you set in the Global Settings panel. For example, if the maximum number of persistent images is 10, and you enter numbers in other fields that add up to greater than 10, only 10 persistent images will be taken.
v You cannot take a persistent image of the maintenance drive (D:). Hence, you
will not see it as a choice in either the New Persistent Image Schedule panel or the Create Persistent Image panel. Do not take a persistent image of the clustering Quorum disk. See “Recovering from a corrupted Quorum drive” on page 52 for information on how to recover from a corrupted Quorum drive.
v PSM stores the cache file for each drive on the drive itself. The first persistent image created on a particular drive will require a significant amount of time because the PSM cache file must be created (pre-allocated) for that drive.
The time required for creation depends on the configured size of the cache file (15 percent of the total drive size by default). Creation takes roughly three to four minutes per gigabyte. For example, a 10-GB cache file would require 30 to 40 minutes to create. You should create a persistent image for a drive before scheduling any persistent images for that drive, to build the cache file. You can then delete the persistent image that you just created if you do not need to keep it.
After the creation of the first persistent image on a volume, future persistent images on that volume will complete faster.
v The default size of the cache file per drive is 15 percent of the total drive capacity.
In most cases, that should be sufficient. However, it might not be enough to maintain the number of persistent images that you want to keep concurrently on the drive, given the amount of file-write activity to the drive. (A worked example of these cache defaults follows these notes.) PSM automatically takes action to prevent the cache file from overflowing, because if that occurred, PSM would be forced to automatically delete all persistent images on the drive (when it cannot keep track of changes made to the drive, it cannot maintain a valid persistent image).
PSM takes the following actions as the cache file usage approaches a full condition:
– When the cache file usage exceeds the warning threshold (configured in the
PSM Volumes panel for the drive; the default value is 80 percent), PSM generates a warning message to the system event log (viewable through the Windows 2000 Event Viewer in the IBM NAS Admin console), and to the alert log in the Microsoft Windows 2000 for Network Attached Storage user interface. The name of the source for the message is psman5. Additionally, while the cache file usage is above the warning threshold, PSM prohibits any attempt to create a new persistent image, and logs error messages (to the system log and alert log). The text of the error message that is logged in the system event log (from psman5) is “A persistent image could not be created due to error 0xe000102b”.
– When the cache file usage exceeds the automatic deletion threshold (also
configured in the PSM Volumes panel for the drive; the default value is 90 percent), PSM automatically selects a persistent image on the volume and deletes it to reduce the cache file usage. It selects the persistent image with the lowest retention value (as described in “Persistent Images” on page 64). If more than one persistent image has the same (lowest) retention value, then the oldest image will be selected for deletion. If this deletion does not reduce the cache file usage below the automatic deletion threshold, then it will continue to select and delete persistent images until the cache file usage is reduced below the automatic deletion threshold. For each deletion, PSM generates an error message to the system event log and to the Windows 2000 for Network Attached Storage alert log indicating that a persistent image was deleted.
You should periodically check the system event log or Windows 2000 for Network Attached Storage alert log to ensure that the cache file usage is not consistently high, forcing existing persistent images to be deleted and preventing new persistent images from being created. If the cache file usage is high, you can increase the size of the cache file using the PSM Volumes page. However, because dynamic cache file resizing is not supported in this release, you must delete all persistent images currently on that volume first.
v When a shared volume performs a failover operation from one engine in the NAS Gateway 300 to the other engine, the persistent images for that volume move with the volume. The Persistent Images panel on a particular engine displays only those persistent images that are on volumes the engine owns at that point in time. If persistent images are scheduled for a volume on a particular engine, a scheduled persistent image is created only as long as that engine owns the volume at the time the scheduled persistent image is to occur.
To ensure that a scheduled persistent image will take place regardless of which engine owns the volume, you must do the following:
1. Use the Schedules panel to create the schedule on the engine that currently owns the volume.
2. Use the Cluster Administrator to move the disk group that contains the volume to the other engine. You can create or edit a schedule only for a volume on the engine that currently owns the volume. If an engine does not own the volume, you cannot select the volume when creating a new schedule through the New Persistent Image Schedule panel (under Schedules).
3. Use the Schedules panel on the other engine to create the same schedule that you created on the original engine, with all of the same parameters (start time, frequency, number to keep, and so on).
4. Use the Cluster Administrator to move the disk group that contains the volume back to the original engine.
v Volume restore of the system volume (C: drive) is not supported. If you attempt
to restore a persistent image containing the system volume, the restore operation will not take place.
v Volume restore of a data volume might require a reboot of the node. You will be
notified by the Restore Persistent Images panel whether a reboot is required after a restore operation is initiated.
v When you restart the NAS Gateway 300 (“restart” in this case means that with
both nodes down, the node that was shut down last is restarted first so that it initially owns all of the shared data volumes), Persistent Storage Manager (PSM) takes two actions:
1. Loading
2. Mapping
During loading, PSM loads existing persistent images from the cache files on each of the volumes. The loading time depends on the amount of cache data there is to read. Cached data is used by PSM to maintain the persistent images, and the more cache data there is, the longer it takes to load the persistent images, and thus the longer it might take the NAS Gateway 300 to become fully operational after a restart.
During mapping, PSM makes the loaded persistent images accessible through the file system by mounting each of them as a virtual volume underneath the persistent images directory on the real volume for which the persistent image was created. Mapping takes place five minutes after the real volume has been mounted. The mapping time varies with the number of persistent images, as well as the size of the volume.
As an example, suppose that on your NAS Gateway 300, you defined a 1 TB volume with 50 percent of the volume allocated to the cache (500 GB cache), and that you had 20 persistent images on the volume, using 100 GB (20 percent) of the cache (based on the write activity to the volume since the first persistent image was created). You would observe an increase in the startup time of roughly 3 minutes, 20 seconds over what it would be without any persistent images on the volume. Then, once the NAS Gateway 300 has become fully operational, all 20 persistent images would become accessible within another 18 minutes (including the five minutes that PSM waits after the volume comes up to begin the mapping).
When a volume is moved between nodes during a failover operation, then PSM must perform persistent image loading and mapping on the node to which the volume is moving, just as it does when the “first node” is restarted.
In the failover scenario, loading must take place before the volume can be brought online on the node (when the clustered disk resource is shown as being Online in Cluster Administrator). Then, as in the restart case, mapping begins five minutes after the volume comes online.
Microsoft Cluster Server, which controls the disk resource failover, waits a certain period, called the pending timeout, for the disk to come online. (During the loading phase, the disk resource is shown as being in Online Pending state.) With a default value of 180 seconds (3 minutes) for the pending timeout, this interval might be exceeded because of the time it takes to load the persistent images on the volume. If this occurs, Cluster Server might mark the disk as Failed, making it unavailable to either NAS Gateway 300 node. Other dependent resources (IP addresses, network names, file shares, and so on) might also fail.
For this reason, it is recommended that you increase the pending timeout value for all clustered resources to 1200 seconds (20 minutes). To do this, open Cluster Administrator, select Resources from the left pane to display all clustered resources in the right pane, and then for each resource listed in the right pane:
1. Right-click the resource name and select Properties.
2. Select the Advanced tab.
3. Change the Pending timeout value to 1200 (seconds).
4. Click Apply; then click OK.
v PSM imposes a limit of 1 terabyte (TB) of cached data, across all volumes on the
NAS Gateway 300. For this reason, you should ensure that the total configured size of all cache files on the NAS Gateway 300 is not greater than 1 TB.
You can do this by accessing Persistent Storage Manager, then going to the Volume Settings page, and making sure that the total of all values in the Cache Size column is 1 TB or less. (You can access Persistent Storage Manager through the Persistent Storage Manager link on the IBM NAS Admin console on the NAS Gateway 300 desktop, or by starting the Windows 2000 for Network Attached Storage user interface and then selecting Disks, then Persistent Storage Manager.)
If the total is greater than 1 TB, you should reduce the size of the cache on one or more of the volumes by selecting the volume from the list, then clicking Configure, and then selecting a smaller value from the “Cache size” drop-down list and clicking OK.
Note: You cannot change the size of the cache on a volume that has persistent
images. You must delete all persistent images on the volume before changing the cache size. You should try to reduce the cache size on a volume that has no persistent images, if possible, before deleting any persistent images.
If more than 1 TB of cache is configured on the NAS Gateway 300, the following can occur (note that a volume for which a persistent image has never been created is considered to have a cache size of zero, regardless of how large its cache is configured to be):
– When the NAS Gateway 300 is restarted, PSM prevents a volume from being mounted on the file system (prevents it from being accessible) if that volume’s PSM cache would increase the total size of all cache files (on all volumes mounted to that point) above 1 TB, and an error message is written to the system event log. The event source is psman5, and the text of the error message is:
There is insufficient memory available.
– When a volume is failed over between nodes, then PSM running on the “new”
node will behave as it would if the volume were being mounted during a restart: if that volume’s PSM cache would increase the total size of all cache files on that node above 1 TB, then PSM blocks the mount and writes the “insufficient memory available” error message to the system event log. (This will also cause the failover to fail, which means that either the volume will try to come online on the “original” node if it is up, or just simply fail to come online at all.)
– If you increase the size of any cache such that the total cache size of all
volumes on the NAS Gateway 300 becomes greater than 1 TB, and if you do not restart the NAS Gateway 300 after you change the cache size, then no persistent images can be created on the volume for which the cache size increase was made. An attempt to create a persistent image on that volume will cause an error message to be written to the system event log. The event source is psman5, and the text of the error message is:
There is insufficient memory available.
v If you delete the last persistent image on a volume, and then immediately
attempt to create a new persistent image on that volume, the creation of the new persistent image might fail, and an error message will be written to the system event log.
The event source is psman5, and the text of the error message is:
A persistent image could not be created due to error 0xc0000043.
This message is generated because when PSM is reinitializing the PSM cache file on a particular volume (after you delete the last persistent image on that volume), a new persistent image cannot be created. If this error occurs, wait for a few minutes, and then try to create the persistent image again.
v If you use the Windows Powered Disk Defragmenter to attempt to defragment a
volume containing persistent images, the volume will not be defragmented. If you select the volume and click the Defragment button, the Disk Defragmenter will run on the volume and then indicate that the volume was successfully defragmented. However, the Analysis display will appear the same as it did before you clicked Defragment, which indicates that defragmentation did not take place. You can defragment volumes without persistent images.
v PSM uses several system-level files, one of which has a command-line interface. Use of this interface is supported only for IBM-provided applications and services, and for debugging efforts assisted by IBM support technicians. All PSM function, including sophisticated scheduling and automation of remote management, is provided by the Windows 2000 for NAS Web-based GUI.
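The cache defaults described in these notes (a default cache size of 15 percent of the drive, a warning threshold of 80 percent, an automatic-deletion threshold of 90 percent, roughly three to four minutes of cache-file creation time per gigabyte, and a 1-TB limit on the total configured cache) can be summarized with a short worked example. The following Python fragment is an illustration of the arithmetic only; it is not a PSM interface, the creation-time figures are the rough estimates given above, and the volume sizes used in the call are hypothetical.

# Illustration only: worked example of the PSM cache defaults described above.
ONE_TB_GB = 1024  # 1 TB expressed in GB, for the total-cache limit

def cache_plan(volume_sizes_gb, cache_pct=0.15, warn_pct=0.80, delete_pct=0.90):
    total_cache_gb = 0.0
    for size_gb in volume_sizes_gb:
        cache_gb = size_gb * cache_pct          # default cache: 15% of the drive
        total_cache_gb += cache_gb
        print(f"{size_gb:7.0f}-GB volume: cache {cache_gb:6.1f} GB, "
              f"warning at {cache_gb * warn_pct:6.1f} GB, "
              f"auto-delete at {cache_gb * delete_pct:6.1f} GB, "
              f"first image roughly {cache_gb * 3:.0f} to {cache_gb * 4:.0f} minutes")
    if total_cache_gb > ONE_TB_GB:
        print(f"Total cache of {total_cache_gb:.0f} GB exceeds the 1-TB PSM limit; "
              f"reduce the cache size on one or more volumes.")
    else:
        print(f"Total cache of {total_cache_gb:.0f} GB is within the 1-TB PSM limit.")

cache_plan([67, 500, 2048])   # hypothetical volume sizes, in GB

For a drive with a 10-GB cache file, for example, this arithmetic places the warning at 8 GB of cache usage and automatic deletion at 9 GB, consistent with the thresholds described in the notes above.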
Attention: The recovery process invalidates persistent images and leaves them in an inconsistent state. So, if you plan to use the Recovery CD-ROM, it is recommended that you first delete all persistent images to ensure a clean reload of the system software. For more information on using the Recovery CD-ROM, see Chapter 9, “Using the Recovery and Supplementary CD-ROMs” on page 111.
Storage Manager for SAK
The NAS Gateway 300 includes Storage Manager for SAK, a storage management tool that includes the following functions:
v Storage reports
v Directory quotas
v File screening
Storage reports address disk usage, wasted storage space, file ownership, security, and administration. Reports can be run interactively, scheduled to run on a regular basis, or run as part of a storage resource management policy when disk-space utilization reaches a critical level.
Directory quotas allow the administrator to add, delete, monitor, and change disk-space limits for selected directories on the NAS appliance. Directory quotas provide disk-space monitoring and control in real time and support active and passive limits with two real-time space alarms.
File screening allows you to block files of specified types, such as MP3 files, graphics files, VBS viruses, and executables, from being written to the NAS appliance.
Uninterruptible power supply support
The NAS Gateway 300 includes support for uninterruptible power supplies (UPS). UPS devices provide emergency backup power for a specific period of time when the local power fails. This power comes from batteries housed within the UPS. High-performance surge suppression helps protect your appliance from electrical noise and damaging power surges. During a power failure, the UPS is designed to instantly switch your appliance to emergency battery-backup power. After you have installed a UPS for your appliance, you can set options for its operation using the UPS task on the Maintenance page. The UPS task enables you to control how the UPS service works on your appliance. The available UPS settings depend on the specific UPS hardware installed on your system. Before you use your UPS device, type the following information on the UPS Configuration page:
v UPS device manufacturer
v UPS device model
v The serial port to which the UPS device is connected
To configure the UPS service, click UPS on the Maintenance page.
To help protect your server appliance from power failures, test it by simulating a power failure: disconnect the main power supply from the UPS device. Do not perform this test during production use. Your appliance and the peripherals connected to the UPS device should remain operational, messages should be displayed, and events should be logged. Wait until the UPS battery reaches a low level to ensure that a proper shutdown occurs. Then restore the main power to the UPS device, and check the event log to verify that all actions were logged and that there were no errors. All detected power fluctuations and power failures are recorded in the event log, along with UPS service start failures and appliance shutdown initiations. Critical events might change the status of the appliance.
Tivoli SANergy
Note: The NAS Gateway 300 is enabled for SANergy use. Although the SANergy
component is included in the product, you will need to obtain additional licenses from Tivoli to use the SANergy client with this appliance.
Tivoli SANergy allows you to deliver shared data access at the speed of a SAN, using Fibre Channel, SCSI, or SSA. It gives multiple computers the power to dynamically share file and data access on SAN-based storage, using standard networks and file systems.
SANergy combines LAN-based file sharing with the very high data transfer speeds of the Fibre Channel, SCSI, and SSA storage networks. The result is high-speed, heterogeneous data sharing without the performance-limiting bottlenecks of file servers and traditional networking protocols.
SANergy extends standard file systems and network services provided by the operating systems that it supports. As an operating system extension built on standard systems interfaces, SANergy fully supports the user interface, management, access control, and security features native to the host platforms. Therefore, it provides you with all the file system management, access control, and security you expect in your network.
With SANergy, applications that you have configured into your network can access any file at any time, and multiple systems can transparently share common data. SANergy ensures maximum compatibility with existing and future operating systems, applications, and management utilities.
In addition to the SAN, SANergy also uses a standard LAN for all the metadata associated with file transfers. Because SANergy is based on standard file systems, if your SAN fails, you can continue to access your data through the LAN.
With SANergy, you can reduce or even eliminate the expense of redundant storage and the overhead of data synchronization in multi-host environments. These environments include large-scale Web, video, or file servers. Because each system has direct access to the SAN-based storage, SANergy can eliminate your file server as a single point of failure for mission-critical enterprise applications, reducing costly downtime. Also, with SANergy, you can readily manage all your data backup traffic over the storage network, while users have unimpeded LAN access to your existing file servers.
To set up SANergy on your network, use the following steps:
1. Make sure your network is correctly configured for LAN and SAN.
2. Configure your storage system, including disk formatting, partitioning, and volume configuration.
3. Enable SANergy bus management and device assignments. The Meta Data Controller will then be enabled for the appliance operating system.
4. Install additional SANergy licenses on properly configured hardware clients.
Further details on SANergy are contained in the online help.
Antivirus protection
You can perform antivirus scanning of NAS Gateway 300 storage from clients having the appropriate access permissions. Also, you can install Norton AntiVirus Version 7.5 or later on the NAS Gateway 300 engine using standard Windows 2000 software installation procedures.
Depending on configuration options, antivirus scanning might use substantial CPU or disk resources. Therefore, you should carefully select the scanning options and schedule.
Chapter 7. Managing adapters and controllers
This chapter describes the functions that you can use to manage various adapters and controllers installed in the NAS Gateway 300.
The following functions are available:
v “Managing Fibre Channel host bus adapters”, accessed through the IBM NAS Admin
v “Enabling communication between system management adapters” on page 78
Managing Fibre Channel host bus adapters
The FAStT MSJ diagnostic utility allows you to manage and control Fibre Channel host bus adapters. With FAStT MSJ, you can:
v Retrieve and display general information about the adapters
v Request and display the real-time statistics of adapters
v Diagnose operations on the adapters and attached devices
v Display the NVRAM parameters of adapters (note that you cannot change the
parameters)
v Monitor alarms and indications of the adapters
The primary purpose of FAStT MSJ in the NAS Gateway 300 is to obtain diagnostic information about the Fibre Channel connections.
To use FAStT MSJ:
1. Start FAStT MSJ by double-clicking the IBM NAS Admin icon.
2. Under the NAS Management icon, double-click Storage, and then NAS Utilities.
3. Select FAStT MSJ.
4. When the FAStT MSJ opens:
v If you are connected locally with a monitor, keyboard, and mouse, select localhost; then click Connect.
v If you are connected through Terminal Services, type the host name or IP address of the machine that you are connected to through Terminal Services; then click Connect.
For further details on FAStT MSJ, see the online help.
Appendix E, “Fast!UTIL options” on page 155 provides detailed configuration information for advanced users who want to customize the configuration of the FAStT Host Adapter board and the connected devices, using Fast!UTIL to make changes.
Enabling communication between system management adapters
The two types of system management adapters are:
v The Integrated System Management Processor (ISMP) integrated on the planar
board of each engine of the NAS Gateway 300
Provides basic operational status about key engine components, such as its processors, power supplies, fans, and so on.
v An optional Remote Supervisor Adapter (RSA) that can connect to up to twelve
of the ISMPs
The RSA allows you to connect through a LAN or modem from virtually anywhere for extensive remote management. The RSA works in conjunction with the ISMP of the NAS Gateway 300 and an interconnect cable that connects multiple engines to the ISMP. An Ethernet connection provides remote connectivity and flexibility with LAN capability. Along with ANSI terminal, Telnet, and IBM Director support, the RSA enables more flexible management through a Web browser interface.
For more information, see “Using the RSA” on page 80.
Table 6 provides a summary of the features of the ISMP and the RSA.
The light-path diagnostics LED status that is available through the ISMP includes:
v Power-supply failure
v Insufficient power for power-supply redundancy
v Exceeded power-supply capabilities
v Non-maskable interrupt occurred
v Overheating
v Fan failure
v Memory error
v Microprocessor failure
v PCI-bus error
v VRM failure
v Planar SCSI failure for system disk or internal tape drive (if any)
Remote status includes information on power supply voltages, voltage-regulator module (VRM) readings, temperatures of system components, system power status, power-on hours, fan status, and system state.
Table 6. ISMP compared to the RSA

Feature                                            ISMP                 RSA
Location                                           On planar board      Separate PCI adapter option
Light-path diagnostics                             Remotely reports on  Remotely reports on
LED status of engine                               Remotely reports on  Remotely reports on
LED status of HDD in engine                        No                   No
Remote update of system BIOS                       Yes                  Yes
Remote update of ISMP BIOS                         No                   Yes
Immediate remote power on/off                      Yes                  Yes
Controlled remote power on/off using the OS        No                   Yes
Remote POST (including all POST message IDs)       No                   Yes
Remote access to engine vital product data (VPD)   No                   Yes
  and serial number
Multiple login IDs                                 No                   Yes
TELNET interface over IP                           No                   Yes (through a LAN connection)
Web-browser interface over IP                      No                   Yes
Forwarding of SNMP traps                           Yes, to the RSA      Yes (through a LAN connection)
Automated server restart                           Yes                  Yes
Remote Alerts                                      No                   Yes
Configuration                                      By DOS utility       By DOS utility/serial ports
Aggregate from other ISMP processors               No                   Yes

Note: A third type of system management adapter might be referred to in some of the documentation that came with your system, but that adapter is not used in the NAS Gateway 300.
Enabling ISMP to RSA communication on a single machine
You must follow one of two methods to enable communication between the ISMP and the RSA on a single machine:
v Using a single ISMP interconnect cable (with dual RJ-11 plugs):
1. Connect one end of the internal ISMP interconnect cable to the J-54 connector on the system board.
2. Connect the other end (the RJ-11 socket) of the internal ISMP interconnect cable to the knockout slot on the back panel of the machine until it locks into place.
3. Connect one connector on the ISMP interconnect cable to the RJ-11 socket that you just installed on the back panel (in step 2).
4. Connect the other connector to the RJ-11 socket on the RSA.
v Using two ISMP interconnect cables (each with a single RJ-11 plug):
1. Connect one end of the internal ISMP interconnect cable to the J-54 connector on the system board.
2. Connect the other end (with the RJ-11 socket) of the internal ISMP interconnect cable to the knockout slot on the back panel of the machine until it locks into place.
3. Connect the first ISMP interconnect cable to the RJ-11 socket that you just installed on the back panel (in step 2).
4. Connect the second ISMP interconnect cable to the RJ-11 socket on the RSA.
5. Connect the two ISMP interconnect cables with a single Category 5 Ethernet cable (by plugging one end of the Ethernet cable into the “black box” on the first ISMP interconnect cable, and the other end into the “black box” on the second ISMP interconnect cable).
Using the RSA
The documentation CD-ROM that came with your system contains additional information and software for the RSA.
To use the RSA, complete the following steps:
1. Consult the RSA user’s manual and the README file that is located on the documentation CD-ROM.
2. Run the executable to create a bootable floppy disk. The executable is located in:
C:\IBM\ASMP\UPDATES\33P2474.EXE
3. Boot each node of the NAS Gateway 300 with the floppy disk created in the previous step to configure the RSA.
Enabling Ethernet adapter teaming
This section describes how to enable adapter teaming on the Ethernet adapters.
Note: The integrated Ethernet controller on each NAS Gateway 300 node is dedicated to the clustering interconnection between it and the other node, and cannot be used for teaming.
The Ethernet adapters that you install in the PCI slots of the NAS Gateway 300 nodes support adapter teaming (also known as load balancing). With adapter teaming, two or more PCI Ethernet adapters can be physically connected to the same IP subnetwork and then logically combined into an adapter team.
The NAS Gateway 300 uses Ethernet adapters from two different vendors: Intel (the Intel PRO/1000 XT Server Adapter and the IBM Gigabit Ethernet SX Server Adapter) and Alacritech (the Alacritech 1000x1 Single-Port Server and Storage Accelerated adapter and the Alacritech 100x4 Quad-Port Server Accelerated Adapter). Each vendor offers different means of implementing teaming, and teaming cannot be done between adapters from different vendors. There might also be restrictions on which functions are supported with adapters from the same vendor.
Note: It is strongly recommended that you configure adapter teaming before you
set up Microsoft Cluster Server (MSCS) clustering, as described in Chapter 5, “Completing networking, clustering, and storage access setup” on page 33. Additionally, for each team that you configure on one node, you must configure an identical team (same type of team, same set of adapters, and so on) on the other node.
Alacritech Ethernet adapter teaming
Alacritech uses SLIC (Session-Layer Interface Card) technology, which incorporates hardware assistance for TCP processing. Most, but not all, of the processing overhead for TCP/IP is removed from the NAS engine. This is an optional feature and can be disabled if required.
Alacritech offers four methods of teaming:
Cisco Fast EtherChannel (Fast EtherChannel and Gigabit EtherChannel compatible)
Fast EtherChannel (FEC) is a proprietary technology developed by Cisco. With FEC, you can create a team of two to four ports on an adapter to increase transmission and reception throughput. FEC might also be referred to as load balancing, port aggregation, or trunking. When you configure this feature, the adapter ports comprising the FEC team or group create a single high-speed, fault-tolerant link between the engine and the Ethernet switch, sharing one IP address. With FEC, fault tolerance and load balancing are provided for both outbound and inbound traffic, unlike other load-balancing schemes that balance only outbound traffic. Fast EtherChannel and Gigabit EtherChannel (FEC/GEC) require a FEC/GEC-compatible switch. The same teaming must also be enabled on the connected switch ports.
Note: FEC requires an Ethernet switch with FEC capability. The FEC
implementation on the Alacritech 100x4 Quad-Port Server Accelerated Adapter does not support the optional Port Aggregation Protocol (PAgP) feature of FEC-capable Ethernet switches. Likewise, the FEC/GEC implementation on the Alacritech 1000x1 Single-Port Server and Storage Accelerated adapter does not support the optional PAgP feature of FEC/GEC-capable Ethernet switches.
The following are the valid teaming configurations and restrictions for Cisco EtherChannel teaming with Alacritech adapters:
v Two Alacritech 1000x1 Single-Port Server and Storage Accelerated
adapters can be teamed together.
v No Alacritech 1000x1 Single-Port Server and Storage Accelerated
adapters can be teamed with any port of the Alacritech 100x4 Quad-Port Server Accelerated Adapters.
v One Alacritech 100x4 Quad-Port Server Accelerated Adapter can have
two or more of its ports teamed.
v Two Alacritech 100x4 Quad-Port Server Accelerated Adapters can have
any ports on any card teamed (limited to four ports per team). For example, two ports on one card can be teamed with two ports on a second card.
IEEE 802.3ad Link Aggregation Group
802.3ad is an IEEE industry-standard similar to the Cisco FEC/GEC.
802.3ad requires an Ethernet switch with 802.3ad capability. Alacritech does not support the optional Port Aggregation Protocol (PAgP) feature of some FEC switches or the 802.3ad LACP protocol. PAgP/LACP facilitates the automatic creation of link aggregation groups. All EtherChannel and Link Aggregation groups must be manually configured.
The following are the valid teaming configurations and restrictions for IEEE
802.3ad teaming with Alacritech adapters:
v Two Alacritech 1000x1 Single-Port Server and Storage Accelerated
adapters can be teamed together.
v No Alacritech 1000x1 Single-Port Server and Storage Accelerated
adapters can be teamed with any port of the Alacritech 100x4 Quad-Port Server Accelerated Adapters.
v One Alacritech 100x4 Quad-Port Server Accelerated Adapter can have
two or more of its ports teamed.
v Two Alacritech 100x4 Quad-Port Server Accelerated Adapters can have
any ports on any card teamed (limited to four ports per team). For example, two ports on one card can be teamed with two ports on a second card.
Send-Only Load Balancing
This is an inexpensive way to do load balancing when using an Ethernet switch that does not support FEC or 802.3ad. However, if TCP/IP acceleration is used with this method, all ports that are teamed must be on the same physical adapter. There is no load balancing when receiving. The following are the valid teaming configurations and restrictions for send-only load balancing:
v Two Alacritech 1000x1 Single-Port Server and Storage Accelerated
adapters can be teamed together; however, acceleration is disabled.
v No Alacritech 1000x1 Single-Port Server and Storage Accelerated
adapters can be teamed with any port of the Alacritech 100x4 Quad-Port Server Accelerated Adapters.
v One Alacritech 100x4 Quad-Port Server Accelerated Adapter can have
two or more of its ports teamed, with acceleration enabled.
v Two Alacritech 100x4 Quad-Port Server Accelerated Adapters can have
any ports on any card teamed (limited to four ports per team). For example, two ports on one card can be teamed with two ports on a second card. Acceleration will be disabled because the ports are on different adapter cards.
Hot Standby Failover
This technique does no load balancing but does allow failover and redundancy. One port is put online while the remaining ports in the team are offline. If the link for the online port fails, that port is taken offline and one of the other ports takes its place. It is not required that the ports in the team be on the same adapter. It is also not required that they be the same speed, although that is recommended. The following are valid teaming configurations:
v Two Alacritech 1000x1 Single-Port Server and Storage Accelerated
adapters can be teamed together.
v No Alacritech 1000x1 Single-Port Server and Storage Accelerated
adapters can be teamed with any port of the Alacritech 100x4 Quad-Port Server Accelerated Adapters.
v One Alacritech 100x4 Quad-Port Server Accelerated Adapter can have
two or more of its ports teamed.
v Two Alacritech 100x4 Quad-Port Server Accelerated Adapters can have
any ports on any card teamed (limited to four ports per team). For example, two ports on one card can be teamed with two ports on a second card.
To configure adapter teaming with the Alacritech adapters, perform the following steps:
1. Click Control Panel.
2. Click Network and Dial-Up.
3. Click Adapter.
4. Click Properties.
5. Click Alacritech SLIC Team Configurator.
6. Click New Team.
Intel Ethernet adapter teaming
Intel offers five teaming modes:
Adapter Fault Tolerance (AFT)
Adapter Fault Tolerance (AFT) is similar to Hot-Standby Failover for the Alacritech adapters. Only one adapter in the team is fully active on the Ethernet network (for example, sending and receiving data) at any point in time, while the other adapters are in standby mode (receiving data only). If that adapter detects a link failure or fails completely, another adapter in the team automatically and rapidly takes over as the active adapter, and all Ethernet traffic being handled by the failing adapter is seamlessly switched to the new active adapter, with no interruption to network sessions (for example, file transfers) in progress at the time of the failover.
An AFT team consists of between two and eight ports. In the NAS Gateway 300, the maximum number of ports is four, because all Intel adapters are single port and the total number of network cards is four. All adapters in the team should be connected to the same hub or switch with Spanning-Tree Protocol (STP) set to OFF. The team members can be different speeds or different adapters.
The following are valid teaming configurations for AFT with Intel adapters:
v Two IBM Gigabit Ethernet SX Server Adapters
v Two Intel PRO/1000 XT Server Adapters
v One or two IBM Gigabit Ethernet SX Server Adapters with one or two Intel PRO/1000 XT Server Adapters
Switch Fault Tolerance (SFT)
Switch Fault Tolerance (SFT) uses two adapters connected to two switches to provide the availability of a second switch and adapter if the first switch, adapter, or cabling fails. STP must be set to ON. The following are valid teaming configurations for Intel SFT:
v Two IBM Gigabit Ethernet SX Server Adapters
v Two Intel PRO/1000 XT Server Adapters
v One IBM Gigabit Ethernet SX Server Adapter with one Intel PRO/1000 XT Server Adapter
Adapter Load Balancing (ALB)
Adapter Load Balancing (ALB) is similar to Send-Only Load Balancing for Alacritech adapters. All adapters in the team are active, increasing the total transmission throughput over the common IP subnetwork. If any adapter in the team fails (link failure or complete failure), the other adapters in the team continue to share the network transmission load, although total throughput is decreased. Load balancing is supported only for adapter teams consisting of only one type of adapter; different types of adapters cannot be combined in a load-balancing team.
Two to eight ports from any Intel adapters are combined in a team that allows increased network bandwidth when sending. AFT is also included. When receiving, only the port identified as primary receives data. There are no special switch requirements.
The following are valid teaming configurations for ALB with Intel adapters:
v Two IBM Gigabit Ethernet SX Server Adapters
v Two Intel PRO/1000 XT Server Adapters
v One or two IBM Gigabit Ethernet SX Server Adapters with one or two Intel PRO/1000 XT Server Adapters
Cisco Fast Etherchannel (FEC/GEC compatible)
FEC is a proprietary technology developed by Cisco. With FEC, you can create a team of two to four ports on an adapter to increase transmission and reception throughput. FEC might also be referred to as load balancing, port aggregation, or trunking. When you configure this feature, the adapter ports comprising the FEC team or group create a single high-speed, fault-tolerant link between the engine and the Ethernet switch, sharing one IP address. With FEC, fault tolerance and load balancing are provided for both outbound and inbound traffic, unlike other load-balancing schemes that balance only outbound traffic. FEC/GEC requires a FEC/GEC-compatible switch. The same teaming must also be enabled on the connected switch ports.
The following are valid teaming configurations for Cisco FEC/GEC with Intel adapters:
v Two IBM Gigabit Ethernet SX Server Adapters
v Two Intel PRO/1000 XT Server Adapters
v One or two IBM Gigabit Ethernet SX Server Adapters with one or two Intel PRO/1000 XT Server Adapters
IEEE 802.3ad Link Aggregation Group
802.3ad is an IEEE industry-standard similar to the Cisco FEC/Gigabit Etherchannel (GEC). 802.3ad requires an Ethernet switch with 802.3ad capability. PAgP/LACP facilitates the automatic creation of link aggregation groups. All EtherChannel/Link Aggregation groups must be manually configured.
For the Intel adapters, there are two implementations of the standard. Static is equivalent to Etherchannel and requires a FEC/GEC, 802.3ad or Intel Link Aggregation capable switch. Dynamic requires 802.3ad dynamic capable switches.
The following are valid teaming configurations for IEEE 802.3ad with Intel adapters:
v Two IBM Gigabit Ethernet SX Server Adapters
v Two Intel PRO/1000 XT Server Adapters
v One or two IBM Gigabit Ethernet SX Server Adapters with one or two Intel PRO/1000 XT Server Adapters
To configure adapter teaming with the Intel adapters, use Intel PROSet II, which is preloaded on the NAS Gateway 300, as follows:
1. Physically connect the adapters that you want to team to the same IP subnetwork.
2. Access the NAS Gateway 300 desktop by directly attaching a keyboard, mouse, and monitor, or over the network by starting Terminal Services on another workstation (see “Terminal Services and the IBM NAS Administration console” on page 15).
3. From the NAS Gateway 300 desktop, click Start, then Settings, then Control Panel.
4. Double-click the Intel PROSet II icon in the Control Panel to start Intel PROSet II. You will see a list of all adapters for each slot and type supported under Network Components.
5. Under Network Components, you will see a list of resident and nonresident adapters for each slot and type supported. Drivers are preset for all supported adapter configurations but will be loaded only for resident adapters.
6. Identify which adapters you are going to team. Under Network Components, left-click one of the adapters that will be part of the team.
7. Right-click the adapter, then select Add to Team, then Create New Team....
8. Select the type of team to create.
9. Select the adapters to add to the team from the list, and then click Next.
10. Verify that these settings are correct, and then click Finish.
11. Perform Steps 1 through 10 for the other node.
This procedure creates a device named Intel Advanced Network Services Virtual Adapter. It also binds to this virtual adapter all network protocols that were bound to the physical adapters added to the team, and unbinds those protocols from the physical adapters. If you delete the team, the settings return to the state they were in before the team was created.
For complete help on adapter teaming, from Intel PROSet II, click Network Components, and then select Help from the Help menu.

RAID-1 mirroring
The NAS Gateway 300 hardware has a RAID-1 mirroring option using the onboard SCSI adapter. The System and Maintenance partitions are mirrored using two 36-GB hard drives to provide increased reliability and failover capability. This feature provides physical mirroring of the boot volume through firmware, providing extra reliability for the system’s boot volume without burdening the host CPU.
To enable RAID-1 mirroring:
1. Power OFF the appliance (see “Shutting down and powering on the NAS Gateway 300” on page 87).
2. Attach a monitor, keyboard, and mouse to the first engine.
3. Ensure that there are two hard disk drives in the appliance engine.
4. Power ON the appliance.
5. When the LSI Logic BIOS starts and displays Press CTRL-C to start LSI Logic Configuration Utility, press CTRL and C.
6. Press Enter to select channel 1.
7. Select Mirroring Properties and press Enter.
8. Press the space bar to change No to Primary in the column labeled Mirrored Pair.
9. Press Esc.
10. Select Save changes then exit this menu and press Enter. The drives will begin to synchronize.
11. Press Esc.
12. Select Exit the Configuration Utility and press Enter.
13. The engine will reboot automatically.
14. Repeat this process for the other engine.
Memory notes
The following sections contain information on adding memory.
Adding more engine memory to increase performance
You can enhance the performance of the NAS Gateway 300 in an NFS environment by adding more processor memory (RAM) to each engine. To do this:
1. Purchase either of the 5187 memory field-upgrade feature codes from your IBM representative:
0301 1 GB memory upgrade
0302 2 GB memory upgrade
2. Follow the instructions in Chapter 3, section “Replacing memory modules,” of the Installation Guide.
3. Before rebooting the appliance, attach a keyboard and display directly to the rear connectors of the product. During the first IPL, you will have to read and answer questions about the additional memory you have installed.
Using the Recovery CD-ROM if you have added more processor memory
If you have installed more processor memory, and later use the Recovery CD-ROM (see Chapter 9, “Using the Recovery and Supplementary CD-ROMs” on page 111), you will have to attach a keyboard and display and answer questions about the additional memory that you have installed.