Dell FE100, FE200 User Manual

CHAPTER 2
Installation Overview
This chapter provides an overview of installing a PowerEdge Cluster FE100/FL100 or FE200/FL200 configuration with the Windows NT Server, Enterprise Edition 4.0 or Windows 2000 Advanced Server operating system. More detailed instructions are provided later in this document.
WARNING: Hardware installation should be performed only by trained service technicians. Before working inside the system, see the safety instructions in your PowerEdge System Information document to avoid a situation that could cause serious injury or death.

Windows NT 4.0 Cluster Installation Overview

This section provides an overview sequence for installing Windows NT 4.0 on a PowerEdge Cluster FE100/FL100 or FE200/FL200. Specific steps are provided throughout this document.
NOTE: If you are installing a cluster configuration with multiple clusters attached to a single PowerVault Fibre Channel storage system, see the cluster consolidation basic installation procedure in the Dell PowerEdge Cluster F-Series SAN Guide.
1. Add network interface controllers (NICs), host bus adapters (HBAs), redundant array of independent disks (RAID) controllers (optional), small computer system interface (SCSI) hard-disk drives, Fibre Channel hard-disk drives, and other components to the existing system hardware to meet the requirements for a PowerEdge Cluster FE100/FL100 or FE200/FL200 configuration.
NOTE: For more information on upgrading existing non-clustered systems to a cluster configuration, see Chapter 9, "Upgrading to a Cluster Configuration."
2. Cable the system hardware for clustering.
If you are using Dell PowerVault Fibre Channel switches, see the Dell PowerEdge Cluster F-Series SAN Guide for more information.
3. If you are using hardware-based RAID for the internal SCSI hard-disk drives, configure them using the controller's basic input/output system (BIOS) utility.
4. Perform the low-level configuration of the HBAs.
5. Install and configure the Windows NT 4.0 Server, Enterprise Edition operating system on each node.
6. Configure the public and private NIC interconnects in each node, and place the interconnects on separate Internet protocol (IP) subnetworks using static IP addresses.
NOTES: Public refers to the NIC used for client connections. Private refers to the dedicated cluster interconnect.
If you are using Giganet cluster local area network (cLAN) Host Adapters or a Giganet cLAN Cluster Switch, see Using Giganet for Cluster Interconnect (PowerEdge Cluster FL100/FL200) in Chapter 4 for more information.
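As a sketch of the separate-subnet requirement in step 6, the following Python snippet checks a hypothetical addressing plan. The addresses below are assumptions chosen for illustration, not values prescribed by this guide; any two non-overlapping private subnets work.

```python
import ipaddress

# Hypothetical addressing plan (illustrative assumption, not from this guide):
# public NICs on 192.168.1.0/24, private interconnect on 10.0.0.0/24.
public_a  = ipaddress.ip_interface("192.168.1.1/24")   # node A, client-facing NIC
public_b  = ipaddress.ip_interface("192.168.1.2/24")   # node B, client-facing NIC
private_a = ipaddress.ip_interface("10.0.0.1/24")      # node A, cluster interconnect
private_b = ipaddress.ip_interface("10.0.0.2/24")      # node B, cluster interconnect

# Both nodes' public NICs share one subnet, both private NICs share another,
# and the public and private interconnects sit on separate subnets.
assert public_a.network == public_b.network
assert private_a.network == private_b.network
assert public_a.network != private_a.network
print("subnet plan OK")
```

On the cluster nodes themselves, these static addresses would be entered in the TCP/IP properties of each NIC rather than scripted.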
7. Install the device driver for the integrated video controller.
8. Install Windows NT Service Pack 6a or later.
NOTE: See the Dell PowerEdge Cluster FE100/FL100 and FE200/FL200 Platform Guide for more information on the latest supported service pack.
9. Install the miniport driver for the Fibre Channel HBAs in each node.
10. Install the QLogic Fibre Channel Configuration software.
11. Install the storage management software and failover driver that is appropriate for your storage system.
For the PowerEdge Cluster FE100/FL100, perform the following steps:
a. Install Dell OpenManage Application Transparent Failover (ATF) software on each node and reboot.
b. Install Dell OpenManage Managed Node (Data Agent) on each node.
c. Install Dell OpenManage Data Supervisor or Dell OpenManage Data Administrator on node A.
For the PowerEdge Cluster FE200/FL200, perform the following steps:
a. Install QLDirect on each node and reboot.
b. Set the failover path in the QLogic Fibre Channel Configuration software on each node.
c. Reboot each node.
d. Install Dell OpenManage Array Manager on each node.
A dialog box appears, asking you if the PowerVault 660F will be used in an MSCS cluster without storage consolidation. Select Yes.
e. Reboot node A.
12. Shut down node B.
13. From node A, configure the RAID level on the storage system.
14. Reboot node A.
15. From node A, partition, format, and assign drive letters to the Fibre Channel hard-disk drives in the storage system using Windows NT Disk Administrator.
16. On node A, verify disk access and functionality on all shared disks.
17. Shut down node A, power on node B, and verify disk access and functionality on node B.
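Verifying disk access in steps 16 and 17 amounts to confirming that each shared disk can be written and read from the active node. A minimal sketch of such a check follows; the helper name is hypothetical and the manual itself prescribes no script (the verification can equally be done by hand in Explorer or at a command prompt).

```python
import os
import tempfile

def verify_disk_access(mount_path):
    """Write, read back, and delete a small test file to confirm
    read/write access to a disk (hypothetical helper for illustration)."""
    test_file = os.path.join(mount_path, "cluster_disk_test.txt")
    with open(test_file, "w") as f:
        f.write("cluster disk test")
    with open(test_file) as f:
        ok = f.read() == "cluster disk test"
    os.remove(test_file)
    return ok

# On a cluster node you would pass each shared drive letter in turn,
# e.g. verify_disk_access("F:\\"); a temporary directory stands in here.
print(verify_disk_access(tempfile.gettempdir()))
```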
18. Shut down node B, power on node A, and install and configure the Cluster Service software on node A.
NOTICE: To avoid damage to the system, do not reboot the nodes until you reinstall the service pack.
NOTE: If you reinstall MSCS, you must reinstall the Windows NT service pack.
19. Reinstall Windows NT Service Pack 6a or later.
20. Reboot node A and verify cluster functionality.
21. Power on node B and install the Cluster Service software on node B.
22. Reinstall Windows NT Service Pack 6a or later.
23. Reboot node B and verify cluster functionality.
24. Install and set up your application programs.
25. Install the Dell OpenManage Cluster Assistant With ClusterX software from the management console (optional).
26. Record the configuration of the cluster using the data sheets in Appendix B, "Cluster Data Sheets."

Windows 2000 Cluster Installation Overview

The following section provides an overview sequence for installing the Windows 2000 Advanced Server operating system and the cluster management software on a PowerEdge Cluster FE100/FL100 or FE200/FL200. Specific installation and configuration information is provided throughout this document.
NOTE: If you are installing a cluster configuration with multiple clusters attached to a single PowerVault Fibre Channel storage system, see the cluster consolidation basic installation procedure in the Dell PowerEdge Cluster F-Series SAN Guide.
1. Add NICs, HBAs, RAID controllers (optional), SCSI hard-disk drives, Fibre Channel hard-disk drives, and other components to the existing system hardware to meet the requirements for a PowerEdge Cluster F-Series configuration.
NOTE: For more information on upgrading existing non-clustered systems to a cluster configuration, see Chapter 9, "Upgrading to a Cluster Configuration."
2. Cable the system hardware for clustering.
NOTE: If you are using Dell PowerVault Fibre Channel switches, see the Dell PowerEdge Cluster F-Series SAN Guide for more information.
3. If you are using hardware-based RAID for the internal SCSI hard-disk drives, configure them using the controller's BIOS utility.
4. Perform the low-level configuration of the HBAs.
5. Install and configure the Microsoft Windows 2000 Advanced Server operating system on each node and the latest Windows 2000 Service Pack.
NOTE: See the Dell PowerEdge Cluster FE100/FL100 and FE200/FL200 Platform Guide for more information on the latest supported service pack.
6. During the installation, select the option to install the Cluster Service files when prompted.
You will configure the Cluster Service later.
7. Configure the public and private NIC interconnects in each node, and place the interconnects on separate IP subnetworks using static IP addresses.
NOTES: Public refers to the NIC used for client connections. Private refers to the dedicated cluster interconnect.
If you are using Giganet cLAN Host Adapters or a Giganet cLAN Cluster Switch, see Using Giganet for Cluster Interconnect (PowerEdge Cluster FL100/FL200) in Chapter 4 for more information.
8. Update the miniport driver for the Fibre Channel HBAs in each node.
9. Install the QLogic Fibre Channel Configuration software.
10. Install the management software and failover driver that is appropriate for your storage system.
For the PowerEdge Cluster FE100/FL100, perform the following steps:
a. Install Dell OpenManage ATF software on each node and reboot.
b. Install Dell OpenManage Managed Node (Data Agent) on each node.
c. Install Dell OpenManage Data Supervisor or Dell OpenManage Data Administrator on node A.
For the PowerEdge Cluster FE200/FL200, perform the following steps:
a. Install QLDirect on each node and reboot.
b. Set the failover path in the QLogic Fibre Channel Configuration software on each node.
c. Reboot each node.
d. Install Dell OpenManage Array Manager on each node.
A dialog box appears, asking you if the PowerVault 660F will be used in an MSCS cluster without storage consolidation. Select Yes.
e. Reboot node A.
11. Shut down node B.
12. From node A, configure the RAID level on the storage system.
13. Reboot node A.
14. From node A, partition, format, and assign drive letters to the Fibre Channel hard-disk drives in the storage system using Windows 2000 Disk Management or Dell OpenManage Array Manager.
15. On node A, verify disk access and functionality on all shared disks.
16. Shut down node A, power on node B, and verify disk access and functionality on node B.
17. Shut down node B, power on node A, and install and configure the Cluster Service software on node A.
18. Reboot node A and verify cluster functionality.
19. Power on node B and install the Cluster Service software on node B.
20. Reboot node B and verify cluster functionality.
21. Install and set up your application programs.
22. Install the Dell OpenManage Cluster Assistant With ClusterX software from the management console (optional).
23. Record the configuration of the cluster using the data sheets in Appendix B, "Cluster Data Sheets."