Dell™ PowerEdge™ Cluster FE100/FL100 Datacenter Server
USER’S GUIDE
www.dell.com
support.dell.com
Notes, Notices, Cautions, and Warnings
Throughout this guide, blocks of text may be accompanied by an icon and printed in
bold type or in italic type. These blocks are notes, notices, cautions, and warnings,
and they are used as follows:
NOTE: A NOTE indicates important information that helps you make better use of
your computer system.
NOTICE: A NOTICE indicates either potential damage to hardware or loss
of data and tells you how to avoid the problem.
CAUTION: A CAUTION indicates a potentially hazardous situation which, if
not avoided, may result in minor or moderate injury.
WARNING: A WARNING indicates a potentially hazardous situation which,
if not avoided, could result in death or serious bodily injury.
Reproduction in any manner whatsoever without the written permission of Dell Computer Corporation is strictly forbidden.
Trademarks used in this text: Dell, the DELL logo, PowerEdge, PowerVault, and Dell OpenManage
are trademarks of Dell Computer Corporation; ClusterX is a registered trademark and VERITAS is
a trademark of VERITAS Corporation; Microsoft, Windows NT, and Windows are registered trademarks of Microsoft Corporation; Intel and Pentium are registered trademarks of Intel Corporation;
Computer Associates and ARCserve are registered trademarks of Computer Associates International, Inc.
Other trademarks and trade names may be used in this document to refer to either the entities
claiming the marks and names or their products. Dell Computer Corporation disclaims any proprietary interest in trademarks and trade names other than its own.
April 2001 P/N 9E926 Rev. A00
Preface
This guide provides information about the Dell PowerEdge Cluster FE100/FL100
Datacenter Server solution. This information includes procedures for installing, configuring, and troubleshooting the hardware and software components of PowerEdge
Cluster FE100/FL100 Datacenter Server configurations.
The chapters and appendixes in this guide are summarized as follows:
• Chapter 1, “Getting Started,” provides an overview of PowerEdge Cluster FE100/FL100 Datacenter Server.
• Chapter 2, “Installation Overview,” provides an overview of the installation process for PowerEdge Cluster FE100/FL100 Datacenter Server configurations. Procedures are located throughout this guide.
• Chapter 3, “Preparing PowerEdge and PowerVault Systems for Clustering,” provides information on adding peripherals and Fibre Channel host bus adapters.
• Chapter 4, “Cabling the Cluster Hardware,” provides instructions for properly cabling the system hardware components.
• Chapter 5, “Configuring Storage Systems (Low-Level Configuration),” provides information for configuring software and external shared storage.
• Chapter 6, “Configuring the System Software,” provides the software configuration options necessary for setting up the cluster.
• Chapter 7, “Installing Cluster Management and Systems Management Software,” provides information on software tools for configuring and administering a cluster, including how to run Cluster Administrator locally on a cluster node and how to install it on a remote console.
• Chapter 8, “Upgrading Your PowerEdge System to a Cluster Configuration,” provides procedures for upgrading your noncluster system to a PowerEdge Cluster FE100/FL100 Datacenter Server configuration if the components of the cluster hardware are already present.
• Chapter 9, “Maintaining the Cluster,” provides information on maintaining PowerEdge Cluster FE100/FL100 Datacenter Server configurations.
• Chapter 10, “SAN Components,” provides detailed information on storage area network (SAN)-attached cluster configurations for the Dell PowerEdge Cluster FE100/FL100 Datacenter Server configuration.
• Appendix A, “Troubleshooting,” provides information to help you troubleshoot problems with installing and configuring clusters.
• Appendix B, “Cluster Data Sheets,” provides worksheets on which to record your specific configurations.
Warranty and Return Policy Information
Dell Computer Corporation (“Dell”) manufactures its hardware products from parts and
components that are new or equivalent to new in accordance with industry-standard
practices. See your Dell PowerEdge System Information document for complete warranty information for your system.
Other Documents You May Need
You may need to refer to the following documentation when performing the procedures in this guide:
•The Dell PowerVault SAN documentation for more information on storage area
networks.
•The User's Guide for your system, which describes system features and technical
specifications, small computer system interface (SCSI) device drivers, the
System Setup program, software support, and the system configuration utility.
•The Installation and Service Guide for your PowerVault storage system, which
provides installation and operation instructions for the PowerVault 65xF storage
system.
•The Dell OpenManage Data Agent Installation and Operation Guide, the Dell
OpenManage Data Supervisor Installation and Operation Guide, the Dell
OpenManage Data Administrator Installation and Operation Guide, and the Dell
OpenManage ATF Installation and Operation Guide, which provide installation
instructions for the Dell OpenManage Data Agent, Dell OpenManage Data Supervisor, Dell OpenManage Data Administrator, and the Dell OpenManage ATF.
You may also have one or more of the following documents:
•The Rack Installation Guide for your PowerVault storage system and the rack
installation documentation for your Dell PowerEdge servers provide detailed
instructions for installing the cluster components in a rack.
•Dell OpenManage Cluster Assistant With ClusterX Getting Started Guide
provides installation and configuration instructions for the ClusterX software.
•The Hardware Installation Guide for the QLA2x00 and the Software Installation
Guide for the QLA2x00 provide installation instructions for the QLogic host bus
adapter.
•The Dell PowerVault Storage Area Network (SAN) Administrator’s Guide, which provides information for configuring and administering the SAN.
•The Microsoft Cluster Server Administrator's Guide for the Windows 2000 Cluster Service, which describes the clustering software used on the PowerEdge Cluster FE100/FL100 Datacenter Server.
•The Microsoft Windows 2000 Datacenter Server documentation describes how
to install (if necessary), configure, and use the Windows 2000 Datacenter Server
operating system.
Documentation is included with any options you purchase separately from the system. This documentation includes information that you need to configure and install
these options in the Dell computer.
Technical information files, sometimes called “readme” files, may be installed on the hard-disk drive to provide last-minute updates about technical changes to the system or advanced technical reference material intended for experienced users or technicians.
NOTE: Documentation updates are sometimes included with your system to describe
changes to your system or software. Always read these updates before consulting
any other documentation because the updates often contain the latest information.
Also consult Dell's Web site at http://support.dell.com for updates.
Typographical Conventions
The following list defines (where appropriate) and illustrates typographical conventions used as visual cues for specific elements of text throughout this document:
•Interface components are window titles, button and icon names, menu names
and selections, and other options that appear on the monitor screen or display.
They are presented in bold.
Example: Click OK.
•Keycaps are labels that appear on the keys on a keyboard. They are enclosed in
angle brackets.
Example: <Enter>
•Key combinations are series of keys to be pressed simultaneously (unless other-
wise indicated) to perform a single function.
Example: <Ctrl><Alt><Enter>
•Commands presented in lowercase bold are for reference purposes only and are
not intended to be typed when referenced.
Example: “Use the format command to . . . .”
•In contrast, commands presented in the Courier New font are part of an instruc-
tion and intended to be typed.
Example: “Type format a: to format the diskette in drive A.”
•Filenames and directory names are presented in lowercase bold.
Examples: autoexec.bat and c:\windows
•Syntax lines consist of a command and all its possible parameters. Commands
are presented in lowercase bold; variable parameters (those for which you substitute a value) are presented in lowercase italics; constant parameters are
presented in lowercase bold. The brackets indicate items that are optional.
Example: del [drive:] [path] filename [/p]
•Command lines consist of a command and may include one or more of the com-
mand's possible parameters. Command lines are presented in the Courier New
font.
Example: del c:\myfile.doc
•Screen text is a message or text that you are instructed to type as part of a com-
mand (referred to as a command line). Screen text is presented in the Courier
New font.
Example: The following message appears on your screen:
No boot device available
Example: “Type md c:\Programs and press <Enter>.”
•Variables are placeholders for which you substitute a value. They are presented in
italics.
Example: DIMM_x (where x represents the DIMM socket designation).
CHAPTER 1
Getting Started
This chapter provides an overview of the following information for the Dell™ PowerEdge™ Cluster FE100/FL100 Datacenter Server configuration:
• Microsoft® Windows® 2000 Datacenter Server operating system
• Configuration and operation
• Cluster identification
• Failover options
• Minimum system requirements
• Support configuration requirements
Overview of Microsoft Windows 2000
Datacenter Server
Windows 2000 Datacenter Server is geared specifically for organizations implementing and managing enterprise applications that require high availability and high
performance. Typical application usages include large data warehouses, server consolidation, online transaction processing (OLTP), and modeling/simulation.
Windows 2000 Datacenter Server provides all of the features included with Microsoft
Windows 2000 Advanced Server, along with the following additional features:
• Two-to-four-node clustering—provides high availability of applications through failover to another node in the cluster
• Support for up to 32-way symmetrical multiprocessing (SMP)—the operating system distributes the processing workload across up to 32 central processing units (CPUs)
• Support for up to 32 gigabytes (GB) of random access memory (RAM) on the physical address extension (PAE)-capable PowerEdge 8450 system—improves multiple application performance
• Winsock Direct—optimizes Internet protocol (IP) performance
• Process Control management tool—improves server resource management
NOTE: Since Datacenter Server is one of four operating systems in the
Windows 2000 platform, some of the core services incorporated within Datacenter
Server are common to all Windows 2000 platforms. In the following sections, “Windows 2000” is used to identify the services common to all Windows 2000 platforms
and “Windows 2000 Datacenter Server” is used to identify services and components
specific to the Datacenter Server platform.
Overview of a Dell PowerEdge Cluster
FE100/FL100 Datacenter Server
Configuration
The PowerEdge Cluster FE100/FL100 Datacenter Server is a cluster solution that
implements 2-node to 4-node clustering technology based on the Microsoft
Windows 2000 Cluster Service (MSCS) software incorporated within the Windows
2000 Datacenter Server operating system.
NOTE: In this guide and in other cluster documentation, MSCS is also referred to as
Cluster Service.
PowerEdge Cluster FE100/FL100 Datacenter Server solutions provide the following
benefits in meeting the needs of mission-critical network application programs:
•High availability of system services and resources to network clients
•Redundant storage for application program data
•Failure recovery for cluster application programs
•Flexible maintenance capabilities, allowing you to repair, maintain, or upgrade a
cluster node without taking the entire cluster offline
•Load balancing between the cluster nodes
A PowerEdge cluster consists of two or more PowerEdge 8450 server systems
(referred to as cluster nodes) that are interconnected with hardware and software
components, providing a single point of continuous access to network services
(including file systems, databases, enterprise resource planning [ERP], and other
application programs and services) for network clients. Each cluster node is configured with software, storage, and network resources that enable it to monitor and
interact with the other nodes to provide mutually redundant operation. Because the
cluster nodes interact in this way, they appear as a single system to the network
clients.
In a standard client/server environment, a user accesses a network resource by connecting to a physical server with a unique Internet Protocol (IP) address and network name. If the server fails for any reason, the user will no longer be able to access the resource. In a cluster environment, a user does not access a physical server, but a virtual server—a network resource that is managed by the Cluster Service, is not associated with a physical server, and can fail over to another cluster node. Virtual servers are designed to dynamically reconfigure user resources during a connection or hardware failure, providing higher availability of network resources as compared to a nonclustered PowerEdge system.
NOTE: For more information on failover, failback, and groups, see “Configuring
Failover and Failback Support” in Chapter 6, “Configuring the System Software.”
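As a sketch of how a virtual server is composed, the following commands show one way to build a virtual server group with the cluster.exe command-line utility included with Windows 2000 Datacenter Server. The group name, resource names, IP address, subnet mask, and network name below are hypothetical placeholders; the same resources can also be created with Cluster Administrator.

rem Create a group to act as the virtual server (example names only)
cluster group "Virtual Server 1" /create

rem Add an IP Address resource with a placeholder address on the public network
cluster resource "VS1 IP Address" /create /group:"Virtual Server 1" /type:"IP Address"
cluster resource "VS1 IP Address" /priv Address=192.168.1.50 SubnetMask=255.255.255.0 Network="Public"

rem Add a Network Name resource that depends on the IP Address resource
cluster resource "VS1 Network Name" /create /group:"Virtual Server 1" /type:"Network Name"
cluster resource "VS1 Network Name" /priv Name=VSERVER1
cluster resource "VS1 Network Name" /adddep:"VS1 IP Address"

rem Bring the virtual server online
cluster group "Virtual Server 1" /online

Clients then connect to the network name or IP address of the virtual server rather than to a particular cluster node, which is what allows the resource to fail over transparently.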
SAN-Attached Cluster Configuration
A PowerEdge Cluster FE100/FL100 Datacenter Server configuration is a SAN-attached
cluster configuration where all four cluster nodes are attached to a single
PowerVault™ storage system or to multiple PowerVault storage systems through a
Dell PowerVault SAN using a redundant Fibre Channel switch fabric.
NOTE: A PowerEdge Cluster FE100/FL100 Datacenter Server configuration cannot
coexist on the Fibre Channel switch fabric with other clusters or stand-alone servers.
A Fibre Channel switch fabric is an active, intelligent, and private connection of one or
more Fibre Channel switches that provide high-speed, point-to-point connections
between servers and storage devices. Using instructions that are programmed into
the switch, the switches in a Fibre Channel fabric provide point-to-point connection
through inbound and outbound points from one device (sender) to another device or
switch (receiver) on the network. If the data is sent to another switch, the process
repeats itself until a connection is established between the sender and the receiver.
One or more PowerVault Fibre Channel switches make up a Fibre Channel fabric.
Figure 1-1 shows an advanced SAN-attached cluster configuration.
[Figure: four PowerEdge servers connected to a LAN/WAN and a private network switch, and attached through redundant Fibre Channel switches to PowerVault storage systems]
Figure 1-1. SAN-Attached Cluster Configuration
PowerEdge Cluster FE100/FL100
Identification
The Dell PowerEdge Fibre Channel clusters are configured and identified by the
private network connection (cluster interconnect) that connects the cluster nodes
together— FE (Fibre Channel Ethernet) and FL (Fibre Channel Low Latency)—and the
type of storage devices in the cluster configuration.
Table 1-1 provides an overview of the differences between PowerEdge Cluster FE100
and FL100 Datacenter Server configurations.
MSCS and PowerEdge Clusters support multiple variations of Active^n/Active and Active^n/Passive configurations (where n = the number of active cluster nodes). Table 1-2 provides a description of the configuration types and their definitions.

Table 1-2. Active^n/Active and Active^n/Passive Configuration Types

Configuration Type             Number of Active    Definition
                               Cluster Nodes
Active/Active                  2                   The active node(s) process requests and
Active/Active/Active           3                   provide failover for each other, depending
Active/Active/Active/Active    4                   on cluster node resources and your
                                                   configuration.
Active/Passive                 1                   The active node(s) processes requests
Active/Active/Passive          2                   while the passive node waits for the
Active/Active/Active/Passive   3                   active node to fail.

An active^n/active configuration refers to a cluster with virtual servers running on each node. When an application is running on node 1, the remaining cluster nodes do not have to wait for node 1 to fail. The remaining cluster nodes can run their own cluster-aware applications (or another instance of the same application) while providing failover capabilities for the resources on node 1. However, an active^n/active cluster node must be configured appropriately to handle the workload of other cluster nodes in case one cluster node fails.

Active^n/passive refers to failover cluster configurations in which one cluster node is actively processing requests for a clustered application while another cluster node simply waits for the active node to fail. An active^n/passive configuration is more costly in terms of price and performance because one cluster node sits idle all of the time. However, this configuration is appropriate for business-critical systems since the application can use the full power of another cluster node in case one cluster node fails.

NOTE: For clarity, future references to active^n/active and active^n/passive configurations will use "n" to equal the number of active cluster nodes. For example, an active/active/active/active configuration consisting of four active cluster nodes will be referred to as an active^4 configuration.
Failover and Failback Support
One of the key features of Cluster Service is failover and failback support. Failover is the process of automatically moving resources from a failed cluster node to other healthy node(s) in the cluster. Failback is the process of returning the resources to the original cluster node. Both failover and failback can be automatic or manual processes, depending on how you configure the Cluster Service.
NOTE: For more information on failover, failback, and cluster groups, see “Configuring
Failover and Failback Support” in Chapter 6, “Configuring the System Software.”
PowerEdge Cluster FE100/FL100
Datacenter Server Failover Options
The PowerEdge FE100/FL100 Datacenter Server configuration provides the following
failover options:
•N + 1
•Multiway
•Cascading
•N-Way migration solution
The following subsections describe each of these failover options.
N+1 Failover
N+1 failover is an active^n/passive failover solution where one cluster node provides backup for multiple cluster nodes. This solution provides the following features:
Advantage:
• One cluster node is a dedicated backup node for all cluster nodes, providing the best solution for critical applications.
Disadvantages:
• High expense, as one node is not used to provide network resources.
• Backup cluster node may lack the resources to support multiple cluster node failures.
Figure 1-2 shows an example of an N+1 failover configuration.
[Figure: cluster nodes 1, 2, and 3, with cluster node 4 serving as the backup node]
Figure 1-2. N+1 Failover
Table 1-3 provides an N+1 failover configuration for the cluster shown in Figure 1-2. For each cluster resource group, the failover order in the Preferred Owners list provides the order in which you want that resource group to fail over. If that resource group or its cluster node fails, the cluster will try to fail over that resource group to the first available node in the list. In this example, node 1 owns cluster resource group A, node 2 owns cluster resource group B, and node 3 owns cluster resource group C. Cluster resource groups A, B, or C would fail over to cluster node 4 if cluster node 1, 2, or 3 fails, respectively.
Table 1-3. N+1 Failover Configuration for a 4-Node Cluster
Cluster Resource Group    Failover Order in the Preferred Owners List
A                         1, 4
B                         2, 4
C                         3, 4
NOTE: When implementing this type of failover solution, failback should be configured if the cluster node lacks the resources (such as memory or processing power) to
support one or more cluster node failures.
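As a sketch of how the preferred owner order in Table 1-3 could be applied, the cluster.exe utility accepts an ordered list of nodes for each resource group. The group and node names below are examples only; the same ordering can also be set on the group's Properties page in Cluster Administrator.

cluster group "Group A" /setowners:NODE1,NODE4
cluster group "Group B" /setowners:NODE2,NODE4
cluster group "Group C" /setowners:NODE3,NODE4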
Multiway Failover
Multiway failover is an active^n/active failover solution where running applications from a failed node migrate to multiple nodes in the cluster. This active^n/active type of failover provides the following features:
Advantage:
• Automatic failover and load balancing between the cluster nodes.
Disadvantage:
• Must ensure that the failover cluster nodes have ample resources available to handle the additional workload.
Figure 1-3 shows an example of a multiway failover configuration.
[Figure: applications A, B, and C on cluster node 1 fail over to cluster nodes 2, 4, and 3]
Figure 1-3. Example of a 4-Node Multiway Failover
Table 1-4 provides an example of a multiway failover configuration for the cluster shown in Figure 1-3. For each cluster resource group, the failover order in the Preferred Owners list outlines the order in which you want that resource group to fail over. In this example, node 1 owns cluster resource groups A, B, and C. If node 1 fails, cluster resource groups A, B, and C will fail over to cluster nodes 2, 4, and 3, respectively. The cluster resource groups on cluster nodes 2, 3, and 4 need to be configured similarly.
Table 1-4. Example of a 4-Node Multiway Failover Configuration

Cluster Resource Group    Failover Order in the Preferred Owners List
A                         1, 2, 3, 4
B                         1, 4, 2, 3
C                         1, 3, 4, 2
NOTE: When implementing this type of failover solution, failback should be configured to avoid performance degradation.
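As suggested in the note above, failback behavior is controlled per group. The sketch below assumes hypothetical group names and uses the AutoFailbackType, FailbackWindowStart, and FailbackWindowEnd group properties through cluster.exe; the same settings appear on the Failback tab of the group's Properties page in Cluster Administrator.

rem Allow group A to fail back automatically, but only between 1:00 and 3:00 to limit disruption
cluster group "Group A" /prop AutoFailbackType=1
cluster group "Group A" /prop FailbackWindowStart=1 FailbackWindowEnd=3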
Cascading Failover
With Cascading failover, all running applications migrate from the failed node to the
next preassigned cluster node. If you do not make a failover selection, cascading
failover will be the default failover type. This type of failover provides the following
features:
Advantage:
•High resource availability to users.
Disadvantage:
•The cluster node next in line for failover may not have ample resources available
to handle the additional workload of the failed node.
Figure 1-4 shows an example of a cascading failover configuration.
[Figure: applications from failed cluster node 1 migrate to the next preassigned node among cluster nodes 2, 3, and 4]
Figure 1-4. Example of a 4-Node Cascading Failover
N-Way Migration
N-Way migration is the ability to manually migrate an application from one node to any
node in the cluster (based on cluster node resource availability). This type of solution
provides the following features:
Advantages:
•Adjustable resource allocation.
•Added flexibility.
Disadvantage:
•Solution is not automatic.
Figure 1-5 shows an example of an N-Way migration solution.
[Figure: Application A is manually migrated from cluster node 1 to another node in the cluster]
Figure 1-5. Example of a 4-Node N-Way Migration Solution
Table 1-5 provides an overview of the failover types implemented with Datacenter Server.

Table 1-5. Failover Configurations

N+1
  Description: One server provides backup for multiple servers in the cluster
  Advantage: High resource availability
  Disadvantage: Backup cluster node may lack the resources to support multiple cluster node failures

Multiway
  Description: Running applications migrate to multiple nodes in the cluster
  Advantage: Application load balancing
  Disadvantage: Must ensure that the failover cluster nodes have ample resources available to handle the additional workload

Cascading
  Description: Running applications migrate to the next preassigned cluster node
  Advantage: Higher availability
  Disadvantage: The cluster node next in line for failover may not have ample resources available to handle the additional workload of the failed node

N-Way migration
  Description: Any running application(s) are manually migrated to any node in the cluster
  Advantage: Added flexibility and adjustable resource allocation
  Disadvantage: Solution is not automatic
PowerEdge Cluster FE100/FL100
Datacenter Server Minimum System
Requirements
Dell PowerEdge Cluster FE100/FL100 Datacenter Server configurations require the
following hardware and software components:
•Cluster nodes
•Cluster storage
•Cluster interconnects
•Operating system and system management software
Cluster Nodes
Cluster nodes require the following hardware resources:
•Two to four supported Dell PowerEdge systems, each with at least two
microprocessors.
• For each server, a minimum of 2 GB of random access memory (RAM) and two host bus adapters (HBAs).
• For each server, at least one network interface controller (NIC) is required for client local area network (LAN) connections (public network).
• One NIC is required for the cluster interconnect (private network).
Cluster Storage
Cluster storage requires the following:
•For each cluster, at least one supported PowerVault 65xF/630F Fibre Channel
storage system with dual standby power supplies (SPSs).
• For each PowerVault 65xF disk processor enclosure (DPE), at least ten hard-disk drives are required.
• Each PowerVault 630F disk array enclosure (DAE) attached to the cluster can contain additional hard-disk drives for the shared storage.

Cluster Interconnect
The cluster interconnect (the private network that connects the cluster nodes together) requires the following hardware components:
•For each server, either a Broadcom NetExtreme Gigabit Ethernet NIC or
Giganet cLAN 1000 adapter.
•For each server, one Category 5 (CAT 5) Ethernet cable or Giganet cLAN cable to
connect the nodes.
•For each cluster, a network switch or Giganet cLAN cluster switch to connect the
cluster nodes.
NOTE: If you have a two-node PowerEdge Cluster FE100/FL100 Datacenter Server configuration that will not be expanded to a three- or four-node cluster, a crossover cable or cLAN cable can be used to connect the nodes rather than a private network switch.
Tape library (optional): PowerVault 130T digital linear tape (DLT) library with driver version 1.0
NOTICE: Dell recommends that you use a redundant array of independent
disks (RAID) level other than RAID 0 for your PowerVault shared storage
system. RAID 0 does not provide the level of availability required for the
quorum resource. See the Installation and Troubleshooting Guide for your
PowerVault storage system for more information about setting up RAID levels for the system.
SAN-Attached Cluster Requirements
Table 1-8 provides the requirements for a SAN-attached cluster configuration.
Table 1-8. SAN-Attached Cluster Requirements

SAN version: SAN 3.0
HBA: QLogic QLA2200/66 with firmware version 1.45 and driver version 7.04.08.02
HBA failover driver: Dell OpenManage ATF version 2.3.2.5
Fibre Channel switch: PowerVault 51F Fibre Channel switch with firmware version 2.1.7; PowerVault 56F Fibre Channel switch with firmware version 2.1.7
Fibre Channel-to-SCSI bridge: Up to four PowerVault 35F Fibre Channel-to-SCSI bridges with firmware version d99908e
SAN tape backup: Up to two PowerVault 130T tape libraries for each PowerVault 35F bridge; PowerSuite; Computer Associates® ARCserve® 2000; VERITAS™ BackupExec 8.0 Build 3316
SAN configuration guidelines: Follow SAN 3.0 rules; a redundant switch fabric is required
CHAPTER 2
Installation Overview
This chapter provides an overview for installing and implementing Dell PowerEdge Cluster FE100/FL100 Datacenter Server configurations. More detailed instructions are provided later in this document.
NOTICE: Before installing the cluster, ensure that your site can handle the
power requirements of the cluster equipment. Contact your Dell sales representative for information about your region's power requirements.
WARNING: Hardware installation should be performed only by trained
service technicians. Before working inside the system, see the safety
instructions in your Dell PowerEdge System Information document to avoid
a situation that could cause serious injury or death.
This section provides an overview for installing Microsoft Windows 2000 Datacenter Server on the PowerEdge Cluster FE100/FL100 Datacenter Server.
To install Windows 2000 Datacenter Server on the PowerEdge Cluster FE100/FL100
Datacenter Server cluster, perform the following steps:
1. Add network interface controllers (NICs), host bus adapters (HBAs), redundant array of independent disks (RAID) controllers, small computer system interface (SCSI) hard-disk drives, Fibre Channel hard-disk drives, and other components to the existing system hardware to meet the requirements for a Dell PowerEdge Cluster FE100/FL100 Datacenter Server configuration.
2. Cable the system hardware for clustering.
3. Configure RAID for the internal SCSI hard-disk drives using the RAID controller’s basic input/output system (BIOS) utility.
4. Create the low-level configurations and configure the hardware settings for the host bus adapter (HBA), and ensure that the sum of the execution throttle values does not exceed 240 and that each value does not exceed 90.
NOTE: See the Dell PowerVault SAN Administrator’s Guide for critical information on performing low-level configurations and setting the execution throttle settings on your HBAs.
5. Install and configure the Windows 2000 Datacenter Server operating system with the latest Service Pack and hotfixes (if applicable) on each node.
6. During the installation, check the appropriate box to install the Cluster Service files when prompted.
NOTICE: Do not configure the Cluster Service in this step.
7. Configure the public and private networks in each node, and place each network on separate subnets with static Internet protocol (IP) addresses. (Sample commands for this step are shown after this procedure.)
NOTE: The public network refers to the NIC used for client connections. The private network refers to the cluster interconnect that connects the cluster nodes
together.
8. Update the miniport driver for the Fibre Channel HBAs in each node.
9. Install the QLogic Fibre Channel configuration software on each node and reboot.
10. Install Dell OpenManage Application Transparent Failover (ATF) on each node and reboot.
11. Install Dell OpenManage Managed Node (Data Agent) on each node.
12. Install Dell OpenManage Data Supervisor or Dell OpenManage Data Administrator on node 1.
13. Reboot node 1 and shut down all other nodes.
14. From node 1, configure the RAID level on the storage system and then reboot
node 1.
15. Run Microsoft Windows 2000 Disk Management and format and assign drive
letters to the Fibre Channel hard-disk drives in the shared storage system.
16. Verify disk access and functionality to all new hard-disk drives in the shared storage system.
17. Power off the node.
18. Repeat steps 15 through 17 on all other nodes for each shared volume.
NOTE: You do not have to reformat the disks.
19. Power on node 1 and configure the Cluster Service.
20. After the Cluster Service has started on node 1, power on all other cluster nodes
and then install Cluster Service on these nodes.
21. Verify the functionality of the cluster.
22. Configure the failover for each cluster node.
23. Install and set up your application programs.
24. Record the configuration of the cluster using the data sheets in Appendix B,
“Cluster Data Sheets.”
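The following sketch shows one way to assign the static IP addresses called for in step 7 from a command prompt, using the netsh utility included with Windows 2000. The connection names ("Public" and "Private") and the addresses and subnet masks are placeholders only; substitute the names and addresses appropriate for your networks, or use the Network and Dial-up Connections window to make the same settings.

rem Public (client LAN) connection -- example address and mask only
netsh interface ip set address name="Public" source=static addr=192.168.1.11 mask=255.255.255.0

rem Private (cluster interconnect) connection on a separate subnet
netsh interface ip set address name="Private" source=static addr=10.0.0.1 mask=255.255.0.0

Repeat the commands on each node, giving each node a unique address on each subnet.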