Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.
Trademarks used in this text: Dell, the DELL logo, Dell OpenManage, PowerEdge, and PowerVault are trademarks of Dell Inc.; Microsoft and
Windows are registered trademarks of Microsoft Corporation; EMC, Navisphere, and PowerPath are registered trademarks of EMC Corporation;
Access Logix, MirrorView, and SnapView are trademarks of EMC Corporation.
Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products.
Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.
September 2004    P/N 6W455    Rev. A05
This document provides information for installing and connecting peripheral hardware, storage,
and storage area network (SAN) components for your Dell™ PowerEdge™ Cluster FE400 solution.
The configuration information in this document is specific to the Microsoft® Windows® 2000
Advanced Server and Windows Server 2003, Enterprise Edition operating systems.
This document covers the following topics:
• Configuration information for installing peripheral hardware components, such as HBAs,
NICs, and PCI adapter cards into Cluster FE400 configurations
• Configuration rules and guidelines for direct-attached or SAN-attached configurations
• Best practices
NOTE: Configurations not listed in this document may not be certified or supported by Dell or Microsoft.
NOTE: In this guide and in other cluster documentation, the Microsoft Cluster Service (for Windows 2000
Advanced Server or Windows Server 2003, Enterprise Edition) is also referred to as MSCS.
Supported Cluster Configurations
This section provides information about supported cluster configurations for your
PowerEdge cluster configuration.
The Cluster FE400 solution supports the following configurations:
• Two-node clusters running the Microsoft® Windows® 2000 Advanced Server operating system.
• Clusters with up to eight nodes running the Windows Server 2003, Enterprise Edition
operating system.
Table 1-1 provides a list of supported cluster configurations for the Cluster FE400 systems running
Windows 2000 Advanced Server or Windows Server 2003, Enterprise Edition.
NOTE: Each cluster node must be of the same system model and have two or more processors.
Table 1-1. Supported Cluster Configurations

Supported Storage Systems    Supported Cluster Interconnect (for the Private Network)
Dell | EMC CX600             Any NIC supported by the system.
Dell | EMC CX400
Dell | EMC CX200

NOTE: All nodes in the same cluster must use homogeneous (identical) NICs for the
cluster interconnect.
Obtaining More Information
See the Dell PowerEdge Cluster FE400 Installation and Troubleshooting Guide for a detailed list of
related documentation.
High-Availability Cluster Configurations
This section provides information about the supported operating systems, HBAs, and HBA drivers
for your cluster configuration.
NOTICE: All cluster nodes in a Cluster FE400 solution must run the same operating system.
Mixing Windows 2000 Advanced Server and Windows Server 2003, Enterprise Edition in the same cluster
is not supported except during a rolling upgrade.
NOTICE: HBAs installed in clusters using redundant paths must be identical. Cluster configurations are
tested and certified using identical HBAs installed in all of the cluster nodes. Using dissimilar HBAs in
your cluster nodes is not supported.
Service Pack Support
Windows 2000 Advanced Server
Microsoft Windows 2000 Service Pack 4 or later is required for Cluster FE400 systems that use
Windows 2000 Advanced Server.
You can download the latest service pack from the Microsoft Support website at
support.microsoft.com.
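If you are unsure which service pack a node is running, the installed level is recorded in the
CSDVersion registry value. A minimal check, assuming the reg.exe command-line utility is
available (included with Windows Server 2003; part of the Support Tools on Windows 2000):

    rem Display the installed service pack level (for example, "Service Pack 4").
    reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v CSDVersion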
Windows Server 2003, Enterprise Edition
At the time this document was printed, a service pack was not available for Windows Server 2003,
Enterprise Edition. However, hotfix KB818877 and the most recent hotfix for storport.sys
are required.
See Knowledge Base article KB818877 on the Microsoft Support website at support.microsoft.com
for more information. At the time this document was printed, KB838894 was the most recent
hotfix for storport.sys. You can also go to the Dell Support website at support.dell.com for the most
recent information on this issue.
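To confirm that a required hotfix is present on a Windows Server 2003 node, the installed hotfix
list can be queried through WMI. A minimal sketch using the built-in wmic utility (available on
Windows Server 2003, not on Windows 2000):

    rem List installed hotfixes and look for the required IDs.
    wmic qfe get HotFixID | findstr /i "KB818877 KB838894"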
HBA Support for PowerEdge Cluster FE400 Configurations
Table 1-2 lists the systems and the HBAs that are supported for Cluster FE400 configurations
running Windows 2000 Advanced Server or Windows Server 2003, Enterprise Edition.
See "Installing Peripheral Components in Your Cluster Node PCI Slots" for PCI slot recommendations.
Table 1-2. Supported HBAs for Cluster FE400 Configurations

PowerEdge     Emulex LP982 or       QLogic QLA2340    Emulex LP1050-EX
System        LP9802 (PCI-X) HBA    (PCI-X) HBA       (PCI Express [PCIe]) HBA
1550          X                     X
1650          X                     X
1750          X                     X
1800          X                     X                 X
1850          X*                    X*                X**
2500          X                     X
2550          X
2600/2650     X                     X
2800          X                     X                 X
2850          X*                    X*                X**
4400          X                     X
4600          X                     X
6400/6450     X                     X
6600/6650     X                     X
8450          X                     X

* The PowerEdge system must have a PCI-X riser installed in order to use this HBA.
** The PowerEdge system must have a PCIe riser installed in order to use this HBA.
Fibre Channel Switches
• Dual (redundant) fabric configurations are required.
• A maximum of 16 switches may be used in a SAN.
• A minimum of two and a maximum of eight Inter-Switch Links (ISLs) may exist between any
two directly communicating switches. A single ISL is permitted only when connecting to a
remote switch in an EMC® MirrorView™ configuration.
• A maximum of three hops (the number of ISLs each data frame must traverse) may exist
between a host and a storage system.
Rules and Guidelines
When configuring your cluster, all cluster nodes must contain identical versions of the following:
• Operating systems and service packs
• Hardware, drivers, firmware, or BIOS for the NICs, HBAs, and any other peripheral
hardware components
• Systems management software, such as Dell OpenManage™ systems management software
and EMC Navisphere® storage management software
Maximum Distance Between Cluster Nodes
The maximum cable length from an HBA to a switch, from an HBA directly connected to a
storage system, or from a switch to a storage system is 300 meters using multimode fiber at 2 Gb/sec.
The total distance between an HBA and a storage system may be increased through the use of
switch ISLs.
The maximum cable length for Fast Ethernet and copper Gigabit Ethernet is 100 meters, and for
optical Gigabit Ethernet, it is 550 meters. This distance may be extended using switches and
VLAN technology. The maximum latency for a round-trip network packet between nodes is
500 milliseconds.
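A quick way to spot-check the round-trip requirement between two nodes is the standard ping
utility; a minimal sketch, where the node name is a placeholder for your peer node's
private-network name:

    rem Spot-check round-trip latency across the private network; the average
    rem round-trip time should be well under the 500-millisecond limit.
    ping -n 20 node2-private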
Obtaining More Information
See the Dell PowerEdge Cluster FE400 Installation and Troubleshooting Guide for installation
instructions for hardware configurations running Windows 2000 Advanced Server or
Windows Server 2003, Enterprise Edition.

Installing Peripheral Components in Your Cluster Node PCI Slots
This section provides configuration information for installing HBAs and NICs in your cluster node
PCI slots. Table 1-3 describes the PCI slot configurations for each supported PowerEdge system,
and Table 1-4 describes the corresponding PCI slot assignments.
CAUTION: Only trained service technicians are authorized to remove and access any of the components
inside the system. See your PowerEdge System Information Guide or Product Information Guide for
complete information about safety precautions, working inside the computer, and protecting against
electrostatic discharge.
Table 1-3. PCI Slot Configurations for PowerEdge Cluster Nodes

PowerEdge System   Riser Board Option   Slot    Slot Type      Slot Speed
1550               N/A                  1-2     PCI            64 bit, 66 MHz
1650               Any                  1       PCI            64 bit, 66 MHz or 32 bit, 33 MHz
                                        2       PCI            64 bit, 66 MHz
1750               Any                  1       PCI-X or PCI   64 bit, 133 MHz PCI-X or 64 bit, 33 MHz PCI
                                        2       PCI-X          64 bit, 133 MHz
1800               N/A                  1       PCI            64 bit, 66 MHz
                                        2       PCIe           2.5 GHz PCIe x4-lane width
                                        3       PCIe           2.5 GHz PCIe x8-lane width
                                        4       PCI            32 bit, 33 MHz
                                        5-6     PCI-X          64 bit, 100 MHz
1850               Standard             1       PCI-X          64 bit, 133 MHz
                                        2       PCI-X          64 bit, 100 MHz
                   PCI-X with ROMB      1       PCI-X          64 bit, 133 MHz
                                        2       PCI-X          64 bit, 100 MHz
                   PCIe with ROMB       1       PCIe           2.5 GHz PCIe x4-lane width
                                        2       PCIe           2.5 GHz PCIe x8-lane width
2500               N/A                  1-2     PCI            64 bit, 66 MHz
                                        3-5     PCI            64 bit, 33 MHz
                                        6-7     PCI            32 bit, 33 MHz
2550               N/A                  1-3     PCI            64 bit, 33 MHz
2600               N/A                  1       PCI            32 bit, 33 MHz
                                        2-5     PCI-X          64 bit, 100 MHz
                                        6-7     PCI-X          64 bit, 133 MHz
2650               N/A                  1       PCI-X          64 bit, 100 MHz
                                        2-3     PCI-X          64 bit, 133 MHz
                   NOTE: Slot 1 must be empty for Slot 2 to attain an operating speed of 133 MHz.
2800               N/A                  1       PCI            32 bit, 33 MHz
                                        2-5     PCI-X          64 bit, 133 MHz
                                        6       PCIe           2.5 GHz PCIe x4-lane width
                                        7       PCIe           2.5 GHz PCIe x8-lane width
2850               PCI-X                1-3     PCI-X          64 bit, 133 MHz
                   NOTE: If Slot 1 is populated, Slots 2 and 3 operate at 100 MHz.
                   PCIe                 1       PCIe           2.5 GHz PCIe x4-lane width
                                        2       PCIe           2.5 GHz PCIe x8-lane width
                                        3       PCI-X          64 bit, 100 MHz
4400               N/A                  1-2     PCI            64 bit, 66 MHz
                                        3-6     PCI            64 bit, 33 MHz
                                        7       PCI            32 bit, 33 MHz
4600               N/A                  1       PCI            32 bit, 33 MHz
                                        2-3     PCI-X          64 bit, 100 MHz
                                        4-5     PCI-X          64 bit, 100 MHz
                                        6-7     PCI-X          64 bit, 100 MHz
6400               N/A                  1       PCI            32 bit, 33 MHz
                                        2-5     PCI            64 bit, 33 MHz
                                        6-7     PCI            64 bit, 66 MHz
6450               N/A                  1       PCI            32 bit, 33 MHz
                                        2-5     PCI            64 bit, 33 MHz
                                        6-7     PCI            64 bit, 66 MHz
6600               N/A                  1       PCI            32 bit, 33 MHz
                                        2-3     PCI-X          64 bit, 100 MHz
                                        4-5     PCI-X          64 bit, 100 MHz
                                        6-7     PCI-X          64 bit, 100 MHz
                                        8-9     PCI-X          64 bit, 100 MHz
                                        10-11   PCI-X          64 bit, 100 MHz
6650               N/A                  1       PCI            32 bit, 33 MHz
                                        2-3     PCI-X          64 bit, 100 MHz
                                        4-5     PCI-X          64 bit, 100 MHz
                                        6       PCI-X          64 bit, 100 MHz
                                        7       PCI-X          64 bit, 100 MHz
                                        8       PCI-X          64 bit, 100 MHz
8450               N/A                  1-2     PCI            64 bit, 33 MHz
                                        3-6     PCI            64 bit, 33 MHz
                                        7-8     PCI            64 bit, 66 MHz
                                        9-10    PCI            64 bit, 66 MHz
Table 1-4. PCI Slot Assignments for PowerEdge Cluster Nodes

Depending on the system model, install the HBAs in any available PCI slots or, where the system
provides them, in any available PCI or PCI-X slots. See Table 1-3 for the slot types available in
each system.
NOTE: Whenever possible, it is recommended that the HBAs be placed on separate buses to balance the
load on the system. These buses are identified as separate rows in Table 1-3.
Attaching Your Cluster to a Shared Storage System Through
Direct-Attach Configuration
This section provides the rules and guidelines for attaching your cluster nodes to the shared storage
system using a direct connection (without Fibre Channel switches).
In a direct-attach configuration, both cluster nodes are connected directly to the storage system.
Rules and Guidelines
The rules and guidelines described in Table 1-5 apply to direct-attached clusters.
Table 1-5. Direct-Attached Clusters Rules and Guidelines

Primary storage: Each Windows 2000 Advanced Server and Windows Server 2003,
Enterprise Edition cluster can support up to 22 unique drive letters for shared logical drives.
Windows Server 2003 can support additional physical drives through mount points (see the
sketch after this table). Only one storage system can be directly attached to the cluster.

Fibre Channel HBAs supported: See Table 1-2 to determine which HBAs are supported on your
PowerEdge server.

Emulex driver version: SCSI port driver 5-2.22a8 or later (Windows 2000); Storport miniport
driver 5-1.02a3 or later (Windows Server 2003).

Emulex firmware version: 1.90a4 or later.

QLogic driver version: SCSI miniport driver 9.00.12 or later (Windows 2000); Storport miniport
driver 9.00.17 or later (Windows Server 2003).

QLogic BIOS version: 1.42 or later.

Operating system: Each direct-attached cluster must run Windows 2000 Advanced Server or
Windows Server 2003, Enterprise Edition.

Windows 2000 Advanced Server service pack: Windows 2000 Advanced Server configurations
require Service Pack 4 or later.

Windows Server 2003, Enterprise Edition service pack: Windows Server 2003 configurations
require KB818877 and the latest StorPort hotfix (KB838894 at the time of print), or
Service Pack 1 if available.

Dell | EMC CX600 core software: 2.07.600 or later; however, Access Logix™ Option 01.02.5 or
later must be installed and enabled if two clusters or a mix of clustered and non-clustered hosts
are direct-attached to the CX600.

Dell | EMC CX400 core software: 2.07.400 or later.

Dell | EMC CX200 core software: 2.07.200 or later.

Additional software application programs:
EMC Navisphere Agent 6.7 or later.
EMC Navisphere Manager 6.7 or later.
EMC PowerPath® 3.0.6 or later.
EMC AdmSnap version 2.4.0 or later.
EMC SnapView™ Option version 01.01.5 or later.
Emulex Configuration Utility for Windows 2000 version 1.41a13 or later.
Emulex LPUtilNT for Windows Server 2003 version 1.7a12 or later.
QLogic SANsurfer SANblade Manager version 2.0.29 or later.
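The mount-point support noted above under Primary storage can be exercised with the standard
Windows mountvol utility. A minimal sketch, assuming an existing clustered drive X: and a
volume GUID taken from mountvol's own listing (the folder name and GUID below are
placeholders):

    rem List the volume GUIDs known to this node (Windows Server 2003).
    mountvol

    rem Create an empty folder on an existing shared drive to act as the mount point.
    mkdir X:\Mounts\Data1

    rem Mount the additional shared volume at the folder instead of a drive letter.
    mountvol X:\Mounts\Data1 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\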
Attaching Your Cluster Shared Storage System to a SAN
This section provides the rules and guidelines for attaching your PowerEdge cluster nodes to the
shared storage systems through a Dell | EMC SAN using redundant Fibre Channel switch fabrics.
Rules and Guidelines
The rules and guidelines described in Table 1-6 apply to SAN-attached clusters.
Table 1-6. SAN-Attached Clusters Rules and Guidelines

Primary storage: Each Windows 2000 Advanced Server and Windows Server 2003,
Enterprise Edition cluster can support up to 22 unique drive letters for shared logical drives.
Windows Server 2003 can support additional physical drives through mount points.
Up to four Dell | EMC Fibre Channel disk arrays are supported per cluster in a SAN environment.

Secondary storage: Up to two PowerVault™ 132T, 136T, or 160T libraries. Any system attached
to the SAN can share these devices.

Fibre Channel switch configuration: See "Fibre Channel Switches" earlier in this document.

Fibre Channel switch zoning:

Fibre Channel switches supported:

Fibre Channel switch firmware:

Fibre Channel HBAs supported: See Table 1-2 to determine which HBAs are supported on your
PowerEdge server.

Emulex driver version: SCSI port driver 5-2.22a8 or later (Windows 2000); Storport miniport
driver 5-1.02a3 or later (Windows Server 2003).

Emulex firmware version: 1.90a4 or later.

QLogic driver version: SCSI miniport driver 9.00.12 or later (Windows 2000); Storport miniport
driver 9.00.17 or later (Windows Server 2003).

QLogic BIOS version: 1.42 or later.

Operating system: Each cluster attached to the SAN must run Windows 2000 Advanced Server
or Windows Server 2003, Enterprise Edition.
The TimeOutValue setting is the timeout value set by Windows for storage system I/O
operations. The Dell | EMC Fibre Channel storage environment requires 60 seconds for I/O
operations. When you run the Cluster Configuration wizard, the wizard sets the TimeOutValue
setting to 20 seconds, which may not be sufficient for complex I/O operations.
Consequently, storage system I/O operations may continually time out.

Microsoft has confirmed a problem with the wizard and has implemented hotfix KB818877 to
resolve this issue. See Microsoft Knowledge Base article KB818877 on the Microsoft Support
website at support.microsoft.com for more information.

To resolve this issue, read the Knowledge Base article for instructions about how to obtain the
required Quick Fix Executable (QFE) file from Microsoft. Download and apply the QFE as
soon as possible.

If you have not configured your cluster, apply the QFE (or Service Pack 1 when available) to all
of the cluster nodes.

If you have configured your cluster, perform one of the following procedures and then reboot
each cluster node, one at a time:
– Manually change the registry TimeOutValue setting to 60 on each cluster node.
– Download the Cluster Disk Timeout Fix utility from the Dell Support website at
support.dell.com and run the utility on your cluster. When prompted, type the name of your
cluster in the Cluster name field and select Dell | EMC in the Storage system type field.
The utility locates the cluster nodes associated with the cluster name and sets the
TimeOutValue setting on each node to the correct setting.
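For the manual procedure, the timeout lives in the registry under the Disk service key, the
standard Windows location for the disk-class I/O timeout. A minimal sketch using the reg.exe
utility (included with Windows Server 2003; part of the Support Tools on Windows 2000), to be
run on each cluster node before the reboot described above:

    rem Set the Windows disk I/O timeout to 60 seconds.
    reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f

    rem Verify the new value before rebooting the node.
    reg query HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue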
• Using a Tape Backup Library in a SAN
Cluster FE400 solutions based on Windows 2000 Advanced Server that are configured with
Emulex HBAs can be connected to one or more tape backup libraries that can be shared with
other clusters and systems in a SAN. To avoid disrupting I/O activities from other network
systems to the tape drive and to ensure cluster failover operations, disable the target reset to
the tape device.
To disable the target reset:
a   Click the Start button, select Run, and type the following:
    c:\Program Files\HBAnyware\elxcfg.exe --emc
    The Emulex Configuration Tool window appears.
b   In the Available Adapters box, select the first HBA in the list.
c   In the Adapter Controls box, select Disable Target Reset for Tape Devices.
d   In the File menu, select Apply.
e   In the Available Adapters box, select the second HBA in the list.
f   Repeat step c and step d.
g   Reboot the cluster node.
h   Repeat step a through step g on each additional node.

• The cluster disks are not initialized in Disk Management.
On clusters running Windows Server 2003, Disk Management may display the cluster disks as
not initialized. This issue may occur if the cluster disks are owned by other nodes in the same
cluster. This behavior is normal and does not affect cluster operations. See Microsoft
Knowledge Base article KB818878 on the Microsoft Support website at support.microsoft.com
for more information.