Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.
Trademarks used in this text: Dell, the DELL logo, PowerEdge, PowerVault, and Dell OpenManage are trademarks of Dell Inc.; Microsoft and
Windows are registered trademarks of Microsoft Corporation; EMC, Navisphere, and PowerPath are registered trademarks and Access Logix,
MirrorView, and SnapView are trademarks of EMC Corporation.
Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products.
Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.
This document provides information for installing and connecting peripheral hardware, storage,
and storage area network (SAN) components to your Dell™ PowerEdge™ Cluster FE500W-IA64
solution. The configuration information in this document is specific to the Microsoft® Windows® Server 2003, Enterprise Edition for 64-bit Itanium-based Systems operating system.
This document covers the following topics:
• Configuration information for installing peripheral hardware components, such as storage systems, HBAs, NICs, and PCI adapter cards into Cluster FE500W-IA64 configurations
• Configuration rules and guidelines for direct-attached configurations
• Configuration rules and guidelines for SAN-attached configurations
NOTE: Configurations not listed in this document may not be certified or supported by Dell or Microsoft.
NOTE: In this guide and in other cluster documentation, the Microsoft Cluster Service is also referred to
as MSCS.
Supported Cluster Configurations
This section provides information about supported cluster configurations for your PowerEdge
cluster configuration.
Table 1-1 provides a list of supported cluster configurations for the Cluster FE500W-IA64 systems
running Windows Server 2003, Enterprise Edition for 64-bit Itanium-based Systems.
NOTE: Each cluster node must be of the same system model and have two or more processors.
Table 1-1. Supported Cluster Configurations
PowerEdge Cluster: FE500W-IA64
Supported PowerEdge Systems: 7250
Supported Storage Systems: Dell | EMC CX300, Dell | EMC CX500, Dell | EMC CX700
Supported Cluster Interconnect (for the Private Network): Any Ethernet NIC supported by the system.
NOTE: All nodes in the same cluster must use homogeneous (identical) Ethernet NICs for the cluster interconnect.

Obtaining More Information
See the Dell PowerEdge Cluster FE500W-IA64 Installation and Troubleshooting Guide for a detailed list of related documentation.
High-Availability Cluster Configurations
This section provides information about the supported operating systems, HBAs, and HBA drivers
for your cluster configuration.
NOTICE: All cluster nodes in a Cluster FE500W-IA64 solution must run the same operating system.
NOTICE: HBAs must be identical if they are installed in clusters using redundant paths. Cluster
configurations are tested and certified using identical HBAs installed in all of the cluster nodes.
Using dissimilar HBAs in your cluster nodes is not supported.
Service Pack Support
At the time this document was printed, a service pack was not available for the Windows Server 2003, Enterprise Edition for 64-bit Itanium-based Systems operating system. However, hotfix KB818877 and the most recent StorPort hotfix are required. These hotfixes will be incorporated into Service Pack 1 when it is available. See Knowledge Base articles KB818877 and KB838894, the current StorPort hotfix at this time, located on the Microsoft support website at www.microsoft.com for more information.

HBA Support for PowerEdge Cluster FE500W-IA64 Configurations
The Cluster FE500W-IA64 configuration supports the QLogic QLA2340 HBA. No additional HBAs are supported.
See "Installing Peripheral Components in Your Cluster Node PCI Slots" for PCI slot recommendations.

Fibre Channel Switches
When you are configuring your Cluster FE500W-IA64 solution, use the following guidelines:
• Dual (redundant) fabric configurations are required.
• A maximum of 16 switches may be used in a SAN.
• A minimum of two and a maximum of eight Inter-Switch Links (ISLs) may exist between any two directly communicating switches. A single ISL is permitted only when connecting to a remote switch in an EMC® MirrorView™ configuration.
• A maximum of three hops (the number of ISLs each data frame must traverse) may exist between a host and a storage system.
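The three-hop guideline can be illustrated with a short sketch. The example below is not part of any Dell or EMC tooling; it assumes a hypothetical adjacency map of the switches in one fabric (the names sw1 through sw4 and their links are illustrative only) and counts the fewest ISLs between the switch a host connects to and the switch the storage system connects to.

# hop_check.py -- illustrative sketch with a hypothetical fabric layout
from collections import deque

# hypothetical ISL adjacency map of one fabric
isl_links = {
    "sw1": ["sw2"],
    "sw2": ["sw1", "sw3"],
    "sw3": ["sw2", "sw4"],
    "sw4": ["sw3"],
}

def hops(src, dst):
    """Breadth-first search for the fewest ISLs between two switches."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        switch, dist = queue.popleft()
        if switch == dst:
            return dist
        for peer in isl_links.get(switch, []):
            if peer not in seen:
                seen.add(peer)
                queue.append((peer, dist + 1))
    return None

count = hops("sw1", "sw4")   # host attached to sw1, storage attached to sw4
print(count, "hops -", "OK" if count is not None and count <= 3 else "exceeds the 3-hop limit")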
Rules and Guidelines
When configuring your cluster, all cluster nodes must contain identical versions of the following:
•Operating systems and service packs
•Hardware drivers, firmware, or BIOS for the NICs, HBAs, and any other peripheral
hardware components
• Systems management software, such as Dell OpenManage™ systems management software and EMC Navisphere® storage management software
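One way to spot-check this requirement before bringing the cluster online is to export a component and version inventory from each node (for example, with your systems management software) and compare the lists. The following Python sketch is illustrative only; it assumes a hypothetical file format of one "component=version" pair per line, and the file names are placeholders.

# compare_inventories.py -- illustrative sketch (hypothetical file names/format)
# Each line in a node's inventory file is "component=version",
# for example "QLA2340 Storport driver=9.00.17". Reports any mismatches.
def load_inventory(path):
    inventory = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or "=" not in line:
                continue
            component, version = line.split("=", 1)
            inventory[component.strip()] = version.strip()
    return inventory

node1 = load_inventory("node1_inventory.txt")
node2 = load_inventory("node2_inventory.txt")

for component in sorted(set(node1) | set(node2)):
    v1 = node1.get(component, "<missing>")
    v2 = node2.get(component, "<missing>")
    if v1 != v2:
        print("MISMATCH:", component, "- node1:", v1, "node2:", v2)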
Maximum Distance Between Cluster Nodes
Table 1-2 lists the maximum cable lengths that are used in a Cluster FE500W-IA64 configuration.
Table 1-2. Maximum Cable Lengths
HBA to a switch, HBA to a storage system, or Fibre Channel switch to a storage system: 300 meters using multimode fiber at 2 Gb/sec.
NOTE: You can increase the total distance between a server and a storage system by using switch ISLs.
Copper Gigabit Ethernet: 100 meters.
NOTE: You can extend this distance by using switches and VLAN technology.
Optical Gigabit Ethernet: 550 meters.
Maximum Latency Between Cluster Nodes
The maximum latency for a round-trip network packet between nodes is 500 milliseconds.
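If you need a quick sanity check of this limit, the following Python sketch times a TCP connection handshake from one node to a port that is known to be listening on the other node and flags samples above 500 milliseconds. The host name and port shown are placeholders, and connection-setup time is only a rough proxy for packet round-trip latency.

# rtt_check.py -- illustrative sketch (placeholder host name and port)
import socket
import time

PEER = "node2-private"   # placeholder: private-network name of the other node
PORT = 3389              # placeholder: any TCP port listening on the peer
LIMIT_MS = 500           # maximum allowed round-trip latency

samples = []
for _ in range(5):
    start = time.perf_counter()
    # time a full TCP connection setup to the peer as a rough round-trip proxy
    with socket.create_connection((PEER, PORT), timeout=2):
        pass
    samples.append((time.perf_counter() - start) * 1000.0)

print("samples (ms):", [round(s, 1) for s in samples])
if max(samples) <= LIMIT_MS:
    print("within the", LIMIT_MS, "ms limit")
else:
    print("EXCEEDS the", LIMIT_MS, "ms limit")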
Obtaining More Information
See the Dell PowerEdge Cluster FE500W-IA64 Installation and Troubleshooting Guide for installation instructions about hardware configurations running Windows Server 2003, Enterprise Edition for 64-bit Itanium-based Systems.
Installing Peripheral Components in Your
Cluster Node PCI Slots
This section provides configuration information for adding HBAs, a DRAC III, and
RAID controllers into your cluster node PCI slots.
Table 1-3 provides configuration information for the supported PowerEdge cluster nodes.
CAUTION: Only trained service technicians are authorized to remove and access any of the
components inside the system. See your PowerEdge System Information Guide for complete information
about safety precautions, working inside the computer, and protecting against electrostatic discharge.
Table 1-3. PCI Slot Assignments for PowerEdge Cluster Nodes
PowerEdge System: 7250
PCI Buses/Segments:
  PCI-X bus 0: PCI slots 1 through 3 are 64-bit, 100 MHz
  PCI-X bus 1: PCI slots 4 and 5 are 64-bit, 100 MHz; PCI slot 6 is 64-bit, 133 MHz
  PCI-X bus 2: PCI slots 7 and 8 are 64-bit, 133 MHz
HBA: For dual HBA configurations, install the HBAs on separate 64-bit PCI buses to balance the load on the system.
DRAC III: Install the new or existing DRAC III in slot 1.
RAID Controller: Install the RAID controller in PCI slot 1, 2, 3, or 4.
Attaching Your Cluster Shared Storage System in a
Direct-Attach Configuration
This section provides the rules and guidelines for attaching your cluster nodes to the shared storage
system using a direct connection (without Fibre Channel switches).
In a direct-attached configuration, both cluster nodes are connected directly to the storage system.
Rules and Guidelines
The rules and guidelines described in Table 1-4 apply to direct-attached clusters.
Table 1-4. Direct-Attached Clusters Rules and Guidelines
Primary storage: Windows Server 2003, Enterprise Edition for 64-bit Itanium-based Systems can support more than 22 shared logical drives through mount points. Only one storage system can be direct-attached to the cluster.
Fibre Channel HBAs supported: QLogic QLA2340.
QLogic driver version: Storport driver 9.00.17 or later.
Operating system: Each direct-attached cluster must run Windows Server 2003, Enterprise Edition for 64-bit Itanium-based Systems.
Dell | EMC CX300 core software: 2.06.300 or later.
Dell | EMC CX500 core software: 2.06.500 or later.
Dell | EMC CX700 core software: 2.06.700 or later; however, Access Logix™ Option 01.01.5 or later must be installed and enabled if two clusters or a mix of clustered and nonclustered hosts are direct-attached to the CX700.
Additional software application programs: EMC Navisphere Agent 6.6 or later. EMC Navisphere Manager 6.6 or later. EMC PowerPath® 3.0.6 or later. EMC AdminSnap version 2.10.06 or later (not supported on the Dell | EMC CX300). QLogic SANsurfer SANblade Manager for Windows version 2.0.29 or later.
Attaching Your Cluster Shared Storage System to a SAN
This section provides the rules and guidelines for attaching your PowerEdge cluster nodes to the
shared storage systems through a Dell | EMC SAN using redundant Fibre Channel switch fabrics.
Rules and Guidelines
The rules and guidelines described in Table 1-5 apply to SAN-attached clusters.
Table 1-5. SAN-Attached Clusters Rules and Guidelines
Primary storage: Windows Server 2003, Enterprise Edition for 64-bit Itanium-based Systems can support more than 22 shared logical drives through mount points. Up to four Dell | EMC Fibre Channel disk arrays are supported per cluster in a SAN environment.
Fibre Channel switch configuration
Fibre Channel switch zoning
Fibre Channel switches supported
Fibre Channel switch firmware
Fibre Channel HBAs supported
QLogic driver version: Storport driver 9.00.17 or later.