Reproduction in any manner whatsoever without the written permission of Dell Computer Corporation is strictly forbidden.
Trademarks used in this text: Dell, the DELL logo, PowerEdge, PowerVault, and Dell OpenManage are trademarks of Dell
Computer Corporation; Microsoft and Windows are registered trademarks of Microsoft Corporation.
Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and
names or their products. Dell Computer Corporation disclaims any proprietary interest in trademarks and trade names
other than its own.
Table 1-2. Supported HBAs for Cluster FE200 Configurations Running Windows 2000 Advanced Server . . . . 1-2
Table 1-3. Supported HBAs for Cluster FE200 Configurations Running Windows Server 2003, Enterprise Edition . . . . 1-4
Table 1-4. PCI Slot Assignments for PowerEdge Cluster Nodes . . . . 1-5
Table 1-5. SAN-Attached Clusters Rules and Guidelines . . . . 1-9
Table 1-6. Cluster Consolidation Rules and Guidelines . . . . 1-12
This document provides information for installing and connecting peripheral hardware,
storage, and SAN components to your Dell™ PowerEdge™ Cluster FE200 system. The
configuration information in this document is specific to the Microsoft® Windows® 2000 Advanced Server and Windows Server 2003, Enterprise Edition operating systems.
This document covers the following topics:
• Configuration information for installing peripheral hardware components, such as HBAs, network adapters, and PCI adapter cards, into Cluster FE200 configurations
• SAN-attached configuration rules and guidelines
• Cluster consolidation configuration rules and guidelines
• Incorrect TimeOutValue setting in the registry
NOTE: Configurations not listed in this document may not be certified or supported by Dell or Microsoft.
NOTE: In this guide and in other cluster documentation, Microsoft Cluster Service (for Windows 2000 Advanced Server and Windows Server 2003, Enterprise Edition) is also referred to as MSCS.
Supported Cluster Configurations
This section provides information about the supported configurations for your cluster.
Table 1-1 provides a list of supported configurations for Cluster FE200 solutions running
Windows 2000 Advanced Server and Windows Server 2003, Enterprise Edition operating
systems.
NOTE: Both nodes in a two-node cluster must be the same system model. For example, a two-node cluster configuration can contain two PowerEdge 6650 systems.
Supported cluster interconnect HBA (for the private network): Any Ethernet network adapter supported by the system.
NOTE: Both cluster nodes must use homogeneous (identical) Ethernet network adapters for the cluster interconnect.
Obtaining More Information
See the Dell PowerEdge Cluster FE200 System Installation and Troubleshooting Guide
included with your cluster configuration for a detailed list of related documentation.
Windows 2000 Advanced Server Cluster
Configurations
This section provides information about the Windows 2000 Advanced Server service pack
and supported QLogic HBAs and HBA drivers for your cluster configuration.
NOTE: HBAs installed in clusters must be identical for redundant paths. Cluster
configurations are tested and certified using identical QLogic HBAs installed in all of
the cluster nodes. Using dissimilar HBAs in your cluster nodes is not supported.
Windows 2000 Advanced Server Service Pack Support
Microsoft Windows 2000 Service Pack 4 or later is recommended for Cluster FE200
systems.
You can download the latest service pack from the Microsoft website located at
www.microsoft.com.
QLogic HBA Support for Cluster FE200 Configurations
Table 1-2 lists the PowerEdge systems and the QLogic HBAs that are supported for Cluster
FE200 configurations running Windows 2000 Advanced Server.
See "Installing Peripheral Components in Your Cluster Node PCI Slots" for PCI slot
recommendations.
Table 1-2. Supported HBAs for Cluster FE200 Configurations Running Windows 2000 Advanced Server

PowerEdge System     QLA-2200 33 MHz     QLA-2200 66 MHz
1550                 x
1650                 x
2500/2550            x
2600                 x
2650                 x
4400                 x                   x
4600                 x
6400/6450            x                   x
6600/6650            x
8450                 x                   x
HBA Connectors
Both optical and copper HBA connectors are supported in SAN-attached and SAN appliance-attached configurations. Optical HBA connectors are not supported in a direct-attached configuration.
Guidelines
When configuring your cluster, both cluster nodes must contain identical versions of the
following:
• Operating systems and service packs
• Hardware drivers for the network adapters, HBAs, and any other peripheral hardware components
• Management utilities, such as Dell OpenManage™ systems management software
• Fibre Channel HBA BIOS
Obtaining More Information
See the Dell PowerEdge Cluster FE200 Systems Installation and Troubleshooting Guide included with your cluster configuration for information about installing hardware in configurations running Windows 2000 Advanced Server.
Windows Server 2003, Enterprise Edition
Cluster Configurations
This section provides information about the Windows Server 2003, Enterprise Edition
service pack and supported QLogic HBAs and HBA drivers for your cluster configuration.
NOTE: HBAs installed in clusters must be identical for redundant paths. Cluster
configurations are tested and certified using identical QLogic HBAs installed in all of
the cluster nodes. Using dissimilar HBAs in your cluster nodes is not supported.
QLogic HBA Support for Cluster FE200 Configurations
Table 1-3 lists the systems and the QLogic HBAs that are supported for PowerEdge Cluster
FE200 configurations running Windows Server 2003, Enterprise Edition.
See "Installing Peripheral Components in Your Cluster Node PCI Slots" for PCI slot
recommendations.
Table 1-3. Supported HBAs for Cluster FE200 Configurations Running Windows Server 2003, Enterprise Edition

PowerEdge System     QLA-2200 33 MHz     QLA-2200 66 MHz
1550                 x
1650                 x
2500/2550            x
2600                 x
2650                 x
4400                 x                   x
4600                 x
6400/6450            x                   x
6600/6650            x
8450                 x                   x
HBA Connectors
Both optical and copper HBA connectors are supported in SAN-attached and SAN appliance-attached configurations. Optical HBA connectors are not supported in a direct-attached configuration.
Guidelines
When configuring your cluster, both cluster nodes must contain identical versions of the
following:
• Operating systems and service packs
• Hardware drivers for the network adapters, HBAs, and any other peripheral hardware components
• Management utilities, such as Dell OpenManage systems management software
• Fibre Channel HBA BIOS
Obtaining More Information
See the Dell PowerEdge Cluster FE200 Systems Installation and Troubleshooting Guide included with your cluster configuration for information about installing hardware in configurations running Windows Server 2003, Enterprise Edition.
Installing Peripheral Components in Your
Cluster Node PCI Slots
This section provides configuration information for adding HBAs, a DRAC II or III, and
RAID controllers into your cluster node PCI slots.
Table 1-4 provides configuration information for the PowerEdge 1550, 1650, 2500, 2550,
2600, 2650, 4400, 4600, 6400, 6450, 6600, 6650, and 8450 cluster nodes.
CAUTION: Hardware installation should be performed only by trained service
technicians. See the safety instructions in your System Information Guide before
working inside the system to avoid a situation that could cause serious injury or
death.
Table 1-4. PCI Slot Assignments for PowerEdge Cluster Nodes

PowerEdge 1550
PCI buses: PCI bus 1: PCI slot 1 is 64-bit, 66 MHz. PCI bus 2: PCI slot 2 is 64-bit, 66 MHz.
HBA: Install HBAs in any PCI slot.
DRAC II or III: N/A
RAID controller: N/A
PowerEdge 1650
PCI buses: Standard riser board: PCI bus 2: PCI slot 1 is 64-bit, 66 MHz; PCI bus 2: PCI slot 2 is 64-bit, 66 MHz. Optional riser board: PCI bus 0: PCI slot 1 is 32-bit, 33 MHz; PCI bus 2: PCI slot 2 is 64-bit, 66 MHz.
HBA: Install the HBA in any PCI slot. For dual HBA configurations, install the HBAs on separate 64-bit PCI buses to balance the load on the system.
DRAC II or III: Install a new or existing DRAC III in PCI slot 1 on the optional riser board.
RAID controller: Install in any available PCI slot.

PowerEdge 2500
PCI buses: PCI bus 1: PCI slots 6 and 7 are 32-bit, 33 MHz. PCI bus 2: PCI slots 3, 4, and 5 are 64-bit, 33 MHz. PCI bus 3: PCI slots 1 and 2 are 64-bit, 66 MHz.
HBA: Install the HBA in any PCI slot. For dual HBA configurations, install the HBAs on separate 64-bit PCI buses to balance the load on the system.
DRAC II or III: Install a new or existing DRAC II in PCI slot 7.
RAID controller: Install in any available PCI slot.

PowerEdge 2550
PCI buses: PCI bus 0: PCI slots 1 through 3 are 64-bit, 33 MHz.
HBA: Install HBAs in any PCI slot. For dual HBA configurations, install the HBAs on separate PCI buses to balance the load on the system.
DRAC II or III: N/A
RAID controller: N/A

PowerEdge 2600
PCI buses: PCI bus 0: PCI slot 1 is 64-bit, 33 MHz. PCI bus 2: PCI slot 7 is 64-bit, 33–133 MHz. PCI bus 3: PCI slot 6 is 64-bit, 33–133 MHz. PCI bus 4: PCI slots 4 and 5 are 64-bit, 33–100 MHz. PCI bus 5: PCI slots 2 and 3 are 64-bit, 33–100 MHz. NOTE: If you are installing expansion cards of different operating speeds, install the fastest card in slot 7 and the slowest card in slot 1.
HBA: Install HBAs in any PCI slot. For dual HBA configurations, install the HBAs on separate PCI buses to balance the load on the system.
DRAC II or III: N/A
RAID controller: An integrated RAID controller is available on the system board. NOTE: To activate the integrated RAID controller, you must install a RAID battery and key.
PowerEdge 2650
PCI buses: PCI/PCI-X bus 1: PCI slot 1 is 64-bit, 33–100 MHz. PCI/PCI-X bus 1: PCI slot 2 is 64-bit, 33–133 MHz. PCI/PCI-X bus 2: PCI slot 3 is 64-bit, 33–133 MHz. NOTE: PCI/PCI-X slot 1 must be empty for PCI/PCI-X slot 2 to attain an operating speed of 133 MHz.
HBA: For dual HBA configurations, install the HBAs on separate PCI buses to balance the load on the system.
DRAC II or III: N/A
RAID controller: An integrated RAID controller is available on the system board. NOTE: To activate the integrated RAID controller, you must install a RAID battery and key.

PowerEdge 4400
PCI buses: PCI bus 0: PCI slots 1 and 2 are 64-bit, 33/66 MHz. PCI bus 1: PCI slots 3 through 6 are 64-bit, 33 MHz. PCI bus 2: PCI slot 7 is 32-bit, 33 MHz.
HBA: For dual HBA configurations, install the HBAs on separate PCI buses (PCI buses 1 and 2) to balance the load on the system.
DRAC II or III: Install a new or existing DRAC II in PCI slot 7.
RAID controller: N/A

PowerEdge 4600
PCI buses: PCI bus 0: PCI slot 1 is 32-bit, 33 MHz. PCI/PCI-X bus 1: PCI slots 2 and 3 are 64-bit, 66–100 MHz. PCI/PCI-X bus 2: PCI slots 4 and 5 are 64-bit, 66–100 MHz. PCI/PCI-X bus 3: PCI slots 6 and 7 are 64-bit, 66–100 MHz.
HBA: For dual HBA configurations, install the HBAs on separate PCI buses to balance the load on the system.
DRAC II or III: Install a new or existing DRAC III in PCI slot 1.
RAID controller: An integrated RAID controller is available on the system board. NOTE: To activate the integrated RAID controller, you must install a RAID battery and key.

PowerEdge 6400/6450
PCI buses: PCI bus 0: PCI slot 1 is 32-bit, 33 MHz. PCI bus 1: PCI slots 2 through 5 are 64-bit, 33 MHz. PCI bus 2: PCI slots 6 and 7 are 64-bit, 33/66 MHz.
HBA: For dual HBA configurations, install the HBAs on separate PCI buses (PCI buses 1 and 2) to balance the load on the system.
DRAC II or III: Install a new or existing DRAC II in PCI slot 3.
RAID controller: N/A
PowerEdge 6600
PCI buses: PCI bus 0: PCI slot 1 is 32-bit, 33 MHz. PCI/PCI-X bus 1: PCI slots 2 and 3 are 64-bit, 33–100 MHz. PCI/PCI-X bus 2: PCI slots 4 and 5 are 64-bit, 33–100 MHz. PCI/PCI-X bus 3: PCI slots 6 and 7 are 64-bit, 33–100 MHz. PCI/PCI-X bus 4: PCI slots 8 and 9 are 64-bit, 33–100 MHz. PCI/PCI-X bus 5: PCI slots 10 and 11 are 64-bit, 33–100 MHz.
HBA: For dual HBA configurations, install the HBAs on separate PCI buses to balance the load on the system.
DRAC II or III: Install a new or existing DRAC III in slot 1.
RAID controller: Install the RAID controller in PCI slot 2 or 3.

PowerEdge 6650
PCI buses: PCI bus 0: PCI slot 1 is 32-bit, 33 MHz. PCI/PCI-X bus 1: PCI slots 2 and 3 are 64-bit, 33–100 MHz. PCI/PCI-X bus 2: PCI slots 4 and 5 are 64-bit, 33–100 MHz. PCI/PCI-X bus 3: PCI slot 6 is 64-bit, 33–100 MHz. PCI/PCI-X bus 4: PCI slot 7 is 64-bit, 33–100 MHz. PCI/PCI-X bus 5: PCI slot 8 is 64-bit, 33–100 MHz.
HBA: For dual HBA configurations, install the HBAs on separate PCI buses to balance the load on the system.
DRAC II or III: Install a new or existing DRAC III in slot 1.
RAID controller: Install the RAID controller in PCI slot 2 or 3.

PowerEdge 8450
PCI buses: PCI bus 0: PCI slots 1 and 2 are 64-bit, 33 MHz. PCI bus 1: PCI slots 3 through 6 are 64-bit, 33 MHz. PCI bus 2: PCI slots 7 and 8 are 64-bit, 33/66 MHz. PCI bus 3: PCI slots 9 and 10 are 64-bit, 33/66 MHz.
HBA: For dual HBA configurations, install the HBAs on separate PCI buses (PCI buses 2 and 3) to balance the load on the system.
DRAC II or III: Install a new or existing DRAC II in PCI slot 2.
RAID controller: Install the RAID controller for the system's internal drives in PCI slot 1.
Attaching Your Cluster Shared Storage
Systems to a SAN
This section provides the rules and guidelines for attaching your cluster nodes to the shared
storage system(s) using a SAN in a Fibre Channel switch fabric.
The following SAN configurations are supported:
• SAN-attached
• Cluster consolidation
• SAN appliance-attached
NOTE: You can configure a SAN with up to 20 PowerEdge systems and eight storage
systems.
SAN-Attached Cluster Configurations
In a SAN-attached cluster configuration, both cluster nodes are attached to a single storage
system or to multiple storage systems through a PowerVault SAN using a redundant Fibre
Channel switch fabric.
Rules and Guidelines
The rules and requirements described in Table 1-5 apply to SAN-attached clusters.
See the Dell PowerVault Fibre Channel Update Version 5.3 CD for the specific version levels of your SAN components.
Table 1-5. SAN-Attached Clusters Rules and Guidelines

Number of supported systems: Up to 10 two-node clusters attached to a SAN. NOTE: Combinations of stand-alone systems and cluster pairs must not exceed 20 PowerEdge systems.

Cluster pair support: All homogeneous and heterogeneous cluster configurations supported in direct-attach configurations are supported in SAN-attached configurations. See "Windows 2000 Advanced Server Cluster Configurations" or "Windows Server 2003, Enterprise Edition Cluster Configurations" for more information about supported cluster pairs. NOTE: Windows Server 2003, Enterprise Edition supports up to eight cluster nodes. However, Cluster FE200 configurations can only support up to two nodes.

Primary storage: Each Windows 2000 and Windows Server 2003, Enterprise Edition cluster can support up to 22 unique drive letters for shared logical drives. Windows Server 2003 can support additional physical drives through mount points. Up to a total of eight primary and secondary storage devices are supported.

Secondary storage: Supports up to four storage devices. These storage devices include:
• PowerVault 136T tape library
• PowerVault 128T tape library
• PowerVault 35F bridge (a PowerVault 35F bridge can be connected to up to four PowerVault 120T tape autoloaders or two PowerVault 130T DLT tape libraries)
Any system attached to the SAN can share these devices. NOTE: Up to eight primary and secondary storage devices can be connected to a SAN.

Dell OpenManage Storage Consolidation (StorageC): Not required unless cluster nodes are sharing storage systems with other PowerEdge systems in the SAN, including other cluster system nodes.

Fibre Channel switch configuration: Redundant switch fabrics are required.

Fibre Channel switch zoning: Required whenever a cluster shares a SAN with other cluster(s) or stand-alone systems.

Fibre Channel switches supported: PowerVault 51F and 56F.

Fibre Channel HBAs supported: QLogic 2200/33 MHz and QLogic 2200/66 MHz. NOTE: HBAs within a single cluster must be the same. NOTE: Supports both optical and copper HBAs.

Operating system: Each cluster attached to the SAN can run either Windows 2000 Advanced Server or Windows Server 2003, Enterprise Edition.

Service pack: Windows 2000 Advanced Server configurations require Service Pack 4 or later. Windows Server 2003 configurations require hotfix KB818877 (or Service Pack 1 if available).

Additional software application programs: QLogic QLDirect, Dell OpenManage Array Manager, and QLogic Management Suite for Java (QMSJ).
Obtaining More Information
See the Cluster FE200 Systems Installation and Troubleshooting Guide included with your
cluster configuration for more information about SAN-attached clusters.
See the Dell PowerVault Systems Storage Area Network (SAN) Administrator’s Guide
included with your cluster configuration for information about installing the QLogic driver,
QLDirect, and QMSJ in SAN-attached cluster configurations and information about
general SAN rules and guidelines.
See the Dell PowerVault SAN Revision Compatibility Guide included with your cluster
configuration and the Dell Support website at support.dell.com for the latest firmware and
software revision requirements and the SAN compatibility rules.
Cluster Consolidation Configurations
In a cluster consolidation configuration, multiple clusters and stand-alone PowerEdge
systems are attached to a single storage system through a PowerVault SAN using a
redundant Fibre Channel switch fabric and switch zoning.
Rules and Guidelines
Table 1-6 describes the requirements for cluster consolidation configurations.
See the Dell PowerVault Fibre Channel Update Version 5.3 CD for the specific version levels
of your SAN components.
Table 1-6. Cluster Consolidation Rules and Guidelines

Number of supported PowerEdge systems: Up to 10 two-node clusters attached to a SAN. Combinations of stand-alone systems and cluster pairs must not exceed 20 systems.

Cluster pair support: Any supported homogeneous system pair with the following HBAs:
• QLogic 2200/33 MHz
• QLogic 2200/66 MHz

Primary storage: Each Windows Server 2003, Enterprise Edition cluster can support up to 22 unique drive letters for shared logical drives. Windows Server 2003 can support additional physical drives through mount points. Up to a total of eight primary and secondary storage devices are supported.

Secondary storage: Supports up to four storage devices. These storage devices include:
• PowerVault 136T tape library
• PowerVault 128T tape library
• PowerVault 35F bridge (a PowerVault 35F bridge can be connected to up to four PowerVault 120T tape autoloaders or two PowerVault 130T DLT tape libraries)
Any system attached to the SAN can share these devices. NOTE: Up to eight primary and secondary storage devices can be connected to a SAN.

Dell OpenManage Storage Consolidation (StorageC): Required.

Fibre Channel switch configuration: Redundant switch fabrics are required.
Fibre Channel switch zoning: Each cluster must have its own zone, plus one zone for the stand-alone systems.
The zone for each cluster should include the following hardware components:
• One cluster with two nodes
• One storage system
• One or more Fibre Channel-to-SCSI bridges (if applicable)
The zone for the stand-alone systems should include the following hardware components:
• All nonclustered PowerEdge systems
• One storage system
• One or more Fibre Channel-to-SCSI bridges (if applicable)

Fibre Channel switches supported: PowerVault 51F and 56F.

Fibre Channel HBAs supported: QLogic 2200/33 MHz HBA and QLogic 2200/66 MHz HBA.

Operating system: All clusters and systems attached to a PowerVault storage system must be running either Windows 2000 Advanced Server or Windows Server 2003, Enterprise Edition. NOTE: Both systems in a cluster must be running the same operating system. However, each cluster can run either Windows 2000 Advanced Server or Windows Server 2003, Enterprise Edition.

Service pack: Windows 2000 Advanced Server configurations require Service Pack 4 or later. Windows Server 2003 configurations require hotfix KB818877 (or Service Pack 1 if available). See "Incorrect TimeOutValue Setting in the Registry" for additional information.

Disks: Each cluster or stand-alone system has its own set of assigned disks within the PowerVault Fibre Channel disk array.

SAN support: A cluster consolidation configuration consists of no more than 10 clusters or 20 individual PowerEdge systems in several combinations. For example, you can have a configuration consisting of five clusters (10 systems) and 10 stand-alone systems for a total of 20 systems.

Additional software application programs: Dell OpenManage Array Manager, QLogic QLDirect, and QMSJ.
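As an illustration of the zoning rules above, the following sketch shows one possible way to express them on the switch. It assumes a Brocade-style zoning command set (the PowerVault 51F and 56F are Brocade-based switches); the alias names, zone names, and domain,port members are hypothetical placeholders, and the supported zoning procedure for your fabric is described in the Dell PowerVault Systems Storage Area Network (SAN) Administrator's Guide:

    alicreate "ClusterA_Node1", "1,2"
    alicreate "ClusterA_Node2", "1,3"
    alicreate "Storage_Port", "1,0"
    alicreate "Standalone_1", "1,4"
    zonecreate "ClusterA_Zone", "ClusterA_Node1; ClusterA_Node2; Storage_Port"
    zonecreate "Standalone_Zone", "Standalone_1; Storage_Port"
    cfgcreate "SAN_Config", "ClusterA_Zone; Standalone_Zone"
    cfgenable "SAN_Config"

In this sketch, ClusterA_Zone contains only the two HBA ports of one cluster plus the storage system port, and Standalone_Zone contains the nonclustered PowerEdge systems plus the storage system, which matches the zone membership described in Table 1-6.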
Obtaining More Information
See the Dell PowerEdge Cluster FE200 Systems Installation and Troubleshooting Guide
included with your cluster configuration for more information about cluster consolidation
configurations.
See the Dell PowerEdge Cluster SAN Revision Compatibility Guide included with your
cluster configuration and the Dell Support website at support.dell.com for the latest
firmware and software revision requirements.
See the Dell PowerVault Systems Storage Area Network (SAN) Administrator’s Guide
included with your cluster configuration for information about installing the QLogic driver,
QLDirect, QMSJ, and Dell OpenManage Storage Consolidation and for information about
general SAN rules and guidelines.
Incorrect TimeOutValue Setting in the Registry
When you run the Cluster Configuration wizard on a cluster solution running Windows Server 2003, the wizard modifies the disk TimeOutValue setting in the registry.
The disk TimeOutValue setting is the timeout value set by Windows for storage system I/O
operations. The Dell | EMC Fibre Channel storage environment requires 60 seconds for
I/O operations. When you run the Cluster Configuration wizard, the wizard sets the
TimeOutValue setting to 20 seconds, which may not be sufficient for complex I/O
operations. Consequently, storage system I/O operations may continually time out.
Microsoft has confirmed a problem with the wizard and has implemented Quick Fix
Executable (QFE) file KB818877 to resolve this issue. See Microsoft Knowledge Base article
KB818877 on the Microsoft website at www.microsoft.com for more information. To resolve
this issue, read the Knowledge Base article for instructions about how to obtain the required
QFE file from Microsoft. Download and apply the QFE as soon as possible.
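If you want to check the value currently in effect on a node, one way to do so is to query the registry from a command prompt. The key path below is the standard Windows location for the disk class driver timeout; this guide does not list the path explicitly, so verify it against the Microsoft Knowledge Base article before relying on it:

    reg query HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue

A REG_DWORD value of 0x14 (20 decimal) is the setting left by the Cluster Configuration wizard; the Dell | EMC Fibre Channel storage environment requires 0x3c (60 decimal).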
If you have not configured your cluster, apply the QFE (or Service Pack 1 when available) to
all of the cluster nodes.
If you have configured your cluster, perform one of the following procedures and then
reboot each cluster node, one at a time:
• Manually change the registry TimeOutValue setting to 60 on each cluster node (an example command follows this list).
•Download the Cluster Disk Timeout Fix utility from the Dell Support website at
support.dell.com and run the utility on your cluster.
When prompted, type the name of your cluster in the Cluster name field and type
Dell | EMC in the Storage System Type field. The utility locates the cluster nodes
associated with the cluster name and sets the TimeOutValue setting on each node to the
correct setting.
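If you choose the manual method, the following command is one way to set the value. It assumes the same standard disk class driver key path noted earlier (verify the correct key for your configuration before editing the registry), and it must be run from a command prompt on each cluster node, followed by a reboot of the nodes one at a time:

    reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f

The /d 60 argument writes the required 60-second timeout, and /f overwrites the existing 20-second value without prompting.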
Index
C
cluster configurations
supported, 1-1
using Windows 2000 Advanced
Server, 1-2, 1-3
cluster consolidation
configurations
rules and guidelines, 1-11
connectors, 1-3, 1-4
P
peripheral components
for PowerEdge 1550, 1-5
for PowerEdge 1650, 1-6
for PowerEdge 2500, 1-6
for PowerEdge 2550, 1-6
for PowerEdge 2600, 1-6
for PowerEdge 2650, 1-7
for PowerEdge 4400, 1-7
for PowerEdge 4600, 1-7
for PowerEdge 6400, 1-7
for PowerEdge 6450, 1-7
for PowerEdge 6600, 1-8
for PowerEdge 6650, 1-8
for PowerEdge 8450, 1-8