
Dell™ PowerEdge™ Cluster FE200 Systems

Platform Guide

www.dell.com | support.dell.com
Notes, Notices, and Cautions
NOTE: A NOTE indicates important information that helps you make better use of your computer.
NOTICE: A NOTICE indicates either potential damage to hardware or loss of data and tells you
how to avoid the problem.
CAUTION: A CAUTION indicates a potential for property damage, personal injury, or death.
____________________
Information in this document is subject to change without notice. © 2000–2003 Dell Computer Corporation. All rights reserved.
Reproduction in any manner whatsoever without the written permission of Dell Computer Corporation is strictly forbidden.
Trademarks used in this text: Dell, the DELL logo, PowerEdge, PowerVault, and Dell OpenManage are trademarks of Dell Computer Corporation; Microsoft and Windows are registered trademarks of Microsoft Corporation. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Computer Corporation disclaims any proprietary interest in trademarks and trade names other than its own.
July 2003 P/N 6C403 Rev. A09
Contents
Supported Cluster Configurations . . . . . . . . . . . . . . . . . . . . 1-1
    Windows 2000 Advanced Server Cluster Configurations . . . . . . . . 1-2
        Windows 2000 Advanced Server Service Pack Support . . . . . . . 1-2
        QLogic HBA Support for Cluster FE200 Configurations . . . . . . 1-2
        HBA Connectors . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
        Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
    Windows Server 2003, Enterprise Edition Cluster Configurations . . . 1-3
        QLogic HBA Support for Cluster FE200 Configurations . . . . . . 1-4
        HBA Connectors . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
        Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-5
    Installing Peripheral Components in Your Cluster Node PCI Slots . . 1-5
    Attaching Your Cluster Shared Storage Systems to a SAN . . . . . . . 1-9
        SAN-Attached Cluster Configurations . . . . . . . . . . . . . . 1-9
            Rules and Guidelines . . . . . . . . . . . . . . . . . . . . 1-9
        Cluster Consolidation Configurations . . . . . . . . . . . . . . 1-11
            Rules and Guidelines . . . . . . . . . . . . . . . . . . . . 1-11
    Incorrect TimeOutValue Setting in the Registry . . . . . . . . . . . 1-14
Index
Tables
Table 1-1. Supported Cluster Configurations . . . . . . . . . . . . . . 1-1
Table 1-2. Supported HBAs for Cluster FE200 Configurations Running Windows 2000 Advanced Server . . . 1-2
Table 1-3. Supported HBAs for Cluster FE200 Configurations Running Windows Server 2003, Enterprise Edition . . . 1-4
Table 1-4. PCI Slot Assignments for PowerEdge Cluster Nodes . . . . . . 1-5
Table 1-5. SAN-Attached Clusters Rules and Guidelines . . . . . . . . . 1-9
Table 1-6. Cluster Consolidation Rules and Guidelines . . . . . . . . . 1-12
This document provides information for installing and connecting peripheral hardware, storage, and SAN components to your Dell™ PowerEdge™ Cluster FE200 system. The configuration information in this document is specific to the Microsoft® Windows® 2000 Advanced Server and Windows Server 2003, Enterprise Edition operating systems.
This document covers the following topics:
• Configuration information for installing peripheral hardware components, such as HBAs, network adapters, and PCI adapter cards, in Cluster FE200 configurations
• SAN-attached configuration rules and guidelines
• Cluster consolidation configuration rules and guidelines
• Incorrect TimeOutValue setting in the registry
NOTE: Configurations not listed in this document may not be certified or supported by Dell or Microsoft.
NOTE: In this guide and in other cluster documentation, Microsoft Cluster Service (for Windows 2000 Advanced Server and Windows Server 2003, Enterprise Edition) is also referred to as MSCS.

Supported Cluster Configurations

This section provides information about the cluster configurations supported for your Cluster FE200 solution.
Table 1-1 provides a list of supported configurations for Cluster FE200 solutions running Windows 2000 Advanced Server and Windows Server 2003, Enterprise Edition operating systems.
NOTE: Two-node clusters must use identical systems. For example, a two-node cluster configuration can contain two PowerEdge 6650 systems.
Table 1-1. Supported Cluster Configurations
Supported PowerEdge Systems: 1550, 1650, 2500, 2550, 2600, 2650, 4400, 4600, 6400, 6450, 6600, 6650, and 8450
Supported Storage System: Dell PowerVault™ 660F/224F
Supported Cluster Interconnect HBA (for the Private Network): Any Ethernet network adapter supported by the system.
NOTE: Both cluster nodes must use homogeneous (identical) Ethernet network adapters for the cluster interconnect.
Platform Guide 1-1
Obtaining More Information
See the Dell PowerEdge Cluster FE200 System Installation and Troubleshooting Guide included with your cluster configuration for a detailed list of related documentation.

Windows 2000 Advanced Server Cluster Configurations

This section provides information about the Windows 2000 Advanced Server service pack and supported QLogic HBAs and HBA drivers for your cluster configuration.
NOTE: HBAs installed in clusters must be identical for redundant paths. Cluster
configurations are tested and certified using identical QLogic HBAs installed in all of the cluster nodes. Using dissimilar HBAs in your cluster nodes is not supported.

Windows 2000 Advanced Server Service Pack Support

Microsoft Windows 2000 Service Pack 4 or later is recommended for Cluster FE200 systems.
You can download the latest service pack from the Microsoft website located at www.microsoft.com.

QLogic HBA Support for Cluster FE200 Configurations

Table 1-2 lists the PowerEdge systems and the QLogic HBAs that are supported for Cluster FE200 configurations running Windows 2000 Advanced Server.
See "Installing Peripheral Components in Your Cluster Node PCI Slots" for PCI slot recommendations.
Table 1-2. Supported HBAs for Cluster FE200 Configurations Running Windows 2000 Advanced Server
PowerEdge System QLA-2200 33 MHz QLA-2200 66 MHz
1550 x
1650 x
2500/2550 x
2600 x
2650 x
4400 x x
4600 x
6400/6450 x x
6600/6650 x
8450 x x

HBA Connectors

Both optical and copper HBA connectors are supported in a SAN-attached and SAN appliance-attached configuration. Optical HBA connectors are not supported in a direct-attached configuration.

Guidelines

When configuring your cluster, both cluster nodes must contain identical versions of the following:
Operating systems and service packs
Hardware drivers for the network adapters, HBAs, and any other peripheral hardware components
Management utilities, such as Dell OpenManage™ systems management software
Fibre Channel HBA BIOS
Obtaining More Information
See the Dell PowerEdge Cluster FE200 Systems Installation and Troubleshooting Guide included with your cluster configuration for more information about installing hardware configurations running Windows 2000 Advanced Server.

Windows Server 2003, Enterprise Edition Cluster Configurations

This section provides information about the Windows Server 2003, Enterprise Edition service pack and supported QLogic HBAs and HBA drivers for your cluster configuration.
NOTE: HBAs installed in clusters must be identical for redundant paths. Cluster
configurations are tested and certified using identical QLogic HBAs installed in all of the cluster nodes. Using dissimilar HBAs in your cluster nodes is not supported.

QLogic HBA Support for Cluster FE200 Configurations

Table 1-3 lists the systems and the QLogic HBAs that are supported for PowerEdge Cluster FE200 configurations running Windows Server 2003, Enterprise Edition.
See "Installing Peripheral Components in Your Cluster Node PCI Slots" for PCI slot recommendations.
Table 1-3. Supported HBAs for Cluster FE200 Configurations Running Windows Server 2003, Enterprise Edition
PowerEdge System QLA-2200 33 MHz QLA-2200 66 MHz
1550 x
1650 x
2500/2550 x
2600 x
2650 x
4400 x x
4600 x
6400/6450 x x
6600/6650 x
8450 x x

HBA Connectors

Both optical and copper HBA connectors are supported in a SAN-attached and SAN appliance-attached configuration. Optical HBA connectors are not supported in a direct-attached configuration.

Guidelines

When configuring your cluster, both cluster nodes must contain identical versions of the following:
Operating systems and service packs
Hardware drivers for the network adapters, HBAs, and any other peripheral hardware components
Management utilities, such as Dell OpenManage systems management software
Fibre Channel HBA BIOS
Obtaining More Information
See the Dell PowerEdge Cluster FE200 Systems Installation and Troubleshooting Guide included with your cluster configuration for more information about installing hardware configurations running Windows Server 2003, Enterprise Edition.

Installing Peripheral Components in Your Cluster Node PCI Slots

This section provides configuration information for adding HBAs, a DRAC II or III, and RAID controllers into your cluster node PCI slots.
Table 1-4 provides configuration information for the PowerEdge 1550, 1650, 2500, 2550, 2600, 2650, 4400, 4600, 6400, 6450, 6600, 6650, and 8450 cluster nodes.
CAUTION: Hardware installation should be performed only by trained service
technicians. See the safety instructions in your System Information Guide before working inside the system to avoid a situation that could cause serious injury or death.
Table 1-4. PCI Slot Assignments for PowerEdge Cluster Nodes

PowerEdge 1550
    PCI buses: PCI bus 1: PCI slot 1 is 64-bit, 66 MHz. PCI bus 2: PCI slot 2 is 64-bit, 66 MHz.
    HBA: Install HBAs in any PCI slot.
    DRAC II or III: N/A
    RAID controller: N/A

PowerEdge 1650
    PCI buses: Standard riser board: PCI bus 2: PCI slot 1 is 64-bit, 66 MHz; PCI bus 2: PCI slot 2 is 64-bit, 66 MHz. Optional riser board: PCI bus 0: PCI slot 1 is 32-bit, 33 MHz; PCI bus 2: PCI slot 2 is 64-bit, 66 MHz.
    HBA: Install HBA in any PCI slot.
    DRAC II or III: Install new or existing DRAC III in PCI slot 1 on the optional riser board.
    RAID controller: Install in any available PCI slot.

PowerEdge 2500
    PCI buses: PCI bus 1: PCI slots 6 and 7 are 32-bit, 33 MHz. PCI bus 2: PCI slots 3, 4, and 5 are 64-bit, 33 MHz. PCI bus 3: PCI slots 1 and 2 are 64-bit, 66 MHz.
    HBA: For dual HBA configurations, install the HBAs on separate 64-bit PCI buses to balance the load on the system.
    DRAC II or III: Install new or existing DRAC II in PCI slot 7.
    RAID controller: Install in any available PCI slot.

PowerEdge 2550
    PCI buses: PCI bus 0: PCI slots 1 through 3 are 64-bit, 33 MHz.
    HBA: Install HBAs in any PCI slot.
    DRAC II or III: N/A
    RAID controller: N/A

PowerEdge 2600
    PCI buses: PCI bus 0: PCI slot 1 is 64-bit, 33 MHz. PCI bus 2: PCI slot 7 is 64-bit, 33–133 MHz. PCI bus 3: PCI slot 6 is 64-bit, 33–133 MHz. PCI bus 4: PCI slots 4 and 5 are 64-bit, 33–100 MHz. PCI bus 5: PCI slots 2 and 3 are 64-bit, 33–100 MHz.
    NOTE: If you are installing expansion cards of different operating speeds, install the fastest card in slot 7 and the slowest card in slot 1.
    HBA: For dual HBA configurations, install the HBAs on separate PCI buses to balance the load on the system.
    DRAC II or III: N/A
    RAID controller: An integrated RAID controller is available on the system board.
    NOTE: To activate the integrated RAID controller, you must install a RAID battery and key.

PowerEdge 2650
    PCI buses: PCI/PCI-X bus 1: PCI slot 1 is 64-bit, 33–100 MHz. PCI/PCI-X bus 1: PCI slot 2 is 64-bit, 33–133 MHz. PCI/PCI-X bus 2: PCI slot 3 is 64-bit, 33–133 MHz.
    NOTE: PCI/PCI-X slot 1 must be empty for PCI/PCI-X slot 2 to attain an operating speed of 133 MHz.
    HBA: For dual HBA configurations, install the HBAs on separate PCI buses to balance the load on the system.
    DRAC II or III: N/A
    RAID controller: An integrated RAID controller is available on the system board.
    NOTE: To activate the integrated RAID controller, you must install a RAID battery and key.

PowerEdge 4400
    PCI buses: PCI bus 0: PCI slots 1 and 2 are 64-bit, 33/66 MHz. PCI bus 1: PCI slots 3 through 6 are 64-bit, 33 MHz. PCI bus 2: PCI slot 7 is 32-bit, 33 MHz.
    HBA: For dual HBA configurations, install the HBAs on separate PCI buses (PCI buses 1 and 2) to balance the load on the system.
    DRAC II or III: Install new or existing DRAC II in PCI slot 7.
    RAID controller: N/A

PowerEdge 4600
    PCI buses: PCI bus 0: PCI slot 1 is 32-bit, 33 MHz. PCI/PCI-X bus 1: PCI slots 2 and 3 are 64-bit, 66–100 MHz. PCI/PCI-X bus 2: PCI slots 4 and 5 are 64-bit, 66–100 MHz. PCI/PCI-X bus 3: PCI slots 6 and 7 are 64-bit, 66–100 MHz.
    HBA: For dual HBA configurations, install the HBAs on separate PCI buses to balance the load on the system.
    DRAC II or III: Install new or existing DRAC III in PCI slot 1.
    RAID controller: An integrated RAID controller is available on the system board.
    NOTE: To activate the integrated RAID controller, you must install a RAID battery and key.

PowerEdge 6400/6450
    PCI buses: PCI bus 0: PCI slot 1 is 32-bit, 33 MHz. PCI bus 1: PCI slots 2 through 5 are 64-bit, 33 MHz. PCI bus 2: PCI slots 6 and 7 are 64-bit, 33/66 MHz.
    HBA: For dual HBA configurations, install the HBAs on separate PCI buses (PCI buses 1 and 2) to balance the load on the system.
    DRAC II or III: Install new or existing DRAC II in PCI slot 3.
    RAID controller: N/A

PowerEdge 6600
    PCI buses: PCI bus 0: PCI slot 1 is 32-bit, 33 MHz. PCI/PCI-X bus 1: PCI slots 2 and 3 are 64-bit, 33–100 MHz. PCI/PCI-X bus 2: PCI slots 4 and 5 are 64-bit, 33–100 MHz. PCI/PCI-X bus 3: PCI slots 6 and 7 are 64-bit, 33–100 MHz. PCI/PCI-X bus 4: PCI slots 8 and 9 are 64-bit, 33–100 MHz. PCI/PCI-X bus 5: PCI slots 10 and 11 are 64-bit, 33–100 MHz.
    HBA: For dual HBA configurations, install the HBAs on separate PCI buses to balance the load on the system.
    DRAC II or III: Install new or existing DRAC III in slot 1.
    RAID controller: Install the RAID controller in PCI slot 2 or 3.

PowerEdge 6650
    PCI buses: PCI bus 0: PCI slot 1 is 32-bit, 33 MHz. PCI/PCI-X bus 1: PCI slots 2 and 3 are 64-bit, 33–100 MHz. PCI/PCI-X bus 2: PCI slots 4 and 5 are 64-bit, 33–100 MHz. PCI/PCI-X bus 3: PCI slot 6 is 64-bit, 33–100 MHz. PCI/PCI-X bus 4: PCI slot 7 is 64-bit, 33–100 MHz. PCI/PCI-X bus 5: PCI slot 8 is 64-bit, 33–100 MHz.
    HBA: For dual HBA configurations, install the HBAs on separate PCI buses to balance the load on the system.
    DRAC II or III: Install new or existing DRAC III in slot 1.
    RAID controller: Install the RAID controller in PCI slot 2 or 3.

PowerEdge 8450
    PCI buses: PCI bus 0: PCI slots 1 and 2 are 64-bit, 33 MHz. PCI bus 1: PCI slots 3 through 6 are 64-bit, 33 MHz. PCI bus 2: PCI slots 7 and 8 are 64-bit, 33/66 MHz. PCI bus 3: PCI slots 9 and 10 are 64-bit, 33/66 MHz.
    HBA: For dual HBA configurations, install the HBAs on separate PCI buses (PCI buses 2 and 3) to balance the load on the system.
    DRAC II or III: Install new or existing DRAC II in PCI slot 2.
    RAID controller: Install the RAID controller for the system's internal drives in PCI slot 1.

Attaching Your Cluster Shared Storage Systems to a SAN

This section provides the rules and guidelines for attaching your cluster nodes to the shared storage system(s) using a SAN in a Fibre Channel switch fabric.
The following SAN configurations are supported:
SAN-attached
Cluster consolidation
SAN appliance-attached
NOTE: You can configure a SAN with up to 20 PowerEdge systems and eight storage
systems.
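The SAN population limits above, together with the cluster counts given later in Tables 1-5 and 1-6, amount to simple arithmetic that can be sanity-checked before you order or cable hardware. The sketch below is illustrative only, not a Dell utility; the function name and structure are assumptions, while the limit constants come from this guide:

```python
# Illustrative check of the SAN population limits stated in this guide:
# no more than 20 PowerEdge systems (cluster nodes plus stand-alone
# systems), no more than 10 two-node clusters, and no more than eight
# primary and secondary storage devices per SAN.

MAX_SYSTEMS = 20          # total PowerEdge systems on one SAN
MAX_CLUSTERS = 10         # two-node cluster pairs on one SAN
MAX_STORAGE_DEVICES = 8   # primary plus secondary storage devices

def san_within_limits(clusters, standalone_systems, storage_devices):
    """Return True if the proposed SAN stays inside the documented limits.

    clusters           -- number of two-node cluster pairs
    standalone_systems -- number of nonclustered PowerEdge systems
    storage_devices    -- primary plus secondary storage devices
    """
    total_systems = 2 * clusters + standalone_systems
    return (clusters <= MAX_CLUSTERS
            and total_systems <= MAX_SYSTEMS
            and storage_devices <= MAX_STORAGE_DEVICES)

# Example from this guide: five clusters (10 systems) plus 10
# stand-alone systems is exactly 20 systems, which is allowed.
print(san_within_limits(5, 10, 8))   # True
print(san_within_limits(11, 0, 8))   # False: more than 10 clusters
```

An eleventh cluster or a twenty-first system fails the check, matching the combination rule spelled out in the cluster consolidation section.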

SAN-Attached Cluster Configurations

In a SAN-attached cluster configuration, both cluster nodes are attached to a single storage system or to multiple storage systems through a PowerVault SAN using a redundant Fibre Channel switch fabric.
Rules and Guidelines
The rules and requirements described in Table 1-5 apply to SAN-attached clusters.
See the Dell PowerVault Fibre Channel Update Version 5.3 CD for the specific version levels of your SAN components.
Table 1-5. SAN-Attached Clusters Rules and Guidelines

Number of supported systems: Up to 10 two-node clusters attached to a SAN.
NOTE: Combinations of stand-alone systems and cluster pairs must not exceed 20 PowerEdge systems.

Cluster pair support: All homogeneous and heterogeneous cluster configurations supported in direct-attach configurations are supported in SAN-attached configurations. See "Windows 2000 Advanced Server Cluster Configurations" or "Windows Server 2003, Enterprise Edition Cluster Configurations" for more information about supported cluster pairs.
NOTE: Windows Server 2003, Enterprise Edition supports up to eight cluster nodes. However, Cluster FE200 configurations can only support up to two nodes.

Primary storage: Each Windows 2000 and Windows Server 2003, Enterprise Edition cluster can support up to 22 unique drive letters for shared logical drives. Windows Server 2003 can support additional physical drives through mount points. Up to a total of eight primary and secondary storage devices are supported.

Secondary storage: Supports up to four storage devices. These storage devices include:
• PowerVault 136T tape library.
• PowerVault 128T tape library.
• PowerVault 35F bridge. A PowerVault 35F bridge can be connected to up to four PowerVault 120T tape autoloaders or two PowerVault 130T DLT tape libraries.
Any system attached to the SAN can share these devices.
NOTE: Up to eight primary and secondary storage devices can be connected to a SAN.

Dell OpenManage Storage Consolidation (StorageC): Not required unless cluster nodes are sharing storage systems with other PowerEdge systems in the SAN, including other cluster system nodes.

Fibre Channel switch configuration: Redundant switch fabrics are required.

Fibre Channel switch zoning: Required whenever a cluster shares a SAN with other cluster(s) or stand-alone systems.

Fibre Channel switches supported: PowerVault 51F and 56F.

Fibre Channel HBAs supported: QLogic 2200/33 MHz. QLogic 2200/66 MHz.
NOTE: HBAs within a single cluster must be the same.
NOTE: Supports both optical and copper HBAs.

Operating system: Each cluster attached to the SAN can run either Windows 2000 Advanced Server or Windows Server 2003, Enterprise Edition.

Service pack: Windows 2000 Advanced Server configurations require Service Pack 4 or later. Windows Server 2003 configurations require hotfix KB818877 (or Service Pack 1 if available).

Additional software application programs: QLogic QLDirect. Dell OpenManage Array Manager. QLogic Management Suite for Java (QMSJ).
Obtaining More Information
See the Cluster FE200 Systems Installation and Troubleshooting Guide included with your cluster configuration for more information about SAN-attached clusters.
See the Dell PowerVault Systems Storage Area Network (SAN) Administrator's Guide included with your cluster configuration for information about installing the QLogic driver, QLDirect, and QMSJ in SAN-attached cluster configurations and for information about general SAN rules and guidelines.
See the Dell PowerVault SAN Revision Compatibility Guide included with your cluster configuration and the Dell Support website at support.dell.com for the latest firmware and software revision requirements and the SAN compatibility rules.

Cluster Consolidation Configurations

In a cluster consolidation configuration, multiple clusters and stand-alone PowerEdge systems are attached to a single storage system through a PowerVault SAN using a redundant Fibre Channel switch fabric and switch zoning.
Rules and Guidelines
Table 1-6 describes the requirements for cluster consolidation configurations.
See the Dell PowerVault Fibre Channel Update Version 5.3 CD for the specific version levels of your SAN components.

Table 1-6. Cluster Consolidation Rules and Guidelines

Number of supported PowerEdge systems: Up to 10 two-node clusters attached to a SAN. Combinations of stand-alone systems and cluster pairs must not exceed 20 systems.

Cluster pair support: Any supported homogeneous system pair with the following HBAs:
• QLogic 2200/33 MHz.
• QLogic 2200/66 MHz.

Primary storage: Each Windows Server 2003, Enterprise Edition cluster can support up to 22 unique drive letters for shared logical drives. Windows Server 2003 can support additional physical drives through mount points. Up to a total of eight primary and secondary storage devices are supported.

Secondary storage: Supports up to four storage devices. These storage devices include:
• PowerVault 136T tape library.
• PowerVault 128T tape library.
• PowerVault 35F bridge. A PowerVault 35F bridge can be connected to up to four PowerVault 120T tape autoloaders or two PowerVault 130T DLT tape libraries.
Any system attached to the SAN can share these devices.
NOTE: Up to eight primary and secondary storage devices can be connected to a SAN.

Dell OpenManage Storage Consolidation (StorageC): Required.

Fibre Channel switch configuration: Redundant switch fabrics are required.

Fibre Channel switch zoning: Each cluster must have its own zone, plus one zone for the stand-alone systems.
The zone for each cluster should include the following hardware components:
• One cluster with two nodes.
• One storage system.
• One or more Fibre Channel-to-SCSI bridges (if applicable).
The zone for the stand-alone systems should include the following hardware components:
• All nonclustered PowerEdge systems.
• One storage system.
• One or more Fibre Channel-to-SCSI bridges (if applicable).

Fibre Channel switches supported: PowerVault 51F and 56F.

Fibre Channel HBAs supported: QLogic 2200/33 MHz HBA. QLogic 2200/66 MHz HBA.

Operating system: All clusters and systems attached to a PowerVault storage system must be running either Windows 2000 Advanced Server or Windows Server 2003, Enterprise Edition.
NOTE: Both systems in a cluster must be running the same operating system. However, each cluster can run either Windows 2000 Advanced Server or Windows Server 2003, Enterprise Edition.

Service pack: Windows 2000 Advanced Server configurations require Service Pack 4 or later. Windows Server 2003 configurations require hotfix KB818877 (or Service Pack 1 if available). See "Incorrect TimeOutValue Setting in the Registry" for additional information.

Disks: Each cluster or stand-alone system has its own set of assigned disks within the PowerVault Fibre Channel disk array.

SAN support: A cluster consolidation configuration consists of no more than 10 clusters or 20 individual PowerEdge systems in several combinations. For example, you can have a configuration consisting of five clusters (10 systems) and 10 stand-alone systems for a total of 20 systems.

Additional software application programs: Dell OpenManage Array Manager. QLogic QLDirect. QMSJ.

Obtaining More Information
See the Dell PowerEdge Cluster FE200 Systems Installation and Troubleshooting Guide included with your cluster configuration for more information about cluster consolidation configurations.
See the Dell PowerEdge Cluster SAN Revision Compatibility Guide included with your cluster configuration and the Dell Support website at support.dell.com for the latest firmware and software revision requirements.
See the Dell PowerVault Systems Storage Area Network (SAN) Administrator's Guide included with your cluster configuration for information about installing the QLogic driver, QLDirect, QMSJ, and Dell OpenManage Storage Consolidation and for information about general SAN rules and guidelines.

Incorrect TimeOutValue Setting in the Registry

When you run the Cluster Configuration wizard on a cluster solution running Windows Server 2003, the wizard modifies the following registry value:
HKLM\System\CurrentControlSet\Services\Disk\TimeOutValue
The disk TimeOutValue setting is the timeout value set by Windows for storage system I/O operations. The Dell | EMC Fibre Channel storage environment requires 60 seconds for I/O operations. When you run the Cluster Configuration wizard, the wizard sets the TimeOutValue setting to 20 seconds, which may not be sufficient for complex I/O operations. Consequently, storage system I/O operations may continually time out.
Microsoft has confirmed a problem with the wizard and has released Quick Fix Executable (QFE) KB818877 to resolve this issue. See Microsoft Knowledge Base article KB818877 on the Microsoft website at www.microsoft.com for instructions about how to obtain the required QFE file. Download and apply the QFE as soon as possible.
If you have not configured your cluster, apply the QFE (or Service Pack 1 when available) to all of the cluster nodes.
If you have configured your cluster, perform one of the following procedures and then reboot each cluster node, one at a time:
Manually change the registry TimeOutValue setting to 60 on each cluster node.
Download the Cluster Disk Timeout Fix utility from the Dell Support website at support.dell.com and run the utility on your cluster.
When prompted, type the name of your cluster in the Cluster name field and type Dell | EMC in the Storage System Type field. The utility locates the cluster nodes associated with the cluster name and sets the TimeOutValue setting on each node to the correct setting.
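For the manual procedure above, the registry change can also be captured in a .reg file and applied with regedit on each cluster node before rebooting the nodes one at a time. This is a sketch based on the registry path and the 60-second value stated in this section; the value is a REG_DWORD in seconds (0x3c hexadecimal = 60 decimal):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk]
"TimeOutValue"=dword:0000003c
```

As with the other procedures, reboot each cluster node, one at a time, after applying the change.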
Index

C
cluster configurations
    supported, 1-1
    using Windows 2000 Advanced Server, 1-2, 1-3
cluster consolidation configurations
    rules and guidelines, 1-11
connectors, 1-3, 1-4

P
peripheral components
    for PowerEdge 1550, 1-5
    for PowerEdge 1650, 1-6
    for PowerEdge 2500, 1-6
    for PowerEdge 2550, 1-6
    for PowerEdge 2600, 1-6
    for PowerEdge 2650, 1-7
    for PowerEdge 4400, 1-7
    for PowerEdge 4600, 1-7
    for PowerEdge 6400, 1-7
    for PowerEdge 6450, 1-7
    for PowerEdge 6600, 1-8
    for PowerEdge 6650, 1-8
    for PowerEdge 8450, 1-8

Q
QLogic host bus adapters
    connectors, 1-3, 1-4
    installing in PCI slots, 1-5

S
SAN
    attaching your shared storage systems, 1-9
    cluster consolidation configurations, 1-11
    SAN-attached cluster configurations, 1-9
SAN appliance-attached
    host bus adapter connectors, 1-3, 1-4
SAN-attached cluster configurations
    rules and guidelines, 1-9

W
Windows 2000 Advanced Server
    cluster configurations, 1-2, 1-3
    configuring your PowerEdge cluster, 1-3, 1-5
    service pack support, 1-2
Windows Server 2003, Enterprise Edition
    cluster configuration, 1-3
    configuring your PowerEdge cluster, 1-3