
Dell™ PowerEdge™
Cluster FE400 Systems
Platform Guide
www.dell.com | support.dell.com
Notes, Notices, and Cautions
NOTE: A NOTE indicates important information that helps you make better use of your computer.
NOTICE: A NOTICE indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
CAUTION: A CAUTION indicates a potential for property damage, personal injury, or death.
Information in this document is subject to change without notice. © 2004 Dell Inc. All rights reserved.
Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.
Trademarks used in this text: Dell, the DELL logo, Dell OpenManage, PowerEdge, and PowerVault are trademarks of Dell Inc.; Microsoft and Windows are registered trademarks of Microsoft Corporation; EMC, Navisphere, and PowerPath are registered trademarks of EMC Corporation; Access Logix, MirrorView, and SnapView are trademarks of EMC Corporation.
Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.
September 2004 P/N 6W455 Rev. A05
This document provides information for installing and connecting peripheral hardware, storage, and storage area network (SAN) components for your Dell™ PowerEdge™ Cluster FE400 solution. The configuration information in this document is specific to the Microsoft® Windows® 2000 Advanced Server and Windows Server 2003, Enterprise Edition operating systems.
This document covers the following topics:
• Configuration information for installing peripheral hardware components, such as HBAs, NICs, and PCI adapter cards, into Cluster FE400 configurations
• Configuration rules and guidelines for direct-attached or SAN-attached configurations
• Best practices
NOTE: Configurations not listed in this document may not be certified or supported by Dell or Microsoft.
NOTE: In this guide and in other cluster documentation, the Microsoft Cluster Service (for Windows 2000
Advanced Server or Windows Server 2003, Enterprise Edition) is also referred to as MSCS.

Supported Cluster Configurations

This section provides information about the supported cluster configurations for your PowerEdge Cluster FE400 solution.
The Cluster FE400 solution supports the following configurations:
• Two-node clusters running the Microsoft® Windows® 2000 Advanced Server operating system.
• Clusters with up to eight nodes running the Windows Server 2003, Enterprise Edition operating system.
Table 1-1 provides a list of supported cluster configurations for the Cluster FE400 systems running Windows 2000 Advanced Server or Windows Server 2003, Enterprise Edition.
NOTE: Each cluster node must be of the same system model and have two or more processors.
Table 1-1. Supported Cluster Configurations
PowerEdge Cluster: FE400
Supported PowerEdge Systems: 1550, 1650, 1750, 1800, 1850, 2500, 2550, 2600, 2650, 2800, 2850, 4400, 4600, 6400, 6450, 6600, 6650, and 8450
Supported Storage Systems: Dell | EMC CX600, Dell | EMC CX400, Dell | EMC CX200
Supported Cluster Interconnect (for the Private Network): Any NIC supported by the system.
NOTE: All nodes in the same cluster must use homogeneous (identical) NICs for the cluster interconnect.
Obtaining More Information
See the Dell PowerEdge Cluster FE400 Installation and Troubleshooting Guide for a detailed list of related documentation.

High-Availability Cluster Configurations

This section provides information about the supported operating systems, HBAs, and HBA drivers for your cluster configuration.
NOTICE: All cluster nodes in a Cluster FE400 solution must run the same operating system.
Mixing Windows 2000 Advanced Server and Windows Server 2003, Enterprise Edition in the same cluster is not supported except during a rolling upgrade.
NOTICE: HBAs installed in clusters using redundant paths must be identical. Cluster configurations are
tested and certified using identical HBAs installed in all of the cluster nodes. Using dissimilar HBAs in your cluster nodes is not supported.

Service Pack Support

Windows 2000 Advanced Server
Microsoft Windows 2000 Service Pack 4 or later is required for Cluster FE400 systems that use Windows 2000 Advanced Server. You can download the latest service pack from the Microsoft Support website at support.microsoft.com.
Windows Server 2003, Enterprise Edition
At the time this document was printed, a service pack was not available for Windows Server 2003, Enterprise Edition. However, hotfix KB818877 and the most recent hotfix for storport.sys are required.
See Knowledge Base article KB818877 on the Microsoft Support website at support.microsoft.com for more information. At the time this document was printed, KB838894 was the most recent hotfix for storport.sys. You can also go to the Dell Support website at support.dell.com for the most recent information on this issue.
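The presence of a required hotfix can be confirmed from the command line on each node before the cluster is configured. The following is a minimal, illustrative sketch that is not part of this guide; it assumes Python is available on the node and simply parses the output of the built-in wmic utility. The script name and structure are assumptions for illustration only.

import subprocess

# Illustrative sketch only: verify that a required hotfix such as KB818877
# appears among the installed Quick Fix Engineering (QFE) entries reported
# by the built-in "wmic qfe" command on a Windows node.
REQUIRED_HOTFIX = "KB818877"

def hotfix_installed(kb_id):
    """Return True if the given KB identifier is listed in the installed QFE entries."""
    result = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    )
    installed = {line.strip().lower() for line in result.stdout.splitlines()}
    return kb_id.lower() in installed

if __name__ == "__main__":
    if hotfix_installed(REQUIRED_HOTFIX):
        print(REQUIRED_HOTFIX + " is installed.")
    else:
        print(REQUIRED_HOTFIX + " is missing; install it before configuring the cluster.")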

HBA Support for PowerEdge Cluster FE400 Configurations

Table 1-2 lists the systems and the HBAs that are supported for Cluster FE400 configurations running Windows 2000 Advanced Server or Windows Server 2003, Enterprise Edition.
See "Installing Peripheral Components in Your Cluster Node PCI Slots" for PCI slot recommendations.
Table 1-2. Supported HBAs for Cluster FE400 Configurations
PowerEdge 1550: Emulex LP982 or LP9802 (PCI-X), QLogic QLA2340 (PCI-X)
PowerEdge 1650: Emulex LP982 or LP9802 (PCI-X), QLogic QLA2340 (PCI-X)
PowerEdge 1750: Emulex LP982 or LP9802 (PCI-X), QLogic QLA2340 (PCI-X)
PowerEdge 1800: Emulex LP982 or LP9802 (PCI-X), QLogic QLA2340 (PCI-X), Emulex LP1050-EX (PCI Express [PCIe])
PowerEdge 1850: Emulex LP982 or LP9802 (PCI-X)*, QLogic QLA2340 (PCI-X)*, Emulex LP1050-EX (PCI Express [PCIe])**
PowerEdge 2500: Emulex LP982 or LP9802 (PCI-X), QLogic QLA2340 (PCI-X)
PowerEdge 2550: QLogic QLA2340 (PCI-X)
PowerEdge 2600/2650: Emulex LP982 or LP9802 (PCI-X), QLogic QLA2340 (PCI-X)
PowerEdge 2800: Emulex LP982 or LP9802 (PCI-X), QLogic QLA2340 (PCI-X), Emulex LP1050-EX (PCI Express [PCIe])
PowerEdge 2850: Emulex LP982 or LP9802 (PCI-X)*, QLogic QLA2340 (PCI-X)*, Emulex LP1050-EX (PCI Express [PCIe])**
PowerEdge 4400: Emulex LP982 or LP9802 (PCI-X), QLogic QLA2340 (PCI-X)
PowerEdge 4600: Emulex LP982 or LP9802 (PCI-X), QLogic QLA2340 (PCI-X)
PowerEdge 6400/6450: Emulex LP982 or LP9802 (PCI-X), QLogic QLA2340 (PCI-X)
PowerEdge 6600/6650: Emulex LP982 or LP9802 (PCI-X), QLogic QLA2340 (PCI-X)
PowerEdge 8450: Emulex LP982 or LP9802 (PCI-X), QLogic QLA2340 (PCI-X)
* The PowerEdge system must have a PCI-X riser installed in order to use this HBA.
** The PowerEdge system must have a PCIe riser installed in order to use this HBA.

Fibre Channel Switches

• Dual (redundant) fabric configurations are required.
• A maximum of 16 switches may be used in a SAN.
• A minimum of two and a maximum of eight Inter-Switch Links (ISLs) may exist between any two directly communicating switches. A single ISL is permitted only when connecting to a remote switch in an EMC® MirrorView™ configuration.
• A maximum of three hops (the number of ISLs each data frame must traverse) may exist between a host and a storage system.
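To make these fabric limits concrete, the following is a minimal sketch, not part of this guide, that checks a hypothetical switch topology against the rules listed above: at most 16 switches per SAN, two to eight ISLs between directly communicating switches, and at most three hops between a host and a storage system. The function, switch names, and topology are illustrative assumptions, and the MirrorView single-ISL exception is not modeled.

from collections import deque

# Illustrative sketch only: validate a hypothetical SAN fabric against the
# switch rules listed above.
MAX_SWITCHES = 16            # maximum switches per SAN
MIN_ISLS, MAX_ISLS = 2, 8    # ISLs between two directly communicating switches
MAX_HOPS = 3                 # maximum ISLs a frame may traverse host-to-storage

def validate_fabric(switches, isl_counts, host_switch, storage_switch):
    """switches: set of switch names; isl_counts: {(switch_a, switch_b): ISL count}."""
    if len(switches) > MAX_SWITCHES:
        return False, "more than 16 switches in the SAN"
    adjacency = {s: [] for s in switches}
    for (a, b), isls in isl_counts.items():
        if not MIN_ISLS <= isls <= MAX_ISLS:
            return False, "%s-%s has %d ISLs (2 to 8 required)" % (a, b, isls)
        adjacency[a].append(b)
        adjacency[b].append(a)
    # Breadth-first search: count the ISLs traversed from the host's switch
    # to the storage system's switch.
    hops = {host_switch: 0}
    queue = deque([host_switch])
    while queue:
        current = queue.popleft()
        for neighbor in adjacency[current]:
            if neighbor not in hops:
                hops[neighbor] = hops[current] + 1
                queue.append(neighbor)
    if storage_switch not in hops or hops[storage_switch] > MAX_HOPS:
        return False, "more than 3 hops (or no path) between host and storage"
    return True, "fabric meets the switch rules listed above"

# Hypothetical dual-switch fabric: host attached to sw1, storage to sw2,
# two ISLs between them (one hop).
print(validate_fabric({"sw1", "sw2"}, {("sw1", "sw2"): 2}, "sw1", "sw2"))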

Rules and Guidelines

When configuring your cluster, all cluster nodes must contain identical versions of the following: