
Dell™ PowerEdge™ Cluster
SE500W Systems
Installation and
Troubleshooting Guide
www.dell.com | support.dell.com
Notes, Notices, and Cautions
NOTE: A NOTE indicates important information that helps you make better use of your computer.
NOTICE: A NOTICE indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
CAUTION: A CAUTION indicates a potential for property damage, personal injury, or death.
Abbreviations and Acronyms
For a complete list of abbreviations and acronyms, see "Abbreviations and Acronyms."
__________________
Information in this document is subject to change without notice.
© 2005 Dell Inc. All rights reserved.
Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.
Trademarks used in this text: Dell, the DELL logo, PowerEdge, PowerVault, and Dell OpenManage are trademarks of Dell Inc.; Microsoft, Windows, and Windows NT are registered trademarks of Microsoft Corporation; Windows Server is a trademark of Microsoft Corporation.
Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.
June 2005
Contents

1 Introduction
   Intended Audience . . . 9
   Obtaining Technical Assistance . . . 10
   Overview . . . 10
      Virtual Servers and Resource Groups . . . 10
      Quorum Resource . . . 10
      Shared Storage Systems . . . 11
   PowerEdge™ Cluster SE500W Solution . . . 11
      Operating Systems . . . 11
      Storage System . . . 12
      Hardware and Software Technologies . . . 13
      Cluster Components . . . 13
   System Requirements . . . 14
      Cluster Nodes . . . 15
      Cluster storage . . . 15
   Other Documents You May Need . . . 16

2 Cabling Your Cluster Hardware
   Cabling for the Cluster SE500W Solution . . . 17
      Cabling One PowerVault™ 22xS Shared Storage System to a Cluster SE500W . . . 18
      Cabling Two PowerVault 22xS Storage Systems to a Cluster SE500W . . . 20
      Cabling Three or Four PowerVault 22xS Storage Systems to a Cluster SE500W . . . 21
   Cabling Your Public and Private Networks . . . 21
      Cabling Your Public Network . . . 22
      Cabling Your Private Network . . . 23
      NIC Teaming . . . 23
   Cabling the Mouse, Keyboard, and Monitor . . . 24
   Power Cabling for the PowerEdge Cluster SE500W Solution . . . 24

3 Preparing Your Systems for Clustering
   Before You Begin . . . 29
   Installation Overview . . . 30
   Selecting a Domain Model . . . 31
      Configuring the Cluster Nodes as Domain Controllers . . . 31
   Configuring the Internal Drives in Your Cluster Nodes . . . 32
   Installing and Configuring the Windows Operating System . . . 32
      Configuring Windows Networking . . . 34
      Assigning Static IP Addresses to Your Cluster Resources and Components . . . 34
      Configuring IP Addresses for the Private Network . . . 35
      Verifying Cluster Network Communications . . . 37
      Configuring the Internet Connection Firewall . . . 37
   Installing the PERC RAID Adapters . . . 38
      Updating the Windows Driver for the PERC RAID Adapter . . . 38
   Installing and Configuring the Shared Storage System . . . 39
      Clustering PowerVault Storage Systems . . . 39
      Configuring the PowerVault 22xS Storage System for Cluster Mode . . . 39
      Preparing the PERC RAID Adapter for Clustering . . . 43
      Enabling the Cluster Mode Using the PERC RAID Adapter . . . 44
      Setting the SCSI Host Adapter IDs . . . 44
      Configuring and Managing Virtual Disks . . . 45
      Windows 2000 and Windows Server 2003 Dynamic Disks and Volumes . . . 45
      Naming and Formatting Drives on the Shared Storage System . . . 45
   Installing and Configuring MSCS . . . 48
      Verifying Cluster Readiness . . . 48
      Creating the Quorum Resource . . . 49
      Configuring Cluster Networks Running Windows 2000 . . . 49
      Configuring Cluster Networks Running Windows Server 2003 . . . 50
      Installing and Configuring Microsoft Windows 2000 Cluster Service (MSCS) . . . 50
      Configuring Microsoft Windows Server 2003 Cluster Service (MSCS) . . . 51
      Adding Cluster Nodes Using the Advanced Configuration Option . . . 52
      Verifying Cluster Functionality . . . 53
      Verifying MSCS Operation . . . 53
      Verifying Cluster Resource Availability . . . 53
      Installing Applications in the Cluster Group . . . 53
      Troubleshooting Failed Resources . . . 54

4 Installing Your Cluster Management Software
   Microsoft Cluster Administrator . . . 55
      Launching Cluster Administrator on a Cluster Node . . . 55
      Running Cluster Administrator on a Remote Console . . . 55
      Launching Cluster Administrator on a Remote Console . . . 56
      Installing Cluster Administrator for Windows Clusters on a Remote Console . . . 56

5 Using MSCS
   Cluster Objects . . . 57
   Cluster Networks . . . 57
      Preventing Network Failure . . . 58
      Node-to-Node Communication . . . 58
   Network Interfaces . . . 58
   Cluster Nodes . . . 58
      Forming a New Cluster . . . 59
      Joining an Existing Cluster . . . 59
   Groups . . . 60
   Cluster Resources . . . 60
      Dependent Resources . . . 61
      Setting Resource Properties . . . 61
      Configuring Resource Dependencies . . . 62
      Setting Advanced Resource Properties . . . 62
      Resource Parameters . . . 63
      Quorum Disk (Quorum Resource) . . . 63
      Resource Failure . . . 64
      Creating a New Resource . . . 65
      Deleting a Resource . . . 66
      File Share Resource Type . . . 66
   Configuring Active and Passive Cluster Nodes . . . 67
   Failover and Failback . . . 67
      Failover . . . 67
      Failback . . . 69

6 Upgrading Your System to a Cluster Configuration
   Certification Requirements . . . 71
   Before You Begin . . . 71
   Configurations Using Non-Dell Products . . . 72
   Completing the Upgrade . . . 72
      Upgrading Your Operating System . . . 72
      Performing a Rolling Upgrade . . . 73

7 Maintaining Your Cluster
   Adding a Network Adapter to a Cluster Node . . . 75
   Changing the IP Address of a Cluster Node on the Same IP Subnet . . . 77
   Uninstalling MSCS From Clusters Running Windows 2000 Advanced Server . . . 77
   Removing Nodes From Clusters Running Windows Server 2003 . . . 78
   Running chkdsk /f on a Quorum Disk . . . 78
   Recovering From a Corrupt Quorum Disk . . . 79
   Replacing a Cluster-Enabled Dell PERC RAID Adapter . . . 80
   Replacing a Cluster Node . . . 80
   Changing the Cluster Service Account Password in Windows Server 2003 . . . 81
   Reformatting a Cluster Disk . . . 82
   Adding New Physical Drives to an Existing Shared Storage System . . . 83
   Rebuilding Operation in Dell OpenManage Utilities . . . 84
   Upgrading the PowerVault 22xS EMM Firmware Using Array Manager . . . 84

A Troubleshooting

B Cluster Data Form
   PowerEdge Cluster SE500W Solution Data Form . . . 93

Abbreviations and Acronyms . . . 97

Index . . . 101

Figures
   Figure 1-1. Maximum Configuration of the PowerEdge Cluster SE500W Solution . . . 14
   Figure 2-1. PowerVault 22xS Back Panel . . . 18
   Figure 2-2. Cabling a Clustered System With One PowerVault 22xS Storage System . . . 19
   Figure 2-3. Cabling Two PowerVault 22xS Storage Systems to a PERC RAID adapter . . . 20
   Figure 2-4. Example of Network Cabling Connection . . . 22
   Figure 2-5. Power Cabling Example With Three Power Supplies in the Systems . . . 25
   Figure 2-6. Power Cabling Example With One Power Supply in the Systems . . . 26
   Figure 2-7. Power Cabling Example With Two Power Supplies in the Systems . . . 27
   Figure 3-1. Back-Panel Module Features and Indicators . . . 40
   Figure 3-2. Bus Configuration Switch Modes . . . 41
   Figure 3-3. Important System Warning . . . 43

Tables
   Table 1-1. Windows Operating System Features . . . 12
   Table 1-2. Cluster Storage Requirements . . . 12
   Table 1-3. Cluster Node Requirements . . . 15
   Table 2-1. Cluster Cabling Components . . . 17
   Table 2-2. Network Connections . . . 21
   Table 2-3. Private Network Hardware Components and Connections . . . 23
   Table 3-1. Applications and Hardware Requiring IP Address Assignments . . . 34
   Table 3-2. Examples of IP Address Assignments . . . 35
   Table 3-3. Split-bus Module Modes and Functions . . . 42
   Table 5-1. Node States and Definitions . . . 59
   Table 5-2. Cluster Resources and Required Dependencies . . . 62
   Table 5-3. Resources and Configurable Parameters . . . 63
   Table A-1. General Cluster Troubleshooting . . . 87

Introduction

This guide provides information for installing a Dell™ PowerEdge™ Cluster SE500W solution in a corporate enterprise.
The information in this guide includes:
• Basic SCSI cluster installation procedures, which include:
  - Preparing server and storage systems for clustering
  - Cabling the cluster configuration
  - Installing the Microsoft® Windows® 2000 Advanced Server or Windows Server™ 2003 operating system in your cluster configuration
NOTE: Reference to Windows Server 2003 in this guide implies reference to both Windows Server 2003 Enterprise and Windows Server 2003 Enterprise x64 Editions, unless explicitly stated.
  - Configuring the cluster peripherals, including PERC 4/DC or PERC 4e/DC cards and network adapters
• Installation procedures for installing a SCSI cluster configuration in your corporate network
• Cluster upgrade and maintenance procedures
• Information about MSCS, the clustering software built into the Windows 2000 Advanced Server and Windows Server 2003 operating systems
• Troubleshooting procedures
• A data form for recording critical cluster configuration information

Intended Audience

This guide was developed for experienced IT professionals who need to install, cable, and configure a PowerEdge Cluster SE500W solution in an enterprise environment and for trained service technicians who perform cluster upgrade and maintenance procedures.
Obtaining More Information
See "Obtaining Technical Assistance" and "Overview" for a general description of PowerEdge clusters and clustering technology.
See "Using MSCS" for an overview of the clustering software built into the Windows 2000 Advanced Server, Windows Server 2003 Enterprise Edition, and Windows Server 2003 x64 Edition operating systems.
Introduction 9

Obtaining Technical Assistance

Dell Enterprise Training and Certification is available; see www.dell.com/training for more information. This service may not be offered in all locations.

Overview

Clustering uses specific hardware and software to join multiple systems together so that they function as a single system and provide an automatic failover solution. If one of the clustered systems (also known as cluster nodes, or nodes) fails, resources running on the failed system are moved (or failed over) to one or more systems in the cluster by the Microsoft® Cluster Service (MSCS) software. MSCS is the failover software component in specific versions of the Windows operating system.
When the failed system is repaired and brought back online, resources automatically transfer back (or fail back) to the repaired system or remain on the failover system, depending on how MSCS is configured. See "Failover and Failback" for more information.
NOTE: Reference to Windows Server 2003 in this guide implies reference to both Windows Server 2003
Enterprise and Windows Server 2003 Enterprise x64 Editions, unless explicitly stated.

Virtual Servers and Resource Groups

In a cluster environment, you do not access a physical server; you access a virtual server, which is managed by MSCS. Each virtual server has its own IP address, name, and hard drive(s) in the shared storage system. MSCS manages the virtual server as a resource group, which contains the cluster resources. Ownership of virtual servers and resource groups is transparent to users. See "Groups" for more information on resource groups.
When MSCS detects a failed node or failed application, MSCS moves the entire resource group to another node and remaps the virtual server to the new network connection. Users of an application in the virtual server experience only a momentary delay in accessing resources while MSCS re-establishes a network connection to the virtual server and restarts the application.
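The failover and failback behavior described in this section can be summarized in a small conceptual sketch. Python is used here purely as an illustration; the class and names below are invented for this example and are not part of MSCS or any Dell software:

```python
# Conceptual model of an MSCS-style virtual server (resource group)
# failing over between two cluster nodes. Illustration only.

class ResourceGroup:
    """A virtual server: an IP address, a network name, and shared
    disk(s) that always move between nodes as one unit."""

    def __init__(self, name, ip, disks, preferred_owner, failback=True):
        self.name = name
        self.ip = ip                      # clients always connect to this IP
        self.disks = disks
        self.preferred_owner = preferred_owner
        self.failback = failback          # mirrors the MSCS failback setting
        self.owner = preferred_owner      # node currently hosting the group

    def fail_over(self, surviving_node):
        """MSCS moves the entire group to the surviving node."""
        self.owner = surviving_node

    def node_restored(self, repaired_node):
        """When the failed node returns, fail back only if configured."""
        if self.failback and repaired_node == self.preferred_owner:
            self.owner = repaired_node


group = ResourceGroup("VS1", "192.168.0.50", ["Disk G:"], preferred_owner="NODE1")
group.fail_over("NODE2")      # NODE1 fails; clients still reach 192.168.0.50
print(group.owner)            # NODE2
group.node_restored("NODE1")  # NODE1 repaired and failback is enabled
print(group.owner)            # NODE1
```

With failback disabled, the group would remain on NODE2 after the repair, which corresponds to the "remain on the failover system" case described above.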

Quorum Resource

A single disk, which is designated as the quorum resource, maintains the configuration data (including all the changes that have been applied to a cluster database) necessary for recovery when a node fails.
The quorum resource can be any resource with the following attributes:
• Enables a single node to gain and defend its physical control of the quorum resource
• Provides physical storage that is accessible by any node in the cluster
• Uses the Microsoft Windows NT® file system (NTFS)
See "Quorum Disk (Quorum Resource)" and the MSCS online documentation for more information.
NOTE: PowerEdge Cluster SE500W solutions do not support the Majority Node Set (MNS) Quorum
resource type.
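The first attribute, a single node gaining and defending physical control of the quorum disk, can be pictured with a conceptual sketch. This only illustrates the arbitration idea; MSCS actually uses SCSI reservation commands, not the hypothetical class below:

```python
# Conceptual sketch of quorum arbitration: exactly one node owns the
# quorum disk at a time, and ownership decides who may form the cluster.

class QuorumDisk:
    def __init__(self):
        self.owner = None

    def reserve(self, node):
        """A node gains control only if no other node holds the disk."""
        if self.owner is None or self.owner == node:
            self.owner = node
            return True
        return False              # defended: the current owner keeps control

    def release(self, node):
        """Only the owning node can give up the reservation."""
        if self.owner == node:
            self.owner = None


quorum = QuorumDisk()
assert quorum.reserve("NODE1")        # NODE1 arbitrates first and wins
assert not quorum.reserve("NODE2")    # NODE2 cannot seize the disk
quorum.release("NODE1")               # NODE1 leaves the cluster cleanly
assert quorum.reserve("NODE2")        # now NODE2 may form the cluster
```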

Shared Storage Systems

Cluster nodes can share access to external storage systems; however, only one of the nodes can own any RAID volume in the external storage system at any time. MSCS controls which node has access to each RAID volume in the shared storage system.
Dell OpenManage™ Array Manager or Dell OpenManage enhanced Storage Manager provides storage management and monitoring for SCSI storage components. See your Array Manager or OMSM documentation for more information.

PowerEdge Cluster SE500W Solution

The PowerEdge Cluster SE500W solution implements two-node clustering technology based on the MSCS software incorporated within the Windows 2000 Advanced Server and Windows Server 2003 operating systems. This cluster solution provides the following benefits to meet the needs of mission-critical network application programs:
• High availability of system services and resources to network clients
• Redundant storage for application program data
• Failure recovery for cluster-aware applications
• Flexible maintenance capabilities, allowing you to repair, maintain, or upgrade a cluster node without taking the entire cluster offline
Each cluster node is configured with software, storage, and network resources that enable it to monitor and interact with the other node to provide mutually redundant operation. If a cluster node fails for any reason, virtual servers and resource groups are failed over to the healthy cluster node. When the failed node is repaired and brought back online, the virtual servers and resource groups are failed back to the repaired node (if desired).
The cluster nodes, therefore, operate as a single resource, rather than a collection of individual systems. Because the cluster nodes interact in this way, each virtual server appears as a single system to the network clients.

Operating Systems

The PowerEdge Cluster SE500W solution supports two-node cluster configurations that vary in size and performance. Table 1-1 provides an overview of the supported Windows operating systems.
See your operating system documentation for a complete list of features.
Table 1-1. Windows Operating System Features

Windows 2000 Advanced Server:
• Supports two-node SCSI clusters
• Supports up to 8 GB of RAM per node
• Cluster configuration and management using Control Panel utilities

Windows Server 2003 Enterprise Edition:
• Supports two-node SCSI clusters
• Supports up to 32 GB of RAM per node
• Cluster configuration and management using Configure Your Server (CYS) and Manage Your Server (MYS) wizards
• Metadirectory Services

Windows Server 2003 Enterprise x64 Edition:
• Supports two-node SCSI clusters
• Supports up to 1 TB of RAM per node
• Cluster configuration and management using Configure Your Server (CYS) and Manage Your Server (MYS) wizards
• Metadirectory Services

NOTE: The amount of RAM supported per node also depends on your cluster platform.
NOTE: Running different operating systems in a cluster is supported only during a rolling upgrade. You cannot upgrade to Windows Server 2003 Enterprise x64 Edition; only a new installation is permitted for Windows Server 2003 Enterprise x64 Edition.
NOTE: MSCS and Network Load Balancing (NLB) features cannot coexist on the same node, but can be used together in a multi-tiered cluster. For more information, see the Dell PowerEdge Clusters website at www.dell.com/ha or the Microsoft website at www.microsoft.com.

Storage System

Table 1-2 provides a list of supported storage components and the configuration requirements for the cluster nodes and stand-alone systems connected to the storage systems.
Table 1-2. Cluster Storage Requirements

Supported storage systems: Up to four Dell PowerVault™ 22xS storage systems in the cluster (for the shared disk resource)
Enclosure management modules: Two enclosure management modules (EMMs) per PowerVault 22xS enclosure
Power and cooling requirements: Redundant configuration: two fans and dual power supplies per PowerVault 22xS enclosure
Hard drives: At least two SCSI hard drives in each PowerVault 22xS enclosure to support hardware-based RAID functionality
Cables: Two 1-, 4-, 8-, or 12-m SCSI cables for each PowerVault 22xS storage system
NOTE: The minimum recommendation for an active/passive cluster configuration is two virtual disks. If you are going to use an active/active configuration, at least three virtual disks are recommended.

Hardware and Software Technologies

The PowerEdge Cluster SE500W solution implements the following hardware and software technologies:
• Clustering technology based on the MSCS software in the Windows 2000 Advanced Server and Windows Server 2003 operating systems
• PERC 4/DC card or PERC 4e/DC card
• Storage management software
Clustering Technology
Clustering is the process of connecting multiple servers together to achieve higher availability and performance. MSCS is a software component in Windows 2000 Advanced Server and an integrated service in Windows Server 2003 that provides failover support for applications and services running on each node.
See "Using MSCS" for more information on the Cluster Service.
NOTE: MSCS and network load balancing (NLB) features cannot coexist on the same cluster node, but
can be used together in a multi-tiered cluster configuration. For more information, see the PowerEdge Clusters website located at www.dell.com/ha or the Microsoft website located at www.microsoft.com.

Cluster Components

• Two PowerEdge systems in a homogeneous pair.
NOTE: Dell or Microsoft can support only the specific configurations described in the Dell PowerEdge Cluster SE500W Platform Guide.
• Up to four PowerVault 22xS storage systems, each with dual EMMs and a split-bus module.
• Supported network adapters for the public and private LANs.
• Support for RAID 1, 5, 1+0, and 5+0 levels and hot spare drives. If you are using two enclosures, Dell recommends configuring RAID 1 or 1+0 across enclosures for additional redundancy.
NOTE: RAID 0 and independent drives are possible but are not recommended for a high-availability system because they do not offer data redundancy if a disk failure occurs.
• PERC 4/DC or PERC 4e/DC adapter(s) for the cluster's shared storage.
NOTE: The PowerEdge Cluster SE500W supports up to two PERC 4/DC or PERC 4e/DC adapters in a single cluster node. Dell does not support use of PERC 4/DC and PERC 4e/DC adapters together in the PowerEdge Cluster SE500W solution.
NOTE: PERC RAID adapter refers to a PERC 4/DC card or a PERC 4e/DC card.
• Ethernet network adapter or integrated network adapter for node-to-node cluster communications.
• Ethernet crossover or straight cable for the node-to-node cluster interconnect.
Figure 1-1 shows a sample configuration of the PowerEdge Cluster SE500W components and their interconnections. See the Dell PowerEdge Cluster SE500W Platform Guide for system-specific configuration information.
Figure 1-1. Maximum Configuration of the PowerEdge Cluster SE500W Solution
[Figure callouts: PowerEdge systems (2), each with two PERC 4/DC or two PERC 4e/DC controllers; PowerVault 22xS storage systems (up to 4); network cable; node-to-node cluster (heartbeat) interconnect; SCSI cables (8)]

System Requirements

PowerEdge Cluster SE500W configurations require the following hardware and software components:
• Cluster nodes
• Storage
• Interconnects (private network)
• Client network connections (public network)
• Operating system and storage management software

Cluster Nodes

Table 1-3 lists the hardware requirements for the cluster nodes.

Table 1-3. Cluster Node Requirements

Cluster nodes: Two supported PowerEdge systems running the Windows 2000 Advanced Server operating system, or two supported PowerEdge systems running the Windows Server 2003 operating system. See the Platform Guide for more information about supported PowerEdge systems.
Processors: At least two processors for each cluster node.
RAM: At least 256 MB of RAM installed on each cluster node for Windows 2000 Advanced Server and Windows Server 2003 Enterprise Edition. At least 512 MB of RAM installed on each cluster node for Windows Server 2003 Enterprise x64 Edition.
On-board hard drive controller: For the internal drives, you can use any RAID controller or SCSI adapter supported by the server platform. Two disk drives are required for mirroring (RAID 1) and at least three disk drives are required for disk striping with parity (RAID 5).
NOTE: Dell strongly recommends that you use hardware-based RAID or software-based disk-fault tolerance for the internal drives. At least 4 GB of disk space is required for Windows Server 2003 Enterprise x64 Edition.
Clustered RAID adapter: Up to two PERC 4/DC or two PERC 4e/DC cards for each cluster node. Dell recommends placing your RAID adapters on separate I/O buses to improve availability and performance.
Network adapters: A minimum of two network interfaces: one for the public network (client LAN connections) and another for the private network (cluster interconnect). The network adapters installed in each cluster node must be identical and supported by the server platform.

Cluster storage

PowerEdge Cluster SE500W configurations support up to four PowerVault 22xS storage systems per cluster.

Other Documents You May Need

CAUTION: The Product Information Guide provides important safety and regulatory information. Warranty information may be included within this document or as a separate document.
• The Dell PowerEdge Cluster SE500W Systems Platform Guide provides information about the systems that support the PowerEdge Cluster SE500W configuration.
• The Rack Installation Guide included with your rack solution describes how to install your system into a rack.
• The Setting Up Your System document provides an overview of initially setting up your system.
• The Users Guide for your PowerEdge or PowerVault system describes system features and technical specifications, SCSI drivers, the System Setup program (if applicable), software support, and the system configuration utility.
• The Installation and Troubleshooting Guide for your PowerEdge or PowerVault system describes how to troubleshoot the system and install or replace system components.
• The PERC 4/DC or PERC 4e/DC documentation includes information on the SCSI RAID controller.
• The Dell OpenManage Array Manager or Dell OMSM documentation provides instructions for using the array management software to configure RAID systems.
• Operating system documentation describes how to install (if necessary), configure, and use the operating system software.
• Microsoft Windows 2000 or Windows Server 2003 Cluster Service documentation.
• The System Administrator's Guide for your PowerEdge or PowerVault system provides system operation and management information.
• Documentation for any components you purchased separately provides information to configure and install these options.
• Updates are sometimes included with the system to describe changes to the system, software, and/or documentation.
NOTE: Always read the updates first because they often supersede information in other documents.
• Release notes or readme files may be included to provide last-minute updates to the system documentation or advanced technical reference material intended for experienced users or technicians.

Cabling Your Cluster Hardware

Dell™ PowerEdge™ Cluster SE500W configurations require cabling for the storage systems, cluster interconnects, client network connections, and power connections.

Cabling for the Cluster SE500W Solution

The cluster systems and components are interconnected to provide four independent functions as listed in Table 2-1, each of which is described in more detail throughout this section.
Table 2-1. Cluster Cabling Components

Shared storage system: Connects the host-based RAID controller(s) to the disk enclosure(s). This interconnect is made from the PERC 4/DC or PERC 4e/DC card in each PowerEdge system to the Dell PowerVault™ 22xS EMMs, using Dell-qualified U320 SCSI cables.
Cluster interconnect (private network): Connects the systems to each other to exchange information and status. This connection can be made by using a supported Ethernet network adapter and cabling that is connected to each cluster node. See the Platform Guide for a list of supported network adapters for your configuration.
Network connection for public traffic (public network): Provides a connection between each cluster node and the client network. This connection can be made using an Ethernet network adapter and cabling that is connected to the public network. See the Platform Guide for a list of supported network adapters for your configuration.
Power connection: Provides a connection between the power source and the power supplies in your system. By using power strips or Power Distribution Units (PDUs) and separate AC circuits, the cluster can fully utilize the redundant power supplies.
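The status exchange over the cluster interconnect can be pictured as a heartbeat with a timeout. The following is a conceptual sketch only; the interval and threshold values are invented for illustration and do not reflect MSCS internals:

```python
# Conceptual heartbeat monitor for the private (cluster interconnect)
# network: a peer that stops sending within the timeout is considered
# failed, which is what triggers failover. Illustration only.

HEARTBEAT_TIMEOUT = 2.4   # hypothetical seconds-without-heartbeat threshold

class HeartbeatMonitor:
    def __init__(self, timeout=HEARTBEAT_TIMEOUT):
        self.timeout = timeout
        self.last_seen = {}   # peer name -> time of last heartbeat

    def heartbeat(self, peer, now):
        self.last_seen[peer] = now

    def failed_peers(self, now):
        return [p for p, t in self.last_seen.items() if now - t > self.timeout]


mon = HeartbeatMonitor()
mon.heartbeat("NODE2", now=0.0)
mon.heartbeat("NODE2", now=1.0)
print(mon.failed_peers(now=2.0))   # [] -> NODE2 is still healthy
print(mon.failed_peers(now=4.0))   # ['NODE2'] -> time to fail over
```

This is also why the private network gets its own dedicated adapter and cable: heartbeat loss caused by congestion on a shared link could be mistaken for a node failure.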

Cabling One PowerVault 22xS Shared Storage System to a Cluster SE500W

NOTE: See "Configuring the PowerVault 22xS Storage System for Cluster Mode" for more information
about configuring the storage systems.
NOTICE: Do not turn on the systems or the storage system(s) until the split-bus module on the back of
the PowerVault system has been set to cluster mode and all cabling is complete.
NOTICE: The asymmetrical, D-shaped SCSI connectors can be damaged if they are forced together
when they are not oriented to each other correctly. To avoid connector damage, match the connector shapes (long side-to-long side and short side-to-short side). Insert the cable connector straight (not at an angle) into the card connector.
1 Connect the SCSI connector A (see Figure 2-1) on the back of the PowerVault 22xS storage system to the 68-pin connector on the first SCSI cable (see Figure 2-2), and then tighten the retaining screws.
NOTE: As viewed from the back of the system, the left EMM is the primary EMM and the right EMM is the secondary EMM.
NOTE: See "Enabling the Cluster Mode Using the PERC RAID Adapter" for more information about cluster-enabling the card.
2 Connect the very high-density cable interconnect (VHDCI) connector of the first SCSI cable to the channel 0 connector on the cluster-enabled PERC RAID adapter in the first PowerEdge system (see Figure 2-2), and then tighten the retaining screws.
Figure 2-1. PowerVault 22xS Back Panel
[Figure callouts: redundant cooling fans (2), power supply, redundant power supply, SCSI connector A, SCSI connector B, primary EMM, secondary EMM, split-bus module]
NOTE: Ensure that you securely tighten the retaining screws on all SCSI connectors to ensure a reliable connection.
3 Connect the VHDCI connector of the second SCSI cable (see Figure 2-2) to the channel 0 connector on the cluster-enabled PERC RAID adapter in the second PowerEdge system, and then tighten the retaining screws.
4 Connect the SCSI connector B (see Figure 2-1) on the back of the PowerVault 22xS storage system to the 68-pin connector on the second SCSI cable (see Figure 2-2), and then tighten the retaining screws.
NOTE: If the PowerVault 22xS storage system is disconnected from the cluster, it must be
reconnected to the same channel on the same PERC RAID adapter for proper operation.
Figure 2-2. Cabling a Clustered System With One PowerVault 22xS Storage System
[Callouts: node 1, node 2, PERC RAID adapter, VHDCI connector, SCSI connections from channel 0 on each cluster-enabled PERC RAID adapter, PowerVault 22xS storage system, shared storage connector A (68-pin connector), shared storage connector B (68-pin connector)]

Cabling Two PowerVault 22xS Storage Systems to a Cluster SE500W

Connecting the cluster to two PowerVault 22xS storage systems is similar to connecting the cluster to a single PowerVault 22xS storage system. Connect channel 0 of the PERC RAID adapter in each node to the back of the first storage system, as described in "Cabling One PowerVault 22xS Shared Storage System to a Cluster SE500W." Repeat the process for channel 1 on the controller in each node using a second PowerVault 22xS storage system. See Figure 2-3.
With dual storage systems connected to a single PERC RAID adapter, mirroring disk drives from one storage system to another is supported through RAID 1 and 1+0. To protect the cluster applications and your data if an entire storage system fails, Dell strongly recommends using RAID 1 or 1+0.
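The capacity cost of this protection is easy to quantify: mirroring each drive in one enclosure to a partner drive in the other halves the raw capacity. A minimal sketch of the arithmetic (drive counts and sizes below are hypothetical examples, not Dell specifications):

```python
# Sketch: usable capacity when mirroring drives across two PowerVault
# enclosures with RAID 1 or RAID 1+0, as recommended above. Drive counts
# and sizes are hypothetical examples, not Dell specifications.

def usable_capacity_gb(drives_per_enclosure: int, drive_size_gb: int) -> int:
    """Mirroring each drive in enclosure A to a partner in enclosure B
    halves the raw capacity, but the cluster can survive the loss of an
    entire storage system."""
    raw_gb = 2 * drives_per_enclosure * drive_size_gb
    return raw_gb // 2

# Example: two enclosures of seven 146-GB drives mirrored pairwise.
capacity = usable_capacity_gb(7, 146)
```

The trade-off is deliberate: half the raw space buys survival of a whole-enclosure failure.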
NOTE: If you have dual cluster-enabled PERC RAID adapters (total of four channels) and only two shared storage systems, you may want to connect one storage system to each controller. If you remove the cable connections, you must reconnect the cables as they were previously connected. To ensure that the cables are reconnected correctly, Dell recommends that you tag or color-code the cables.
Figure 2-3. Cabling Two PowerVault 22xS Storage Systems to a PERC RAID Adapter
[Callouts: PERC RAID adapter, VHDCI connectors (channels 0 and 1), PERC RAID adapter channel 0, PERC RAID adapter channel 1, SCSI connections from channel 1 on each PERC RAID adapter, 68-pin connectors (2)]

Cabling Three or Four PowerVault 22xS Storage Systems to a Cluster SE500W

To connect the cluster to three or four PowerVault 22xS storage systems, repeat the process described in the preceding section for a second controller.
NOTICE: If you have dual storage systems that are attached to a second controller, Dell supports disk
mirroring between channels on the second controller. However, Dell does not support mirroring disks on one cluster-enabled PERC RAID adapter to disks on another cluster-enabled PERC RAID adapter.

Cabling Your Public and Private Networks

The network adapters in the cluster nodes provide at least two network connections for each node. These connections are described in Table 2-2.
Table 2-2. Network Connections
Public network: All connections to the client LAN. At least one public network must be configured for Mixed mode for private network failover.
Private network: A dedicated connection for sharing cluster health and status information between the cluster nodes.
Network adapters connected to the LAN can also provide redundancy at the communications level in case the cluster interconnect fails. See your MSCS documentation for more information on private network redundancy.
Figure 2-4 shows an example of network adapter cabling in which dedicated network adapters in each node are connected to the public network and the remaining network adapters are connected to each other (for the private network).
Figure 2-4. Example of Network Cabling Connection
[Callouts: public network, public network adapter, cluster node 1, private network, private network adapter, cluster node 2]

Cabling Your Public Network

The public network connection (client network) to the cluster nodes is provided by a network adapter that is installed in each node. Any network adapter supported by the system running TCP/IP may be used to connect to the public network segments. Additional network adapters may be installed to support additional separate public network segments or to provide redundancy for the public network.
Installing redundant network adapters provides your cluster with a failover connection to the public network. If the primary network adapter or a switch port fails, your cluster will be able to access the public network through the secondary network adapter until the faulty network adapter or switch port is repaired.
Using Dual-Port Network Adapters for Your Private Network
You can configure your cluster to use the public network as a failover for private network communications. However, if dual-port network adapters are used, do not use two ports simultaneously to support both the public and private networks.

Cabling Your Private Network

The private network connection to the cluster nodes is provided by a second or subsequent network adapter that is installed in each node. This network is used for intracluster communications. Table 2-3 lists the required hardware components and connection method for three possible private network configurations.
Table 2-3. Private Network Hardware Components and Connections
Network switch: Fast Ethernet or Gigabit Ethernet network adapters and switches. Connect standard Ethernet cables from the network adapters in both cluster nodes to a Fast Ethernet or Gigabit Ethernet switch.
Point-to-Point Fast Ethernet: Fast Ethernet network adapters. Connect a crossover Ethernet cable between the Fast Ethernet network adapters in both cluster nodes.
Point-to-Point Gigabit Ethernet: Copper Gigabit Ethernet network adapters. Connect a standard Ethernet cable between the Gigabit Ethernet network adapters in both cluster nodes.
NOTE: On certain Microsoft® Windows® 2000 Advanced Server configurations, using an Ethernet cable in a point-to-point connection can impact node-to-node communications. See Microsoft Knowledge Base articles 239924, 242430, 254651, and 258750 at www.microsoft.com for more information. This issue has been corrected in Windows Server™ 2003.

NIC Teaming

Network Interface Card (NIC) teaming combines two or more NICs to provide load balancing and/or fault tolerance. Your cluster supports NIC teaming, but only in a public network; NIC teaming is not supported in a private network.
You should use the same brand of NICs in a team, and you cannot mix brands of teaming drivers.
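The fault-tolerance half of teaming can be reduced to a very small decision rule: traffic uses the highest-preference NIC whose link is up. A sketch of that behavior (NIC names and link states here are simulated; in a real cluster the teaming driver supplied with the adapters performs this failover, not application code):

```python
# Sketch: simplest-possible failover selection in a NIC team. The team
# order encodes primary/secondary preference; link states are simulated.

def active_nic(team):
    """Return the first NIC in the team whose link is up, or None if
    every link in the team is down."""
    for nic, link_up in team:
        if link_up:
            return nic
    return None

# Healthy team: traffic stays on the primary adapter.
team = [("nic1", True), ("nic2", True)]
```

When the primary port or its switch port fails, the same rule transparently selects the secondary adapter, which is exactly the redundancy described for the public network above.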

Cabling the Mouse, Keyboard, and Monitor

If you are installing a PowerEdge Cluster SE500W configuration in a Dell rack, your cluster requires a switch box to enable the mouse, keyboard, and monitor for your cluster nodes.
See the rack installation documentation included with your rack for instructions on cabling each cluster node's keyboard, video, and mouse connections to the keyboard/video/mouse (KVM) switch box in the rack.

Power Cabling for the PowerEdge Cluster SE500W Solution

Observe the following cautions when connecting the power cables to the PowerEdge Cluster SE500W.
CAUTION: Although each component of the PowerEdge Cluster SE500W meets leakage current safety requirements, the total leakage current may exceed the maximum that is permitted when the components are used together. To meet safety requirements in the Americas (that is, the United States, Canada, and Latin America), you must use a Type B plug and socket connection for the cluster power to enable the appropriate level of ground protection. In Europe, you must use one or two PDUs or two Type B plug-and-socket connections wired and installed by a qualified electrician in accordance with the local wiring regulations.
CAUTION: Do not attempt to cable the PowerEdge Cluster SE500W to electrical power without first
planning the distribution of the cluster’s electrical load across available circuits. For operation in the Americas, the PowerEdge Cluster SE500W requires two AC circuits with a minimum capacity of 20 amperes (A) each to handle the electrical load of the system. Do not allow the electrical load of the system to exceed 16 A on either circuit.
CAUTION: For operation in Europe, the PowerEdge Cluster SE500W requires two circuits rated in
excess of the combined load of the attached systems. Refer to the ratings marked on the back of each cluster component when determining the total system’s electrical load.
See your system and storage system documentation for more information about the specific power requirements for your cluster system’s components.
Dell recommends the following guidelines to protect your cluster system from power-related failures:
For cluster nodes with multiple power supplies, plug each power supply into a separate AC circuit.
Use uninterruptible power supplies (UPS).
For some environments, you may consider having backup generators and power from separate electrical substations.
Figure 2-5, Figure 2-6, and Figure 2-7 illustrate the proper power cabling of Cluster SE500W components consisting of two PowerEdge systems and two PowerVault 22xS storage systems. Each component of the cluster must have power supplied by two or three separate AC circuits, one circuit to each component power supply. Therefore, the primary power supplies of all the PowerEdge Cluster SE500W components are grouped onto one or two circuits and the redundant power supplies are grouped onto a different circuit.
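Before cabling, it is worth tabulating the planned load per circuit against the 16 A limit stated in the caution above. A minimal sketch of that bookkeeping (the component names and amperage figures are hypothetical examples; use the ratings marked on the back of your own equipment):

```python
# Sketch: checking a planned power-supply-to-circuit assignment against
# the 16 A per-circuit limit stated above. Component names and amperages
# are hypothetical examples, not Dell ratings.

CIRCUIT_LIMIT_A = 16.0  # maximum electrical load allowed per 20 A circuit

def circuit_loads(assignment):
    """Sum the load on each circuit from {component: (circuit, amps)} entries."""
    loads = {}
    for _component, (circuit, amps) in assignment.items():
        loads[circuit] = loads.get(circuit, 0.0) + amps
    return loads

def within_limits(assignment):
    """True if no circuit in the plan exceeds the per-circuit limit."""
    return all(load <= CIRCUIT_LIMIT_A for load in circuit_loads(assignment).values())

# Primary supplies grouped on circuit 1, redundant supplies on circuit 2.
plan = {
    "node1-primary": (1, 4.0),
    "node2-primary": (1, 4.0),
    "storage1-primary": (1, 3.0),
    "node1-redundant": (2, 4.0),
    "node2-redundant": (2, 4.0),
    "storage1-redundant": (2, 3.0),
}
```

Grouping primary and redundant supplies on separate circuits, as the figures show, keeps either circuit's failure from taking down both supplies of any component.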
Figure 2-5. Power Cabling Example With Three Power Supplies in the Systems
[Callouts: primary power supplies on two AC power strips (or on two AC PDUs, not shown); redundant power supplies on one AC power strip (or on one AC PDU, not shown)]
CAUTION: The arrangement of the cluster components in this illustration is
intended only to demonstrate the power distribution of the components. Do not stack components as in the configuration shown.
Figure 2-6. Power Cabling Example With One Power Supply in the Systems
[Callouts: primary power supplies on one AC power strip (or on one AC PDU, not shown); redundant power supplies on one AC power strip (or on one AC PDU, not shown)]
CAUTION: The arrangement of the cluster components in this illustration is intended only to demonstrate the power distribution of the components. Do not stack components as in the configuration shown.
Figure 2-7. Power Cabling Example With Two Power Supplies in the Systems
[Callouts: primary power supplies on one AC power strip (or on one AC PDU, not shown); redundant power supplies on one AC power strip (or on one AC PDU, not shown)]
CAUTION: The arrangement of the cluster components in this illustration is intended
only to demonstrate the power distribution of the components. Do not stack components as in the configuration shown.

Preparing Your Systems for Clustering

This section provides information for preparing your server and storage system hardware for clustering in a Dell™ PowerEdge™ Cluster SE500W configuration.

Before You Begin

1 Ensure that your site can handle the power requirements of the cluster equipment.
Contact your sales representative for information about your region's power requirements.
CAUTION: Only trained service technicians are authorized to remove and access any of the
components inside the system. See your Product Information Guide for complete information about safety precautions, working inside the computer, and protecting against electrostatic discharge.
2 Ensure that the following components are installed in each PowerEdge system in the cluster:
Network adapters
PERC RAID adapters
SCSI hard drives
Any additional peripheral components
RAID controllers for internal drives (optional)
3 Ensure that the following components are installed in each Dell PowerVault™ 22xS system in the cluster. See "Installing and Configuring the Shared Storage System."
Two enclosure management modules (EMMs)
A split-bus module
SCSI hard drives
4 Cable the system hardware for clustering.
See "Cabling Your Cluster Hardware" for more information.
5 Configure the storage system(s) as described in your storage system documentation.
6 Configure the PERC RAID adapters as described in your PERC RAID adapter documentation.
7 If you are using hardware-based RAID for the internal SCSI hard drives, configure the hard drives using the controller's BIOS utility, Dell OpenManage™ Array Manager, or Dell OMSM.
Preparing Your Systems for Clustering 29

Installation Overview

This section provides installation overview procedures for configuring your cluster running the Microsoft® Windows® 2000 Advanced Server or Windows Server™ 2003 operating system.
1 Ensure that your cluster meets the requirements as described in "Before You Begin."
2 Select a domain model that is appropriate for your corporate network and operating system.
See "Selecting a Domain Model" for more information.
3 Reserve static IP addresses for your cluster resources and components.
The resources and components include:
Public network
Private network
Cluster virtual servers
You will use these IP addresses when you install the Windows operating system and MSCS.
See "Assigning Static IP Addresses to Your Cluster Resources and Components" for more information.
4 Configure the internal hard drives in your cluster nodes.
See "Configuring the Internal Drives in Your Cluster Nodes" for more information.
5 Install and configure the Windows operating system.
The Windows operating system must be installed on both of the cluster nodes. Each cluster node must have its own licensed copy of the Windows operating system and Certificate of Authenticity attached.
See "Installing and Configuring the Windows Operating System" for more information.
6 Install or update the PERC RAID adapter drivers.
The PERC RAID adapter drivers allow your cluster nodes to communicate with the shared storage systems.
See "Updating the Windows Driver for the PERC RAID Adapter" for more information.
7 Install and configure the storage management software.
See the documentation included with your Array Manager software or available at the Dell Support website (located at support.dell.com) for more information.
8 Configure the hard drives on the shared storage system(s).
See "Configuring and Managing Virtual Disks" for more information.