
Dell™ Failover Clusters With Microsoft® Windows Server® 2003
Software Installation and Troubleshooting Guide

www.dell.com | support.dell.com
NOTE: A NOTE indicates important information that helps you make better use
of your computer.
NOTICE: A NOTICE indicates either potential damage to hardware or loss of
data and tells you how to avoid the problem.
CAUTION: A CAUTION indicates a potential for property damage, personal
injury, or death.
___________________
Information in this document is subject to change without notice.
© 2008 Dell Inc. All rights reserved.

Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.

Trademarks used in this text: Dell, the DELL logo, PowerEdge, PowerVault, and OpenManage are trademarks of Dell Inc.; Active Directory, Microsoft, Windows, Windows Server, and Windows NT are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries.

Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.

April 2008  Rev. A00
Contents

1 Introduction  7
  Virtual Servers and Resource Groups  7
  Quorum Resource  8
  Cluster Solution  8
  Supported Cluster Configurations  8
  Cluster Components and Requirements  9
    Operating System  9
    Cluster Nodes  10
    Cluster Storage  11
  Other Documents You May Need  12

2 Preparing Your Systems for Clustering  13
  Cluster Configuration Overview  13
  Installation Overview  15
  Selecting a Domain Model  17
    Configuring the Nodes as Domain Controllers  17
  Configuring Internal Drives in the Cluster Nodes  17
  Installing and Configuring the Microsoft Windows Operating System  18
  Configuring Windows Networking  20
    Assigning Static IP Addresses to Cluster Resources and Components  20
    Configuring IP Addresses for the Private Network  21
    Verifying Communications Between Nodes  23
    Configuring the Internet Connection Firewall  24
  Installing the Storage Connection Ports and Drivers  24
  Installing and Configuring the Shared Storage System  25
    Assigning Drive Letters and Mount Points  25
    Configuring Hard Drive Letters When Using Multiple Shared Storage Systems  28
    Formatting and Assigning Drive Letters and Volume Labels to the Disks  28
  Configuring Your Failover Cluster  29
    Configuring Microsoft Cluster Service (MSCS) With Windows Server 2003  30
    Verifying Cluster Readiness  32
    Installing Applications in the Cluster Group  32
    Installing the Quorum Resource  32
    Creating a LUN for the Quorum Resource  33
    Configuring Cluster Networks Running Windows Server 2003  33
    Verifying MSCS Operation  34
    Verifying Cluster Functionality  34
    Verifying Cluster Resource Availability  34

3 Installing Your Cluster Management Software  35
  Microsoft Cluster Administrator  35
    Launching Cluster Administrator on a Cluster Node  35
    Running Cluster Administrator on a Remote Console  35
    Launching Cluster Administrator on a Remote Console  36

4 Understanding Your Failover Cluster  37
  Cluster Objects  37
  Cluster Networks  37
    Preventing Network Failure  37
    Node-to-Node Communication  38
  Network Interfaces  38
  Cluster Nodes  38
    Forming a New Cluster  39
    Joining an Existing Cluster  39
  Cluster Resources  39
    Setting Resource Properties  39
    Resource Dependencies  40
    Setting Advanced Resource Properties  41
    Resource Parameters  41
    Quorum Resource  42
    Resource Failure  42
    Resource Dependencies  44
    Creating a New Resource  44
    Deleting a Resource  45
    File Share Resource Type  46
  Configuring Active and Passive Cluster Nodes  46
  Failover Policies  48
    Windows Server 2003 Cluster Configurations  48
    Failover and Failback Capabilities  53

5 Maintaining Your Cluster  55
  Adding a Network Adapter to a Cluster Node  55
  Changing the IP Address of a Cluster Node on the Same IP Subnet  56
  Removing Nodes From Clusters Running Microsoft Windows Server 2003  57
  Running chkdsk /f on a Quorum Resource  57
  Recovering From a Corrupt Quorum Disk  58
  Changing the MSCS Account Password in Windows Server 2003  59
  Reformatting a Cluster Disk  59

6 Upgrading to a Cluster Configuration  61
  Before You Begin  61
  Supported Cluster Configurations  61
  Completing the Upgrade  62

A Troubleshooting  63

Index  73

Introduction

Clustering uses specific hardware and software to join multiple systems together to function as a single system and provide an automatic failover solution. If one of the clustered systems (also known as cluster nodes, or nodes) fails, resources running on the failed system are moved (or failed over) to one or more systems in the cluster by the Microsoft® Cluster Service (MSCS) software. MSCS is the failover software component in specific versions of the Windows® operating system.
When the failed system is repaired and brought back online, resources automatically transfer back (or fail back) to the repaired system or remain on the failover system, depending on how MSCS is configured. For more information, see "Configuring Active and Passive Cluster Nodes" on page 46.
NOTE: References to Microsoft Windows Server® 2003 in this guide imply Windows Server 2003 Enterprise Edition, Windows Server 2003 R2 Enterprise Edition, Windows Server 2003 Enterprise x64 Edition, and Windows Server 2003 R2 Enterprise x64 Edition unless explicitly stated otherwise.

Virtual Servers and Resource Groups

In a cluster environment, users do not access a physical server; they access a virtual server, which is managed by MSCS. Each virtual server has its own IP address, name, and hard drive(s) in the shared storage system. MSCS manages the virtual server as a resource group, which contains the cluster resources. Ownership of virtual servers and resource groups is transparent to users. For more information on resource groups, see "Cluster Resources" on page 39.
When MSCS detects a failed application that cannot restart on the same server node or a failed server node, MSCS moves the failed resource group(s) to one or more server nodes and remaps the virtual server(s) to the new network connection(s). Users of an application in the virtual server experience only a momentary delay in accessing resources while MSCS re-establishes a network connection to the virtual server and restarts the application.

Quorum Resource

A single shared disk, which is designated as the quorum resource, maintains the configuration data (including all the changes that have been applied to a cluster database) necessary for recovery when a node fails.
The quorum resource can be any resource with the following attributes:
• Enables a single node to gain and defend its physical control of the quorum resource
• Provides physical storage that is accessible by any node in the cluster
• Uses the Microsoft Windows NT® file system (NTFS)
For more information, see "Quorum Resource" on page 42 and the MSCS online documentation located on the Microsoft Support website at support.microsoft.com.
NOTE: Dell™ Windows Server Failover clusters do not support the Majority Node Set Quorum resource type.

Cluster Solution

The Windows Server 2003 failover cluster implements up to eight cluster nodes, depending on the storage array in use, and provides the following features:
• A shared storage bus featuring Fibre Channel, Serial Attached SCSI (SAS), or Internet Small Computer System Interface (iSCSI) technology
• High availability of resources to network clients
• Redundant paths to the shared storage
• Failure recovery for applications and services
• Flexible maintenance capabilities, allowing you to repair, maintain, or upgrade a node or storage system without taking the entire cluster offline

Supported Cluster Configurations

For the list of Dell-validated hardware, firmware, and software components for a Windows Server 2003 failover cluster environment, see Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at www.dell.com/ha.

Cluster Components and Requirements

Your cluster requires the following components:
• Operating system
• Cluster nodes (servers)
• Cluster storage

Operating System

Table 1-1 provides an overview of the supported operating systems. See your operating system documentation for a complete list of features.
NOTE: Some of the core services are common to all the operating systems.
Table 1-1. Windows Operating System Features

Windows Server 2003 Enterprise Edition/Windows Server 2003 R2 Enterprise Edition:
• Supports up to eight nodes per cluster
• Supports up to 64 GB of RAM per node
• Cluster configuration and management using Configure Your Server (CYS) and Manage Your Server (MYS) wizards
• Metadirectory Services

Windows Server 2003 Enterprise x64 Edition/Windows Server 2003 R2 Enterprise x64 Edition:
• Supports up to eight nodes per cluster
• Supports up to 1 TB of RAM per node
• Cluster configuration and management using CYS and MYS wizards
• Metadirectory Services

NOTE: The amount of RAM supported per node also depends on your cluster platform.
NOTE: Running different operating systems in a cluster is supported only during a rolling upgrade. You cannot upgrade to Windows Server 2003, Enterprise x64 Edition/Windows Server 2003 R2, Enterprise x64 Edition. Only a new installation is permitted for Windows Server 2003, Enterprise x64 Edition/Windows Server 2003 R2, Enterprise x64 Edition.
NOTE: MSCS and Network Load Balancing (NLB) features cannot coexist on the same node, but can be used together in a multi-tiered cluster. For more information, see the Dell High Availability Clusters website at www.dell.com/ha or the Microsoft website at www.microsoft.com.

Cluster Nodes

Table 1-2 lists the hardware requirements for the cluster nodes.
Table 1-2. Cluster Node Requirements

Cluster nodes: Two to eight Dell PowerEdge™ systems running the Windows Server 2003 operating system.

RAM: At least 256 MB of RAM installed on each cluster node for Windows Server 2003, Enterprise Edition or Windows Server 2003 R2, Enterprise Edition. At least 512 MB of RAM installed on each cluster node for Windows Server 2003, Enterprise x64 Edition or Windows Server 2003 R2, Enterprise x64 Edition.

NICs: At least two NICs: one NIC for the public network and another NIC for the private network.
NOTE: It is recommended that the NICs on each public network are identical, and that the NICs on each private network are identical.

Internal disk controller: One controller connected to at least two internal hard drives for each node. Use any supported RAID controller or disk controller. Two hard drives are required for mirroring (RAID 1) and at least three are required for disk striping with parity (RAID 5).
NOTE: It is strongly recommended that you use hardware-based RAID or software-based disk-fault tolerance for the internal drives.

HBA ports:
• For clusters with Fibre Channel storage, two Fibre Channel HBAs per node, unless the server employs an integrated or supported dual-port Fibre Channel HBA.
• For clusters with SAS storage, one or two SAS 5/E HBAs per node.
NOTE: Where possible, place the HBAs on separate PCI buses to improve availability and performance. For information about supported systems and HBAs, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at www.dell.com/ha.

iSCSI Initiator and NICs for iSCSI Access: For clusters with iSCSI storage, install the Microsoft iSCSI Software Initiator (including the iSCSI port driver and Initiator Service) on each cluster node. Two iSCSI NICs or Gigabit Ethernet NIC ports per node. NICs with a TCP/IP Offload Engine (TOE) or iSCSI offload capability may also be used for iSCSI traffic.
NOTE: Where possible, place the NICs on separate PCI buses to improve availability and performance. For information about supported systems and HBAs, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at www.dell.com/ha.

Cluster Storage

You must attach all the nodes to a common shared storage system for your Dell failover cluster solutions with Windows Server 2003. The type of storage array and the topology in which the array is deployed can influence the design of your cluster. For example, a direct-attached SAS storage array may offer support for two cluster nodes, whereas a SAN-attached Fibre Channel or iSCSI array can support eight cluster nodes.
A shared storage array enables data for clustered applications and services to be stored in a common location that is accessible by each cluster node. Although only one node can access or control a given disk volume at a particular point in time, the shared storage array enables other nodes to gain control of these volumes in the event that a node failure occurs. This also helps facilitate the ability of other cluster resources, which may depend upon the disk volume, to fail over to the remaining nodes.
Additionally, it is recommended that you attach each node to the shared storage array using redundant paths. Providing multiple connections (or paths) between the node and the storage array reduces the number of single points of failure that could otherwise impact the availability of the clustered applications or services.
For details and recommendations related to deploying a Dell Windows Server failover cluster solution with a particular storage array, see the "Cabling Your Cluster Hardware" section in the Dell Failover Cluster Hardware Installation and Troubleshooting Guide for the specific storage array on the Dell Support website at support.dell.com.

Other Documents You May Need

CAUTION: The safety information that is shipped with your system provides
important safety and regulatory information. Warranty information may be included within this document or as a separate document.
NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the
Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.
• The Dell Windows Server Failover Cluster Hardware Installation and Troubleshooting Guide provides information on specific configuration tasks that enable you to deploy the shared storage for your cluster.
• The Dell Cluster Configuration Support Matrices lists the Dell-validated hardware, firmware, and software components for a Windows Server 2003 failover cluster environment.
• The Rack Installation Guide included with your rack solution describes how to install your system into a rack.
• The Getting Started Guide provides an overview to initially set up your system.
• The HBA documentation provides installation instructions for the HBAs.
• Systems management software documentation describes the features, requirements, installation, and basic operation of the software.
• Operating system documentation describes how to install (if necessary), configure, and use the operating system software.
• Documentation for any components you purchased separately provides information to configure and install those options.
• The Dell PowerVault™ tape library documentation provides information for installing, troubleshooting, and upgrading the tape library.
• Any other documentation that came with your server and storage system.
• Updates are sometimes included with the system to describe changes to the system, software, and/or documentation.
NOTE: Always read the updates first because they often supersede information in other documents.
• Release notes or readme files may be included to provide last-minute updates to the system or documentation, or advanced technical reference material intended for experienced users or technicians.

Preparing Your Systems for Clustering

CAUTION: Only trained service technicians are authorized to remove and access
any of the components inside the system. See the safety information shipped with your system for complete information about safety precautions, working inside the computer, and protecting against electrostatic discharge.

Cluster Configuration Overview

NOTE: For more information on step 1, step 2, and step 9, see the "Preparing Your Systems for Clustering" section of the Dell Failover Cluster Hardware Installation and Troubleshooting Guide for the specific storage array on the Dell Support website at support.dell.com. For more information on step 3 to step 7 and step 10 to step 13, see this chapter.
1 Ensure that your site can handle the cluster's power requirements. Contact your sales representative for information about your region's power requirements.
2 Install the servers, the shared storage array(s), and the interconnect switches (for example, in an equipment rack), and ensure that all these components are powered on.
3 Deploy the operating system (including any relevant service pack and hotfixes), network adapter drivers, and storage adapter drivers (including MPIO drivers) on each of the servers that will become cluster nodes. Depending on the deployment method that is used, it may be necessary to provide a network connection to successfully complete this step.
NOTE: You can record the cluster configuration and zoning configuration (if relevant) on the Cluster Data Form and Zoning Configuration Form, respectively, to help in planning and deployment of your cluster. For more information, see "Cluster Data Form" and "Zoning Configuration Form" in the Dell Failover Cluster Hardware Installation and Troubleshooting Guide for the specific storage array on the Dell Support website at support.dell.com.
4 Establish the physical network topology and the TCP/IP settings for network adapters on each server node to provide access to the cluster public and private networks.
5 Configure each server node as a member server in the same Windows Active Directory domain.
NOTE: It may also be possible to have cluster nodes serve as domain controllers. For more information, see "Selecting a Domain Model".
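As a command-line alternative to the GUI domain-join procedure, a node can be joined to the domain with netdom; this sketch assumes the Windows Support Tools (which provide netdom.exe) are installed on the node, and the domain and account names are examples only:

netdom join %COMPUTERNAME% /domain:clusterdomain.local /userd:CLUSTERDOMAIN\admin /passwordd:*

The * causes netdom to prompt for the password. Reboot the node after the command completes so that the domain membership takes effect.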
6 Establish the physical storage topology and any required storage network settings to provide connectivity between the storage array and the servers that will be configured as cluster nodes. Configure the storage system(s) as described in your storage system documentation.
7 Use storage array management tools to create at least one logical unit number (LUN). The LUN is used as a cluster quorum disk for a Windows Server 2003 failover cluster and as a witness disk for a Windows Server 2008 failover cluster. Ensure that this LUN is presented to the servers that will be configured as cluster nodes.
NOTE: For security reasons, it is highly recommended that you configure the LUN on a single node, as mentioned in step 8, when you are setting up the cluster. Later, you can configure the LUN as mentioned in step 9 so that other cluster nodes can access it.
8 Select one of the servers and form a new failover cluster by configuring the cluster name, cluster management IP, and quorum resource.
NOTE: For Windows Server 2008 Failover Clusters, run the Cluster Validation Wizard to ensure that your system is ready to form the cluster.
9 Join the remaining node(s) to the failover cluster.
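On Windows Server 2003, steps 8 and 9 can also be performed from a command prompt with cluster.exe instead of the Cluster Administrator wizards. The following sketch assumes the unattended cluster-creation syntax of cluster.exe; the cluster name, node names, user account, password, and IP address are examples only:

cluster MYCLUSTER /create /node:NODE1 /user:CLUSTERDOMAIN\clusteradmin /password:password /ipaddress:192.168.1.100

cluster MYCLUSTER /add /node:NODE2 /password:password

You can verify the result with cluster MYCLUSTER node /status before continuing.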
10 Configure roles for cluster networks. Take any network interfaces that are used for iSCSI storage (or for other purposes outside of the cluster) out of the control of the cluster.
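With MSCS on Windows Server 2003, network roles can be set from the command line as well as in Cluster Administrator. In the sketch below, the Role property value 1 restricts a network to internal (private) cluster communication, 3 allows all communications (client and cluster), and 0 removes the network from cluster use, which is appropriate for iSCSI interfaces; the network names are examples:

cluster network "Private" /prop Role=1
cluster network "Public" /prop Role=3
cluster network "iSCSI" /prop Role=0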
11 Test the failover capabilities of your new cluster.
NOTE: For Windows Server 2008 Failover Clusters, the Cluster Validation Wizard may also be used.
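One simple failover test, sketched here with cluster.exe, is to move a resource group to another node and confirm that it comes online there; the group and node names are examples:

cluster group "Cluster Group" /moveto:NODE2
cluster group /status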
12 Configure highly-available applications and services on your failover cluster. Depending on your configuration, this may also require providing additional LUNs to the cluster or creating new cluster resource groups. Test the failover capabilities of the new resources.
13 Configure client systems to access the highly-available applications and services that are hosted on your failover cluster.

Installation Overview

This section provides installation overview procedures for configuring a cluster running the Microsoft® Windows Server® 2003 operating system.
NOTE: Storage management software may vary and use different terms than those
in this guide to refer to similar entities. For example, the terms "LUN" and "Virtual Disk" are often used interchangeably to designate an individual RAID volume that is provided to the cluster nodes by the storage array.
1 Ensure that the cluster meets the requirements as described in "Cluster Configuration Overview."
2 Select a domain model that is appropriate for the corporate network and operating system. See "Selecting a Domain Model" on page 17.
3 Reserve static IP addresses for the cluster resources and components, including:
• Public network
• Private network
• Cluster virtual servers
Use these IP addresses when you install the Windows® operating system and MSCS.
4 Configure the internal hard drives. See "Configuring Internal Drives in the Cluster Nodes" on page 17.
5 Install and configure the Windows operating system. The Windows operating system must be installed on all of the nodes. Each node must have a licensed copy of the Windows operating system and a Certificate of Authenticity. See "Installing and Configuring the Microsoft Windows Operating System" on page 18.
6 Install or update the storage connection drivers. For more information on connecting your cluster nodes to a shared storage array, see "Preparing Your Systems for Clustering" in the Dell Failover Cluster Hardware Installation and Troubleshooting Guide that corresponds to your storage array. For more information on the corresponding supported adapters and driver versions, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at www.dell.com/ha.
7 Install and configure the storage management software. See the documentation included with your storage system or available at the Dell Support website at support.dell.com.
8 Configure the hard drives on the shared storage system(s). See "Preparing Your Systems for Clustering" in the Dell Failover Cluster Hardware Installation and Troubleshooting Guide corresponding to your storage array.
9 Configure the MSCS software. See "Configuring Your Failover Cluster" on page 29.
10 Verify cluster functionality. Ensure that:
• The cluster components are communicating properly.
• MSCS is started.
See "Verifying Cluster Functionality" on page 34.
11 Verify cluster resource availability. Use Cluster Administrator to check the running state of each resource group. See "Verifying Cluster Resource Availability."
The following subsections provide detailed information about the steps in the "Installation Overview" that are specific to the Windows Server 2003 operating system.

Selecting a Domain Model

On a cluster running the Microsoft Windows operating system, all nodes must belong to a common domain or directory model. The following configurations are supported:
• All nodes are member servers in an Active Directory® domain.
• All nodes are domain controllers in an Active Directory domain.
• At least one node is a domain controller in an Active Directory domain and the remaining nodes are member servers.

Configuring the Nodes as Domain Controllers

If a node is configured as a domain controller, client system access to its cluster resources can continue even if the node cannot contact other domain controllers. However, domain controller functions can cause additional overhead, such as logon, authentication, and replication traffic.
If a node is not configured as a domain controller and the node cannot contact a domain controller, the node cannot authenticate client system requests.

Configuring Internal Drives in the Cluster Nodes

If your system uses a hardware-based RAID solution and you have added new internal hard drives to your system, or you are setting up the RAID configuration for the first time, you must configure the RAID array using the RAID controller’s BIOS configuration utility before installing the operating system.
For the best balance of fault tolerance and performance, use RAID 1. See the RAID controller documentation for more information on RAID configurations.
NOTE: If you are not using a hardware-based RAID solution, use the Microsoft
Windows Disk Management tool to provide software-based redundancy.

Installing and Configuring the Microsoft Windows Operating System

NOTE: Windows standby mode and hibernation mode are not supported in cluster
configurations. Do not enable either mode.
1 Ensure that the cluster configuration meets the requirements listed in "Cluster Configuration Overview."
2 Cable the hardware.
NOTE: Do not connect the nodes to the shared storage systems yet.
For more information on cabling your cluster hardware and the storage array that you are using, see "Cabling Your Cluster Hardware" in the Dell Failover Cluster Hardware Installation and Troubleshooting Guide for the specific storage array on the Dell Support website at support.dell.com.
3 Install and configure the Windows Server 2003 operating system with the latest service pack on each node. For more information about the latest supported service pack, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at www.dell.com/ha.
4 Ensure that the latest supported version of network adapter drivers is installed on each cluster node. For information on required drivers, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at www.dell.com/ha.
5 Configure the public and private network adapter interconnects in each node, and place the interconnects on separate IP subnetworks using static IP addresses. See "Configuring Windows Networking" on page 20.
6 Shut down both nodes and connect each node to the shared storage. For more information on cabling your cluster hardware and the storage array that you are using, see "Cabling Your Cluster Hardware" in the Dell Failover Cluster Hardware Installation and Troubleshooting Guide for the specific storage array on the Dell Support website at support.dell.com.
7 If required, configure the storage software.
8 Reboot node 1.
9 From node 1, write the disk signature and then partition, format, and assign drive letters and volume labels to the hard drives in the storage system using the Windows Disk Management application. For more information, see "Preparing Your Systems for Clustering" in the Dell Failover Cluster Hardware Installation and Troubleshooting Guide for the specific storage array on the Dell Support website at support.dell.com.
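The same disk preparation can also be scripted from a command prompt, as sketched below with diskpart and format. The disk number, drive letter, and volume label are examples only; run list disk first and verify which disk is the intended shared drive before making changes:

diskpart
DISKPART> select disk 1
DISKPART> create partition primary
DISKPART> assign letter=Q
DISKPART> exit

format Q: /FS:NTFS /V:QUORUM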
10 On node 1, verify disk access and functionality on all shared disks.
11 Shut down node 1.
12 Verify disk access by performing the following steps on the other node:
a Turn on the node.
b Modify the drive letters to match the drive letters on node 1. This procedure allows the Windows operating system to mount the volumes.
c Close and reopen Disk Management.
d Verify that Windows can see the file systems and the volume labels.
13 Turn on node 1.
14 Install and configure the Cluster Service. See "Configuring Microsoft Cluster Service (MSCS) With Windows Server 2003" on page 30.
15 Install and set up the application programs (optional).
16 Enter the cluster configuration information on the Cluster Data Form provided as an appendix in the Dell Failover Cluster Hardware Installation and Troubleshooting Guide for your corresponding storage array (optional).

Configuring Windows Networking

You must configure the public and private networks in each node before you install MSCS. The following subsections introduce you to some procedures necessary for the networking prerequisites.

Assigning Static IP Addresses to Cluster Resources and Components

A static IP address is an Internet address that a network administrator assigns exclusively to a system or a resource. The address assignment remains in effect until it is changed by the network administrator.
The IP address assignments for the cluster’s public LAN segments depend on the environment’s configuration. Configurations running the Windows operating system require static IP addresses assigned to hardware and software applications in the cluster, as listed in Table 2-1.
Table 2-1. Applications and Hardware Requiring IP Address Assignments

Cluster IP address: The cluster IP address is used for cluster management and must correspond to the cluster name. Because each server has at least two network adapters, the minimum number of static IP addresses required for a cluster configuration is two (one for the public network and one for the private network). Additional static IP addresses are required when MSCS is configured with application programs that require IP addresses, such as file sharing.

Cluster-aware applications running on the cluster: These applications include Microsoft SQL Server, Enterprise Edition, Microsoft Exchange Server, and Internet Information Server (IIS). For example, Microsoft SQL Server, Enterprise Edition requires at least one static IP address for the virtual server (Microsoft SQL Server does not use the cluster's IP address). Also, each IIS Virtual Root or IIS Server instance configured for failover needs a unique static IP address.

Cluster node network adapters: For cluster operation, two network adapters are required: one for the public network (LAN/WAN) and another for the private network (sharing heartbeat information between the nodes). For more information on cabling your cluster hardware and the storage array that you are using, see "Cabling Your Cluster Hardware" in the Dell Failover Cluster Hardware Installation and Troubleshooting Guide for the specific storage array on the Dell Support website at support.dell.com.
NOTE: To ensure operation during a DHCP server failure, use static IP addresses.

Configuring IP Addresses for the Private Network

Use the static IP address assignments for the network adapters used for the private network (cluster interconnect).
NOTE: The IP addresses in Table 2-2 are used as examples only.
Table 2-2. Examples of IP Address Assignments

Usage                                     Cluster Node 1          Cluster Node 2
Public network static IP address          192.168.1.101           192.168.1.102
(for client and domain controller
communications)
Public network subnet mask                255.255.255.0           255.255.255.0
Default gateway                           192.168.1.1             192.168.1.1
WINS servers                              Primary 192.168.1.11    Primary 192.168.1.11
                                          Secondary 192.168.1.12  Secondary 192.168.1.12
DNS servers                               Primary 192.168.1.21    Primary 192.168.1.21
                                          Secondary 192.168.1.22  Secondary 192.168.1.22
Private network static IP address         10.0.0.1                10.0.0.2
cluster interconnect (for node-to-node
communications)
Private network subnet mask               255.255.255.0           255.255.255.0

NOTE: Do not configure Default Gateway, NetBIOS, WINS, and DNS on the private network. If you are running Windows Server 2003, disable NetBIOS on the private network.

If multiple cluster interconnect network adapters are connected to a network switch, ensure that all of the private network's network adapters have a unique address. You can continue the IP address scheme in Table 2-2 with 10.0.0.3, 10.0.0.4, and so on for the private network's network adapters or network adapter teams of the other clusters connected to the same switch.
You can improve fault tolerance by using network adapters that support adapter teaming or by having multiple LAN segments. To avoid communication problems, do not use dual-port network adapters for the cluster interconnect.
NOTE: NIC teaming is supported only on a public network, not on a private network.
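As a sketch of the private network settings using the example addresses from Table 2-2, the static IP address can also be assigned with netsh; the connection name "Private" is an example and must match the adapter name shown in Network Connections:

netsh interface ip set address name="Private" source=static addr=10.0.0.1 mask=255.255.255.0 gateway=none

Run the equivalent command with addr=10.0.0.2 on the second node, and leave the gateway, WINS, and DNS settings unconfigured on this interface.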
Creating Separate Subnets for the Public and Private Networks
The public and private network’s network adapters installed in the same cluster node must reside on separate IP subnetworks. Therefore, the private network used to exchange heartbeat information between the nodes must have a separate IP subnet or a different network ID than the public network, which is used for client connections.
Setting the Network Interface Binding Order for Clusters Running Windows Server 2003

1 Click the Start button, select Control Panel, and double-click Network Connections.
2 Click the Advanced menu, and then click Advanced Settings.
The Advanced Settings window appears.
3 In the Adapters and Bindings tab, ensure that the Public connection is at the top of the list and is followed by the Private connection.
To change the connection order:
a Click Public or Private.
b Click the up-arrow or down-arrow to move the connection to the top or bottom of the Connections box.
c Click OK.
d Close the Network Connections window.

Dual-Port Network Adapters and Adapter Teams in the Private Network

Dual-port network adapters and network adapter teams are not supported in the private network. They are supported only in the public network.

Verifying Communications Between Nodes

1 Open a command prompt on each cluster node.
2 At the prompt, type:
ipconfig /all
3 Press <Enter>.
All known IP addresses for each local server appear on the screen.
4 Issue the ping command from each remote system.
Ensure that each local server responds to the ping command. If the IP assignments are not set up correctly, the nodes may not be able to communicate with the domain. For more information, see "Troubleshooting" on page 63.
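For example, with the sample addresses in Table 2-2, you can verify both networks from cluster node 1 (substitute your actual addresses):

ping 192.168.1.102
ping 10.0.0.2

Repeat the pings in the opposite direction from cluster node 2, using 192.168.1.101 and 10.0.0.1.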