Reproduction of these materials in any manner whatsoever without the written permission of Dell
Inc. is strictly forbidden.
Trademarks used in this text: Dell, the DELL logo, PowerEdge, PowerVault, and OpenManage are
trademarks of Dell Inc.; Active Directory, Microsoft, Windows, Windows Server, and Windows NT
are either trademarks or registered trademarks of Microsoft Corporation in the United States and/
or other countries. EMC and Access Logix are trademarks of EMC
Corporation.
Other trademarks and trade names may be used in this document to refer to either the entities
claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in
trademarks and trade names other than its own.
A Dell™ Failover Cluster is a group of systems working together to run a
common set of applications and present a single logical system to client
applications. The systems (or nodes) in the cluster are physically connected
by either a local area network (LAN) or a wide area network (WAN) and are
configured with the cluster software. If a system or the network connections
in the cluster fail, the services on the active node fail over to the passive node
in the cluster.
NOTE: In this document, Microsoft® Windows Server® 2008 refers to either
Microsoft Windows Server 2008 or Microsoft Windows Server 2008 R2. For the list
of Dell-validated operating systems for a Failover Cluster, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering
website at www.dell.com/ha.
Failover Clusters configured with Microsoft Windows Server 2008 operating
systems provide high availability and scalability for mission-critical
applications such as databases, messaging systems, file and print services, and
virtualized workloads. If a node in a cluster becomes unavailable (as a result of
failure or having been taken down for maintenance), another node in the
cluster provides the same service. Users accessing the service continue their
work and are unaware of any service disruption.
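The active/passive behavior described above can be sketched in a few lines. This is an illustration only; the class and function names are invented for the example, and the real arbitration is performed by the Windows Server 2008 Cluster Service, not by application code.

```python
# Minimal sketch of active/passive failover. Names and structure here are
# invented for illustration; the real logic lives in the Cluster Service.

class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True

def select_active(nodes, current):
    """Return the node that should host the service."""
    if current is not None and current.healthy:
        return current                     # no failover needed
    for node in nodes:                     # fail over to first healthy node
        if node.healthy:
            return node
    return None                            # total cluster outage

nodes = [Node("node1"), Node("node2")]
active = select_active(nodes, nodes[0])
assert active.name == "node1"

nodes[0].healthy = False                   # simulate a node failure
active = select_active(nodes, active)
assert active.name == "node2"              # service moved; clients unaware
```

The point the sketch captures is that clients address the cluster, not an individual node, so the reassignment is invisible to them.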
Windows Server 2008 includes functionality to simplify the cluster creation
and administration. You can create an entire cluster in one seamless step
through a wizard interface.
Features of Failover Clusters Running Windows
Server 2008
The Failover Cluster running Windows Server 2008 implements up to 16
nodes in a cluster, depending on the storage array used, and provides the
following features:
•A shared storage bus featuring Fibre Channel, Serial Attached SCSI (SAS),
or Internet Small Computer System Interface (iSCSI) technology
•High availability of resources to network clients
•Redundant paths to the shared storage
•Failure recovery for applications and services
•Flexible maintenance capabilities, allowing you to repair, maintain, or
upgrade a node or storage system without taking the entire cluster offline
The services and capabilities that are included with Failover Clusters running
Windows Server 2008 are:
•The Failover Cluster Management Interface — The Failover Cluster
Management Interface is a task-oriented tool. To access the management
interfaces, Microsoft Management Console 3.0 and cluadmin.msc, go to
Start→ Programs→ Administrative Tools.
•The Validate a Configuration Wizard — The cluster tools in Windows
Server 2008 include the built-in cluster Validate a Configuration wizard to
help detect the issue of a cluster failing due to configuration complexity.
The Validate a Configuration wizard runs a set of tests on the systems in a
cluster, and performs the following functions:
–Checks the software inventory
–Tests the network and attached storage
–Validates system configuration
•New method to create clusters — You can install the Failover Clustering
feature through the Initial Configurations Task (ICT) interface or with
the Server Manager interface in Administrative Tools. You can also
uninstall clustering using the Server Manager interface. For systems running
Windows Server 2008, you must use the Add Feature Wizard to install the
Failover Clustering feature.
•Migrating legacy clusters — You can migrate your cluster that is running
the Windows Server 2003 operating system to the Windows Server 2008
operating system. To access the migration functionality in Windows Server
2008, see the Migrate Services and Applications wizard. After you run the
Migrate Services and Applications wizard, a report containing
information about the migration tasks is created.
NOTE: You cannot configure nodes running the Windows Server 2003
operating system and nodes running the Windows Server 2008 operating
system in the same cluster. In addition, Failover Cluster nodes must be joined
to a Microsoft Active Directory®-based domain and not a
Windows NT 4.0-based domain.
•Improvements in Scoping and Managing Shares — The process of creating
a highly-available share with Failover Cluster running Windows Server
2008 is very simple when you use the Add a Shared Folder wizard. You can
also use the Browse button to quickly and reliably identify the folder you
want to use for the highly-available share.
•Better Storage and Backup Support — The architecture of Failover Cluster
running Windows Server 2008 has undergone storage related changes to
improve stability and scalability.
•Enhanced Maintenance Mode — Use Maintenance mode to perform
maintenance and administrative tasks, such as Volume Snapshots and
ChkDsk, on the cluster disk resources. Maintenance mode turns off
cluster health monitoring on the cluster disk for a period of time so that it
does not fail while maintenance is in progress on the cluster disk.
•Superior Scalability — The Failover Cluster running Windows Server 2008
x64 can support up to 16 nodes. The Failover Cluster running Windows
Server 2008 also supports disks that use the GUID Partition Table (GPT)
disk-partitioning system. GPT disks allow for 128 primary partitions as
opposed to 4 in Master Boot Record (MBR) disks. Also, the partition size
for GPT disks can be more than 2 TB (the limit for an MBR disk).
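The 2 TB MBR ceiling mentioned above falls directly out of the on-disk format: an MBR partition entry stores the sector count in a 32-bit field, and classic disks use 512-byte sectors. A quick calculation, assuming those standard values:

```python
# Why MBR tops out near 2 TB: a 32-bit LBA sector count at 512-byte sectors.
SECTOR_SIZE = 512                 # bytes; the classic disk sector size
MBR_MAX_SECTORS = 2 ** 32         # 32-bit sector-count field in an MBR entry

mbr_limit_bytes = MBR_MAX_SECTORS * SECTOR_SIZE
print(mbr_limit_bytes / 2 ** 40)  # → 2.0 (TiB)

# GPT uses 64-bit LBAs and, by default, a 128-entry partition table,
# versus the 4 primary partition entries in an MBR.
GPT_DEFAULT_ENTRIES = 128
MBR_PRIMARY_ENTRIES = 4
assert GPT_DEFAULT_ENTRIES > MBR_PRIMARY_ENTRIES
```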
•Quorum Model — The Windows Server 2008 Failover Clustering Quorum
model is redesigned to eliminate the single point of failure which existed
in previous versions. The four ways to establish a quorum are:
–No Majority - Disk Only (similar to Windows Server 2003 shared
disk quorum)
–Node Majority (similar to Windows Server 2003 Majority Node Set)
–Node and Disk Majority
–Node and File Share Majority
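The majority arithmetic behind these four modes can be sketched as follows. This is a simplified model for illustration, the function name and mode strings are invented here, and it ignores dynamic behaviors of the real cluster service:

```python
# Hedged sketch of quorum voting in Windows Server 2008 Failover Clustering.
def has_quorum(mode, nodes_up, nodes_total, witness_up=False):
    """witness_up: the witness disk or file share, in modes that use one."""
    if mode == "disk_only":                   # No Majority - Disk Only
        return witness_up                     # the disk is the single vote
    votes_total = nodes_total
    votes_up = nodes_up
    if mode in ("node_and_disk", "node_and_fileshare"):
        votes_total += 1                      # witness contributes one vote
        votes_up += 1 if witness_up else 0
    return votes_up > votes_total // 2        # strict majority of votes

# A 2-node cluster with a disk witness survives one node failure:
assert has_quorum("node_and_disk", nodes_up=1, nodes_total=2, witness_up=True)
# ...but plain Node Majority with 2 nodes does not:
assert not has_quorum("node_majority", nodes_up=1, nodes_total=2)
```

This shows why the witness vote matters most for even node counts: it breaks the tie that a bare node majority cannot.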
•Networking Capabilities — The Failover Cluster running Windows Server
2008 employs a new networking model which includes improved support
for:
–Geographically distributed clusters
–Ability to have cluster nodes on different subnets
–DHCP server to assign IP addresses to cluster interfaces
–Improved cluster heartbeat mechanism and support for IPv6
Supported Cluster Configurations
For the list of Dell-validated hardware, firmware, and software components
for a Failover Cluster running Windows Server 2008, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability
Clustering website at www.dell.com/ha.
Cluster Components and Requirements
Your cluster requires the following components:
•Operating System
•Cluster nodes (servers)
•Cluster Storage
Operating System
Dell Failover Clusters support only the Windows Server 2008 x64 Enterprise
Edition. For a complete list of features, see the documentation for
Windows Server 2008 x64 Enterprise Edition.
NOTE: Running different operating systems in a cluster is supported only during
a rolling upgrade. You cannot upgrade your Failover Cluster running a different
operating system to Windows Server 2008, Enterprise x64 Edition. Only a new
cluster installation is permitted for Windows Server 2008, Enterprise x64 Edition.
System Requirements
The following sections list the requirements for cluster nodes and storage
systems in a Failover Cluster running Windows Server 2008.
Cluster Nodes
Table 1-1 lists the hardware requirements for the cluster nodes.
Table 1-1. Cluster Node Requirements

Component: Minimum Requirement

Cluster nodes: At least two and up to 16 PowerEdge systems running the
Windows Server 2008 operating system.

RAM: At least 512 MB of RAM installed on each cluster node.

NICs: At least two NICs: one NIC for the public network and another
NIC for the private network.
NOTE: It is recommended that the NICs on each public network
are identical, and that the NICs on each private network are identical.

Internal disk controller: One controller connected to at least two internal
hard drives for each node. Use any supported RAID controller or disk
controller. Two hard drives are required for mirroring (RAID 1) and at least
three are required for disk striping with parity (RAID 5).
NOTE: It is strongly recommended that you use hardware-based
RAID or software-based disk-fault tolerance for the internal drives.

HBA ports:
• For clusters with Fibre Channel storage, two Fibre Channel
HBAs per node, unless the server employs an integrated or
supported dual-port Fibre Channel HBA.
• For clusters with SAS storage, one or two SAS 5/E HBAs per node.
NOTE: Where possible, place the HBAs on separate PCI buses to
improve availability and performance. For information about
supported systems and HBAs, see the Dell Cluster Configuration
Support Matrices located on the Dell High Availability Clustering
website at www.dell.com/ha.

iSCSI Initiator and NICs for iSCSI Access: For clusters with iSCSI storage,
the iSCSI Software Initiator (including the iSCSI port driver and Initiator
Service) is installed with the operating system.
Two iSCSI NICs or Gigabit Ethernet NIC ports per node. NICs
with a TCP/IP Off-load Engine (TOE) or iSCSI off-load
capability may also be used for iSCSI traffic.
NOTE: Where possible, place the NICs on separate PCI buses to
improve availability and performance. For information about
supported systems and HBAs, see the Dell Cluster Configuration
Support Matrices on the Dell High Availability Clustering website at
www.dell.com/ha.
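The internal-drive minimums in Table 1-1 follow from how the two RAID levels work: RAID 1 mirrors a pair of drives, and RAID 5 spends one drive's worth of capacity on parity. A small sanity-check sketch (the function names are invented for illustration):

```python
# Illustrative check of the RAID drive minimums from Table 1-1.
def min_drives(raid_level):
    """Minimum drive count: 2 for a mirror, 3 for striping with parity."""
    return {1: 2, 5: 3}[raid_level]

def usable_capacity(raid_level, n_drives, drive_size_gb):
    """Usable space once redundancy is paid for (equal-size drives)."""
    if raid_level == 1:
        return drive_size_gb                  # a mirror holds one drive's data
    if raid_level == 5:
        return (n_drives - 1) * drive_size_gb # one drive's worth of parity
    raise ValueError("unsupported RAID level in this sketch")

assert min_drives(1) == 2 and min_drives(5) == 3
assert usable_capacity(5, 3, 300) == 600      # 3 x 300 GB in RAID 5
```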
Cluster Storage
While configuring your Dell Failover Cluster with Windows Server 2008,
attach all cluster nodes to a common shared storage. The type of storage array
and topology in which the array is deployed can influence the design of your
cluster. For example, a direct-attached SAS storage array may offer support for
two cluster nodes whereas a SAN-attached Fibre Channel or iSCSI array has
the ability to support sixteen cluster nodes.
A shared storage array enables data for clustered applications and services to
be stored in a common location that is accessible by each cluster node.
Although only one node can access or control a given disk volume at a point
in time, the shared storage array enables other nodes to gain control of these
volumes in the event that a node failure occurs. This also helps facilitate the
ability of other cluster resources, which may depend upon the disk volume,
to fail over to the remaining nodes.
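The "one owner at a time" rule described above can be modeled in a few lines. This toy class only mirrors the invariant; the names are invented here, and real arbitration on the shared bus uses mechanisms such as SCSI reservations, not application code:

```python
# Toy model of single-owner semantics for a shared cluster disk.
class SharedVolume:
    def __init__(self):
        self.owner = None

    def reserve(self, node):
        """Grant ownership only if the disk is free or already ours."""
        if self.owner in (None, node):
            self.owner = node
            return True
        return False                  # another node already owns the disk

    def release(self, node):
        """Owner gives up the disk, e.g. on failure or planned drain."""
        if self.owner == node:
            self.owner = None

vol = SharedVolume()
assert vol.reserve("node1")
assert not vol.reserve("node2")       # blocked while node1 holds the disk
vol.release("node1")                  # node1 fails or is taken down
assert vol.reserve("node2")           # ownership moves with the resources
```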
Additionally, it is recommended that you attach each node to the shared storage
array using redundant paths. Providing multiple connections (or paths) between
the node and the storage array reduces the number of single points of failure that
could otherwise impact the availability of the clustered applications or services.
For details and recommendations related to deploying a Dell Failover Cluster
solution with a storage array, see the "Cabling Your Cluster Hardware" section
in the Dell Failover Cluster Hardware Installation and Troubleshooting Guide
for the specific storage array on the Dell Support website at
support.dell.com/manuals.
Other Documents You May Need
WARNING: The safety information that is shipped with your system provides
important safety and regulatory information. Warranty information may be
included within this document or as a separate document.
NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the
Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document
located on the Dell Support website at support.dell.com/manuals.
•The Dell Windows Server Failover Cluster Hardware Installation and
Troubleshooting Guide provides information on specific configuration
tasks that enable you to deploy the shared storage for your cluster.
•The Dell Cluster Configuration Support Matrices list the Dell validated
hardware, firmware, and software components for a Failover Cluster
environment.
•The Rack Installation Guide included with your rack solution describes
how to install your system into a rack.
•The Getting Started Guide provides an overview of initially setting up
your system.
•The HBA documentation provides installation instructions for the HBAs.
•Systems management software documentation describes the features,
requirements, installation, and basic operation of the software.
•Operating system documentation describes how to install (if necessary),
configure, and use the operating system software.
•Documentation for any components you purchased separately provides
information to configure and install those options.
•The Dell PowerVault™ tape library documentation provides information
for installing, troubleshooting, and upgrading the tape library.
•Updates are sometimes included with the system to describe changes to
the system, software, and/or documentation.
NOTE: Always read the updates first because they often supersede
information in other documents.
•Release notes or readme files may be included to provide last-minute
updates to the system or documentation, or advanced technical reference
material intended for experienced users or technicians.
Preparing Your Systems for
Clustering
WARNING: Only trained service technicians are authorized to remove and
access any of the components inside the system. See your safety information
shipped with your system for complete information about safety precautions,
working inside the system, and protecting against electrostatic discharge.
Cluster Configuration Overview
NOTE: For more information on step 1, step 2, and step 9, see the "Preparing Your
Systems for Clustering" section of the Dell Failover Cluster Hardware Installation
and Troubleshooting Guide for the specific storage array on the Dell Support
website at support.dell.com/manuals. For more information on step 3 to step 7 and
step 10 to step 14, see this chapter.
1 Ensure that your site can handle the cluster’s power requirements. For
more information about your region's power requirements, contact your
sales representative.
2 Install the systems, the shared storage array(s), and the interconnect
switches (for example, in an equipment rack), and ensure that all these
components are powered on.
3 Deploy the operating system (including any relevant service pack and
hotfixes), network adapter drivers, and storage adapter drivers (including
MPIO drivers) on each of the systems that you want to configure as cluster
nodes. Depending on the deployment method that is used, it may be
necessary to provide a network connection to successfully complete this step.
NOTE: To help plan and deploy your cluster, record the relevant cluster
configuration information in the "Cluster Data Form" and "Zoning
Configuration Form" of the Dell Failover Cluster Hardware Installation and
Troubleshooting Guide for the specific storage array on the Dell Support
website at support.dell.com/manuals.
4 Establish the physical network topology and the TCP/IP settings for
network adapters on each cluster node to provide access to the cluster
public and private networks.
5 Configure each cluster node as a member in the same Microsoft® Active
Directory® domain.
6 Establish the physical storage topology and any required storage network
settings to provide connectivity between the storage array and the systems
that are configured as cluster nodes. For more information on configuring
the storage system(s), see your storage system documentation.
7 Use storage array management tools to create at least one logical unit
number (LUN). The LUN is used as a witness disk for the Microsoft
Windows Server® 2008 Failover Cluster. Ensure that this LUN is presented
to the systems that are configured as cluster nodes.
NOTE: For security reasons, it is recommended that you configure the LUN on
a single node, as mentioned in step 8, when you are setting up the cluster.
Later, you can configure the LUN as mentioned in step 9 so that other cluster
nodes can access it.
8 Select one of the systems and form a new Failover Cluster by configuring
the cluster name, cluster management IP, and quorum resource.
NOTE: For Failover Clusters configured with the Windows Server 2008
operating system, run the Cluster Validation Wizard to ensure that your system
is ready to form the cluster.
9 Join the remaining node(s) to the Failover Cluster.
10 Configure roles for cluster networks. Take any network interfaces that are
used for iSCSI storage (or for other purposes outside of the cluster) out of
the control of the cluster.
11 Test the failover capabilities of your new cluster.
NOTE: For Failover Clusters configured with the Windows Server 2008
operating system, you can also use the Cluster Validation Wizard.
12 Configure highly-available applications and services on your Failover
Cluster. Depending on your configuration, this may also require providing
additional LUNs to the cluster or creating new cluster resource groups.
13 Test the failover capabilities of the new resources.
14 Configure client systems to access the highly-available applications and
services that are hosted on your Failover Cluster.
Installation Overview
This section provides installation overview procedures for configuring a
cluster running the Windows Server 2008 operating system.
NOTE: The storage management software may use different terms than those in
this guide to refer to similar entities. For example, the terms "LUN" and "Virtual Disk"
are often used interchangeably to designate an individual RAID volume that is
provided to the cluster nodes by the storage array.
1 Ensure that the cluster meets the requirements as described in "Cluster
Configuration Overview" on page 15.
2 Select a domain model that is appropriate for the corporate network and
operating system. See "Selecting a Domain Model" on page 19.
3 Reserve static IP addresses for the cluster resources and components,
including:
•Public network
•Private network
•Cluster virtual servers
Use these IP addresses when you install the Windows® operating system
and Windows Server 2008 Failover Clustering (WSFC).
NOTE: WSFC supports configuring cluster IP address resources to obtain IP
addresses from a DHCP server in addition to through static entries. It is
recommended that you use static IP addresses.
4 Configure the internal hard drives.
See "Configuring Internal Drives in the Cluster Nodes" on page 19.
5 Install and configure the Windows operating system.
The Windows operating system must be installed on all the cluster nodes.
Each node must have a licensed copy of the Windows operating system,
and a Certificate of Authenticity.
See "Installing and Configuring the Windows Operating System" on
page 20.
6 Install or update the storage connection drivers.
For more information on connecting your cluster nodes to a shared storage
array, see "Preparing Your Systems for Clustering" in the Dell Failover
Cluster Hardware Installation and Troubleshooting Guide that
corresponds to your storage array on the Dell Support website at
support.dell.com/manuals.
For more information on the corresponding supported adapters and
driver versions, see the Dell Cluster Configuration Support Matrices
located on the Dell High Availability Clustering website at
www.dell.com/ha.
7 Install and configure the storage management software.
See the documentation included with your storage system or available at
the Dell Support website at support.dell.com/manuals.
8 Configure the hard drives on the shared storage system(s).
See "Configuring and Managing LUNs" in the Dell Failover Cluster
Hardware Installation and Troubleshooting Guide corresponding to your
storage array on the Dell Support website at support.dell.com/manuals.
9 Install and configure the Failover Clustering feature.
See "Configuring Your Failover Cluster" on page 29.
10 Verify cluster functionality. Ensure that:
•The cluster components are communicating properly.
•Cluster Service is started.
11 Verify cluster resource availability.
Use the Failover Cluster MMC to check the running state of each
resource group.
The following subsections provide detailed information for the steps in the
"Installation Overview" on page 17 that are specific to the Windows Server
2008 operating system.
Selecting a Domain Model
On a cluster running the Microsoft Windows operating system, all nodes
must belong to a common domain or directory model. The following
configurations are supported:
•It is recommended that all nodes of high-availability applications are
member systems in a Microsoft Active Directory® domain.
•All nodes are domain controllers in an Active Directory domain.
•At least one node is a domain controller in an Active Directory domain and
the remaining nodes are member systems.
NOTE: If a node is configured as a domain controller, client system access to its
cluster resources can continue even if the node cannot contact other domain
controllers. However, domain controller functions can cause additional overhead,
such as logon, authentication, and replication traffic. If a node is not configured as
a domain controller and the node cannot contact a domain controller, the node
cannot authenticate client system requests.
Configuring Internal Drives in the Cluster Nodes
If your system uses a hardware-based RAID solution and you have added new
internal hard drives to your system or you are setting up the RAID configuration
for the first time, you must configure the RAID array using the RAID
controller’s BIOS configuration utility before installing the operating system.
For the best balance of fault tolerance and performance, use RAID 1. See the
RAID controller documentation for more information on RAID
configurations.
NOTE: It is strongly recommended that you use a hardware-based RAID solution.
Alternatively, you can use the Microsoft Windows Disk Management tool to provide
software-based redundancy.
Installing and Configuring the Windows
Operating System
CAUTION: Windows standby mode and hibernation mode are not supported in
cluster configurations. Do not enable either mode.
1 Ensure that the cluster configuration meets the requirements listed in
"Cluster Configuration Overview" on page 15.
2 Cable the hardware.
NOTE: Do not connect the nodes to the shared storage systems at this time.
For more information on cabling your cluster hardware and the storage
array that you are using, see "Cabling Your Cluster Hardware" in the Dell
Failover Cluster Hardware Installation and Troubleshooting Guide for
the specific storage array on the Dell Support website at
support.dell.com/manuals.
3 Install and configure the Windows Server 2008 operating system on
each node.
4 Ensure that the latest supported version of network adapter drivers is
installed on each cluster node.
For information on required drivers, see the Dell Cluster Configuration
Support Matrices located on the Dell High Availability website at
www.dell.com/ha.
5 Configure the public and private network adapter interconnects in each
node, and place the interconnects on separate IP subnetworks using static
IP addresses. See "Configuring Windows Networking" on page 21.
6 Turn off all the cluster nodes and connect each cluster node to the shared
storage.
For more information on cabling your cluster hardware and the storage
array that you are using, see "Cabling Your Cluster Hardware" in the Dell
Failover Cluster Hardware Installation and Troubleshooting Guide for
the specific storage array on the Dell Support website at
support.dell.com/manuals.
7 If required, configure the storage software.
8 Reboot node 1.
9 From node 1, go to the Windows Disk Management application, write the
disk signature, partition the disk, format the disk, and assign drive letters
and volume labels to the hard drives in the storage system.
For more information, see "Preparing Your Systems for Clustering" in the
Dell Failover Cluster Hardware Installation and Troubleshooting Guide
for the specific storage array on the Dell Support website at
support.dell.com/manuals.
10 On node 1, verify disk accessibility and functionality on all shared disks.
Verify disk access by performing the following steps on the second node:
a Turn on the node.
b Modify the drive letters to match the drive letters on node 1.
This procedure allows the Windows operating system to mount the
volumes.
c Close and reopen Disk Management.
d Verify that the Windows operating system can access the file systems
and the volume labels.
11 Install and configure the Failover Clustering feature from the Server
Manager.
12 If required, install and set up the application programs.
13 Enter the cluster configuration information on the Cluster Data Form in
the Dell Failover Cluster Hardware Installation and Troubleshooting
Guide for your corresponding storage array on the Dell Support website at
support.dell.com/manuals (optional).
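Step 5 above asks for the public and private interconnects to sit on separate IP subnetworks. A proposed static addressing plan can be checked with Python's ipaddress module; the addresses below are arbitrary examples for illustration, not Dell-mandated values:

```python
# Verify that a node's public and private NICs are on different subnets.
# Example addresses only; substitute your own plan.
import ipaddress

public  = ipaddress.ip_interface("192.168.10.11/24")   # public NIC, node 1
private = ipaddress.ip_interface("10.0.0.11/24")       # private NIC, node 1

assert public.network != private.network, "interconnects share a subnet"
print(public.network, private.network)
```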
Configuring Windows Networking
You must configure the public and private networks in each node before you
install Failover Clustering on the nodes. The following subsections introduce
you to some principles and procedures necessary for the networking
prerequisites.
Windows Server 2008 also introduces IPv6 support for clustering. You can
have both node-to-node (private) as well as node-to-client (public)
communication over IPv6. For more details on using IPv6, see "Configuring
IPv6 addresses for Cluster Nodes" on page 24.
Assigning Static IP Addresses to Cluster Resources and Components
NOTE: WSFC supports configuring cluster IP address resources to obtain
IP address from a DHCP server in addition to through static entries. It is
recommended that you use static IP addresses.
A static IP address is an Internet address that a network administrator assigns
exclusively to a system or a resource. The address assignment remains in
effect until it is changed by the network administrator.
The IP address assignments for the cluster’s public LAN segments depend on
the environment’s configuration. Configurations running the Windows
operating system require static IP addresses assigned to hardware and
software applications in the cluster, as listed in Table 2-1.
Table 2-1. Applications and Hardware Requiring IP Address Assignments

Application/Hardware: Description

Cluster IP address: The cluster IP address is used for cluster management and
must correspond to the cluster name. Because each server
has at least two network adapters, the minimum number of
static IP addresses required for a cluster configuration is five
(one for each network adapter and one for the cluster).
Additional IP addresses are required when WSFC is
configured with application programs that require IP
addresses, such as file sharing.

Cluster-aware applications running on the cluster: These applications
include Microsoft SQL Server Enterprise Edition, Microsoft Exchange
Server, and Internet Information Server (IIS). For example, SQL Server
Enterprise Edition requires at least one static IP address for
the virtual server as SQL Server does not use the cluster's IP
address. Also, each IIS Virtual Root or IIS Server instance
configured for failover needs a unique static IP address.
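The "minimum of five" figure in Table 2-1 is just the adapter count plus the cluster address. The arithmetic, written out (the helper name is invented for this sketch):

```python
# Minimum static IP count for a cluster: one address per network adapter
# on every node, plus one for the cluster itself (Table 2-1).
def min_static_ips(nodes, nics_per_node=2, cluster_addresses=1):
    return nodes * nics_per_node + cluster_addresses

assert min_static_ips(2) == 5      # the two-node minimum from the table
assert min_static_ips(16) == 33    # the 16-node maximum this guide covers
```

Cluster-aware applications and highly-available file shares add addresses on top of this baseline, as the table notes.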