Reproduction of these materials in any manner whatsoever without the written permission of Dell Inc.
is strictly forbidden.
Trademarks used in this text: Dell, the DELL logo, PowerEdge, and PowerVault are trademarks of Dell Inc.; Active Directory, Microsoft, Windows, Windows Server, Windows XP, and Windows NT are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries; EMC, Navisphere, and PowerPath are registered trademarks and MirrorView, SAN Copy, and SnapView are trademarks of EMC Corporation.
Other trademarks and trade names may be used in this document to refer to either the entities
claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in
trademarks and trade names other than its own.
A Dell™ Failover Cluster combines specific hardware and software components
to provide enhanced availability for applications and services that are run on the
cluster. A Failover Cluster is designed to reduce the possibility of any single
point of failure within the system that can cause the clustered applications or
services to become unavailable. It is recommended that your cluster use redundant components, such as server and storage power supplies, connections between the nodes and the storage array(s), and connections to client systems or other servers in a multi-tier enterprise application architecture.
This document provides information to configure your Dell/EMC CX4-series
Fibre Channel storage arrays with one or more Failover Clusters. It provides
specific configuration tasks that enable you to deploy the shared storage for
your cluster.
For more information on deploying your cluster with Microsoft® Windows Server® 2003 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com. For more information on deploying your cluster with Windows Server 2008 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com.

For a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Failover Cluster, see the Dell Cluster Configuration Support Matrix on the Dell High Availability website at www.dell.com/ha.
Cluster Solution
Your cluster supports a minimum of two nodes and a maximum of either eight nodes (for Windows Server 2003) or sixteen nodes (for Windows Server 2008), and provides the following features:
• 8-Gbps and 4-Gbps Fibre Channel technology
• High availability of resources to network clients
• Redundant paths to the shared storage
• Failure recovery for applications and services
• Flexible maintenance capabilities, allowing you to repair, maintain, or upgrade a node or storage system without taking the entire cluster offline
Implementing Fibre Channel technology in a cluster provides the following
advantages:
• Flexibility — Fibre Channel allows a distance of up to 10 km between switches without degrading the signal.
• Availability — Fibre Channel components use redundant connections, providing multiple data paths and greater availability for clients.
• Connectivity — Fibre Channel allows more device connections than Small Computer System Interface (SCSI). Because Fibre Channel devices are hot-pluggable, you can add or remove devices from the nodes without taking the entire cluster offline.
Cluster Hardware Requirements
Your cluster requires the following hardware components:
• Cluster nodes
• Cluster storage
Cluster Nodes
Table 1-1 lists the hardware requirements for the cluster nodes.
Table 1-1. Cluster Node Requirements

Component: Minimum Requirement

Cluster nodes: A minimum of two identical PowerEdge servers is required. The maximum number of nodes that are supported depends on the variant of the Windows Server operating system used in your cluster and on the physical topology in which the storage system and nodes are interconnected.

RAM: The variant of the Windows Server operating system that is installed on your cluster nodes determines the minimum RAM required.

Host Bus Adapter (HBA) ports: Two Fibre Channel HBAs per node, unless the server employs an integrated or supported dual-port Fibre Channel HBA. Where possible, place the HBAs on separate PCI buses to improve availability and performance.

NICs: At least two NICs: one NIC for the public network and another NIC for the private network. NOTE: It is recommended that the NICs on each public network are identical, and that the NICs on each private network are identical.

Internal disk controller: One controller connected to at least two internal hard drives for each node. Use any supported RAID controller or disk controller. Two hard drives are required for mirroring (RAID 1) and at least three are required for disk striping with parity (RAID 5); a capacity sketch follows this table. NOTE: It is strongly recommended that you use hardware-based RAID or software-based disk-fault tolerance for the internal drives.
NOTE: For more information about supported systems, HBAs and operating system
variants, see the Dell Cluster Configuration Support Matrix on the Dell High
Availability website at www.dell.com/ha.
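The RAID requirements above translate directly into usable capacity on the internal drives. The following Python sketch is illustrative only; it is not part of any Dell or EMC tool, and it ignores formatting and hot-spare overhead. It simply shows the trade-off between the RAID 1 and RAID 5 layouts described in Table 1-1.

def usable_capacity_gb(drive_count: int, drive_size_gb: float, raid_level: int) -> float:
    # RAID 1 mirrors two drives; RAID 5 stripes data with one drive's worth of parity.
    if raid_level == 1:
        if drive_count != 2:
            raise ValueError("RAID 1 mirroring as described here uses exactly two drives")
        return drive_size_gb
    if raid_level == 5:
        if drive_count < 3:
            raise ValueError("RAID 5 requires at least three drives")
        return (drive_count - 1) * drive_size_gb
    raise ValueError("only RAID 1 and RAID 5 are considered in this sketch")

# Example: two 146 GB drives in RAID 1 yield 146 GB usable; three in RAID 5 yield 292 GB.
print(usable_capacity_gb(2, 146, 1))
print(usable_capacity_gb(3, 146, 5))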
Cluster Storage
Table 1-2 lists supported storage systems and the configuration requirements
for the cluster nodes and stand-alone systems connected to the storage systems.
Table 1-2. Cluster Storage Requirements

Hardware Component: Requirement

Supported storage systems: One to four supported Dell/EMC storage systems. See Table 1-3 for specific storage system requirements.

Cluster nodes: All nodes must be directly attached to a single storage system or attached to one or more storage systems through a SAN.

Multiple clusters and stand-alone systems: Can share one or more supported storage systems. See "Installing and Configuring the Shared Storage System" on page 45.

Table 1-3 lists the hardware requirements for the storage processor enclosures (SPE), disk array enclosures (DAE), and standby power supplies (SPS).

Table 1-3. Dell/EMC Storage System Requirements
CX4-120: One DAE-OS with at least five and up to 15 hard drives (minimum storage); up to seven DAEs with a maximum of 15 hard drives each (possible storage expansion); two SPS for the SPE and DAE-OS.

CX4-240: One DAE-OS with at least five and up to 15 hard drives (minimum storage); up to fifteen DAEs with a maximum of 15 hard drives each (possible storage expansion); two SPS for the SPE and DAE-OS.

CX4-480: One DAE-OS with at least five and up to 15 hard drives (minimum storage); up to thirty-one DAEs with a maximum of 15 hard drives each (possible storage expansion); two SPS for the SPE and DAE-OS.

CX4-960: One DAE-OS with at least five and up to 15 hard drives (minimum storage); up to sixty-three DAEs with a maximum of 15 hard drives each (possible storage expansion); two SPS for the SPE and DAE-OS.

NOTE: The DAE-OS is the first DAE enclosure that is connected to the CX4-series storage system (including all of the storage systems listed above). Core software is preinstalled on the first five hard drives of the DAE-OS.
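The maximum drive counts implied by Table 1-3 follow from the DAE-OS plus the listed number of expansion DAEs, at up to 15 drives each. The short Python sketch below is illustrative only; the figures come from Table 1-3, not from any array management tool.

# Maximum expansion DAEs per CX4 model, from Table 1-3 (DAE-OS not included).
MAX_EXPANSION_DAES = {"CX4-120": 7, "CX4-240": 15, "CX4-480": 31, "CX4-960": 63}
DRIVES_PER_DAE = 15  # each DAE, including the DAE-OS, holds up to 15 hard drives

def max_drives(model: str) -> int:
    # Total drive slots: the DAE-OS plus all expansion DAEs, 15 drives each.
    return (1 + MAX_EXPANSION_DAES[model]) * DRIVES_PER_DAE

for model in MAX_EXPANSION_DAES:
    print(model, max_drives(model))  # e.g. CX4-120 -> 120, CX4-960 -> 960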
Each storage system in the cluster is centrally managed by one host system (also called a management station) running EMC Navisphere® Manager, a centralized storage management application used to configure Dell/EMC storage systems. Using a graphical user interface (GUI), you can select a specific view of your storage arrays, as shown in Table 1-4.
Table 1-4. Navisphere Manager Storage Views

View: Description

Storage: Shows the logical storage components and their relationships to each other and identifies hardware faults.

Hosts: Shows the host system's storage group and attached logical unit numbers (LUNs).

Monitors: Shows all Event Monitor configurations, including centralized and distributed monitoring configurations.
You can use Navisphere Manager to perform tasks such as creating RAID arrays, binding LUNs, and downloading firmware. Optional software for the shared storage systems includes:
• EMC MirrorView™ — Provides synchronous or asynchronous mirroring between two storage systems.
• EMC SnapView™ — Captures point-in-time images of a LUN for backups or testing without affecting the contents of the source LUN.
• EMC SAN Copy™ — Moves data between Dell/EMC storage systems without using host CPU cycles or local area network (LAN) bandwidth.
For more information about Navisphere Manager, MirrorView, SnapView, and
SAN Copy, see "Installing and Configuring the Shared Storage System" on
page 45.
Supported Cluster Configurations
The following sections describe the supported cluster configurations.
Direct-Attached Cluster
In a direct-attached cluster, all the nodes of the cluster are directly attached
to a single storage system. In this configuration, the RAID controllers (or
storage processors) on the storage system are connected by cables directly to
the Fibre Channel HBA ports in the nodes.
Figure 1-1 shows a basic direct-attached, single-cluster configuration.

Figure 1-1. Direct-Attached, Single-Cluster Configuration (figure labels: cluster node, private network, Fibre Channel connections, storage system)

EMC PowerPath Limitations in a Direct-Attached Cluster
EMC PowerPath® provides failover capabilities, multiple path detection, and
dynamic load balancing between multiple ports on the same storage
processor. However, the direct-attached clusters supported by Dell connect to
a single port on each storage processor in the storage system. Because of the
single port limitation, PowerPath can provide only failover protection, not
load balancing, in a direct-attached configuration.
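The single-port limitation can be made concrete with a simplified path-selection sketch. The Python fragment below is illustrative only and is not PowerPath's actual algorithm; it only shows that round-robin load balancing over the live paths to the owning storage processor degenerates to pure failover when each node has a single path per SP, as in the supported direct-attached configurations.

from typing import List

class Path:
    """One physical path from an HBA port to a storage processor port."""
    def __init__(self, name: str, alive: bool = True):
        self.name = name
        self.alive = alive

def select_path(paths_to_owning_sp: List[Path], io_number: int) -> Path:
    # Round-robin over the live paths to the SP that currently owns the LUN.
    live = [p for p in paths_to_owning_sp if p.alive]
    if not live:
        raise RuntimeError("no live path to the owning SP; fail over to the peer SP")
    return live[io_number % len(live)]

# Direct-attached: one path per SP, so every I/O uses the same path until it fails.
single = [Path("HBA 0 -> SP-A port 0")]
print({select_path(single, i).name for i in range(4)})  # one unique path

# SAN-attached: multiple paths per SP allow I/O to be balanced across them.
multi = [Path("HBA 0 -> SP-A port 0"), Path("HBA 1 -> SP-A port 1")]
print({select_path(multi, i).name for i in range(4)})   # two unique paths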
SAN-Attached Cluster
In a SAN-attached cluster, all nodes are attached to a single storage system or
to multiple storage systems through a SAN using redundant switch fabrics.
SAN-attached clusters are superior to direct-attached clusters in
configuration flexibility, expandability, and performance.
Figure 1-2 shows a SAN-attached cluster.
Figure 1-2. SAN-Attached Cluster (figure labels: cluster nodes, public network, private network, Fibre Channel connections, Fibre Channel switches, storage system)
Other Documents You May Need
WARNING: The safety information that shipped with your system provides
important safety and regulatory information. Warranty information may be
included within this document or as a separate document.
NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the
Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document
located on the Dell Support website at support.dell.com.
• The Rack Installation Guide included with your rack solution describes how to install your system into a rack.
• The Getting Started Guide provides an overview of initially setting up your system.
• For more information on deploying your cluster with Windows Server 2003 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide.
• For more information on deploying your cluster with Windows Server 2008 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide.
• The HBA documentation provides installation instructions for the HBAs.
• Systems management software documentation describes the features, requirements, installation, and basic operation of the software.
• Operating system documentation describes how to install (if necessary), configure, and use the operating system software.
• Documentation for any components you purchased separately provides information to configure and install those options.
• The Dell PowerVault™ tape library documentation provides information for installing, troubleshooting, and upgrading the tape library.
• Any other documentation that came with your server or storage system.
• The EMC PowerPath documentation that came with your HBA kit(s) and Dell/EMC Storage Enclosure User's Guides.
• Updates are sometimes included with the system to describe changes to the system, software, and/or documentation.
NOTE: Always read the updates first because they often supersede information in other documents.
• Release notes or readme files may be included to provide last-minute updates to the system or documentation, or advanced technical reference material intended for experienced users or technicians.
Cabling Your Cluster Hardware
NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the
Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document
located on the Dell Support website at support.dell.com.
Cabling the Mouse, Keyboard, and Monitor
When installing a cluster configuration in a rack, you must include a switch
box to connect the mouse, keyboard, and monitor to the nodes. See the documentation included with your rack for instructions on cabling each node's connections to the switch box.
Cabling the Power Supplies
See the documentation for each component in your cluster solution and
ensure that the specific power requirements are satisfied.
The following guidelines are recommended to protect your cluster solution
from power-related failures:
• For nodes with multiple power supplies, plug each power supply into a separate AC circuit.
• Use uninterruptible power supplies (UPS).
• For some environments, consider having backup generators and power from separate electrical substations.
Figure 2-1 and Figure 2-2 illustrate recommended methods for power cabling
for a cluster solution consisting of two PowerEdge systems and two storage
systems. To ensure redundancy, the primary power supplies of all the
components are grouped into one or two circuits and the redundant power
supplies are grouped into a different circuit.
Figure 2-1. Power Cabling Example With One Power Supply in the PowerEdge Systems
(figure labels: primary power supplies on one AC power strip or on one AC power distribution unit [PDU, not shown]; redundant power supplies on one AC power strip or on one AC PDU [not shown])
NOTE: This illustration is intended only to demonstrate the power
distribution of the components.
Figure 2-2. Power Cabling Example With Two Power Supplies in the PowerEdge Systems
(figure labels: primary power supplies on one AC power strip or on one AC PDU [not shown]; redundant power supplies on one AC power strip or on one AC PDU [not shown])
NOTE: This illustration is intended only to demonstrate the power
distribution of the components.
Cabling Your Cluster for Public and Private Networks
The network adapters in the cluster nodes provide at least two network
connections for each node, as described in Table 2-1.
NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the
Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document
located on the Dell Support website at support.dell.com.
Table 2-1. Network Connections

Network Connection: Description

Public network: All connections to the client LAN. At least one public network must be configured for Mixed mode for private network failover.

Private network: A dedicated connection for sharing cluster health and status information only.
Figure 2-3 shows an example of cabling in which dedicated network adapters
in each node are connected to each other (for the private network) and the
remaining network adapters are connected to the public network.
Figure 2-3. Example of Network Cabling Connection (figure labels: cluster node 1, cluster node 2, public network adapter, public network, private network adapter, private network)
Cabling the Public Network
Any network adapter supported by a system running TCP/IP may be used to
connect to the public network segments. You can install additional network
adapters to support additional public network segments or to provide
redundancy in the event of a faulty primary network adapter or switch port.
Cabling the Private Network
The private network connection to the nodes is provided by a different
network adapter in each node. This network is used for intra-cluster
communications. Table 2-2 describes three possible private network
configurations.
Table 2-2. Private Network Hardware Components and Connections

Hardware Components: Connection

Gigabit Ethernet network adapters and switches: Connect standard Ethernet cables from the network adapters in the nodes to a Gigabit Ethernet switch.

Copper Gigabit Ethernet network adapters: Connect a standard Ethernet cable between the Gigabit Ethernet network adapters in both nodes.

NOTE: Throughout this document, Gigabit Ethernet is used to refer to either Gigabit Ethernet or 10 Gigabit Ethernet.
Using Dual-Port Network Adapters
You can configure your cluster to use the public network as a failover for
private network communications. If you are using dual-port network adapters,
do not configure both ports simultaneously to support both public and
private networks.
NIC Teaming
NIC teaming combines two or more NICs to provide load balancing and fault tolerance. Your cluster supports NIC teaming only in a public network; NIC teaming is not supported in a private network.
Use the same brand of NICs in a team. Do not mix brands in NIC teaming.
Cabling the Storage Systems
This section provides information on cabling your cluster to a storage system in a direct-attached configuration, or to one or more storage systems in a SAN-attached configuration.
Cabling Storage for Your Direct-Attached Cluster
A direct-attached cluster configuration consists of redundant Fibre Channel
host bus adapter (HBA) ports cabled directly to a Dell/EMC storage system.
Figure 2-4 shows an example of a direct-attached, single cluster configuration
with redundant HBA ports installed in each cluster node.
Figure 2-4. Direct-Attached Cluster Configuration (figure labels: cluster nodes, public network, private network, Fibre Channel connections, storage system)
Cabling a Cluster to a Dell/EMC Storage System
Each cluster node attaches to the storage system using two fiber-optic cables with duplex local connector (LC) multimode connectors that attach to the HBA ports in the cluster nodes and the storage processor (SP) ports in the Dell/EMC storage system. These connectors consist of two individual fiber-optic connectors with indexed tabs that must be aligned properly when inserted into the HBA ports and SP ports.
CAUTION: Do not remove the connector covers until you are ready to insert the
connectors into the HBA port, SP port, or tape library port.
Cabling a Two-Node Cluster to a Dell/EMC Storage System
NOTE: The Dell/EMC storage system requires at least two front-end Fibre Channel ports available on each storage processor.
1 Connect cluster node 1 to the storage system:
   a Install a cable from cluster node 1 HBA port 0 to the first front-end Fibre Channel port on SP-A.
   b Install a cable from cluster node 1 HBA port 1 to the first front-end Fibre Channel port on SP-B.
2 Connect cluster node 2 to the storage system:
   a Install a cable from cluster node 2 HBA port 0 to the second front-end Fibre Channel port on SP-A.
   b Install a cable from cluster node 2 HBA port 1 to the second front-end Fibre Channel port on SP-B.
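The connections in the steps above can be summarized as a simple cable map. The Python sketch below is purely illustrative; the port labels are assumptions for readability rather than Dell or EMC identifiers. It checks the property the cabling is designed to provide, namely that each node reaches both SP-A and SP-B.

# Cable map for the two-node direct-attached configuration described above.
# "front-end port 0/1" stand in for the first and second front-end ports.
CABLES = [
    ("node 1", "HBA port 0", "SP-A", "front-end port 0"),
    ("node 1", "HBA port 1", "SP-B", "front-end port 0"),
    ("node 2", "HBA port 0", "SP-A", "front-end port 1"),
    ("node 2", "HBA port 1", "SP-B", "front-end port 1"),
]

def check_redundancy(cables) -> None:
    # Verify that every node is cabled to both storage processors.
    reached = {}
    for node, _hba, sp, _port in cables:
        reached.setdefault(node, set()).add(sp)
    for node, sps in sorted(reached.items()):
        missing = {"SP-A", "SP-B"} - sps
        if missing:
            raise ValueError(f"{node} has no path to {', '.join(sorted(missing))}")
    print("each node has a path to both storage processors")

check_redundancy(CABLES)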