Reproduction of these materials in any manner whatsoever without the written permission of Dell Inc.
is strictly forbidden.
Trademarks used in this text: Dell™, the DELL logo, PowerEdge™, and PowerVault™ are trademarks
of Dell Inc. Microsoft®, Windows®, and Windows Server® are registered trademarks of Microsoft
Corporation in the United States and/or other countries.
Other trademarks and trade names may be used in this publication to refer to either the entities claiming
the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and
trade names other than its own.
2011 - 07    Rev. A00
1  Introduction
This document provides information for installing and managing your cluster
solution using Dell PowerVault MD3600f and MD3620f storage systems. It is
intended for experienced IT professionals who configure a cluster solution
and for trained service technicians who perform upgrade and maintenance
procedures. This document also addresses readers who are new to clustering.
Overview
The Dell PowerEdge Cluster with Microsoft Windows Server Failover
Clustering combines specific hardware and software components to provide
enhanced availability for applications and services that run on the cluster. A
failover cluster is designed to reduce the possibility of any single point of
failure within the system that can cause the clustered applications or services
to become unavailable. It is recommended that you use redundant
components like system and storage power supplies, connections between the
nodes and the storage array(s), connections to client systems, or other
systems in the multi-tier enterprise application architecture in your cluster.
This guide addresses the configuration of your Dell MD3600f and MD3620f
Fibre
Channel storage arrays for use with one or more Windows Server
failover clusters. It provides information and specific configuration tasks that
enable you to deploy the shared storage for your cluster.
For more information on deploying your cluster, see the Dell Failover Clusters with Microsoft Windows Server Installation and Troubleshooting Guide at
support.dell.com/manuals.
NOTE: Throughout this document:
•Windows Server 2008 refers to Microsoft Windows Server 2008 x64 Enterprise
Edition or Microsoft Windows Server 2008 R2 x64 Enterprise Edition.
•Dell PowerVault MD36x0f storage array refers to both Dell PowerVault
MD3600f and Dell PowerVault MD3620f storage arrays.
For a list of recommended operating systems, hardware components, and
driver or firmware versions for your Dell Windows Server Failover Cluster, see
the Dell Cluster Configuration Support Matrices at dell.com/ha.
Cluster Solution
Your Fibre Channel cluster implements a minimum of two-node clustering
and a maximum of sixteen-node clustering and provides the following
features:
•8 Gbps and 4 Gbps Fibre Channel technology.
•High availability of system services and resources to network clients.
•Redundant paths to the shared storage.
•Failure recovery for applications and services.
•Flexible maintenance capabilities, allowing you to repair, maintain, or
upgrade a cluster node without taking the entire cluster offline.
Implementing Fibre Channel technology in a cluster provides the following
advantages:
•Flexibility—Fibre Channel allows a distance of up to 10 km between
switches without degrading the signal.
•Availability—Fibre Channel components use redundant connections,
providing multiple data paths and greater availability for clients.
•Connectivity—Fibre Channel allows more device connections than Small
Computer System Interface (SCSI) or Serial Attached SCSI (SAS).
Because Fibre Channel devices are hot swappable, you can add or remove
devices from the nodes without bringing down the cluster.
Cluster Requirements
Your cluster requires the following components:
•Servers (cluster nodes)
•Storage and storage management software
Cluster Nodes
Table 1-1 lists hardware requirements for the cluster nodes.
Table 1-1. Cluster Node Requirements
Component / Minimum Requirement

Processor: At least one processor for each cluster node.
Cluster Nodes: A minimum of two identical PowerEdge systems.
RAM: At least 1 GB RAM on each cluster node.
Host Bus Adapter (HBA) ports: Two Fibre Channel HBAs per node, unless the server employs an
integrated or supported dual-port Fibre Channel HBA.
Network Interface Cards (NICs) (public and private): At least two NICs—one NIC for the public
network and another NIC for the private network.
NOTE: It is recommended that the NICs on each public network are identical and that the NICs
on each private network are identical.
Internal Disk Controller: One controller connected to internal disks for each node. Use any
supported Redundant Array of Independent Disks (RAID) controller or disk controller. Two
physical disks are required for mirroring (RAID 1) and at least three are required for disk striping
with parity (RAID 5).
NOTE: It is recommended that you use hardware-based RAID or software-based disk-fault
tolerance for the internal drives.
NOTE: For more information about supported systems, HBAs, and operating system
versions, see the Dell Cluster Configuration Support Matrices at dell.com/ha.
Cluster Storage
Table 1-2 provides the configuration requirements for the shared storage
system.
Table 1-2. Cluster Storage Requirements
Hardware Component / Minimum Requirement

Supported storage systems: One Dell PowerVault MD3600f or MD3620f RAID enclosure, and any
combination of up to seven Dell PowerVault MD1200 and/or MD1220 expansion enclosures.
NOTE: The number of hard drives must not exceed 96.
Cluster nodes: All nodes must be directly attached to a single storage system or attached to one or
more storage systems through a SAN.
Switch and cable: At least two 8 Gbps Fibre Channel switches in a SAN-attached environment.
Power and cooling requirements: Two integrated hot-swappable power supply/cooling fan modules.
Physical disks: At least two physical disks in the PowerVault MD3600f or MD3620f RAID enclosure.
Multiple clusters and stand-alone systems: In a switch-attached configuration, clusters and
stand-alone systems can share one or more PowerVault MD3600f or MD3620f systems.

NOTE: RAID 0 and independent disks can be used but are not recommended for a
high-availability system because they do not offer data redundancy if a disk failure occurs.
Cluster Storage Management Software
Dell PowerVault Modular Disk Storage Manager
The software runs on the management station or any host attached to the
array to centrally manage the PowerVault MD3600f and MD3620f RAID
enclosures. You can use Dell PowerVault Modular Disk Storage Manager
(MDSM) to perform tasks such as creating disk groups, creating and mapping
virtual disks, monitoring the enclosure status, and downloading firmware.
MDSM is a graphical user interface (GUI) with wizard-guided tools and a
task-based structure. MDSM is designed to:
•Reduce the complexity when you install, configure, manage, and perform
diagnostic tasks for the storage arrays.
•Contain an event monitoring service that is used to send alerts when a
critical problem with the storage array occurs.
•Provide a command line interface (CLI) to run commands from an
operating system prompt.
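For example, after the storage management software is installed on a management station, the CLI
can be invoked from a command prompt. The following is only an illustrative sketch; the array name
shown is a placeholder, and the full command set is described in the Dell PowerVault Modular Disk
Storage Arrays CLI Guide:

    C:\> SMcli -n MD_Cluster_Array -c "show storageArray healthStatus;"
    C:\> SMcli -n MD_Cluster_Array -c "show storageArray profile;"

The first command reports the overall health of the array; the second returns a detailed configuration
profile that is useful to capture before and after changes to the shared storage.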
Modular Disk Storage Manager Agent
This software resides on each cluster node to collect system-based topology
data that can be managed by the MDSM.
Multipath I/O (MPIO) Software
Multipath I/O software (also referred to as the failover driver) is installed on
each cluster node. The software manages the redundant data path between
the system and the RAID enclosure. For the MPIO software to correctly
manage a redundant path, the configuration must provide for redundant
HBAs and cabling.
The MPIO software identifies the existence of multiple paths to a virtual disk
and establishes a preferred path to that disk. If any component in the
preferred path fails, the MPIO software automatically re-routes I/O requests
to the alternate path so that the storage array continues to operate without
interruption.
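As an illustration only: on a cluster node running Windows Server 2008 R2 with the Microsoft MPIO
framework enabled, you can confirm from an elevated command prompt that the shared virtual disks
are claimed by a multipath DSM and that more than one path is present. The disk number below is a
placeholder, and the output format depends on the installed DSM:

    C:\> mpclaim -s -d
    C:\> mpclaim -s -d 0

The first command lists the MPIO-managed disks and the DSM that owns each of them; the second
shows the path states for MPIO disk 0. If only one path is reported, recheck the HBA cabling and
zoning before placing the cluster in production.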
Advanced Features
Advanced features for the PowerVault MD3600f and MD3620f RAID storage
systems include:
•Snapshot Virtual Disk—Captures point-in-time images of a virtual disk
for backup, testing, or data processing without affecting the contents of
the source virtual disk.
•Virtual Disk Copy—Generates a full copy of data from the source virtual
disk to the target virtual disk in a storage array. You can use Virtual Disk
Copy to back up data, copy data from disk groups that use smaller-capacity
physical disks to disk groups using greater capacity physical disks, or restore
snapshot virtual disk data to the source virtual disk.
•Upgrading to High-Performance Tier—Increases the performance of the
system beyond that of a MD36x0f series array operating at the standard
performance level.
•Remote Replication—Enables real-time replication of data between two
storage arrays in separate locations.
NOTE: For more information on deploying the correct options in the cluster
environment, see "Using Advanced (Premium) PowerVault Modular Disk
Storage Manager Features" on page 63.
Supported Cluster Configurations
The following sections describe the supported cluster configurations.
Direct-Attached Cluster
In a direct-attached cluster, all the nodes of the cluster are directly attached
to a single storage system. In this configuration, the RAID controllers on the
storage system are connected by cables directly to the Fibre Channel HBA
ports in the nodes.
Figure 1-1 shows a basic direct-attached, single-cluster configuration.
NOTE: The configuration can have up to 4 nodes. The nodes can be:
•One cluster (up to 4 nodes)
•Two two-node clusters
•One cluster and stand-alone server(s)
SAN-Attached Cluster
In a SAN-attached cluster, all nodes are attached to a single storage system or
to multiple storage systems through a SAN using redundant switch fabrics.
SAN-attached clusters are superior to direct-attached clusters in
configuration flexibility, expandability, and performance.
Figure 1-2 shows a SAN-attached cluster.
Figure 1-2. SAN-Attached Cluster
[The figure shows two cluster nodes, each with Fibre Channel connections to two Fibre Channel
switches, the switches connected to the storage system, and the nodes connected to the public
network and to each other through the private network.]
NOTE: The configuration can have up to 64 nodes. The nodes can be:
•One cluster (up to 16 nodes)
•Multiple clusters
•Multiple clusters and stand-alone server(s)
Configuration Order for Direct-Attached and
SAN-Attached Connections
This section describes the configuration steps for both direct-attached and
SAN-attached connections. These steps assume that you are setting up a
Fibre Channel storage for the first time.
NOTE: If you are adding a Fibre Channel storage array or if your host server is
already configured to access Fibre Channel storage, some of the steps in this
section may not apply. Before proceeding, see the Dell Cluster ConfigurationSupport Matrices at dell.com/ha to confirm that your existing hardware
components and Host Bus Adapter (HBA) firmware and BIOS levels are supported.
Direct-Attached Configuration Order
1 Install the supported HBAs on your cluster nodes. See "Installing
Supported Fibre Channel HBAs" on page 44.
2 Cable the cluster nodes to the storage array. See "Cabling Storage for Your
Direct-Attached Cluster" on page 24.
3 Install the required HBA drivers and firmware versions listed in the Dell
Cluster Configuration Support Matrices at dell.com/ha.
4 Install and configure the MD Storage Manager software (included with
your storage array) on your cluster nodes. See "Installing the Storage
Management Software" on page 47.
5 Using MD Storage Manager (MDSM), configure the host servers, storage
arrays, and virtual disks. See "Configuring the Shared Storage System" on
page 48.
6 Activate and configure premium features (if applicable).
SAN-Attached Configuration Order
NOTE: A SAN-attached configuration is required to use the Remote Replication
premium feature. Remote Replication is not supported in direct-attached
configurations.
1 Install the supported HBAs on your cluster nodes. See "Installing
Supported Fibre Channel HBAs" on page 44.
2 Cable the cluster nodes to the Fibre Channel switches. See "Cabling a
SAN-Attached Cluster to an MD36x0f Storage System" on page 33 and
"Remote Replication" on page 65.
3 Install the required HBA drivers and firmware versions listed in the Dell
Cluster Configuration Support Matrices at dell.com/ha.
4 Install and configure the MD Storage Manager software (included with
your storage array) on your cluster nodes. See "Installing the Storage
Management Software" on page 47.
5 Cable the storage array to the Fibre Channel switches. See "Cabling
Storage for Your SAN-Attached Cluster" on page 30.
6 Configure zoning on all Fibre Channel switches. See "Setting Up Zoning
on the Fibre Channel Switch Hardware" on page 47. A sample zoning
session is sketched after this procedure.
NOTE: All equipment attached to the switch must be powered on before
establishing zoning. For additional switch hardware requirements, see the
manufacturer’s documentation.
7 Using MDSM, configure the cluster nodes, storage arrays, and virtual
disks. See "Configuring the Shared Storage System" on page 48.
8 Activate and configure premium features (if applicable).
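The exact zoning commands for step 6 depend on the switch vendor and firmware; see the switch
documentation and "Setting Up Zoning on the Fibre Channel Switch Hardware" on page 47. The
following sketch shows the general approach on a Brocade-style switch, using hypothetical zone names
and worldwide port names (WWPNs) and single-initiator zoning (one HBA port and one storage
controller port per zone):

    zonecreate "node1_hba0_ctrl0", "10:00:00:00:c9:aa:aa:01; 20:14:00:a0:b8:75:32:01"
    zonecreate "node1_hba1_ctrl1", "10:00:00:00:c9:aa:aa:02; 20:15:00:a0:b8:75:32:01"
    cfgcreate "cluster_cfg", "node1_hba0_ctrl0; node1_hba1_ctrl1"
    cfgsave
    cfgenable "cluster_cfg"

Repeat the zonecreate step for each HBA port on each cluster node, add the resulting zones to the
configuration with cfgadd, and enable the configuration on both switch fabrics.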
Other Documents You May Need
CAUTION: The safety information that shipped with your system provides
important safety and regulatory information. Warranty information may be
included within this document or as a separate document.
•The Rack Installation Guide included with your rack solution describes
how to install your system into a rack.
•The Getting Started Guide provides an overview to initially set up
your system.
•The Dell Failover Clusters with Microsoft Windows Server 2008 Installation
and Troubleshooting Guide provides more information about deploying
your cluster.
•The Dell Cluster Configuration Support Matrices provides a list of
recommended operating systems, hardware components, and driver or
firmware versions for your Dell Windows Server Failover Cluster.
•The operating system documentation describes how to install (if
necessary), configure, and use the operating system software.
•Documentation for any components you purchased separately provides
information to configure and install those options.
•The Dell PowerVault tape library documentation provides information
about installing, troubleshooting, and upgrading the tape library.
•Updates are sometimes included with the system to describe changes to
the system, software, and/or documentation.
•The User's Guide for your PowerEdge system describes system features
and technical specifications, the System Setup program (if applicable),
software support, and the system configuration utility.
•The Dell PowerVault MD3600f and MD3620f Storage Arrays Configuring
Fibre Channel With Dell MD3600f Series Storage Arrays document
provides information about configurations, HBA installation, and zoning.
•The Dell PowerVault MD3600f and MD3620f Storage Arrays Getting
Started Guide provides an overview of setting up and cabling your storage
array.
•The Dell PowerVault MD3600f and MD3620f Storage Arrays Owner's
Manual provides information about system features and describes how to
troubleshoot the system and install or replace system components.
•The Dell PowerVault MD3600f and MD3620f Storage Arrays Deployment
Guide provides information about installing and configuring the software
and hardware.
•The Dell PowerVault Modular Disk Storage Arrays CLI Guide provides
information about using the command line interface (CLI) to configure
and manage your storage array.
•The Dell PowerVault MD36x0f Resource DVD provides documentation for
configuration and management tools, as well as the full documentation set
included here.
•The Dell PowerVault MD Systems Support Matrix provides information on
supported software and hardware for PowerVault MD systems.
NOTE: Always read the updates first because they often supersede
information in other documents.
•Release notes or readme files may be included to provide last-minute
updates to the system documentation or advance technical reference
material intended for experienced users or technicians.
2  Cabling Your Cluster Hardware
The following sections provide information on cabling various components of
your cluster.
Cabling the Mouse, Keyboard, and Monitor
When installing a cluster configuration in a rack, you must include a switch
box to connect the mouse, keyboard, and monitor to the nodes. See the
documentation included with your rack for instructions on cabling each
node's connections to the switch box.
Cabling the Power Supplies
To ensure that the specific power requirements are satisfied, see the
documentation for each component in your cluster solution.
It is recommended that you adhere to the following guidelines to protect your
cluster solution from power-related failures:
•For nodes with multiple power supplies, plug each power supply into a
separate AC circuit.
•Use uninterruptible power supplies (UPS).
•For some environments, consider having backup generators and power
from separate electrical substations.
Figure 2-1 illustrates a recommended method for power cabling of a cluster
solution consisting of two Dell PowerEdge systems and one storage system.
To ensure redundancy, the primary power supplies of all the components are
grouped onto one or two circuits and the redundant power supplies are
grouped onto a different circuit.
Figure 2-1. Power Cabling Example
[The figure shows cluster node 1, cluster node 2, and MD36x0f RAID controller modules 0 and 1,
with the primary power supplies of all components on one AC power strip (or one AC PDU, not
shown) and the redundant power supplies on a second AC power strip (or AC PDU, not shown).]
NOTE: This illustration is intended only to demonstrate the power distribution of the
components.
Cabling Your Public and Private Networks
The network adapters in the cluster nodes provide at least two network
connections for each node. These connections are described in Table 2-1.
Table 2-1. Network Connections
Network Connection / Description

Public Network: All connections to the client LAN. At least one public network must be configured
for mixed mode (public mode and private mode) for private network failover.
Private Network: A dedicated connection for sharing cluster health and status information between
the cluster nodes. Network adapters connected to the LAN can also provide redundancy at the
communications level in case the cluster interconnect fails. For more information on private
network redundancy, see your Microsoft Failover Clustering documentation.
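If the cluster is already formed, one way to verify or set the network roles described in Table 2-1 on
Windows Server 2008 is the cluster.exe command-line tool. This is only an illustration; the network
names are examples, and the same settings are available in Failover Cluster Manager:

    C:\> cluster network
    C:\> cluster network "Public" /prop Role=3
    C:\> cluster network "Private" /prop Role=1

The first command lists the cluster networks; Role=3 allows both cluster and client communications
(mixed mode) on the public network, and Role=1 restricts the private network to internal cluster
communications only.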
Figure 2-2 shows an example of network adapter cabling in which dedicated
network adapters in each node are connected to the public network and the
remaining network adapters are connected to each other (for the private
network).
Figure 2-2. Example of Network Cabling Connection
[The figure shows cluster node 1 and cluster node 2, each with a public network adapter connected
to the public network and a private network adapter connected directly to the other node for the
private network.]
Cabling Your Public Network
Any network adapter supported by a system running TCP/IP may be used to
connect to the public network segments. You can install additional network
adapters to support additional public network segments or to provide
redundancy in the event of a faulty primary network adapter or switch port.
Cabling Your Private Network
The private network connection to the cluster nodes is provided by a second
or subsequent network adapter that is installed in each node. This network is
used for intra-cluster communications.
Table 2-2 lists the required hardware components and connection method for
three possible private network configurations.
Table 2-2. Private Network Hardware Components and Connections
Method / Hardware Components / Connection

Method: Network switch
Hardware Components: Gigabit or 10 Gigabit Ethernet network adapters and switches.
Connection: Depending on the hardware, connect the CAT5e or CAT6 cables, the multimode
optical cables with Local Connectors (LCs), or the twinax cables from the network adapters in the
nodes to a switch.

Method: Point-to-Point (two-node cluster only)
Hardware Components: Copper Gigabit or 10 Gigabit Ethernet network adapters with RJ-45
connectors.
Connection: Connect a standard CAT5e or CAT6 Ethernet cable between the network adapters in
both nodes.

Hardware Components: Copper 10 Gigabit Ethernet network adapters with SFP+ connectors.
Connection: Connect a twinax cable between the network adapters in both nodes.

Hardware Components: Optical Gigabit or 10 Gigabit Ethernet network adapters with LC
connectors.
Connection: Connect a multi-mode optical cable between the network adapters in both nodes.

NOTE: Throughout this document, Ethernet refers to either Gigabit Ethernet or
10 Gigabit Ethernet.
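After cabling the private network, assign each node a static IP address on a subnet that is not used
on the public network. The addresses and the connection name "Private" below are placeholders
used only to illustrate the idea; substitute the values for your environment:

    C:\> netsh interface ip set address name="Private" static 10.0.0.1 255.255.255.0
    C:\> ping 10.0.0.2

Run the netsh command on node 1 with 10.0.0.1 and on node 2 with 10.0.0.2, then ping the other
node from each side to confirm that the private network link is working before you create the cluster.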
Using Dual-Port Network Adapters for Your Private Network
You can configure your cluster to use the public network as a failover for
private network communications. However, if dual-port network adapters are
used, do not use two ports simultaneously to support both the public and
private networks.
NIC Teaming
Network Interface Card (NIC) teaming combines two or more NICs to
provide load balancing and/or fault tolerance. Your cluster supports NIC
teaming, but only in a public network; NIC teaming is not supported in a
private network.
NOTE: Use the same brand of NICs in a team. You cannot mix brands of teaming
drivers.
Cabling the Storage Systems
This section provides information on cabling your cluster to a storage system
in a direct-attached configuration or to one or more storage systems in a
SAN-attached configuration.
Cabling Storage for Your Direct-Attached Cluster
A direct-attached cluster configuration consists of redundant Fibre Channel
host bus adapter (HBA) ports cabled directly to a Dell PowerVault MD36x0f
storage system. If a component (for example, port, cable, or the storage
controller) fails in the storage path, the MPIO software automatically
re-routes the I/O requests to the alternate path so that the storage array
continues to operate without interruption. The configuration with two
single-port HBAs provides higher availability. An HBA failure does not cause
the failover cluster to move cluster resources to the other cluster node.
Figure 2-3 shows an example of a direct-attached, single cluster configuration
with redundant HBA ports installed in each cluster node.