Dell AX4-5 User Manual

Dell|EMC AX4-5 Fibre Channel Storage Arrays With Microsoft® Windows Server® Failover Clusters
Hardware Installation and Troubleshooting Guide
www.dell.com | support.dell.com
Notes, Notices, and Cautions
NOTE: A NOTE indicates important information that helps you make better use
of your computer.
NOTICE: A NOTICE indicates either potential damage to hardware or loss of
data and tells you how to avoid the problem.
CAUTION: A CAUTION indicates a potential for property damage, personal
injury, or death.
___________________
Information in this document is subject to change without notice. © 2008 Dell Inc. All rights reserved.
Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden. Trademarks used in this text: Dell, the DELL logo, PowerEdge, PowerVault, and OpenManage are
trademarks of Dell Inc.; Active Directory, Microsoft, Windows, Windows Server, and Windows NT are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries; EMC, Navisphere, and PowerPath are registered trademarks and Access Logix, MirrorView, SAN Copy, and SnapView are trademarks of EMC Corporation.
Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.
April 2008 Rev A00
Contents

1 Introduction
    Cluster Solution
    Cluster Hardware Requirements
        Cluster Nodes
        Cluster Storage
    Supported Cluster Configurations
        Direct-Attached Cluster
        SAN-Attached Cluster
    Other Documents You May Need

2 Cabling Your Cluster Hardware
    Cabling the Mouse, Keyboard, and Monitor
    Cabling the Power Supplies
    Cabling Your Cluster for Public and Private Networks
        Cabling the Public Network
        Cabling the Private Network
        NIC Teaming
    Cabling the Storage Systems
        Cabling Storage for Your Direct-Attached Cluster
        Cabling Storage for Your SAN-Attached Cluster
        Cabling a SAN-Attached Cluster to an AX4-5 Storage System

3 Preparing Your Systems for Clustering
    Cluster Configuration Overview
    Installation Overview
    Installing the Fibre Channel HBAs
        Installing the Fibre Channel HBA Drivers
    Installing EMC PowerPath
    Implementing Zoning on a Fibre Channel Switched Fabric
        Using Worldwide Port Name Zoning
    Installing and Configuring the Shared Storage System
        Installing Navisphere Storage System Initialization Utility
        Installing the Expansion Pack Using Navisphere Express
        Installing Navisphere Server Utility
        Registering a Server With a Storage System
        Assigning the Virtual Disks to Cluster Nodes
        Advanced or Optional Storage Features
    Installing and Configuring a Failover Cluster

A Troubleshooting

B Cluster Data Form

C Zoning Configuration Form

Index

Introduction

A failover cluster combines specific hardware and software components to provide enhanced availability for applications and services that are run on the cluster. A failover cluster is designed to reduce the possibility of any single point of failure within the system that can cause the clustered applications or services to become unavailable. It is recommended that you use redundant components in your cluster, such as server and storage power supplies, connections between the nodes and the storage array(s), and connections to client systems or to other servers in a multi-tier enterprise application architecture.
This document provides information and specific configuration tasks that enable you to configure your Microsoft® Windows Server® failover cluster with Dell|EMC AX4-5 Fibre Channel storage array(s).
For more information on deploying your cluster with Windows Server 2003 operating systems, see the Dell™ Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide on the Dell Support website at support.dell.com. For more information on deploying your cluster with Windows Server 2008 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide on the Dell Support website at support.dell.com.
For a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Windows Server Failover Cluster, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at www.dell.com/ha.

Cluster Solution

Your cluster implements a minimum of two nodes and a maximum of either eight nodes (for Windows Server 2003) or sixteen nodes (for Windows Server 2008), and provides the following features:
• 8-Gbps, 4-Gbps, and 2-Gbps Fibre Channel technologies
• High availability of resources to network clients
• Redundant paths to the shared storage
• Failure recovery for applications and services
• Flexible maintenance capabilities, allowing you to repair, maintain, or upgrade a node or storage system without taking the entire cluster offline
Implementing Fibre Channel technology in a cluster provides the following advantages:
• Flexibility — Fibre Channel allows a distance of up to 10 km between switches without degrading the signal.
• Availability — Fibre Channel components use redundant connections, providing multiple data paths and greater availability for clients.
• Connectivity — Fibre Channel allows more device connections than SCSI. Because Fibre Channel devices are hot-pluggable, you can add or remove devices from the nodes without bringing down the cluster.

Cluster Hardware Requirements

Your cluster requires the following hardware components:
• Cluster nodes
• Cluster storage

Cluster Nodes

Table 1-1 lists the hardware requirements for the cluster nodes.
Table 1-1. Cluster Node Requirements

Cluster nodes: A minimum of two identical Dell™ PowerEdge™ servers are required. The maximum number of nodes that is supported depends on the variant of the Windows Server operating system used in your cluster, and on the physical topology in which the storage system and nodes are interconnected.

RAM: The variant of the Windows Server operating system that is installed on your cluster nodes determines the minimum required amount of system RAM.

HBA ports: Two Fibre Channel HBAs per node, unless the server employs an integrated or supported dual-port Fibre Channel HBA. Where possible, place the HBAs on separate PCI buses to improve availability and performance.

NICs (public and private networks): At least two NICs: one NIC for the public network and another NIC for the private network.
NOTE: It is recommended that the NICs on each public network are identical, and that the NICs on each private network are identical.

Internal disk controller: One controller connected to at least two internal hard drives for each node. Use any supported RAID controller or disk controller. Two hard drives are required for mirroring (RAID 1) and at least three are required for disk striping with parity (RAID 5).
NOTE: It is strongly recommended that you use hardware-based RAID or software-based disk-fault tolerance for the internal drives.

NOTE: For more information about supported systems, HBAs, and operating system variants, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at www.dell.com/ha.

Cluster Storage

Cluster nodes can share access to external storage systems. However, only one of the nodes can own any redundant array of independent disks (RAID) volume in the external storage system at any time. Microsoft Cluster Services (MSCS) controls which node has access to each RAID volume in the shared storage system.
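For illustration, you can see which node currently owns each clustered resource group from any node by using the cluster.exe command-line tool that is installed with the Microsoft clustering feature. The following minimal Python sketch simply wraps that command; it is offered as an illustration only, not as part of the Dell installation procedure, and it assumes cluster.exe is available on the system path.

    # Sketch: list the cluster resource groups and the node that owns each.
    # Assumes cluster.exe (installed with Microsoft clustering) is on the
    # system PATH; run this on any cluster node.
    import subprocess

    def list_group_owners() -> str:
        # "cluster group" prints one line per resource group, showing the
        # group name, the owning node, and the group status.
        result = subprocess.run(
            ["cluster", "group"],
            capture_output=True,
            text=True,
            check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(list_group_owners())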
Table 1-2 lists supported storage systems and the configuration requirements for the cluster nodes and stand-alone systems connected to the storage systems.
Table 1-2. Cluster Storage Requirements

Supported storage systems: One to four supported Dell|EMC storage systems. For specific storage system requirements, see Table 1-3.

Cluster nodes: All nodes must be directly attached to a single storage system or attached to one or more storage systems through a SAN.

Multiple clusters and stand-alone systems: Can share one or more supported storage systems using optional software that is available for your storage system. See "Installing and Configuring the Shared Storage System."
The storage systems work together with the following hardware components:
• Disk processor enclosure (DPE) - Configured with storage processors that control the RAID arrays in the storage system and provide storage functionalities such as snapshots, LUN masking, and remote mirroring.
• Disk array enclosure (DAE) - Provides additional storage and is attached to the disk processor enclosure.
• Standby power supply (SPS) - Provides backup power to protect the integrity of the disk processor write cache. The SPS is connected to the disk processor enclosure.
Table 1-3 lists the hardware requirements for the DPE, DAEs, and SPSs.
Table 1-3. Dell|EMC Storage System Requirements

Storage System: AX4-5
Minimum Required Storage: 1 DPE with at least 4 and up to 12 hard drives
Possible Storage Expansion: Up to 3 DAEs with a maximum of 12 hard drives each
SPS: 1 is required; the second SPS is optional

NOTE: Ensure that the core software version running on the storage system is supported by Dell. For specific version requirements, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at www.dell.com/ha.

Supported Cluster Configurations

Direct-Attached Cluster

In a direct-attached cluster, both nodes of the cluster are directly attached to a single storage system. In this configuration, the RAID controllers (or storage processors) on the storage systems are connected by cables directly to the Fibre Channel HBA ports in the nodes.
Figure 1-1 shows a basic direct-attached, single-cluster configuration.
Figure 1-1. Direct-Attached, Single-Cluster Configuration
(Figure labels: public network, private network, two cluster nodes, Fibre Channel connections, storage system)

EMC PowerPath Limitations in a Direct-Attached Cluster
EMC PowerPath provides failover capabilities and multiple path detection as well as dynamic load balancing between multiple ports on the same storage processor. However, direct-attached clusters supported by Dell connect to a single port on each storage processor in the storage system. Because of the single port limitation, PowerPath can provide only failover protection, not load balancing, in a direct-attached configuration.
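If you want to confirm how many paths PowerPath has discovered to each LUN, EMC PowerPath installs the powermt utility for this purpose. The Python sketch below simply wraps that command; it is an illustration only and assumes powermt is on the system path.

    # Sketch: show the paths EMC PowerPath manages for each storage device.
    # In a direct-attached AX4-5 configuration you would expect a single
    # path per storage processor, which is why PowerPath can provide
    # failover but not load balancing here.
    import subprocess

    def show_powerpath_paths() -> str:
        # "powermt display dev=all" lists every managed device together
        # with its storage-processor ports and the state of each path.
        result = subprocess.run(
            ["powermt", "display", "dev=all"],
            capture_output=True,
            text=True,
            check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(show_powerpath_paths())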

SAN-Attached Cluster

In a SAN-attached cluster, all of the nodes are attached to a single storage system or to multiple storage systems through a SAN using redundant switch fabrics. SAN-attached clusters are superior to direct-attached clusters in configuration flexibility, expandability, and performance.
Figure 1-2 shows a SAN-attached cluster.
Figure 1-2. SAN-Attached Cluster
(Figure labels: public network, private network, two cluster nodes, Fibre Channel connections, Fibre Channel switches, storage system)

Other Documents You May Need

CAUTION: The safety information that is shipped with your system provides
important safety and regulatory information. Warranty information may be included within this document or as a separate document.
NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the
Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.
• The Rack Installation Guide included with your rack solution describes how to install your system into a rack.
• The Getting Started Guide provides an overview of initially setting up your system.
• The Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide provides more information on deploying your cluster with the Windows Server 2003 operating system.
• The Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide provides more information on deploying your cluster with the Windows Server 2008 operating system.
• The Dell Cluster Configuration Support Matrices provides a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Windows Server Failover Cluster.
• The HBA documentation provides installation instructions for the HBAs.
• Systems management software documentation describes the features, requirements, installation, and basic operation of the software.
• Operating system documentation describes how to install (if necessary), configure, and use the operating system software.
• The Dell PowerVault™ tape library documentation provides information for installing, troubleshooting, and upgrading the tape library.
• The EMC PowerPath documentation that came with your HBA kit(s) and the Dell|EMC Storage Enclosure User's Guides.
• Updates are sometimes included with the system to describe changes to the system, software, and/or documentation.
NOTE: Always read the updates first because they often supersede information in other documents.
• Release notes or readme files may be included to provide last-minute updates to the system or documentation, or advanced technical reference material intended for experienced users or technicians.

Cabling Your Cluster Hardware

NOTE: To configure Dell blade server modules in a Dell™ PowerEdge™ cluster, see
the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.

Cabling the Mouse, Keyboard, and Monitor

When installing a cluster configuration in a rack, you must include a switch box to connect the mouse, keyboard, and monitor to the nodes. See the documentation included with your rack for instructions on cabling each node’s connections to the switch box.

Cabling the Power Supplies

Refer to the documentation for each component in your cluster solution to ensure that the specific power requirements are satisfied.
The following guidelines are recommended to protect your cluster solution from power-related failures:
• For nodes with multiple power supplies, plug each power supply into a separate AC circuit.
• Use uninterruptible power supplies (UPS).
• For some environments, consider having backup generators and power from separate electrical substations.
Figure 2-1 and Figure 2-2 illustrate recommended methods for power cabling of a cluster solution consisting of two PowerEdge systems and two storage systems. To ensure redundancy, the primary power supplies of all the components are grouped onto one or two circuits and the redundant power supplies are grouped onto a different circuit.
Figure 2-1. Power Cabling Example With One Power Supply in the PowerEdge Systems and One Standby Power Supply (SPS) in an AX4-5 Storage System
(Figure labels: primary power supplies on one AC power strip or AC PDU; redundant power supplies on one AC power strip or AC PDU; SPS)
NOTE: This illustration is intended only to demonstrate the power distribution of the components.
Figure 2-2. Power Cabling Example With Two Power Supplies in the PowerEdge Systems and Two SPSs in an AX4-5 Storage System
(Figure labels: primary power supplies on one AC power strip or AC PDU; redundant power supplies on one AC power strip or AC PDU)

Cabling Your Cluster for Public and Private Networks

The network adapters in the cluster nodes provide at least two network connections for each node, as described in Table 2-1.
NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the
Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.
Table 2-1. Network Connections

Public network: All connections to the client LAN. At least one public network must be configured for Mixed mode for private network failover.

Private network: A dedicated connection for sharing cluster health and status information only.
Figure 2-3 shows an example of cabling in which dedicated network adapters in each node are connected to each other (for the private network) and the remaining network adapters are connected to the public network.
Figure 2-3. Example of Network Cabling Connection
(Figure labels: public network, public network adapter, private network, private network adapter, cluster node 1, cluster node 2)

Cabling the Public Network

Any network adapter supported by a system running TCP/IP may be used to connect to the public network segments. You can install additional network adapters to support additional public network segments or to provide redundancy in the event of a faulty primary network adapter or switch port.

Cabling the Private Network

The private network connection to the nodes is provided by a different network adapter in each node. This network is used for intra-cluster communications. Table 2-2 describes three possible private network configurations.
Table 2-2. Private Network Hardware Components and Connections

Network switch: Fast Ethernet or Gigabit Ethernet network adapters and switches. Connect standard Ethernet cables from the network adapters in the nodes to a Fast Ethernet or Gigabit Ethernet switch.

Point-to-Point Fast Ethernet (two-node clusters only): Fast Ethernet network adapters. Connect a crossover Ethernet cable between the Fast Ethernet network adapters in both nodes.

Point-to-Point Gigabit Ethernet (two-node clusters only): Copper Gigabit Ethernet network adapters. Connect a standard Ethernet cable between the Gigabit Ethernet network adapters in both nodes.
NOTE: Throughout this document, Gigabit Ethernet is used to refer to either Gigabit
Ethernet or 10 Gigabit Ethernet.
Using Dual-Port Network Adapters
You can configure your cluster to use the public network as a failover for private network communications. If dual-port network adapters are used, do not use both ports simultaneously to support both the public and private networks.
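After the private network is cabled, it is worth verifying that each node can actually reach its peer over that link. A minimal Python sketch follows; the heartbeat address shown is hypothetical, so substitute the static IP address you assigned to the peer node's private network adapter.

    # Sketch: verify the private (heartbeat) link by pinging the peer
    # node's private-network address. The address below is hypothetical;
    # replace it with the IP assigned to the peer's private adapter.
    import subprocess

    PEER_PRIVATE_IP = "192.168.0.2"  # hypothetical heartbeat address

    def private_link_up(peer_ip: str) -> bool:
        # Windows ping: "-n 2" sends two echo requests; a return code of
        # zero means replies were received.
        result = subprocess.run(
            ["ping", "-n", "2", peer_ip],
            capture_output=True,
            text=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        state = "reachable" if private_link_up(PEER_PRIVATE_IP) else "NOT reachable"
        print(f"Private network peer {PEER_PRIVATE_IP} is {state}")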