Dell EMC AX4-5 User Manual


Dell/EMC AX4-5 Fibre Channel Storage Arrays With Microsoft® Windows Server® Failover Clusters

Hardware Installation and Troubleshooting Guide

Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates potential damage to hardware or loss of data if instructions are not followed.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

____________________

Information in this document is subject to change without notice. © 2008-2010 Dell Inc. All rights reserved.

Reproduction of these materials in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.

Trademarks used in this text: Dell, the DELL logo, PowerEdge, and PowerVault are trademarks of Dell Inc.; Active Directory, Microsoft, Windows, Windows Server, and Windows NT are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries; EMC, Navisphere, and PowerPath are registered trademarks and MirrorView, SAN Copy, and SnapView are trademarks of EMC Corporation.

Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.

January 2010 Rev A01

Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 7

    Cluster Solution . . . . . . . . . . . . . . . . . . . . 8
    Cluster Hardware Requirements . . . . . . . . . . . . . . 8
        Cluster Nodes . . . . . . . . . . . . . . . . . . . . 9
        Cluster Storage . . . . . . . . . . . . . . . . . . . 10
    Supported Cluster Configurations . . . . . . . . . . . . 11
        Direct-Attached Cluster . . . . . . . . . . . . . . . 11
        SAN-Attached Cluster . . . . . . . . . . . . . . . . 12
    Other Documents You May Need . . . . . . . . . . . . . . 13

2 Cabling Your Cluster Hardware . . . . . . . . . . . . . . . 15

    Cabling the Mouse, Keyboard, and Monitor . . . . . . . . 15
    Cabling the Power Supplies . . . . . . . . . . . . . . . 15
    Cabling Your Cluster for Public and Private Networks . . 17
        Cabling the Public Network . . . . . . . . . . . . . 19
        Cabling the Private Network . . . . . . . . . . . . . 19
        NIC Teaming . . . . . . . . . . . . . . . . . . . . . 20
    Cabling the Storage Systems . . . . . . . . . . . . . . . 20
        Cabling Storage for Your Direct-Attached Cluster . . 20
        Cabling Storage for Your SAN-Attached Cluster . . . . 25
        Cabling a SAN-Attached Cluster to an AX4-5F Storage System . . 27

3 Preparing Your Systems for Clustering . . . . . . . . . . . 35

    Cluster Configuration Overview . . . . . . . . . . . . . 35
    Installation Overview . . . . . . . . . . . . . . . . . . 37
    Installing the Fibre Channel HBAs . . . . . . . . . . . . 38
        Installing the Fibre Channel HBA Drivers . . . . . . 38
    Installing EMC PowerPath . . . . . . . . . . . . . . . . 38
    Implementing Zoning on a Fibre Channel Switched Fabric . 39
        Using Worldwide Port Name Zoning . . . . . . . . . . 39
    Installing and Configuring the Shared Storage System . . 41
        Installing Navisphere Storage System Initialization Utility . . 41
        Installing the Expansion Pack Using Navisphere Express . . 42
        Installing Navisphere Server Utility . . . . . . . . 43
        Registering a Server With a Storage System . . . . . 43
        Assigning the Virtual Disks to Cluster Nodes . . . . 44
        Advanced or Optional Storage Features . . . . . . . . 44
    Installing and Configuring a Failover Cluster . . . . . . 46

A Troubleshooting . . . . . . . . . . . . . . . . . . . . . . 47

B Cluster Data Form . . . . . . . . . . . . . . . . . . . . . 55

C Zoning Configuration Form . . . . . . . . . . . . . . . . . 57

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

Introduction

A failover cluster combines specific hardware and software components to provide enhanced availability for applications and services that are run on the cluster. A failover cluster is designed to reduce the possibility of any single point of failure within the system that can cause the clustered applications or services to become unavailable. It is recommended that you use redundant components in your cluster, such as server and storage power supplies, connections between the nodes and the storage array(s), and connections to client systems or other servers in a multi-tier enterprise application architecture.

This document provides information and specific configuration tasks that enable you to configure your Microsoft® Windows Server® failover cluster with Dell/EMC AX4-5F (2 Fibre Channel ports per Storage Processor) or Dell/EMC AX4-5FX (4 Fibre Channel ports per Storage Processor) storage array(s).

NOTE: Throughout this document, Dell/EMC AX4-5 refers to Dell/EMC AX4-5F and Dell/EMC AX4-5FX storage arrays.

NOTE: Throughout this document, Windows Server 2008 refers to Windows Server 2008 and Windows Server 2008 R2 operating systems.

For more information on deploying your cluster with Windows Server 2003 operating systems, see the Dell™ Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide on the Dell Support website at support.dell.com/manuals. For more information on deploying your cluster with Windows Server 2008 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide on the Dell Support website at support.dell.com/manuals.

For a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Windows Server Failover Cluster, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at dell.com/ha.


Cluster Solution

Your cluster supports a minimum of two nodes and a maximum of either eight nodes (for Windows Server 2003) or sixteen nodes (for Windows Server 2008), and provides the following features:

• 8-Gbps and 4-Gbps Fibre Channel technologies

• High availability of resources to network clients

• Redundant paths to the shared storage

• Failure recovery for applications and services

• Flexible maintenance capabilities, allowing you to repair, maintain, or upgrade a node or storage system without taking the entire cluster offline

Implementing Fibre Channel technology in a cluster provides the following advantages:

• Flexibility — Fibre Channel allows a distance of up to 10 km between switches without degrading the signal.

• Availability — Fibre Channel components use redundant connections, providing multiple data paths and greater availability for clients.

• Connectivity — Fibre Channel allows more device connections than SCSI. Because Fibre Channel devices are hot-swappable, you can add or remove devices from the nodes without bringing down the cluster.

Cluster Hardware Requirements

Your cluster requires the following hardware components:

• Cluster nodes

• Cluster storage


Cluster Nodes

Table 1-1 lists the hardware requirements for the cluster nodes.

Table 1-1. Cluster Node Requirements

Cluster nodes — A minimum of two identical Dell™ PowerEdge™ servers are required. The maximum number of nodes that is supported depends on the variant of the Windows Server operating system used in your cluster, and on the physical topology in which the storage system and nodes are interconnected.

RAM — The variant of the Windows Server operating system that is installed on your cluster nodes determines the minimum required amount of system RAM.

HBA ports — Two Fibre Channel HBAs per node, unless the server employs an integrated or supported dual-port Fibre Channel HBA. Where possible, place the HBAs on separate PCI buses to improve availability and performance.

NICs (public and private networks) — At least two NICs: one NIC for the public network and another NIC for the private network.
NOTE: It is recommended that the NICs on each public network are identical, and that the NICs on each private network are identical.

Internal disk controller — One controller connected to at least two internal hard drives for each node. Use any supported RAID controller or disk controller. Two hard drives are required for mirroring (RAID 1) and at least three are required for disk striping with parity (RAID 5).
NOTE: It is strongly recommended that you use hardware-based RAID or software-based disk-fault tolerance for the internal drives.

NOTE: For more information about supported systems, HBAs, and operating system variants, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at dell.com/ha.


Cluster Storage

Cluster nodes can share access to external storage systems. However, only one of the nodes can own any redundant array of independent disks (RAID) volume in the external storage system at any time. Microsoft Cluster Service (MSCS) controls which node has access to each RAID volume in the shared storage system.
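If you want to confirm which node currently owns each disk resource, you can query MSCS from the command line of any cluster node. The following is a minimal sketch, not part of the standard installation procedure: it assumes the cluster.exe tool present on Windows Server 2003/2008 cluster nodes, and the "Disk" name filter is only an assumption about how your disk resources are named.

```python
import subprocess

def list_resource_owners():
    """Ask MSCS which node owns each resource; 'cluster res' prints
    one line per resource with its group, owner node, and status."""
    output = subprocess.run(
        ["cluster", "res"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in output.splitlines():
        # Keep only physical disk resources; resource names are
        # site-specific, so matching on "Disk" is an assumption.
        if "Disk" in line:
            print(line)

if __name__ == "__main__":
    list_resource_owners()
```

Running this on either node should report the same owner for a given RAID volume, reflecting the single-owner rule described above.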

Table 1-2 lists supported storage systems and the configuration requirements for the cluster nodes and stand-alone systems connected to the storage systems.

Table 1-2. Cluster Storage Requirements

Supported storage systems — One to four supported Dell/EMC storage systems. For specific storage system requirements, see Table 1-3.

Cluster nodes — All nodes must be directly attached to a single storage system or attached to one or more storage systems through a SAN.

Multiple clusters and stand-alone systems — Can share one or more supported storage systems using optional software that is available for your storage system. See "Installing and Configuring the Shared Storage System" on page 41.

The storage systems work together with the following hardware components:

Disk processor enclosure (DPE)—Configured with storage processors that control the RAID arrays in the storage system and provide storage functionalities such as snapshots, LUN masking, and remote mirroring.

Disk array enclosure (DAE)—Provides additional storage and is attached to the disk processor enclosure.

Standby power supply (SPS)—Provides backup power to protect the integrity of the disk processor write cache. The SPS is connected to the disk processor enclosure.


Table 1-3 lists the hardware requirements for the DPE, DAE, and SPS.

Table 1-3. Dell/EMC Storage System Requirements

Storage system — AX4-5

Minimum required storage — 1 DPE with at least 4 and up to 12 hard drives

Possible storage expansion — Up to 3 DAEs with a maximum of 12 hard drives each

SPS — The first SPS is required; the second SPS is optional

NOTE: Ensure that the core software version running on the storage system is supported by Dell. For specific version requirements, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at dell.com/ha.

Supported Cluster Configurations

Direct-Attached Cluster

In a direct-attached cluster, all nodes of the cluster are directly attached to a single storage system. In this configuration, the RAID controllers (or storage processors) on the storage systems are connected by cables directly to the Fibre Channel HBA ports in the nodes.

Figure 1-1 shows a basic direct-attached, single-cluster configuration.


Figure 1-1. Direct-Attached, Single-Cluster Configuration

[Figure labels: public network; two cluster nodes; private network; Fibre Channel connections; storage system]

EMC® PowerPath® Limitations in a Direct-Attached Cluster

EMC PowerPath provides failover capabilities and multiple path detection as well as dynamic load balancing between multiple ports on the same storage processor. However, direct-attached clusters supported by Dell connect to a single port on each storage processor in the storage system. Because of the single port limitation, PowerPath can provide only failover protection, not load balancing, in a direct-attached configuration.
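To see how many paths PowerPath has discovered, and therefore whether it can load balance or only fail over, you can inspect the output of the powermt utility that ships with PowerPath. The following is a minimal sketch, assuming powermt is on the PATH; the "alive" string match is an assumption about the output format of your PowerPath version.

```python
import subprocess

def show_powerpath_paths():
    """Display PowerPath's view of every managed device. In a
    direct-attached AX4-5 cluster, expect a single path per storage
    processor (failover only); multiple paths per SP indicate that
    load balancing is also available."""
    output = subprocess.run(
        ["powermt", "display", "dev=all"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(output)
    # Quick sanity check: count paths reported as alive. The exact
    # line format varies by PowerPath version (an assumption here).
    alive = sum(1 for line in output.splitlines() if "alive" in line)
    print(f"Live paths reported: {alive}")

if __name__ == "__main__":
    show_powerpath_paths()
```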

SAN-Attached Cluster

In a SAN-attached cluster, all of the nodes are attached to a single storage system or to multiple storage systems through a SAN using redundant switch fabrics. SAN-attached clusters are superior to direct-attached clusters in configuration flexibility, expandability, and performance.

Figure 1-2 shows a SAN-attached cluster.


Figure 1-2. SAN-Attached Cluster

[Figure labels: public network; two cluster nodes; private network; Fibre Channel connections; two Fibre Channel switches; storage system]

Other Documents You May Need

CAUTION: The safety information that is shipped with your system provides important safety and regulatory information. Warranty information may be included within this document or as a separate document.

NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com/manuals.

• The Rack Installation Guide included with your rack solution describes how to install your system into a rack.

• The Getting Started Guide provides an overview of initially setting up your system.

• The Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide provides more information on deploying your cluster with the Windows Server 2003 operating system.

• The Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide provides more information on deploying your cluster with the Windows Server 2008 operating system.

• The Dell Cluster Configuration Support Matrices provide a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Windows Server Failover Cluster.

• The HBA documentation provides installation instructions for the HBAs.

• Systems management software documentation describes the features, requirements, installation, and basic operation of the software.

• Operating system documentation describes how to install (if necessary), configure, and use the operating system software.

• The Dell PowerVault™ tape library documentation provides information for installing, troubleshooting, and upgrading the tape library.

• The EMC PowerPath documentation that came with your HBA kit(s) and the Dell/EMC Storage Enclosure User's Guides provide additional information about the multipath software and storage enclosures.

• Updates are sometimes included with the system to describe changes to the system, software, and/or documentation.

NOTE: Always read the updates first because they often supersede information in other documents.

• Release notes or readme files may be included to provide last-minute updates to the system or documentation, or advanced technical reference material intended for experienced users or technicians.


Cabling Your Cluster Hardware

NOTE: To configure Dell blade server modules in a Dell™ PowerEdge™ cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com/manuals.

Cabling the Mouse, Keyboard, and Monitor

When installing a cluster configuration in a rack, you must include a switch box to connect the mouse, keyboard, and monitor to the nodes. See the documentation included with your rack for instructions on cabling each node’s connections to the switch box.

Cabling the Power Supplies

Refer to the documentation for each component in your cluster solution to ensure that the specific power requirements are satisfied.

The following guidelines are recommended to protect your cluster solution from power-related failures:

• For nodes with multiple power supplies, plug each power supply into a separate AC circuit.

• Use uninterruptible power supplies (UPS).

• For some environments, consider having backup generators and power from separate electrical substations.

Figure 2-1 and Figure 2-2 illustrate recommended methods for power cabling for a cluster solution consisting of two PowerEdge systems and two storage systems. To ensure redundancy, the primary power supplies of all the components are grouped onto one or two circuits and the redundant power supplies are grouped onto a different circuit.


Figure 2-1. Power Cabling Example With One Power Supply in the PowerEdge Systems and One Standby Power Supply (SPS) in an AX4-5 Storage System

[Figure labels: primary power supplies on one AC power strip (or on one AC PDU [not shown]); redundant power supplies on one AC power strip (or on one AC PDU [not shown]); SPS]

NOTE: This illustration is intended only to demonstrate the power distribution of the components.


Figure 2-2. Power Cabling Example With Two Power Supplies in the PowerEdge Systems and Two SPS(s) in an AX4-5 Storage System

[Figure labels: primary power supplies on one AC power strip (or on one AC PDU [not shown]); redundant power supplies on one AC power strip (or on one AC PDU [not shown])]

Cabling Your Cluster for Public and Private Networks

The network adapters in the cluster nodes provide at least two network connections for each node, as described in Table 2-1.

NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com/manuals.


 

Table 2-1. Network Connections

Public network — All connections to the client LAN. At least one public network must be configured for Mixed mode for private network failover (a command-line sketch follows this table).

Private network — A dedicated connection for sharing cluster health and status information only.
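One way to set these roles is from the command line on a cluster node. The following is a minimal sketch, not a definitive procedure: it assumes the cluster.exe network Role property (1 = internal cluster communications only, 3 = all communications, that is, Mixed mode), and the network names "Public" and "Private" are hypothetical placeholders for the names defined in your cluster.

```python
import subprocess

# Network names are site-specific; "Public" and "Private" are
# hypothetical placeholders for the names defined in your cluster.
ROLES = {
    "Public": 3,   # 3 = all communications (Mixed mode)
    "Private": 1,  # 1 = internal cluster communications only
}

def set_network_roles():
    """Set the Role property of each cluster network with cluster.exe."""
    for name, role in ROLES.items():
        subprocess.run(
            ["cluster", "network", name, "/prop", f"Role={role}"],
            check=True,
        )

if __name__ == "__main__":
    set_network_roles()
```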

 

 

Figure 2-3 shows an example of cabling in which dedicated network adapters in each node are connected to each other (for the private network) and the remaining network adapters are connected to the public network.

Figure 2-3. Example of Network Cabling Connection

[Figure labels: public network; public network adapter; private network adapter; private network; cluster node 1; cluster node 2]
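After both networks are cabled, it is worth confirming that each node can reach its peer on each network before installing the cluster software. The sketch below uses hypothetical addresses (10.0.0.2 public, 192.168.0.2 private); substitute the addresses assigned in your configuration.

```python
import subprocess

# Hypothetical peer addresses; replace them with the addresses
# assigned to the other node in your configuration.
PEERS = {
    "public network": "10.0.0.2",
    "private network": "192.168.0.2",
}

def check_links():
    """Ping the peer node on each network to verify the cabling.
    Uses the Windows ping syntax (-n = number of echo requests)."""
    for label, address in PEERS.items():
        result = subprocess.run(
            ["ping", "-n", "2", address],
            capture_output=True, text=True,
        )
        state = "reachable" if result.returncode == 0 else "NO RESPONSE"
        print(f"{label}: {address} -> {state}")

if __name__ == "__main__":
    check_links()
```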
