Dell EMC CX4 Owner's Manual


Dell/EMC CX4-series Fibre Channel Storage Arrays With Microsoft® Windows Server® Failover Clusters

Hardware Installation and Troubleshooting Guide

Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

___________________

Information in this document is subject to change without notice. © 2008 Dell Inc. All rights reserved.

Reproduction of these materials in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.

Trademarks used in this text: Dell, the DELL logo, PowerEdge, and PowerVault are trademarks of Dell Inc.; Active Directory, Microsoft, Windows, Windows Server, Windows XP, and Windows NT are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries; EMC, Navisphere, and PowerPath are registered trademarks and MirrorView, SAN Copy, and SnapView are trademarks of EMC Corporation.

Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.

October 2008

Rev A00

Contents

1 Introduction    7
    Cluster Solution    8
    Cluster Hardware Requirements    8
        Cluster Nodes    9
        Cluster Storage    10
    Supported Cluster Configurations    12
        Direct-Attached Cluster    12
        SAN-Attached Cluster    13
    Other Documents You May Need    13

2 Cabling Your Cluster Hardware    15
    Cabling the Mouse, Keyboard, and Monitor    15
    Cabling the Power Supplies    15
    Cabling Your Cluster for Public and Private Networks    17
        Cabling the Public Network    18
        Cabling the Private Network    19
        NIC Teaming    19
    Cabling the Storage Systems    19
        Cabling Storage for Your Direct-Attached Cluster    20
        Cabling Storage for Your SAN-Attached Cluster    25

3 Preparing Your Systems for Clustering    39
    Cluster Configuration Overview    39
    Installation Overview    41
    Installing the Fibre Channel HBAs    42
        Installing the Fibre Channel HBA Drivers    42
    Implementing Zoning on a Fibre Channel Switched Fabric    42
        Using Zoning in SAN Configurations Containing Multiple Hosts    43
        Using Worldwide Port Name Zoning    43
    Installing and Configuring the Shared Storage System    45
        Access Control    45
        Storage Groups    46
        Navisphere Manager    48
        Navisphere Agent    48
        EMC PowerPath    49
        Enabling Access Control and Creating Storage Groups Using Navisphere    49
        Configuring the Hard Drives on the Shared Storage System(s)    51
        Optional Storage Features    52
    Updating a Dell/EMC Storage System for Clustering    53
    Installing and Configuring a Failover Cluster    53

A Troubleshooting    55

B Zoning Configuration Form    61

C Cluster Data Form    63


Introduction

A Dell™ Failover Cluster combines specific hardware and software components to provide enhanced availability for applications and services that are run on the cluster. A Failover Cluster is designed to reduce the possibility of any single point of failure within the system that can cause the clustered applications or services to become unavailable. It is recommended that you use redundant components like server and storage power supplies, connections between the nodes and the storage array(s), and connections to client systems or other servers in a multi-tier enterprise application architecture in your cluster.

This document provides information for configuring your Dell/EMC CX4-series Fibre Channel storage arrays with one or more Failover Clusters. It describes specific configuration tasks that enable you to deploy the shared storage for your cluster.

For more information on deploying your cluster with Microsoft® Windows Server® 2003 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com. For more information on deploying your cluster with Windows Server 2008 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com.

For a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Failover Cluster, see the Dell Cluster Configuration Support Matrix on the Dell High Availability website at www.dell.com/ha.


Cluster Solution

Your cluster implements a minimum of two nodes to a maximum of either eight nodes (for Windows Server 2003) or sixteen nodes (for Windows Server 2008) and provides the following features:

8-Gbps and 4-Gbps Fibre Channel technology

High availability of resources to network clients

Redundant paths to the shared storage

Failure recovery for applications and services

Flexible maintenance capabilities, allowing you to repair, maintain, or upgrade a node or storage system without taking the entire cluster offline

Implementing Fibre Channel technology in a cluster provides the following advantages:

Flexibility — Fibre Channel allows a distance of up to 10 km between switches without degrading the signal.

Availability — Fibre Channel components use redundant connections providing multiple data paths and greater availability for clients.

Connectivity — Fibre Channel allows more device connections than Small Computer System Interface (SCSI). Because Fibre Channel devices are hot-pluggable, you can add or remove devices from the nodes without taking the entire cluster offline.

Cluster Hardware Requirements

Your cluster requires the following hardware components:

Cluster nodes

Cluster storage


Cluster Nodes

Table 1-1 lists the hardware requirements for the cluster nodes.

Table 1-1. Cluster Node Requirements

Cluster nodes: A minimum of two identical PowerEdge servers are required. The maximum number of nodes that are supported depends on the variant of the Windows Server operating system used in your cluster, and on the physical topology in which the storage system and nodes are interconnected.

RAM: The variant of the Windows Server operating system that is installed on your cluster nodes determines the minimum RAM required.

Host Bus Adapter (HBA) ports: Two Fibre Channel HBAs per node, unless the server employs an integrated or supported dual-port Fibre Channel HBA. Where possible, place the HBAs on separate PCI buses to improve availability and performance.

NICs: At least two NICs: one NIC for the public network and another NIC for the private network. NOTE: It is recommended that the NICs on each public network are identical, and that the NICs on each private network are identical.

Internal disk controller: One controller connected to at least two internal hard drives for each node. Use any supported RAID controller or disk controller. Two hard drives are required for mirroring (RAID 1) and at least three are required for disk striping with parity (RAID 5). NOTE: It is strongly recommended that you use hardware-based RAID or software-based disk-fault tolerance for the internal drives.

NOTE: For more information about supported systems, HBAs and operating system variants, see the Dell Cluster Configuration Support Matrix on the Dell High Availability website at www.dell.com/ha.


Cluster Storage

Table 1-2 lists supported storage systems and the configuration requirements for the cluster nodes and stand-alone systems connected to the storage systems.

Table 1-2. Cluster Storage Requirements

Supported storage systems: One to four supported Dell/EMC storage systems. See Table 1-3 for specific storage system requirements.

Cluster nodes: All nodes must be directly attached to a single storage system or attached to one or more storage systems through a SAN.

Multiple clusters and stand-alone systems: Can share one or more supported storage systems. See "Installing and Configuring the Shared Storage System" on page 45.

Table 1-3 lists hardware requirements for the storage processor enclosures (SPE), disk array enclosures (DAE), and standby power supplies (SPS).

Table 1-3. Dell/EMC Storage System Requirements

CX4-120
    Minimum storage: One DAE-OS with at least five and up to 15 hard drives
    Possible storage expansion: Up to seven DAEs with a maximum of 15 hard drives each
    SPS: Two (for the SPE and DAE-OS)

CX4-240
    Minimum storage: One DAE-OS with at least five and up to 15 hard drives
    Possible storage expansion: Up to fifteen DAEs with a maximum of 15 hard drives each
    SPS: Two (for the SPE and DAE-OS)

CX4-480
    Minimum storage: One DAE-OS with at least five and up to 15 hard drives
    Possible storage expansion: Up to thirty-one DAEs with a maximum of 15 hard drives each
    SPS: Two (for the SPE and DAE-OS)

CX4-960
    Minimum storage: One DAE-OS with at least five and up to 15 hard drives
    Possible storage expansion: Up to sixty-three DAEs with a maximum of 15 hard drives each
    SPS: Two (for the SPE and DAE-OS)

NOTE: The DAE-OS is the first DAE enclosure that is connected to the CX4-series (including all of the storage systems listed above). Core software is preinstalled on the first five hard drives of the DAE-OS.


Each storage system in the cluster is centrally managed by one host system (also called a management station) running EMC Navisphere® Manager—a centralized storage management application used to configure Dell/EMC storage systems. Using a graphical user interface (GUI), you can select a specific view of your storage arrays, as shown in Table 1-4.

Table 1-4. Navisphere Manager Storage Views

Storage: Shows the logical storage components and their relationships to each other and identifies hardware faults.

Hosts: Shows the host system's storage group and attached logical unit numbers (LUNs).

Monitors: Shows all Event Monitor configurations, including centralized and distributed monitoring configurations.

You can use Navisphere Manager to perform tasks such as creating RAID arrays, binding LUNs, and downloading firmware. Optional software for the shared storage systems includes:

EMC MirrorView™ — Provides synchronous or asynchronous mirroring between two storage systems.

EMC SnapView™ — Captures point-in-time images of a LUN for backups or testing without affecting the contents of the source LUN.

EMC SAN Copy™ — Moves data between Dell/EMC storage systems without using host CPU cycles or local area network (LAN) bandwidth.

For more information about Navisphere Manager, MirrorView, SnapView, and SAN Copy, see "Installing and Configuring the Shared Storage System" on page 45.
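Although this guide describes the Navisphere Manager GUI, the same storage-group tasks can also be scripted from a management station using the Navisphere Secure CLI (naviseccli). The following is only a minimal sketch: the SP address (172.23.1.50), storage group name (Cluster1), host names (node1, node2), and LUN numbers are hypothetical examples, and the exact options should be verified against the Navisphere CLI documentation for your storage system software release.

Create a storage group for the cluster and connect both cluster nodes to it:

naviseccli -h 172.23.1.50 storagegroup -create -gname Cluster1
naviseccli -h 172.23.1.50 storagegroup -connecthost -host node1 -gname Cluster1
naviseccli -h 172.23.1.50 storagegroup -connecthost -host node2 -gname Cluster1

Add a bound LUN (array LUN 10) to the storage group so that the nodes see it as host LUN 0:

naviseccli -h 172.23.1.50 storagegroup -addhlu -gname Cluster1 -hlu 0 -alu 10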


Supported Cluster Configurations

The following sections describe the supported cluster configurations.

Direct-Attached Cluster

In a direct-attached cluster, all the nodes of the cluster are directly attached to a single storage system. In this configuration, the RAID controllers (or storage processors) on the storage system are connected by cables directly to the Fibre Channel HBA ports in the nodes.

Figure 1-1 shows a basic direct-attached, single-cluster configuration.

Figure 1-1. Direct-Attached, Single-Cluster Configuration

Figure callouts: public network, cluster node (2), private network, Fibre Channel connections, storage system

EMC PowerPath Limitations in a Direct-Attached Cluster

EMC PowerPath® provides failover capabilities, multiple path detection, and dynamic load balancing between multiple ports on the same storage processor. However, the direct-attached clusters supported by Dell connect to a single port on each storage processor in the storage system. Because of the single port limitation, PowerPath can provide only failover protection, not load balancing, in a direct-attached configuration.
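After PowerPath is installed on a cluster node, you can verify what it detects with the powermt command-line utility that is installed with PowerPath. This is only a sketch; the exact output differs between PowerPath versions. In the direct-attached configuration described above, each LUN typically shows one path through each storage processor, so PowerPath can fail over between SPs but has no additional paths to balance load across.

Display a summary of the HBAs and their paths to the storage system:

powermt display

Display each storage-system device and the state of its individual paths:

powermt display dev=all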


SAN-Attached Cluster

In a SAN-attached cluster, all nodes are attached to a single storage system or to multiple storage systems through a SAN using redundant switch fabrics. SAN-attached clusters are superior to direct-attached clusters in configuration flexibility, expandability, and performance.

Figure 1-2 shows a SAN-attached cluster.

Figure 1-2. SAN-Attached Cluster

Figure callouts: public network, cluster node (2), private network, Fibre Channel connections, Fibre Channel switch (2), storage system

Other Documents You May Need

WARNING: The safety information that shipped with your system provides important safety and regulatory information. Warranty information may be included within this document or as a separate document.

NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.

The Rack Installation Guide included with your rack solution describes how to install your system into a rack.


The Getting Started Guide provides an overview of initially setting up your system.

For more information on deploying your cluster with Windows Server 2003 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide.

For more information on deploying your cluster with Windows Server 2008 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide.

The HBA documentation provides installation instructions for the HBAs.

Systems management software documentation describes the features, requirements, installation, and basic operation of the software.

Operating system documentation describes how to install (if necessary), configure, and use the operating system software.

Documentation for any components you purchased separately provides information to configure and install those options.

The Dell PowerVault™ tape library documentation provides information for installing, troubleshooting, and upgrading the tape library.

Any other documentation that came with your server or storage system.

The EMC PowerPath documentation that came with your HBA kit(s) and Dell/EMC Storage Enclosure User’s Guides.

Updates are sometimes included with the system to describe changes to the system, software, and/or documentation.

NOTE: Always read the updates first because they often supersede information in other documents.

Release notes or readme files may be included to provide last-minute updates to the system or documentation, or advanced technical reference material intended for experienced users or technicians.


Cabling Your Cluster Hardware

NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.

Cabling the Mouse, Keyboard, and Monitor

When installing a cluster configuration in a rack, you must include a switch box to connect the mouse, keyboard, and monitor to the nodes. See the documentation included with your rack for instructions on cabling each node to the switch box.

Cabling the Power Supplies

See the documentation for each component in your cluster solution and ensure that the specific power requirements are satisfied.

The following guidelines are recommended to protect your cluster solution from power-related failures:

For nodes with multiple power supplies, plug each power supply into a separate AC circuit.

Use uninterruptible power supplies (UPS).

For some environments, consider having backup generators and power from separate electrical substations.

Figure 2-1 and Figure 2-2 illustrate recommended methods for power cabling for a cluster solution consisting of two PowerEdge systems and two storage systems. To ensure redundancy, the primary power supplies of all the components are grouped into one or two circuits and the redundant power supplies are grouped into a different circuit.


Figure 2-1. Power Cabling Example With One Power Supply in the PowerEdge Systems

Figure callouts: primary power supplies on one AC power strip (or on one AC Power Distribution Unit [not shown]); redundant power supplies on one AC power strip (or on one AC PDU [not shown])

NOTE: This illustration is intended only to demonstrate the power distribution of the components.


Figure 2-2. Power Cabling Example With Two Power Supplies in the PowerEdge Systems

Figure callouts: primary power supplies on one AC power strip (or on one AC PDU [not shown]); redundant power supplies on one AC power strip (or on one AC PDU [not shown])

NOTE: This illustration is intended only to demonstrate the power distribution of the components.

Cabling Your Cluster for Public and Private Networks

The network adapters in the cluster nodes provide at least two network connections for each node, as described in Table 2-1.

NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.


Table 2-1. Network Connections

Public network: All connections to the client LAN. At least one public network must be configured for Mixed mode for private network failover.

Private network: A dedicated connection for sharing cluster health and status information only.

Figure 2-3 shows an example of cabling in which dedicated network adapters in each node are connected to each other (for the private network) and the remaining network adapters are connected to the public network.

Figure 2-3. Example of Network Cabling Connection

Figure callouts: public network, public network adapter, private network adapter, private network, cluster node 1, cluster node 2

Cabling the Public Network

Any network adapter supported by a system running TCP/IP may be used to connect to the public network segments. You can install additional network adapters to support additional public network segments or to provide redundancy in the event of a faulty primary network adapter or switch port.


Cabling the Private Network

The private network connection to the nodes is provided by a different network adapter in each node. This network is used for intra-cluster communications. Table 2-2 describes the possible private network configurations.

Table 2-2. Private Network Hardware Components and Connections

Network switch
    Hardware components: Gigabit Ethernet network adapters and switches
    Connection: Connect standard Ethernet cables from the network adapters in the nodes to a Gigabit Ethernet switch.

Point-to-Point Gigabit Ethernet (two-node clusters only)
    Hardware components: Copper Gigabit Ethernet network adapters
    Connection: Connect a standard Ethernet cable between the Gigabit Ethernet network adapters in both nodes.

NOTE: Throughout this document, Gigabit Ethernet is used to refer to either Gigabit Ethernet or 10 Gigabit Ethernet.
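Cabling establishes only the physical link; each private (heartbeat) network adapter is normally also assigned a static IP address on a dedicated, non-routed subnet that is separate from the public subnet. The following is a minimal sketch, assuming the private adapter's network connection has been renamed "Private" and a hypothetical 10.0.0.x subnet; it uses the netsh utility included with Windows Server 2003 and 2008. No default gateway is specified because the private network carries only node-to-node cluster traffic.

On cluster node 1:

netsh interface ip set address name="Private" static 10.0.0.1 255.255.255.0

On cluster node 2:

netsh interface ip set address name="Private" static 10.0.0.2 255.255.255.0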

Using Dual-Port Network Adapters

You can configure your cluster to use the public network as a failover for private network communications. If you are using dual-port network adapters, do not configure both ports simultaneously to support both public and private networks.
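As Table 2-1 notes, at least one public network must be configured for Mixed mode so that it can carry cluster (private) traffic if the private network fails. After the Failover Cluster is created, the network roles can be reviewed or adjusted with the cluster.exe command-line tool included with Windows Server 2003 and 2008. The following is only a sketch: it assumes the cluster networks are named "Public" and "Private", and the Role property values (3 = all communications [Mixed mode], 1 = internal cluster communications only) should be verified against your operating system documentation.

cluster network "Public" /prop Role=3
cluster network "Private" /prop Role=1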

NIC Teaming

NIC teaming combines two or more NICs to provide load balancing and fault tolerance. Your cluster supports NIC teaming only in a public network; NIC teaming is not supported in a private network.

Use the same brand of NICs in a team. Do not mix brands in NIC teaming.

Cabling the Storage Systems

This section provides information on cabling your cluster to a storage system in a direct-attached configuration or to one or more storage systems in a SAN-attached configuration.


Cabling Storage for Your Direct-Attached Cluster

A direct-attached cluster configuration consists of redundant Fibre Channel host bus adapter (HBA) ports cabled directly to a Dell/EMC storage system.

Figure 2-4 shows an example of a direct-attached, single cluster configuration with redundant HBA ports installed in each cluster node.

Figure 2-4. Direct-Attached Cluster Configuration

Figure callouts: public network, cluster node (2), private network, Fibre Channel connections, storage system


Cabling a Cluster to a Dell/EMC Storage System

Each cluster node attaches to the storage system using two fibre optic cables with duplex local connector (LC) multimode connectors that attach to the HBA ports in the cluster nodes and the storage processor (SP) ports in the Dell/EMC storage system. These connectors consist of two individual fibre optic connectors with indexed tabs that must be aligned properly in the HBA ports and SP ports.

CAUTION: Do not remove the connector covers until you are ready to insert the connectors into the HBA port, SP port, or tape library port.

Cabling a Two-Node Cluster to a Dell/EMC Storage System

NOTE: The Dell/EMC storage system requires at least two front-end Fibre Channel ports available on each storage processor.

1 Connect cluster node 1 to the storage system:

a Install a cable from cluster node 1 HBA port 0 to the first front-end Fibre Channel port on SP-A.

b Install a cable from cluster node 1 HBA port 1 to the first front-end Fibre Channel port on SP-B.

2 Connect cluster node 2 to the storage system:

a Install a cable from cluster node 2 HBA port 0 to the second front-end Fibre Channel port on SP-A.

b Install a cable from cluster node 2 HBA port 1 to the second front-end Fibre Channel port on SP-B.
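After the cables are connected, one way to confirm that each HBA port has logged in to the intended SP front-end port is to list the front-end ports from a management station. This is only a sketch using the Navisphere Secure CLI with hypothetical SP management addresses; the same initiator records can also be reviewed in Navisphere Manager. Each cluster node's HBA worldwide port names should appear as logged-in initiators on the expected ports.

naviseccli -h <SP-A management address> port -list
naviseccli -h <SP-B management address> port -list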
