
Dell|EMC AX4-5i iSCSI Storage Arrays With Microsoft® Windows Server® Failover Clusters
Hardware Installation and Troubleshooting Guide

www.dell.com | support.dell.com
NOTE: A NOTE indicates important information that helps you make better use
of your computer.
NOTICE: A NOTICE indicates either potential damage to hardware or loss of
data and tells you how to avoid the problem.
CAUTION: A CAUTION indicates a potential for property damage, personal
injury, or death.
___________________
Information in this document is subject to change without notice. © 2008 Dell Inc. All rights reserved.

Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.

Trademarks used in this text: Dell, the DELL logo, PowerEdge, PowerVault, and OpenManage are trademarks of Dell Inc.; Active Directory, Microsoft, Windows, Windows Server, and Windows NT are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries; EMC, Navisphere, and PowerPath are registered trademarks and MirrorView, SAN Copy, and SnapView are trademarks of EMC Corporation.

Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.

April 2008    Rev A00
Contents

1 Introduction . . . 5
    Cluster Solution . . . 6
    Cluster Hardware Requirements . . . 6
        Cluster Nodes . . . 7
        Cluster Storage . . . 8
        NICs Dedicated to iSCSI . . . 9
        Ethernet Switches Dedicated to iSCSI . . . 9
    Supported Cluster Configurations . . . 9
        Direct-Attached Cluster . . . 9
        iSCSI SAN-Attached Cluster . . . 10
    Other Documents You May Need . . . 11

2 Cabling Your Cluster Hardware . . . 13
    Cabling the Mouse, Keyboard, and Monitor . . . 13
    Cabling the Power Supplies . . . 13
    Cabling Your Cluster for Public and Private Networks . . . 15
        Cabling the Public Network . . . 16
        Cabling the Private Network . . . 17
        NIC Teaming . . . 17
    Cabling the Storage Systems . . . 18
        Cabling Storage for Your Direct-Attached Cluster . . . 18
        Cabling Storage for Your iSCSI SAN-Attached Cluster . . . 20

3 Preparing Your Systems for Clustering . . . 27
    Cluster Configuration Overview . . . 27
    Installation Overview . . . 29
        Installing the iSCSI NICs . . . 30
        Installing the Microsoft iSCSI Software Initiator . . . 30
        Modifying the TCP Registry Settings . . . 31
        Installing EMC® PowerPath® . . . 31
        Configuring the Shared Storage System . . . 32
        Installing and Configuring a Failover Cluster . . . 41

A Troubleshooting . . . 43

B Cluster Data Form . . . 49

C iSCSI Configuration Worksheet . . . 51

Index . . . 53

Introduction

A Dell™ Failover Cluster combines specific hardware and software components to provide enhanced availability for applications and services that run on your cluster. A Failover Cluster reduces the possibility of any single point of failure within the system that can cause the clustered applications or services to become unavailable. It is recommended that you use redundant components, such as servers, storage, power supplies, connections between the nodes and the storage array(s), and connections to client systems or other servers in a multi-tier enterprise application architecture, in your cluster.
This document provides information and specific configuration tasks that enable you to configure your Failover Cluster with Dell|EMC AX4-5i Internet Small Computer System Interface (iSCSI) storage array(s).
For more information on deploying your cluster with Windows Server 2003 operating systems, see the Dell™ Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com. For more information on deploying your cluster with Windows Server 2008 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com.
For a list of recommended operating systems, hardware components, and driver or firmware versions for your Failover Cluster, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at www.dell.com/ha.

Cluster Solution

Your cluster supports a minimum of two nodes and a maximum of either eight nodes (with Windows Server 2003 operating systems) or sixteen nodes (with Windows Server 2008 operating systems), and provides the following features:
• Gigabit Ethernet technology for iSCSI clusters
• High availability of resources to network clients
• Redundant paths to the shared storage
• Failure recovery for applications and services
• Flexible maintenance capabilities, allowing you to repair, maintain, or upgrade a node or storage system without taking the entire cluster offline
The iSCSI protocol encapsulates SCSI frames, which include commands, data, status, and so on, into Transmission Control Protocol/Internet Protocol (TCP/IP) packets to be transported over Ethernet networks. The iSCSI data blocks are sent between the Microsoft iSCSI Initiator that resides in the host and the iSCSI target, which is usually a storage device. Implementing iSCSI in a cluster provides the following advantages:
• Geographic distribution — Wider coverage of Ethernet technology allows cluster nodes and storage systems to be located in different sites.
• Low cost for availability — Redundant connections provide multiple data paths that are available through inexpensive TCP/IP network components.
• Connectivity — A single technology for connection of storage systems, cluster nodes, and clients within an existing local area network (LAN), wide area network (WAN), and storage network.
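
Because iSCSI traffic travels in ordinary TCP/IP packets, basic reachability of a storage processor's iSCSI portal can be checked with a plain TCP connection test before the Microsoft iSCSI Initiator is even configured. The following Python sketch is a minimal, hypothetical example: the NIC and portal IP addresses are placeholders, and only the default iSCSI TCP port 3260 is assumed; substitute the values from your iSCSI configuration worksheet.

```python
# Minimal sketch: confirm that each dedicated iSCSI NIC can open a TCP
# connection to its storage-processor iSCSI portal on the default port 3260.
# All IP addresses below are hypothetical placeholders.
import socket

ISCSI_PORT = 3260  # default TCP port used by iSCSI targets

# Pair each iSCSI NIC address on this node with the portal it should reach.
paths = [
    ("172.16.1.11", "172.16.1.100"),  # iSCSI NIC 1 -> SP iSCSI portal 1
    ("172.16.2.11", "172.16.2.100"),  # iSCSI NIC 2 -> SP iSCSI portal 2
]

def portal_reachable(source_ip, portal_ip, timeout=5.0):
    """Return True if a TCP connection to portal_ip:3260 succeeds from source_ip."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.bind((source_ip, 0))              # send from the dedicated iSCSI NIC
        sock.connect((portal_ip, ISCSI_PORT))
        return True
    except OSError:
        return False
    finally:
        sock.close()

for nic_ip, portal_ip in paths:
    state = "reachable" if portal_reachable(nic_ip, portal_ip) else "NOT reachable"
    print(f"{nic_ip} -> {portal_ip}:{ISCSI_PORT} is {state}")
```

A successful connection only proves that the TCP path exists; discovery and login to the target are still performed through the Microsoft iSCSI Software Initiator described later in this guide.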

Cluster Hardware Requirements

Your cluster requires the following hardware components:
• Cluster nodes
• Cluster storage

Cluster Nodes

Table 1-1 lists the hardware requirements for the cluster nodes.
Table 1-1. Cluster Node Requirements

Cluster nodes: A minimum of two identical PowerEdge servers are required. The maximum number of nodes that is supported depends on the variant of the Windows Server operating system used in your cluster and on the physical topology in which the storage system and nodes are interconnected.

RAM: The variant of the Windows Server operating system that is installed on your cluster nodes determines the minimum required amount of system RAM.

iSCSI Initiator: Install the iSCSI port driver, Initiator Service, and Software Initiator on each node.

Network Interface Cards (NICs) for iSCSI access: Two iSCSI NICs or two iSCSI NIC ports per node. Configure the NICs on separate PCI buses to improve availability and performance. TCP/IP Offload Engine (TOE) NICs are also supported for iSCSI traffic.

NICs (public and private networks): At least two NICs: one NIC for the public network and another NIC for the private network.
NOTE: It is recommended that the NICs on each public network are identical, and that the NICs on each private network are identical.

Internal disk controller: One controller connected to at least two internal hard drives for each node. Use any supported RAID controller or disk controller. Two hard drives are required for mirroring (RAID 1) and at least three are required for disk striping with parity (RAID 5).
NOTE: It is strongly recommended that you use hardware-based RAID or software-based disk-fault tolerance for the internal drives.
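
To put the internal disk controller requirement in perspective, mirroring (RAID 1) leaves the capacity of one drive usable, while disk striping with parity (RAID 5) leaves the capacity of all but one drive. The short Python sketch below is only a worked arithmetic example with hypothetical drive counts and sizes, not a sizing tool.

```python
# Simple worked example: usable capacity for the internal-drive RAID levels
# mentioned above (RAID 1 mirroring, RAID 5 striping with parity).
# Drive counts and sizes are hypothetical.

def usable_capacity_gb(raid_level, drive_count, drive_size_gb):
    """Return usable capacity in GB for equal-sized drives."""
    if raid_level == 1:
        if drive_count != 2:
            raise ValueError("RAID 1 mirroring uses exactly two drives")
        return drive_size_gb                      # one copy of the data is usable
    if raid_level == 5:
        if drive_count < 3:
            raise ValueError("RAID 5 requires at least three drives")
        return (drive_count - 1) * drive_size_gb  # one drive's worth holds parity
    raise ValueError("only RAID 1 and RAID 5 are covered here")

print(usable_capacity_gb(1, 2, 146))   # 146 GB usable from a two-drive mirror
print(usable_capacity_gb(5, 3, 146))   # 292 GB usable from a three-drive RAID 5 set
```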

Cluster Storage

Cluster nodes can share access to external storage systems. However, only one of the nodes can own any RAID volume in the external storage system at any time. Microsoft Cluster Services (MSCS) controls which node has access to each RAID volume in the shared storage system.
Table 1-2 lists the supported storage systems and the configuration requirements for the cluster nodes and stand-alone systems connected to the storage systems.
Table 1-2. Cluster Storage Requirements

Supported storage systems: One to four supported Dell|EMC storage systems. For specific storage system requirements, see Table 1-3.

Cluster nodes: All nodes must be directly attached to a single storage system or attached to one or more storage systems through a SAN.

Multiple clusters and stand-alone systems: Can share one or more supported storage systems.

The storage systems work together with the following hardware components:
• Disk Processor Enclosure (DPE)—Configured with storage processors that control the RAID arrays in the storage system and provide storage functionalities such as snapshots, LUN masking, and remote mirroring.
• Disk Array Enclosure (DAE)—Provides additional storage and is attached to the disk processor enclosure.
• Standby Power Supply (SPS)—Provides backup power to protect the integrity of the disk processor write cache. The SPS is connected to the disk processor enclosure.

Table 1-3 lists hardware requirements for the AX4-5i storage array.

Table 1-3. Dell|EMC Storage System Requirements

Processor Enclosure: AX4-5i
Minimum Required Storage: One DPE with at least 4 and up to 12 hard drives
Possible Storage Expansion: Up to three DAEs with a maximum of 12 hard drives each
SPS: 1 (required) and 2 (optional)

NOTE: Ensure that the core software version running on the storage system is supported. For specific version requirements, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Cluster website at www.dell.com/ha.

NICs Dedicated to iSCSI

The NIC controlled by the iSCSI Software Initiator acts as an I/O adapter that connects the system's expansion bus to the storage components. Failover Cluster solutions that are configured with the AX4-5i storage array require two iSCSI NICs or NIC ports in each PowerEdge system to provide redundant paths and to load balance the I/O data transfer to the storage system.

Ethernet Switches Dedicated to iSCSI

The Gigabit Ethernet switch for iSCSI access functions as a regular network switch that provides a dedicated, extendable interconnection between the nodes and the storage system(s).

Supported Cluster Configurations

Direct-Attached Cluster

In a direct-attached cluster, both nodes of the cluster are directly attached to a single storage system. In this configuration, the RAID controllers (or storage processors) on the storage systems are connected by cables directly to the iSCSI NIC ports in the nodes.
Figure 1-1 shows a basic direct-attached, single-cluster configuration.
Figure 1-1. Direct-Attached, Single-Cluster Configuration
[Figure: two cluster nodes on the public network, connected to each other by the private network, each with direct iSCSI connections to the storage system.]

EMC PowerPath Limitations in a Direct-Attached Cluster
EMC PowerPath provides failover capabilities and multiple path detection as well as dynamic load balancing between multiple ports on the same storage processor. However, direct-attached clusters supported by Dell connect to a single port on each storage processor in the storage system. Because of the single port limitation, PowerPath can provide only failover protection, not load balancing, in a direct-attached configuration.
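
To make the distinction between failover protection and load balancing concrete, the conceptual Python sketch below contrasts a failover-only path policy with a simple round-robin policy; it is not PowerPath's actual algorithm, and the path names are hypothetical. In a direct-attached cluster only one port per storage processor is cabled, so only the failover behavior applies; when multiple ports per storage processor are available, I/O can also be spread across them.

```python
# Conceptual sketch (not PowerPath's implementation): failover-only vs.
# round-robin load balancing across paths to a storage processor.
from itertools import cycle

paths = ["SP-A port 0", "SP-A port 1"]   # hypothetical paths to one storage processor
alive = {p: True for p in paths}

def pick_path_failover_only():
    """Always use the first healthy path; switch only when it fails."""
    for p in paths:
        if alive[p]:
            return p
    raise RuntimeError("no healthy path to the storage processor")

_round_robin = cycle(paths)

def pick_path_load_balanced():
    """Spread I/O across all healthy paths in turn."""
    for _ in range(len(paths)):
        p = next(_round_robin)
        if alive[p]:
            return p
    raise RuntimeError("no healthy path to the storage processor")

print([pick_path_failover_only() for _ in range(4)])   # same port every time
print([pick_path_load_balanced() for _ in range(4)])   # alternates between ports

alive["SP-A port 0"] = False                           # simulate a port failure
print(pick_path_failover_only())                       # fails over to 'SP-A port 1'
print(pick_path_load_balanced())                       # only the surviving port is used
```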

iSCSI SAN-Attached Cluster

In an iSCSI switch-attached cluster, all of the nodes are attached to a single storage system or to multiple storage systems through redundant LANs for high availability. iSCSI SAN-attached clusters are superior to direct-attached clusters in configuration flexibility, expandability, and performance.
Figure 1-2 shows an iSCSI SAN-attached cluster.
Figure 1-2. iSCSI SAN-Attached Cluster
[Figure: two cluster nodes on the public network, connected to each other by the private network, each with iSCSI connections through two dedicated Ethernet switches to the storage system.]

Other Documents You May Need

CAUTION: For important safety and regulatory information, see the safety
information that shipped with your system. Warranty information may be included within this document or as a separate document.
NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the
Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.
NOTE: All documentation in the list below, unless indicated otherwise, is available
on the Dell Support website at support.dell.com.
• The Rack Installation Guide included with your rack solution describes how to install your system into a rack.
• The Getting Started Guide provides an overview of initially setting up your system.
• The Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide provides more information on deploying your cluster with the Windows Server 2003 operating system.
• The Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide provides more information on deploying your cluster with the Windows Server 2008 operating system.
• The Dell Cluster Configuration Support Matrices provides a list of recommended operating systems, hardware components, and driver or firmware versions for your Failover Cluster.
• Operating system documentation describes how to install (if necessary), configure, and use the operating system software.
• Documentation for any hardware and software components you purchased separately provides information to configure and install those options.
• The Dell PowerVault™ tape library documentation provides information for installing, troubleshooting, and upgrading the tape library.
• The EMC PowerPath documentation and Dell|EMC Storage Enclosure User’s Guides.
• Release notes or readme files may be included to provide last-minute updates to the system or documentation, or advanced technical reference material intended for experienced users or technicians.
NOTE: Always read the updates first because they often supersede information in other documents.

Cabling Your Cluster Hardware

NOTE: To configure Dell blade server modules in a Dell™ PowerEdge™ cluster, see
the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.

Cabling the Mouse, Keyboard, and Monitor

When installing a cluster configuration in a rack, you must include a switch box to connect the mouse, keyboard, and monitor to the nodes. For instructions on cabling each node’s connections to the switch box, see the documentation included with your rack.

Cabling the Power Supplies

Refer to the documentation for each component in your cluster solution to ensure that the specific power requirements are satisfied.
The following guidelines are recommended to protect your cluster solution from power-related failures:
• For nodes with multiple power supplies, plug each power supply into a separate AC circuit.
• Use uninterruptible power supplies (UPS).
• For some environments, consider having backup generators and power from separate electrical substations.
Figure 2-1 and Figure 2-2 illustrate recommended methods for power cabling for a cluster solution consisting of two PowerEdge systems and two storage systems. To ensure redundancy, the primary power supplies of all the components are grouped onto one or two circuits and the redundant power supplies are grouped onto a different circuit.
Figure 2-1. Power Cabling Example With One Power Supply in the PowerEdge Systems and One SPS in the AX4-5i Storage Array
NOTE: This illustration is intended only to demonstrate the power distribution of the components.
[Figure labels: primary power supplies on one AC power strip (or on one AC PDU, not shown); SPS; redundant power supplies on one AC power strip (or on one AC PDU, not shown).]
Figure 2-2. Power Cabling Example With Two Power Supplies in the PowerEdge Systems and Two SPSs in the AX4-5i Storage Array
NOTE: This illustration is intended only to demonstrate the power distribution of the components.
[Figure labels: primary power supplies on one AC power strip (or on one AC PDU, not shown); SPS; redundant power supplies on one AC power strip (or on one AC PDU, not shown).]
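
One way to sanity-check a power cabling plan like the ones shown in Figure 2-1 and Figure 2-2 is to list which AC circuit feeds each power supply and confirm that no component depends on a single circuit. The Python sketch below is a hypothetical example of such a check; the component names and circuit assignments are placeholders for your own plan, and components that legitimately have only one power supply (such as a single SPS) should get their redundancy from a paired component on a different circuit.

```python
# Hedged sketch: flag any component whose power supplies all sit on the same
# AC circuit. Component names and circuit assignments are hypothetical.

power_plan = {
    # component: one AC circuit entry per power supply
    "cluster node 1":        ["circuit A", "circuit B"],
    "cluster node 2":        ["circuit A", "circuit B"],
    "AX4-5i storage system": ["circuit A", "circuit B"],
    "AX4-5i SPS 1":          ["circuit A"],   # single supply; paired with SPS 2
    "AX4-5i SPS 2":          ["circuit B"],
}

def single_circuit_components(plan):
    """Return components that would lose all power if one circuit failed."""
    return [name for name, circuits in plan.items() if len(set(circuits)) < 2]

for name in single_circuit_components(power_plan):
    print(f"Check {name}: all of its power comes from a single AC circuit")
```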

Cabling Your Cluster for Public and Private Networks

The network adapters in the cluster nodes provide at least two network connections for each node, as described in Table 2-1.
NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the
Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.
Table 2-1. Network Connections

Public network: All connections to the client LAN. At least one public network must be configured for Mixed mode for private network failover.

Private network: A dedicated connection for sharing cluster health and status information only.
Figure 2-3 shows an example of cabling in which dedicated network adapters in each node are connected to each other (for the private network) and the remaining network adapters are connected to the public network.
Figure 2-3. Example of Network Cabling Connection
[Figure: the private network adapter in cluster node 1 is connected to the private network adapter in cluster node 2, and the public network adapter in each node is connected to the public network.]

Cabling the Public Network

Any network adapter supported by a system running TCP/IP may be used to connect to the public network segments. You can install additional network adapters to support additional public network segments or to provide redundancy in the event of a faulty primary network adapter or switch port.

Cabling the Private Network

The private network connection to the nodes is provided by a different network adapter in each node. This network is used for intra-cluster communications. Table 2-2 describes three possible private network configurations.
Table 2-2. Private Network Hardware Components and Connections

Network switch
  Hardware components: Fast Ethernet or Gigabit Ethernet network adapters and switches.
  Connection: Connect standard Ethernet cables from the network adapters in the nodes to a Fast Ethernet or Gigabit Ethernet switch.

Point-to-Point Fast Ethernet (two-node clusters only)
  Hardware components: Fast Ethernet network adapters.
  Connection: Connect a crossover Ethernet cable between the Fast Ethernet network adapters in both nodes.

Point-to-Point Gigabit Ethernet (two-node clusters only)
  Hardware components: Copper Gigabit Ethernet network adapters.
  Connection: Connect a standard Ethernet cable between the Gigabit Ethernet network adapters in both nodes.

NOTE: Throughout this document, the term Gigabit Ethernet refers to either Gigabit Ethernet or 10 Gigabit Ethernet.
Using Dual-Port Network Adapters
You can configure your cluster to use the public network as a failover for private network communications. If dual-port network adapters are used, do not use both ports simultaneously to support both the public and private networks.
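
Whether you use dedicated or dual-port adapters, keeping the public, private, and iSCSI networks on separate, non-overlapping subnets makes it easier to guarantee that heartbeat and storage traffic stay off the client LAN. The Python sketch below checks a hypothetical addressing plan with the standard ipaddress module; the subnet values are examples only and should come from your own configuration worksheet.

```python
# Hedged sketch: verify that the planned public, private, and iSCSI subnets
# do not overlap. All subnet values are hypothetical examples.
from ipaddress import ip_network
from itertools import combinations

planned_networks = {
    "public network":  ip_network("192.168.10.0/24"),
    "private network": ip_network("10.0.0.0/24"),
    "iSCSI network 1": ip_network("172.16.1.0/24"),
    "iSCSI network 2": ip_network("172.16.2.0/24"),
}

overlaps = [
    (name_a, name_b)
    for (name_a, net_a), (name_b, net_b) in combinations(planned_networks.items(), 2)
    if net_a.overlaps(net_b)
]

if overlaps:
    for name_a, name_b in overlaps:
        print(f"Overlap: {name_a} and {name_b} share address space")
else:
    print("All planned subnets are distinct.")
```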

NIC Teaming

NIC teaming combines two or more NICs to provide load balancing and fault tolerance. Your cluster supports NIC teaming, but only on the public network; NIC teaming is not supported on the private network or the iSCSI network.
You should use the same brand of NICs in a team, and you cannot mix brands of teaming drivers.