Dell PowerEdge FE600W
Dell|EMC CX3-Series Fibre Channel Storage Arrays With Microsoft® Windows Server® Failover Clusters
Hardware Installation and Troubleshooting Guide

www.dell.com | support.dell.com
Notes, Notices, and Cautions
NOTE: A NOTE indicates important information that helps you make better use
of your computer.
NOTICE: A NOTICE indicates either potential damage to hardware or loss of
data and tells you how to avoid the problem.
CAUTION: A CAUTION indicates a potential for property damage, personal
injury, or death.
___________________
Information in this document is subject to change without notice. © 2008 Dell Inc. All rights reserved.
Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.
Trademarks used in this text: Dell, the DELL logo, PowerEdge, PowerVault, and Dell OpenManage are trademarks of Dell Inc.; Active Directory, Microsoft, Windows, Windows Server, and Windows NT are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries; EMC, EMC ControlCenter, Navisphere, and PowerPath are registered trademarks and Access Logix, MirrorView, SAN Copy, and SnapView are trademarks of EMC Corporation.
Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.
April 2008 Rev A00
Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 7
    Cluster Solution . . . . . . . . . . . . . . . . . . . . . . 8
    Cluster Hardware Requirements . . . . . . . . . . . . . 8
        Cluster Nodes . . . . . . . . . . . . . . . . . . . . . 9
        Cluster Storage . . . . . . . . . . . . . . . . . . . 10
    Supported Cluster Configurations . . . . . . . . . . . . 12
        Direct-Attached Cluster . . . . . . . . . . . . . . 12
        SAN-Attached Cluster . . . . . . . . . . . . . . . 13
    Other Documents You May Need . . . . . . . . . . . . 14

2 Cabling Your Cluster Hardware . . . . . . . . 17
    Cabling the Mouse, Keyboard, and Monitor . . . . . . 17
    Cabling the Power Supplies . . . . . . . . . . . . . . . 17
    Cabling Your Cluster for Public and Private Networks . . . . . . . . . . . . . . . . . . . . 19
        Cabling the Public Network . . . . . . . . . . . . 20
        Cabling the Private Network . . . . . . . . . . . . 21
        NIC Teaming . . . . . . . . . . . . . . . . . . . . 21
    Cabling the Storage Systems . . . . . . . . . . . . . . 22
        Cabling Storage for Your Direct-Attached Cluster . . . . . . . . . . . . 22
        Cabling Storage for Your SAN-Attached Cluster . . . . . . . . . . . . 27

3 Preparing Your Systems for Clustering . . . . . . . . 39
    Cluster Configuration Overview . . . . . . . . . . . . . 39
    Installation Overview . . . . . . . . . . . . . . . . . . 41
    Installing the Fibre Channel HBAs . . . . . . . . . . . . 42
        Installing the Fibre Channel HBA Drivers . . . . . . 42
    Implementing Zoning on a Fibre Channel Switched Fabric . . . . . . . . . . . . . . . . . . . . . 42
        Using Zoning in SAN Configurations Containing Multiple Hosts . . . . . . . . . . . . . 43
        Using Worldwide Port Name Zoning . . . . . . . . 43
    Installing and Configuring the Shared Storage System . . . . . . . . . . . . . . . . . 45
        Access Logix . . . . . . . . . . . . . . . . . . . . 45
        Access Control . . . . . . . . . . . . . . . . . . . 47
        Storage Groups . . . . . . . . . . . . . . . . . . . 47
        Navisphere Manager . . . . . . . . . . . . . . . . 49
        Navisphere Agent . . . . . . . . . . . . . . . . . . 49
        EMC PowerPath . . . . . . . . . . . . . . . . . . . 49
        Enabling Access Logix and Creating Storage Groups Using Navisphere 6.x . . . . . . . 50
        Configuring the Hard Drives on the Shared Storage System(s) . . . . . . . . . . . . . 51
        Optional Storage Features . . . . . . . . . . . . . 55
    Updating a Dell|EMC Storage System for Clustering . . . . . . . . . . . . . . . . . . . . . . . . 56
    Installing and Configuring a Failover Cluster . . . . . . 56

A Troubleshooting . . . . . . . . . . . . . . . . . . . . 57

B Cluster Data Form . . . . . . . . . . . . . . . . . . 63

C Zoning Configuration Form . . . . . . . . . . . 65
Introduction
A Dell™ Failover Cluster combines specific hardware and software components to provide enhanced availability for applications and services that are run on the cluster. A Failover Cluster is designed to reduce the possibility of any single point of failure within the system that can cause the clustered applications or services to become unavailable. It is recommended that you use redundant components like server and storage power supplies, connections between the nodes and the storage array(s), and connections to client systems or other servers in a multi-tier enterprise application architecture in your cluster.
This document provides information to configure your Dell|EMC CX3-series Fibre Channel storage array with one or more Failover Clusters. It describes the specific configuration tasks that enable you to deploy the shared storage for your cluster.
For more information on deploying your cluster with Windows Server 2003 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com. For more information on deploying your cluster with Windows Server 2008 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com.
For a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Failover Cluster, see the Dell Cluster Configuration Support Matrix on the Dell High Availability website at www.dell.com/ha.
Cluster Solution
Your cluster supports a minimum of two nodes and a maximum of either eight nodes (with Windows Server 2003) or sixteen nodes (with Windows Server 2008), and provides the following features:
• 8-Gbps, 4-Gbps, and 2-Gbps Fibre Channel technology
• High availability of resources to network clients
• Redundant paths to the shared storage
• Failure recovery for applications and services
• Flexible maintenance capabilities, allowing you to repair, maintain, or upgrade a node or storage system without taking the entire cluster offline
Implementing Fibre Channel technology in a cluster provides the following advantages:
• Flexibility: Fibre Channel allows a distance of up to 10 km between switches without degrading the signal.
• Availability: Fibre Channel components use redundant connections, providing multiple data paths and greater availability for clients.
• Connectivity: Fibre Channel allows more device connections than Small Computer System Interface (SCSI). Because Fibre Channel devices are hot-pluggable, you can add or remove devices from the nodes without taking the entire cluster offline.
Cluster Hardware Requirements
Your cluster requires the following hardware components:
• Servers (cluster nodes)
• Storage array and storage management software
Cluster Nodes
Table 1-1 lists the hardware requirements for the cluster nodes.
Table 1-1. Cluster Node Requirements

Cluster nodes: A minimum of two identical PowerEdge servers are required. The maximum number of nodes that is supported depends on the variant of the Windows Server operating system used in your cluster, and on the physical topology in which the storage system and nodes are interconnected.

RAM: The variant of the Windows Server operating system that is installed on your cluster nodes determines the minimum required amount of system RAM.

HBA ports: Two Fibre Channel HBAs per node, unless the server employs an integrated or supported dual-port Fibre Channel HBA. Where possible, place the HBAs on separate PCI buses to improve availability and performance.

NICs: At least two NICs: one NIC for the public network and another NIC for the private network.
NOTE: It is recommended that the NICs on each public network are identical, and that the NICs on each private network are identical.

Internal disk controller: One controller connected to at least two internal hard drives for each node. Use any supported RAID controller or disk controller. Two hard drives are required for mirroring (RAID 1) and at least three are required for disk striping with parity (RAID 5).
NOTE: It is strongly recommended that you use hardware-based RAID or software-based disk-fault tolerance for the internal drives.

NOTE: For more information about supported systems, HBAs, and operating system variants, see the Dell Cluster Configuration Support Matrix on the Dell High Availability website at www.dell.com/ha.
Cluster Storage
Table 1-2 lists supported storage systems and the configuration requirements for the cluster nodes and stand-alone systems connected to the storage systems.
Table 1-2. Cluster Storage Requirements

Supported storage systems: One to four supported Dell|EMC storage systems. See Table 1-3 for specific storage system requirements.

Cluster nodes: All nodes must be directly attached to a single storage system or attached to one or more storage systems through a SAN.

Multiple clusters and stand-alone systems: Can share one or more supported storage systems using optional software that is available for your storage system. See "Installing and Configuring the Shared Storage System" on page 45.
Table 1-3 lists hardware requirements for the storage processor enclosures (SPE), disk array enclosures (DAE), and standby power supplies (SPS).
Table 1-3. Dell|EMC Storage System Requirements

CX3-10c SPE
    Minimum storage: One DAE3P-OS with at least five and up to 15 hard drives
    Possible storage expansion: Up to three DAEs with a maximum of 15 hard drives each
    SPS: Two per SPE and DAE3P-OS

CX3-20c SPE
    Minimum storage: One DAE3P-OS with at least five and up to 15 hard drives
    Possible storage expansion: Up to seven DAEs with a maximum of 15 hard drives each
    SPS: Two per SPE and DAE3P-OS

CX3-20f SPE
    Minimum storage: One DAE3P-OS with at least five and up to 15 hard drives
    Possible storage expansion: Up to seven DAEs with a maximum of 15 hard drives each
    SPS: Two per SPE and DAE3P-OS

CX3-40c SPE
    Minimum storage: One DAE3P-OS with at least five and up to 15 hard drives
    Possible storage expansion: Up to 15 DAEs with a maximum of 15 hard drives each
    SPS: Two per SPE and DAE3P-OS

CX3-40f SPE
    Minimum storage: One DAE3P-OS with at least five and up to 15 hard drives
    Possible storage expansion: Up to 15 DAEs with a maximum of 15 hard drives each
    SPS: Two per SPE and DAE3P-OS

CX3-80 SPE
    Minimum storage: One DAE3P-OS with at least five and up to 15 hard drives
    Possible storage expansion: Up to 31 DAEs with a maximum of 15 hard drives each
    SPS: Two per SPE and DAE3P-OS

NOTE: The DAE3P-OS is the first DAE enclosure that is connected to the CX3-series (including all of the storage systems listed above). Core software is preinstalled on the first five hard drives of the DAE3P-OS.
Each storage system in the cluster is centrally managed by one host system (also called a management station) running EMC Navisphere® Manager, a centralized storage management application used to configure Dell|EMC storage systems. Using a graphical user interface (GUI), you can select a specific view of your storage arrays, as shown in Table 1-4.
Table 1-4. Navisphere Manager Storage Views

Storage: Shows the logical storage components and their relationships to each other and identifies hardware faults.

Hosts: Shows the host system's storage group and attached logical unit numbers (LUNs).

Monitors: Shows all Event Monitor configurations, including centralized and distributed monitoring configurations.
You can use Navisphere Manager to perform tasks such as creating RAID arrays, binding LUNs, and downloading firmware. Optional software for the shared storage systems includes:
• EMC MirrorView™: Provides synchronous or asynchronous mirroring between two storage systems.
• EMC SnapView™: Captures point-in-time images of a LUN for backups or testing without affecting the contents of the source LUN.
• EMC SAN Copy™: Moves data between Dell|EMC storage systems without using host CPU cycles or local area network (LAN) bandwidth.
For more information about Navisphere Manager, EMC Access Logix™, MirrorView, SnapView, and SAN Copy, see "Installing and Configuring the Shared Storage System" on page 45.
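As an illustration of the kinds of storage tasks that Navisphere Manager performs, the following sketch shows roughly equivalent Navisphere CLI (naviseccli) commands for binding a LUN and placing it in a storage group. This is a hedged example only: the storage processor address, RAID group number, LUN number, storage group name, and node name are hypothetical, authentication options are omitted, and the exact syntax may vary with your Navisphere CLI revision. The procedures in "Installing and Configuring the Shared Storage System" remain the authoritative steps.

    rem Bind LUN 10 (100 GB, RAID 5) in RAID group 0 -- example values only
    naviseccli -h 172.16.10.10 bind r5 10 -rg 0 -cap 100 -sq gb

    rem Create a storage group for the cluster and add the new LUN as host LUN 0
    naviseccli -h 172.16.10.10 storagegroup -create -gname Cluster1_SG
    naviseccli -h 172.16.10.10 storagegroup -addhlu -gname Cluster1_SG -hlu 0 -alu 10

    rem Connect a cluster node (registered through Navisphere Agent) to the storage group
    naviseccli -h 172.16.10.10 storagegroup -connecthost -host NODE1 -gname Cluster1_SG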
Supported Cluster Configurations
The following sections describe the supported cluster configurations.
Direct-Attached Cluster
In a direct-attached cluster, both nodes of the cluster are directly attached to a single storage system. In this configuration, the RAID controllers (or storage processors) on the storage systems are connected by cables directly to the Fibre Channel HBA ports in the nodes.
Figure 1-1 shows a basic direct-attached, single-cluster configuration.
Figure 1-1. Direct-Attached, Single-Cluster Configuration
(The figure shows two cluster nodes connected to each other through the private network and to clients through the public network, with each node attached to the storage system through direct Fibre Channel connections.)
EMC PowerPath Limitations in a Direct-Attached Cluster
EMC PowerPath provides failover capabilities, multiple path detection, and dynamic load balancing between multiple ports on the same storage processor. However, direct-attached clusters supported by Dell connect to a single port on each storage processor in the storage system. Because of the single port limitation, PowerPath can provide only failover protection, not load balancing, in a direct-attached configuration.
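To confirm how many paths PowerPath sees to each LUN, and therefore whether load balancing is even possible in your configuration, you can inspect the path configuration from a command prompt on a cluster node. The commands below are a hedged sketch: powermt display is a standard PowerPath command, but the output and available options depend on your PowerPath version, so treat this as illustrative rather than as the documented procedure.

    rem List each pseudo device (LUN) with its storage processor and the state of every path
    powermt display dev=all

    rem Summarize the number of paths per HBA and per storage-system port
    powermt display paths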
SAN-Attached Cluster
In a SAN-attached cluster, all nodes are attached to a single storage system or to multiple storage systems through a SAN using redundant switch fabrics. SAN-attached clusters are superior to direct-attached clusters in configuration flexibility, expandability, and performance.
Figure 1-2 shows a SAN-attached cluster.
Figure 1-2. SAN-Attached Cluster
(The figure shows two cluster nodes connected to each other through the private network and to clients through the public network, with each node attached through Fibre Channel connections to two Fibre Channel switches that connect to the storage system.)
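In a switched fabric such as the one shown in Figure 1-2, access between HBA ports and storage-system ports is typically controlled by worldwide port name (WWPN) zoning, which is covered in "Implementing Zoning on a Fibre Channel Switched Fabric" on page 42. As a hedged illustration only, the commands below sketch how a single-initiator zone might be created on a Brocade-based fabric switch; the alias names and WWPNs are hypothetical examples, other switch vendors use different syntax, and your switch documentation remains the authoritative reference.

    # Define aliases for one HBA port and one storage processor port (example WWPNs)
    alicreate "node1_hba0", "10:00:00:00:c9:2b:5d:01"
    alicreate "spa_port0", "50:06:01:60:41:e0:12:34"

    # Create a single-initiator zone, add it to a configuration, and enable it
    zonecreate "node1_hba0_spa0", "node1_hba0; spa_port0"
    cfgcreate "cluster_cfg", "node1_hba0_spa0"
    cfgenable "cluster_cfg"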
Other Documents You May Need
CAUTION: The Product Information Guide provides important safety and regulatory information. Warranty information may be included within this document or as a separate document.

NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.

• The Rack Installation Guide included with your rack solution describes how to install your system into a rack.
• The Getting Started Guide provides an overview of initially setting up your system.
• For more information on deploying your cluster with Windows Server 2003 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide.
• For more information on deploying your cluster with Windows Server 2008 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide.
• The HBA documentation provides installation instructions for the HBAs.
• Systems management software documentation describes the features, requirements, installation, and basic operation of the software.
• Operating system documentation describes how to install (if necessary), configure, and use the operating system software.
• Documentation for any components you purchased separately provides information to configure and install those options.
• The Dell PowerVault™ tape library documentation provides information for installing, troubleshooting, and upgrading the tape library.
• Any other documentation that came with your server or storage system.
• The EMC PowerPath documentation that came with your HBA kit(s) and the Dell|EMC Storage Enclosure User's Guides.
• Updates are sometimes included with the system to describe changes to the system, software, and/or documentation.

NOTE: Always read the updates first because they often supersede information in other documents.

• Release notes or readme files may be included to provide last-minute updates to the system or documentation, or advanced technical reference material intended for experienced users or technicians.
Cabling Your Cluster Hardware
NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the
Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.
Cabling the Mouse, Keyboard, and Monitor
When installing a cluster configuration in a rack, you must include a switch box to connect the mouse, keyboard, and monitor to the nodes. See the documentation included with your rack for instructions on cabling connections of each node to the switch box.
Cabling the Power Supplies
See the documentation for each component in your cluster solution and ensure that the specific power requirements are satisfied.
The following guidelines are recommended to protect your cluster solution from power-related failures:
• For nodes with multiple power supplies, plug each power supply into a separate AC circuit.
• Use uninterruptible power supplies (UPS).
• For some environments, consider having backup generators and power from separate electrical substations.
Figure 2-1 and Figure 2-2 illustrate recommended methods for power cabling for a cluster solution consisting of two PowerEdge systems and two storage systems. To ensure redundancy, the primary power supplies of all the components are grouped into one or two circuits and the redundant power supplies are grouped into a different circuit.
Figure 2-1. Power Cabling Example With One Power Supply in the PowerEdge Systems
(The figure shows the primary power supplies of the components on one AC power strip [or one AC PDU, not shown] and the redundant power supplies on a separate AC power strip [or AC PDU, not shown].)
NOTE: This illustration is intended only to demonstrate the power distribution of the components.

Figure 2-2. Power Cabling Example With Two Power Supplies in the PowerEdge Systems
(The figure shows the primary power supplies of the components on one AC power strip [or one AC PDU, not shown] and the redundant power supplies on a separate AC power strip [or AC PDU, not shown].)
NOTE: This illustration is intended only to demonstrate the power distribution of the components.
Cabling Your Cluster for Public and Private Networks
The network adapters in the cluster nodes provide at least two network connections for each node, as described in Table 2-1.
NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the
Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.
Table 2-1. Network Connections

Public network: All connections to the client LAN. At least one public network must be configured for Mixed mode for private network failover.

Private network: A dedicated connection for sharing cluster health and status information only.
Figure 2-3 shows an example of cabling in which dedicated network adapters in each node are connected to each other (for the private network) and the remaining network adapters are connected to the public network.
Figure 2-3. Example of Network Cabling Connection
(The figure shows the public network adapter in each of cluster node 1 and cluster node 2 connected to the public network, and the private network adapters in the two nodes connected to each other to form the private network.)
Cabling the Public Network
Any network adapter supported by a system running TCP/IP may be used to connect to the public network segments. You can install additional network adapters to support additional public network segments or to provide redundancy in the event of a faulty primary network adapter or switch port.
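If you assign static addresses to the public network adapters, you can do so from a command prompt instead of the Network Connections GUI. The following is a hedged sketch only: the connection name "Public", the addresses, and the gateway are hypothetical examples, and the exact netsh syntax can vary slightly between Windows Server releases.

    rem Assign a static IP address, subnet mask, and default gateway to the public adapter
    netsh interface ip set address name="Public" static 192.168.1.101 255.255.255.0 192.168.1.1 1

    rem Verify the resulting IP configuration for the adapter
    netsh interface ip show config name="Public"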
Cabling the Private Network
The private network connection to the nodes is provided by a different network adapter in each node. This network is used for intra-cluster communications. Table 2-2 describes three possible private network configurations.
Table 2-2. Private Network Hardware Components and Connections

Network switch: Uses Fast Ethernet or Gigabit Ethernet network adapters and switches. Connect standard Ethernet cables from the network adapters in the nodes to a Fast Ethernet or Gigabit Ethernet switch.

Point-to-Point Fast Ethernet (two-node clusters only): Uses Fast Ethernet network adapters. Connect a crossover Ethernet cable between the Fast Ethernet network adapters in both nodes.

Point-to-Point Gigabit Ethernet (two-node clusters only): Uses copper Gigabit Ethernet network adapters. Connect a standard Ethernet cable between the Gigabit Ethernet network adapters in both nodes.

NOTE: Throughout this document, Gigabit Ethernet is used to refer to either Gigabit Ethernet or 10 Gigabit Ethernet.
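The private network adapters typically use static addresses on a dedicated subnet with no default gateway. As a hedged sketch, assuming the private connection is named "Private" and uses an example 10.0.0.x subnet, the address could be assigned as follows; adjust the connection name and addressing to match your environment.

    rem Assign a static address to the private (heartbeat) adapter; no default gateway is set
    netsh interface ip set address name="Private" static 10.0.0.1 255.255.255.0

    rem Repeat on the second node with a different host address, for example 10.0.0.2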
Using Dual-Port Network Adapters
You can configure your cluster to use the public network as a failover for private network communications. If you are using dual-port network adapters, do not configure both ports simultaneously to support both public and private networks.
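Whether a cluster network carries only private (internal) traffic, only public (client) traffic, or both is controlled by the network's role in the cluster configuration. As a hedged example that assumes cluster networks named "Public" and "Private", the built-in cluster.exe utility can display and adjust these roles; a Role value of 3 allows both client and internal cluster communication (Mixed mode), while 1 restricts a network to internal traffic only. Verify the property names and values against your operating system documentation before applying them.

    rem List the cluster networks and review the current properties of the public network
    cluster network
    cluster network "Public" /prop

    rem Allow the public network to carry both client and internal (failover) traffic
    cluster network "Public" /prop Role=3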
NIC Teaming
NIC teaming combines two or more NICs to provide load balancing and fault tolerance. Your cluster supports NIC teaming only on the public network; NIC teaming is not supported on the private network.
Use the same brand of NICs in a team. Do not mix brands in NIC teaming.