No part of this publication may be reproduced or distributed in any form or by any means, or stored in a
database or retrieval system, without the prior written consent of EMC Corporation.
The information contained in this document is subject to change without notice. EMC Corporation assumes
no responsibility for any errors that may appear.
All computer software programs, including but not limited to microcode, described in this document are
furnished under a license, and may be used or copied only in accordance with the terms of such license.
EMC either owns or has the right to license the computer software programs described in this document.
EMC Corporation retains all rights, title and interest in the computer software programs.
EMC Corporation makes no warranties, expressed or implied, by operation of law or otherwise, relating to
this document, the products or the computer software programs described herein. EMC CORPORATION
DISCLAIMS ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. In no event shall EMC Corporation be liable for (a) incidental, indirect, special, or consequential
damages or (b) any damages whatsoever resulting from the loss of use, data or profits, arising out of this
document, even if advised of the possibility of such damages.
Trademark Information
EMC², EMC, MOSAIC:2000, Symmetrix, CLARiiON, and Navisphere are registered trademarks and EMC Enterprise Storage, The Enterprise Storage
Company, The EMC Effect, Connectrix, EDM, SDMS, SRDF, Timefinder, PowerPath, InfoMover, FarPoint, EMC Enterprise Storage Network, EMC
Enterprise Storage Specialist, EMC Storage Logix, Universal Data Tone, E-Infostructure, Celerra, Access Logix, MirrorView, and SnapView are
trademarks of EMC Corporation.
All other trademarks mentioned herein are the property of their respective owners.
EMC Fibre Channel Storage Systems Configuration Planning Guide
This planning guide provides an overview of Fibre Channel
disk-array storage-system models and offers essential background
information and worksheets to help you with installation and
configuration planning.
Please read this guide
•if you are considering purchase of an EMC Fibre Channel
disk-array storage system and want to understand its features; or
•before you plan the installation of a storage system.
You should be familiar with the host servers that will use the storage
systems and with the operating systems of the servers. After reading
this guide, you will be able to
•determine the best storage system components for your
installation
About Fibre Channel Storage Systems and Networks (SANs)
Introducing EMC Fibre Channel Storage Systems
EMC Fibre Channel disk-array storage systems provide terabytes of
disk storage capacity, high transfer rates, flexible configurations, and
highly available data at low cost.
A storage system package includes a host-bus adapter driver package
with hardware and software to connect with a server, storage
management software, Fibre Channel interconnect hardware, and
one or more storage systems.
Fibre Channel Background
Fibre Channel is a high-performance serial protocol that allows
transmission of both network and I/O channel data. It is a low-level
protocol, independent of data types, and supports such formats as
SCSI and IP.
The Fibre Channel standard supports several physical topologies,
including switched fabric, point-to-point, and arbitrated loop (FC-AL).
The topologies used by the Fibre Channel storage systems described
in this manual are switched fabric and FC-AL.
A switched fabric is a set of point-to-point connections between nodes,
the connection being made through one or more Fibre Channel
switches. Each node may have its own unique address, but the path
between nodes is governed by a switch. The nodes are connected by
optical cable.
A Fibre Channel arbitrated loop is a circuit consisting of nodes. Each
node has a unique address, called a Fibre Channel arbitrated loop
address. The nodes are connected by optical cables. An optical cable
can transmit data over great distances for connections that span
entire enterprises and can support remote disaster recovery systems.
Copper cable serves well for local connections; its length is limited to
30 meters (99 feet).
Each connected device in a switched fabric or arbitrated loop is a
server adapter (initiator) or a target (storage system). The switches
and hubs are not considered nodes.
Figure 1-2   Nodes - Initiator and Target
(The figure shows a server adapter node, the initiator, connected to a
storage system node, the target.)
Fibre Channel Storage Components
A Fibre Channel storage system has three main components:
•Server component (host-bus adapter driver package with adapter
and software)
•Interconnect components (cables based on Fibre Channel
standards, switches, and hubs)
•Storage components (storage system with storage processors —
SPs — and power supply and cooling hardware)
Server Component (Host-Bus Adapter Driver Package with Software)
The host-bus adapter driver package includes a host-bus adapter and
support software. The adapter is a printed-circuit board that slides
into an I/O slot in the server’s cabinet. It transfers data between
server memory and one or more disk-array storage systems over
Fibre Channel — as controlled by the support software (adapter
driver).
One or more servers can use a storage system. For high availability —
in the event of an adapter failure — a server can have two adapters.
Depending on your server type, you may have a choice of adapters.
The adapter is designed for a specific host bus; for example, a PCI bus
or SBUS. Some adapter types support copper or optical cabling; some
support copper cabling only.
Interconnect Components
The interconnect components include the cables, Fibre Channel
switch (for shared storage), and Fibre Channel hub (for unshared
storage).
Cables
Depending on your needs, you can choose copper or optical cables.
The maximum length of copper cable is 30 meters (99 feet) between
nodes or hubs. The maximum length of optical cable between server
and hub or storage system is much greater, depending on the cable
type. For example, 62.5-micron multimode cable can span up to 500
meters (1,640 feet) while 9-micron single-mode cable can span up to
10 kilometers (6.2 miles). This ability to span great distances is a
major advantage of optical cable.
Some nodes have connections that require a specific type of cable:
copper or optical. Other nodes allow for the conversion from copper
to optical using a conversion device called a GigaBit Interface
Converter (GBIC) or Media Interface Adapter (MIA). In most cases, a
GBIC or MIA lets you substitute long-distance optical connections for
shorter copper connections.
With extenders, optical cable can span up to 40 km (25 miles).
Details on cable lengths and rules appear later in this manual.
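These cable rules lend themselves to a simple planning check. The
following sketch is a hypothetical Python helper, not part of any EMC
software; the cable-type names and function name are invented for
illustration, and the limits are the ones quoted above (30 m for copper,
500 m for 62.5-micron multimode, 10 km for 9-micron single-mode, and
40 km for optical cable with extenders).

```python
# Hypothetical planning helper: checks a planned cable run against the
# maximum lengths quoted in this section. Not part of any EMC product.

MAX_LENGTH_M = {
    "copper": 30,                            # 30 m between nodes or hubs
    "optical-62.5-micron-multimode": 500,    # up to 500 m
    "optical-9-micron-single-mode": 10_000,  # up to 10 km
    "optical-with-extenders": 40_000,        # up to 40 km with extenders
}

def check_cable_run(cable_type: str, planned_length_m: float) -> str:
    """Return a short verdict for one planned cable run."""
    limit = MAX_LENGTH_M.get(cable_type)
    if limit is None:
        return f"Unknown cable type: {cable_type}"
    if planned_length_m <= limit:
        return (f"OK: {planned_length_m} m is within the {limit} m limit "
                f"for {cable_type}.")
    return (f"Too long: {planned_length_m} m exceeds the {limit} m limit "
            f"for {cable_type}; consider optical cable or extenders.")

if __name__ == "__main__":
    print(check_cable_run("copper", 25))
    print(check_cable_run("copper", 45))
    print(check_cable_run("optical-9-micron-single-mode", 8_000))
```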
Fibre Channel Switches
A Fibre Channel switch, which is a requirement for shared storage (a
Storage Area Network, or SAN), connects all the nodes cabled to it using
a fabric topology. A switch adds serviceability and scalability to any
installation; it allows on-line insertion and removal of any device on
the fabric and maintains integrity if any connected device stops
participating. A switch also provides host-to-storage-system access
control in a multiple-host shared-storage environment. A switch has
several advantages over a hub: it provides point-to-point connections
(as opposed to a hub’s loop that includes all nodes) and it offers
zoning to specify paths between nodes in the switch itself.
You can cascade switches (connect one switch port to another switch)
for additional port connections.
Figure 1-3   Switch and Hub Topologies Compared
(The figure contrasts the switch topology, in which the switch uses
discrete point-to-point connections between ports, with the hub
topology, in which the hub uses a loop between ports. To illustrate the
comparison, the figure shows just one adapter per server and one switch
or hub; normally, such installations include two adapters per server
and two switches or hubs.)
Switch Zoning
Switch zoning defines paths between connected nodes. Each zone
encloses one or more adapters and one or more SPs. A switch can
have as many zones as it has ports. The current connection limits are
four SP ports to one adapter port (the SP ports fan in to the adapter) and
15 adapters to one SP (the SP fans out to the adapters). There are
several zone types, including the single-initiator type, which is the
recommended type.
In the following figure, Server 1 has access to one SP (SP A) in storage
systems 1 and 2; it has no access to any other SP.
Figure 1-4   A Switch Zone
(To illustrate switch zoning, the figure shows just one HBA per server
and one switch or hub; normally, such installations include two HBAs
per server and two switches or hubs.)
If you do not define a zone in a switch, all adapter ports connected to
the switch can communicate with all SP ports connected to the
switch. However, access to an SP does not necessarily provide access
to the SP’s storage; access to storage is governed by the Storage
Groups you create (defined later).
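As a rough illustration of these zoning rules, the following sketch
models zones as sets of adapter ports and SP ports and flags departures
from the single-initiator recommendation and from the fan-in and
fan-out limits given above. It is hypothetical Python, not a switch
configuration interface, and the zone and port names are invented.

```python
# Hypothetical sketch of the zoning rules described above; it is not a
# real switch CLI or API, just a planning aid under the stated limits.
from collections import defaultdict

MAX_SP_PORTS_PER_ADAPTER = 4   # four SP ports may fan in to one adapter port
MAX_ADAPTERS_PER_SP = 15       # fifteen adapters may fan out from one SP

def check_zones(zones):
    """zones: list of (zone_name, adapter_ports, sp_ports) tuples,
    where ports are identified by worldwide name (WWN) strings."""
    problems = []
    sp_ports_seen_by_adapter = defaultdict(set)
    adapters_seen_by_sp = defaultdict(set)

    for name, adapters, sps in zones:
        if len(adapters) != 1:
            problems.append(f"{name}: not single-initiator "
                            f"({len(adapters)} adapters)")
        for a in adapters:
            sp_ports_seen_by_adapter[a].update(sps)
        for s in sps:
            adapters_seen_by_sp[s].update(adapters)

    for a, sps in sp_ports_seen_by_adapter.items():
        if len(sps) > MAX_SP_PORTS_PER_ADAPTER:
            problems.append(f"adapter {a}: {len(sps)} SP ports exceed "
                            f"the fan-in limit")
    for s, adapters in adapters_seen_by_sp.items():
        if len(adapters) > MAX_ADAPTERS_PER_SP:
            problems.append(f"SP port {s}: {len(adapters)} adapters exceed "
                            f"the fan-out limit")
    return problems or ["All zones satisfy the limits in this section."]

# Example: one single-initiator zone per adapter, as recommended.
zones = [
    ("zone_server1", {"hba1_wwn"}, {"spA_port0", "spB_port0"}),
    ("zone_server2", {"hba2_wwn"}, {"spA_port1", "spB_port1"}),
]
for line in check_zones(zones):
    print(line)
```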
Fibre Channel switches are available with 8 or 16 ports. They are
compact units: the 16-port switch fits in 2 U (3.5 inches) of rack
space and the 8-port switch in 1 U (1.75 inches). They are available
for a rackmount cabinet or as small deskside enclosures.
Figure 1-5   16-Port Switch, Back View
If your servers and storage systems will be far apart, you can place
the switches closer to the servers or the storage systems, as
convenient.
A switch is technically a repeater, not a node, in a Fibre Channel loop.
However, it is bound by the same cabling distance rules as a node.
Fibre Channel Hubs
A hub connects all the nodes cabled to it into a single logical loop.
A hub adds serviceability and scalability to any loop; it allows
on-line insertion and removal of any device on the loop and maintains
loop integrity if any connected device stops participating.
Fibre Channel hubs are compact units that fit in 1 U (1.75 inches) of
rack space. They are available to fit into a rackmount cabinet or as
small deskside units.
Figure 1-6   Nine-Port Hub
(The nine-pin port can connect to a server, storage system, or another
hub.)
If your servers and storage systems will be far apart, you can place
the hubs closer to the servers or the storage systems, as convenient.
Storage Component (Storage Systems, Storage Processors (SPs), and Other
Hardware)
EMC disk-array storage systems, with their storage processors,
power supplies, and cooling hardware, form the storage component
of a Fibre Channel system. The controlling unit, a Disk-array
Processor Enclosure (DPE), is shown in the following figure.
Figure 1-7   Disk-Array Processor Enclosure (DPE)
DPE hardware details appear in a later chapter.
Types of Storage System Installations
You can use a storage system in any of several types of installation:
•Unshared direct with one server is the simplest and least costly;
•Shared-or-clustered direct lets two clustered servers share
storage resources with high availability (FC4500 storage
systems); and
•Shared switched, with one or two switch fabrics, lets two to 15
servers share the resources of several storage systems in a Storage
Area Network (SAN). Shared switched installations are available
in a high-availability (HA) version, with two HBAs per server and
two switches, or with one HBA per server and one switch.
Figure 1-8   Types of Storage System Installation
(The figure shows the three installation types: unshared direct, with
one or two servers cabled directly to a storage system over two paths;
shared-or-clustered direct, with two servers cabled directly to the
disk-array storage systems; and shared switched, with multiple servers
connected to the storage systems through one or two switch fabrics.)
Storage systems for any shared installation require EMC Access
Logix™ software to control server access to the storage system LUNs.
The Shared-or-clustered direct installation may be either shared (that
is, use Access Logix to control LUN access) or clustered (without
Access Logix, using cluster software to control LUN access),
depending on the hardware model.
About Switched Shared Storage and SANs (Storage Area Networks)
This section explains the features that let multiple servers share
disk-array storage systems on a SAN (storage area network).
A SAN is a collection of storage devices connected to servers via Fibre
Channel switches to provide a central location for disk storage.
Centralizing disk storage among multiple servers has many
advantages, including
•highly available data
•flexible association between servers and storage capacity
•centralized management for fast, effective response to users’ data
storage needs
•easier file backup and recovery
An EMC SAN is based on shared storage; that is, the SAN requires
the Access Logix option to provide flexible access control to storage
system LUNs.
Figure 1-9   Components of a SAN
(The figure shows servers, each with its adapters, connected through a
switch fabric to the SPs of the storage systems.)
Fibre Channel switches can control data access to storage systems
through the use of switch zoning. With zoning, an administrator can
specify groups (called zones) of Fibre Channel devices, such as
host-bus adapters (identified by worldwide name) and SPs, between
which the switch will allow communication.
However, switch zoning cannot selectively control data access to
LUNs in a storage system, because each SP appears as a single Fibre
Channel device to the switch. So switch zoning can prevent or allow
communication with an SP, but not with specific disks or LUNs
attached to an SP. For access control with LUNs, a different solution is
required: Storage Groups.
Storage Groups
A Storage Group is one or more LUNs (logical units) within a storage
system that are reserved for one or more servers and are inaccessible to
other servers. Storage Groups are the central component of shared
storage; storage systems that are unshared do not use Storage
Groups.
When you configure shared storage, you specify servers and the
Storage Group(s) each server can read from and/or write to. The Base
Software firmware running in each storage system enforces the
server-to-Storage Group permissions.
A Storage Group can be accessed by more than one server if all the
servers run cluster software. The cluster software enforces orderly
access to the shared Storage Group LUNs.
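The following sketch illustrates the idea of a Storage Group in a few
lines of Python. It is purely illustrative, not the Base Software or a
management utility, and the server, group, and LUN names are invented.
The point is only that a server sees the LUNs in the Storage Groups
assigned to it and nothing else, and that a group shared by several
servers assumes cluster software on those servers.

```python
# Illustrative model of Storage Group access control; not EMC software.

storage_groups = {
    # group name: (set of LUN ids, set of servers allowed to use the group)
    "ClusterGroup":  ({"lun0", "lun1", "lun2"}, {"fileserver", "mailserver"}),
    "DatabaseGroup": ({"lun3", "lun4"},         {"dbserver"}),
}

clustered_servers = {"fileserver", "mailserver"}   # run cluster software

def visible_luns(server):
    """Return the LUNs the storage system presents to this server."""
    luns = set()
    for group_luns, servers in storage_groups.values():
        if server in servers:
            luns |= group_luns
    return luns

def validate():
    """A Storage Group shared by several servers requires cluster software."""
    issues = []
    for name, (_, servers) in storage_groups.items():
        if len(servers) > 1 and not servers <= clustered_servers:
            issues.append(f"{name} is shared by non-clustered servers: {servers}")
    return issues

print(visible_luns("dbserver"))    # {'lun3', 'lun4'} - other LUNs stay hidden
print(visible_luns("fileserver"))  # the cluster's LUNs only
print(validate() or "Sharing rules satisfied.")
```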
The following figure shows a simple shared storage configuration
consisting of one storage system with two Storage Groups. One
Storage Group serves a cluster of two servers running the same
operating system, and the other Storage Group serves a UNIX
database server. Each server is configured with two independent
paths to its data, including separate host-bus adapters, switches, and
SPs, so there is no single point of failure for access to its data.
Figure 1-10   Sample SAN Configuration
(The figure shows a highly available cluster of a file server and a
mail server, both running operating system A and using the Cluster
Storage Group, and a database server running operating system B with
its own Database Server Storage Group. Each server has two adapters
connected through two switch fabrics, Path 1 and Path 2, to SP A and
SP B of the physical storage systems, which hold up to 100 disks per
storage system.)
Access Control with Shared Storage
Access control permits or restricts a server’s access to shared storage.
There are two kinds of access control:
•Configuration access control
•Data access control
Configuration access control lets you restrict the servers through
which a user can send configuration commands to an attached
storage system.
Data access control is provided by Storage Groups. During storage
system configuration, using a management utility, the system
administrator associates a server with one or more LUNs.
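To make the two kinds of access control concrete, here is a small
hypothetical Python sketch; the server names, group names, and function
name are invented and do not correspond to an actual management
utility. It separates configuration requests, which only designated
servers may issue, from data requests, which are resolved against
Storage Group membership.

```python
# Hypothetical sketch of the two kinds of access control; names invented.

config_access_servers = {"admin_server"}   # configuration access control
data_access = {                            # data access control (Storage Groups)
    "admin_server":     {"AdminGroup"},
    "inventory_server": {"InventoryGroup"},
    "email_server":     {"EmailWebGroup"},
    "web_server":       {"EmailWebGroup"},
}

def allowed(server, request, storage_group=None):
    """request is 'configure' or 'data'."""
    if request == "configure":
        return server in config_access_servers
    if request == "data":
        return storage_group in data_access.get(server, set())
    raise ValueError(f"unknown request type: {request}")

print(allowed("admin_server", "configure"))            # True
print(allowed("web_server", "configure"))              # False
print(allowed("web_server", "data", "EmailWebGroup"))  # True
print(allowed("web_server", "data", "InventoryGroup")) # False
```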
Each server sees its Storage Group as if it were an entire storage
system, and never sees the other LUNs on the storage system.
Therefore, it cannot access or modify data on LUNs that are not part
of its Storage Group. However, you can define a Storage Group to be
accessible by more than one server, if, as shown above, the servers
run cluster software.
The following figure shows both data access control (Storage Groups)
and configuration access control. Each server has exclusive read and
write access to its designated Storage Group. Of the four servers
connected to the SAN, only the Admin server can send configuration
commands to the storage system.
Figure 1-11   Data and Configuration Access Control with Shared Storage
(The figure shows four servers connected through two switch fabrics to
SP A and SP B: an Admin server, operating system A, with adapters 01
and 02; an Inventory server, operating system A, with adapters 03 and
04; and a highly available cluster of an E-mail server and a Web
server, operating system B, with adapters 05 through 08. The Admin
Storage Group is dedicated to adapters 01 and 02, the Inventory Storage
Group is dedicated to adapters 03 and 04, and the E-mail and Web server
Storage Group is shared by adapters 05 through 08. Configuration access
is limited to adapters 01 and 02, that is, to the Admin server only.)
For shared storage, you need a Disk-array Processor Enclosure (DPE)
storage system.
A DPE is a 10-slot enclosure with hardware RAID features provided
by one or two storage processors (SPs). For shared storage, two SPs
are required. In addition to its own disks, a DPE can support up to
nine 10-slot Disk Array Enclosures (DAEs) for a total of 100 disks.
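The disk count is simple arithmetic: the DPE holds 10 disks and each of
up to nine DAEs adds 10 more. A minimal Python check, included only to
make the figure explicit:

```python
# Maximum disks behind one DPE for shared storage, per the text above:
dpe_slots, dae_slots, max_daes = 10, 10, 9
print(dpe_slots + dae_slots * max_daes)   # 100 disks
```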
Figure 1-12   Storage System with a DPE and Three DAEs
(The figure shows a DPE with three DAEs and a standby power supply,
SPS.)
About Unshared Storage
Unshared storage systems are less costly and less complex than
shared storage systems. They offer many shared storage system
features; for example, you can use multiple unshared storage systems
with multiple servers. However, with multiple servers, unshared
storage offers less flexibility and security than shared storage, since
any user with write access to a privileged server’s files can enable
access to any storage system.
Storage System Hardware for Unshared Storage
For unshared storage, there are several types of storage system, each
using the FC-AL protocol. Each type is available in a rackmount or
deskside (office) version. A short summary sketch follows the list
below.
•Disk-array Processor Enclosure (DPE) storage systems. A DPE is
a 10-slot enclosure with hardware RAID features provided by one
or two storage processors (SPs). In addition to its own disks, a
DPE can support up to 110 additional disks in 10-slot Disk Array
Enclosures (DAEs) for a total of 120 disks. This is the same type of
storage system used for shared storage, but it has a different SP
and different Core Software.
•Intelligent Disk Array Enclosure (iDAE). An iDAE, like a DPE,
has SPs and thus all the features of a DPE, but is thinner and has a
limit of 30 disks.
•Disk Array Enclosure (DAE). A DAE does not have SPs. A DAE
can connect to a DPE or an iDAE, or you can use it without SPs. A
DAE used without an SP does not inherently include RAID, but
can operate as a RAID device using software running on the
server system. Such a DAE is also known as Just a Box of Disks, or
JBOD.
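As an informal summary of the unshared enclosure types just described,
the following Python sketch tabulates the figures given in this
section. It is a planning aid only, not a product specification, and
the field layout is invented for illustration.

```python
# Informal summary of the unshared storage enclosure types described above.
unshared_enclosures = {
    # type:  (has own SPs, RAID provided by,                          max disks)
    "DPE":   (True,  "storage processors (SPs)",                      120),
    "iDAE":  (True,  "storage processors (SPs)",                      30),
    "DAE":   (False, "server software (JBOD) or an attached DPE/iDAE", 10),
}

for name, (has_sps, raid_source, max_disks) in unshared_enclosures.items():
    print(f"{name}: SPs={'yes' if has_sps else 'no'}, RAID via {raid_source}, "
          f"up to {max_disks} disks")
```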
Figure 1-13   Storage System Hardware for Unshared Storage
(The figure shows a deskside DPE with a DAE; a rackmount DPE, one
enclosure, which supports up to 9 DAEs; and intelligent disk-array
enclosure (iDAE) models in 30-slot deskside, rackmount, and 10-slot
deskside versions.)
What Next?
For information about RAID types and RAID tradeoffs, continue to
the next chapter. To plan LUNs and file systems for shared storage,
skip to Chapter 3; or for unshared storage, Chapter 4. For details on
the storage-system hardware — shared and unshared — skip to
Chapter 5. For storage-system management utilities, skip to
Chapter 6.