No part of this publication may be reproduced or distributed in any form or by any means, or stored in a
database or retrieval system, without the prior written consent of EMC Corporation.
The information contained in this document is subject to change without notice. EMC Corporation assumes
no responsibility for any errors that may appear.
All computer software programs, including but not limited to microcode, described in this document are
furnished under a license, and may be used or copied only in accordance with the terms of such license.
EMC either owns or has the right to license the computer software programs described in this document.
EMC Corporation retains all rights, title and interest in the computer software programs.
EMC Corporation makes no warranties, expressed or implied, by operation of law or otherwise, relating to
this document, the products or the computer software programs described herein. EMC CORPORATION
DISCLAIMS ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. In no event shall EMC Corporation be liable for (a) incidental, indirect, special, or consequential
damages or (b) any damages whatsoever resulting from the loss of use, data or profits, arising out of this
document, even if advised of the possibility of such damages.
Trademark Information
EMC2, EMC, MOSAIC:2000, Symmetrix, CLARiiON, and Navisphere are registered trademarks and EMC Enterprise Storage, The Enterprise Storage
Company, The EMC Effect, Connectrix, EDM, SDMS, SRDF, Timefinder, PowerPath, InfoMover, FarPoint, EMC Enterprise Storage Network, EMC
Enterprise Storage Specialist, EMC Storage Logix, Universal Data Tone, E-Infostructure, Celerra, Access Logix, MirrorView, and SnapView are
trademarks of EMC Corporation.
All other trademarks mentioned herein are the property of their respective owners.
EMC Fibre Channel Storage Systems Configuration Planning Guide
This planning guide provides an overview of Fibre Channel
disk-array storage-system models and offers essential background
information and worksheets to help you with the installation and
configuration planning.
Please read this guide
• if you are considering purchase of an EMC Fibre Channel
disk-array storage system and want to understand its features; or
• before you plan the installation of a storage system.
You should be familiar with the host servers that will use the storage
systems and with the operating systems of the servers. After reading
this guide, you will be able to
• determine the best storage system components for your
installation
About Fibre Channel Storage Systems and Networks (SANs)
Introducing EMC Fibre Channel Storage Systems
EMC Fibre Channel disk-array storage systems provide terabytes of
disk storage capacity, high transfer rates, flexible configurations, and
highly available data at low cost.
A storage system package includes a host-bus adapter driver package
with hardware and software to connect with a server, storage
management software, Fibre Channel interconnect hardware, and
one or more storage systems.
Fibre Channel Background
Fibre Channel is a high-performance serial protocol that allows
transmission of both network and I/O channel data. It is a low-level
protocol, independent of data types, and supports such formats as
SCSI and IP.
The Fibre Channel standard supports several physical topologies,
including switched fabric, point-to-point, and arbitrated loop (FC-AL).
The topologies used by the Fibre Channel storage systems described
in this manual are switched fabric and FC-AL.
A switched fabric is a set of point-to-point connections between nodes,
the connection being made through one or more Fibre Channel
switches. Each node may have its own unique address, but the path
between nodes is governed by a switch. The nodes are connected by
optical cable.
A Fibre Channel arbitrated loop is a circuit consisting of nodes. Each
node has a unique address, called a Fibre Channel arbitrated loop
address. The nodes are connected by optical cables. An optical cable
can transmit data over great distances for connections that span
entire enterprises and can support remote disaster recovery systems.
Copper cable serves well for local connections; its length is limited to
30 meters (99 feet).
Each connected device in a switched fabric or arbitrated loop is a
server adapter (initiator) or a target (storage system). The switches
and hubs are not considered nodes.
Figure 1-2 Nodes - Initiator (Server Adapter) and Target (Storage System)
Fibre Channel Storage Components
A Fibre Channel storage system has three main components:
• Server component (host-bus adapter driver package with adapter
and software)
• Interconnect components (cables based on Fibre Channel
standards, switches, and hubs)
• Storage components (storage system with storage processors —
SPs — and power supply and cooling hardware)
Server Component (Host-Bus Adapter Driver Package with Software)
The host-bus adapter driver package includes a host-bus adapter and
support software. The adapter is a printed-circuit board that slides
into an I/O slot in the server’s cabinet. It transfers data between
server memory and one or more disk-array storage systems over
Fibre Channel — as controlled by the support software (adapter
driver).
Interconnect Components
Cables
Depending on your needs, you can choose copper or optical cables.
One or more servers can use a storage system. For high availability —
in the event of an adapter failure — a server can have two adapters.
Depending on your server type, you may have a choice of adapters.
The adapter is designed for a specific host bus; for example, a PCI bus
or SBUS. Some adapter types support copper or optical cabling; some
support copper cabling only.
The interconnect components include the cables, Fibre Channel
switch (for shared storage), and Fibre Channel hub (for unshared
storage).
The maximum length of copper cable is 30 meters (99 feet) between
nodes or hubs. The maximum length of optical cable between server
and hub or storage system is much greater, depending on the cable
type. For example, 62.5-micron multimode cable can span up to 500
meters (1,640 feet) while 9-micron single-mode cable can span up to
10 kilometers (6.2 miles). This ability to span great distances is a
major advantage of optical cable.
Some nodes have connections that require a specific type of cable:
copper or optical. Other nodes allow for the conversion from copper
to optical using a conversion device called a GigaBit Interface
Converter (GBIC) or Media Interface Adapter (MIA). In most cases, a
GBIC or MIA lets you substitute long-distance optical connections for
shorter copper connections.
With extenders, optical cable can span up to 40 km (25 miles).
Details on cable lengths and rules appear later in this manual.
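For planning, you can turn these limits into a quick check. The following Python sketch is illustrative only; the limit values are the figures quoted above, and the function is our own, not part of any EMC tool.

```python
# Hedged sketch: a quick planning check against the cable limits quoted
# above. The table values come from this section; the function name and
# structure are ours, not part of any EMC tool.
CABLE_LIMITS_M = {
    "copper": 30,                      # 30 meters (99 feet) between nodes or hubs
    "62.5-micron multimode": 500,      # 500 meters (1,640 feet)
    "9-micron single-mode": 10_000,    # 10 kilometers (6.2 miles)
}

def link_fits(cable_type: str, length_m: float) -> bool:
    """Return True if a planned link fits the cable's maximum length."""
    return length_m <= CABLE_LIMITS_M[cable_type]

print(link_fits("copper", 25))                    # True
print(link_fits("9-micron single-mode", 12_000))  # False: would need extenders
```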
Fibre Channel Switches
A Fibre Channel switch, which is a requirement for shared storage (a
Storage Area Network, or SAN), connects all the nodes cabled to it using
a fabric topology. A switch adds serviceability and scalability to any
installation; it allows on-line insertion and removal of any device on
the fabric and maintains integrity if any connected device stops
participating. A switch also provides host-to-storage-system access
control in a multiple-host shared-storage environment. A switch has
several advantages over a hub: it provides point-to-point connections
(as opposed to a hub’s loop that includes all nodes) and it offers
zoning to specify paths between nodes in the switch itself.
You can cascade switches (connect one switch port to another switch)
for additional port connections.
Figure 1-3 Switch and Hub Topologies Compared. The switch topology
(point-to-point) uses discrete connections between ports; the hub
topology uses a loop that includes all ports. To illustrate the
comparison, the figure shows just one adapter per server and one
switch or hub. Normally, such installations include two adapters per
server and two switches or hubs.
Switch Zoning
Switch zoning defines paths between connected nodes. Each zone
encloses one or more adapters and one or more SPs. A switch can
have as many zones as it has ports. The current connection limits are
four SP ports to one adapter port (the SPs fan in to the adapter) and
15 adapters to one SP port (the SP fans out to the adapters). There are
several zone types, including the single-initiator type, which is the
recommended type.
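As a rough illustration of these limits, the hypothetical Python sketch below checks a set of single-initiator zones against the fan-in and fan-out figures just quoted; the zone representation is ours, not a switch API.

```python
# Hedged sketch: checking zones against the fan-in/fan-out limits quoted
# above (4 SP ports per adapter port, 15 adapters per SP port). The zone
# representation is ours, not a switch API.
from collections import defaultdict

def check_zone_limits(zones):
    """zones: list of (adapter_ports, sp_ports) pairs, each a set of names."""
    sps_seen_by_adapter = defaultdict(set)
    adapters_seen_by_sp = defaultdict(set)
    for adapter_ports, sp_ports in zones:
        for a in adapter_ports:
            sps_seen_by_adapter[a].update(sp_ports)
        for s in sp_ports:
            adapters_seen_by_sp[s].update(adapter_ports)
    for a, sps in sps_seen_by_adapter.items():
        assert len(sps) <= 4, f"{a} fans in from more than 4 SP ports"
    for s, adapters in adapters_seen_by_sp.items():
        assert len(adapters) <= 15, f"{s} fans out to more than 15 adapters"

# Two single-initiator zones (the recommended type): one adapter each.
check_zone_limits([({"hba0"}, {"spa_p0", "spb_p0"}),
                   ({"hba1"}, {"spa_p0", "spb_p0"})])
```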
In the following figure, Server 1 has access to one SP (SP A) in storage
systems 1 and 2; it has no access to any other SP.
To illustrate switch zoning, this figure shows just one HBA per server
and one switch or hub. Normally, such installations will include two
HBAs per server and two switches or hubs.
Figure 1-4 A Switch Zone
If you do not define a zone in a switch, all adapter ports connected to
the switch can communicate with all SP ports connected to the
switch. However, access to an SP does not necessarily provide access
to the SP’s storage; access to storage is governed by the Storage
Groups you create (defined later).
Fibre Channel switches are available with 16 or 8 ports. They are
compact units that fit in 2 U (3.5 inches) for the 16-port or 1 U (1.75
inches) for the 8-port. They are available to fit into a rackmount
cabinet or as small deskside enclosures.
Figure 1-5 16-Port Switch, Back View
If your servers and storage systems will be far apart, you can place
the switches closer to the servers or the storage systems, as
convenient.
A switch is technically a repeater, not a node, in a Fibre Channel loop.
However, it is bound by the same cabling distance rules as a node.
Fibre Channel Hubs
A hub connects all the nodes cabled to it into a single logical loop. A
hub adds serviceability and scalability to any loop; it allows on-line
insertion and removal of any device on the loop and maintains loop
integrity if any connected device stops participating.
Fibre Channel hubs are compact units that fit in 1 U (1.75 inches) of
rack space. They are available to fit into a rackmount cabinet or as
small deskside units.
The nine-pin port can connect to a server,
storage system, or another hub.
Figure 1-6 Nine-Port Hub
If your servers and storage systems will be far apart, you can place
the hubs closer to the servers or the storage systems, as convenient.
Storage Component (Storage Systems, Storage Processors (SPs), and Other
Hardware)
EMC disk-array storage systems, with their storage processors,
power supplies, and cooling hardware form the storage component
of a Fibre Channel system. The controlling unit, a Disk-array
Processor Enclosure (DPE), looks like the following figure.
Figure 1-7 Disk-Array Processor Enclosure (DPE)
DPE hardware details appear in a later chapter.
Types of Storage System Installations
You can use a storage system in any of several types of installation:
• Unshared direct with one server is the simplest and least costly;
• Shared-or-clustered direct lets two clustered servers share
storage resources with high availability (FC4500 storage systems);
and
• Shared switched, with one or two switch fabrics, lets two to 15
servers share the resources of several storage systems in a Storage
Area Network (SAN). Shared switched installations are available
in a high-availability (HA) version, with two HBAs per server and
two switches, or with one HBA per server and one switch.
Figure 1-8 Types of Storage System Installation: unshared direct (one
or two servers), shared-or-clustered direct (two servers), and shared
switched (multiple servers, with one or two switch fabrics).
Storage systems for any shared installation require EMC Access
Logix™ software to control server access to the storage system LUNs.
The Shared-or-clustered direct installation may be either shared (that
is, use Access Logix to control LUN access) or clustered (without
Access Logix, using cluster software to control LUN access),
depending on the hardware model.
About Switched Shared Storage and SANs (Storage Area
Networks)
This section explains the features that let multiple servers share
disk-array storage systems on a SAN (storage area network).
A SAN is a collection of storage devices connected to servers via Fibre
Channel switches to provide a central location for disk storage.
Centralizing disk storage among multiple servers has many
advantages, including
• highly available data
• flexible association between servers and storage capacity
• centralized management for fast, effective response to users’ data
storage needs
• easier file backup and recovery
An EMC SAN is based on shared storage; that is, the SAN requires
the Access Logix option to provide flexible access control to storage
system LUNs.
Figure 1-9 Components of a SAN
Fibre Channel switches can control data access to storage systems
through the use of switch zoning. With zoning, an administrator can
specify groups (called zones) of Fibre Channel devices, such as
host-bus adapters (specified by worldwide name) and SPs, between
which the switch will allow communication.
However, switch zoning cannot selectively control data access to
LUNs in a storage system, because each SP appears as a single Fibre
Channel device to the switch. So switch zoning can prevent or allow
communication with an SP, but not with specific disks or LUNs
attached to an SP. For access control with LUNs, a different solution is
required: Storage Groups.
Storage Groups
A Storage Group is one or more LUNs (logical units) within a storage
system that is reserved for one or more servers and is inaccessible to
other servers. Storage Groups are the central component of shared
storage; storage systems that are unshared do not use Storage
Groups.
When you configure shared storage, you specify servers and the
Storage Group(s) each server can read from and/or write to. The Base
Software firmware running in each storage system enforces the
server-to-Storage Group permissions.
A Storage Group can be accessed by more than one server if all the
servers run cluster software. The cluster software enforces orderly
access to the shared Storage Group LUNs.
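The following toy Python model illustrates the server-to-Storage-Group permission idea described above; the group names, server names, and layout are hypothetical, and the Base Software's real structures are not shown.

```python
# Hedged sketch: a toy model of the server-to-Storage-Group permissions
# described above. Group names, server names, and the data layout are
# hypothetical; the Base Software's real structures are not shown here.
storage_groups = {
    "cluster_sg":  {"servers": {"fs1", "ms1"},  # two servers: cluster software required
                    "luns": {0, 1, 2, 3}},
    "database_sg": {"servers": {"db1"},
                    "luns": {4, 5, 6}},
}

def can_access(server: str, lun: int) -> bool:
    """A server reaches a LUN only through a Storage Group it belongs to."""
    return any(server in g["servers"] and lun in g["luns"]
               for g in storage_groups.values())

print(can_access("db1", 4))  # True
print(can_access("db1", 0))  # False: LUN 0 is reserved for the cluster
```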
The following figure shows a simple shared storage configuration
consisting of one storage system with two Storage Groups. One
Storage Group serves a cluster of two servers running the same
operating system, and the other Storage Group serves a UNIX
database server. Each server is configured with two independent
paths to its data, including separate host-bus adapters, switches, and
SPs, so there is no single point of failure for access to its data.
Figure 1-10 Sample SAN Configuration. A highly available cluster of
two servers (file server and mail server, both running operating
system A) shares the Cluster Storage Group; a database server
(operating system B) uses the Database Server Storage Group.
Access Control with Shared Storage
Access control permits or restricts a server’s access to shared storage.
There are two kinds of access control:
• Configuration access control
• Data access control
Configuration access control lets you restrict the servers through
which a user can send configuration commands to an attached
storage system.
Data access control is provided by Storage Groups. During storage
system configuration, using a management utility, the system
administrator associates a server with one or more LUNs.
Each server sees its Storage Group as if it were an entire storage
system, and never sees the other LUNs on the storage system.
Therefore, it cannot access or modify data on LUNs that are not part
of its Storage Group. However, you can define a Storage Group to be
accessible by more than one server, if, as shown above, the servers
run cluster software.
The following figure shows both data access control (Storage Groups)
and configuration access control. Each server has exclusive read and
write access to its designated Storage Group. Of the four servers
connected to the SAN, only the Admin server can send configuration
commands to the storage system.
Figure 1-11 Data and Configuration Access Control with Shared
Storage. The Admin Storage Group is dedicated, with data access by
adapters 01 and 02; the Inventory Storage Group is dedicated, with
data access by adapters 03 and 04; the E-mail and Web server Storage
Group is shared, with data access by adapters 05, 06, 07, and 08.
Configuration access is by adapters 01 and 02 (Admin server only).
For shared storage, you need a Disk-array Processor Enclosure (DPE)
storage system.
A DPE is a 10-slot enclosure with hardware RAID features provided
by one or two storage processors (SPs). For shared storage, two SPs
are required. In addition to its own disks, a DPE can support up to
nine 10-slot Disk Array Enclosures (DAEs) for a total of 100 disks.
Figure 1-12 Storage System with a DPE and Three DAEs
About Unshared Storage
Unshared storage systems are less costly and less complex than
shared storage systems. They offer many shared storage system
features; for example, you can use multiple unshared storage systems
with multiple servers. However, with multiple servers, unshared
storage offers less flexibility and security than shared storage, since
any user with write access to a privileged server’s files can enable
access to any storage system.
Storage System Hardware for Unshared Storage
For unshared storage, there are four types of storage system, each
using the FC-AL protocol. Each type is available in a rackmount or
deskside (office) version.
• Disk-array Processor Enclosure (DPE) storage systems. A DPE is
a 10-slot enclosure with hardware RAID features provided by one
or two storage processors (SPs). In addition to its own disks, a
DPE can support up to 110 additional disks in 10-slot Disk Array
Enclosures (DAEs) for a total of 120 disks. This is the same type of
storage system used for shared storage, but it has a different SP
and different Core Software.
• Intelligent Disk Array Enclosure (iDAE). An iDAE, like a DPE,
has SPs and thus all the features of a DPE, but is thinner and has a
limit of 30 disks.
• Disk Array Enclosure (DAE). A DAE does not have SPs. A DAE
can connect to a DPE or an iDAE, or you can use it without SPs. A
DAE used without an SP does not inherently include RAID, but
can operate as a RAID device using software running on the
server system. Such a DAE is also known as Just a Box of Disks, or
JBOD.
Figure 1-13 Storage System Hardware for Unshared Storage: deskside
DPE with DAE; rackmount DPE (one enclosure, supports up to 9
DAEs); and iDAE in 30-slot deskside, rackmount, and 10-slot deskside
versions.
What Next?
For information about RAID types and RAID tradeoffs, continue to
the next chapter. To plan LUNs and file systems for shared storage,
skip to Chapter 3; or for unshared storage, Chapter 4. For details on
the storage-system hardware — shared and unshared — skip to
Chapter 5. For storage-system management utilities, skip to
Chapter 6.
RAID Types and Tradeoffs
This chapter explains RAID types you can choose for your storage
system LUNs. If you already know about RAID types and know
which ones you want, you can skip this background information and
go directly to Chapter 5. Topics are
• RAID Benefits and Tradeoffs..........................................................2-12
• Guidelines for RAID Types.............................................................2-17
• Sample Applications for RAID Types ...........................................2-19
This chapter applies primarily to storage systems with storage processors
(SPs). For a storage system without SPs (a DAE-only or JBOD system), RAID
types are limited by the RAID software you run on the server. The RAID
terms and definitions used here conform to generally accepted standards.
Introducing RAID
The storage system uses RAID (redundant array of independent
disks) technology. RAID technology groups separate disks into one
logical unit (LUN) to improve reliability and/or performance.
The storage system supports five RAID levels and two other disk
configurations, the individual unit and the hot spare (global spare).
You group the disks into one RAID Group by binding them using a
storage-system management utility.
Four of the RAID types use disk striping and two use mirroring.
Disk Striping
Using disk stripes, the storage-system hardware can read from and
write to multiple disks simultaneously and independently. By
allowing several read/write heads to work on the same task at once,
disk striping can enhance performance. The amount of information
read from or written to each disk makes up the stripe element size.
The stripe size is the stripe element size multiplied by the number of
disks in a group. For example, assume a stripe element size of 128
sectors (the default) and a five-disk group. The group has five disks,
so you would multiply five by the stripe element size of 128 to yield a
stripe size of 640 sectors.
The storage system uses disk striping with most RAID types.
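Restated as a minimal Python sketch, assuming the usual 512-byte sectors and the example values above:

```python
# Hedged sketch of the stripe-size arithmetic above, assuming 512-byte
# sectors and the chapter's example values.
SECTOR_BYTES = 512

def stripe_size_sectors(stripe_element_sectors: int, disks_in_group: int) -> int:
    """Stripe size = stripe element size x number of disks in the group."""
    return stripe_element_sectors * disks_in_group

size = stripe_size_sectors(128, 5)    # default element, five-disk group
print(size, size * SECTOR_BYTES)      # 640 sectors, 327680 bytes
```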
Mirroring
Mirroring maintains a second (and optionally, through software, a
third) copy of a logical disk image that provides continuous access if
the original image becomes inaccessible. The system and user
applications continue running on the good image without
interruption. There are two kinds of mirroring: hardware mirroring,
in which the SP synchronizes the disk images; and software
mirroring, in which the operating system synchronizes the images.
Software mirroring consumes server resources, since the operating
system must mirror the images, and has no offsetting advantages; we
mention it here only for historical completeness.
With a storage system, you can create a hardware mirror by binding
disks as a RAID 1 mirrored pair or a RAID 1/0 Group (a mirrored
RAID 0 Group); the hardware will then mirror the disks
automatically.
Some RAID types let you create multiple LUNs on one RAID Group.
You can then allot each LUN to a different user, server, or application.
For example, a five-disk RAID 5 Group that uses 36-Gbyte disks
offers 144 Gbytes of space. You could bind three LUNs, say with 24,
60, and 60 Gbytes of storage capacity, for temporary, mail, and
customer files.
One disadvantage of multiple LUNs on a RAID Group is that I/O to
each LUN may affect I/O to the others in the group; that is, if traffic
to one LUN is very heavy, I/O performance with other LUNs may
degrade. The main advantage of multiple LUNs per RAID Group is
the ability to divide the enormous amount of disk space provided by
RAID Groups on newer, high-capacity disks.
Figure 2-1 Multiple LUNs in a RAID Group. Each of the five disks in
the RAID Group holds part of LUN 0 (temp), LUN 1 (mail), and LUN 2
(customers).
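The capacity arithmetic behind this example can be restated in a short Python sketch; the helper name is ours, and it assumes the standard RAID 5 overhead of one disk's worth of parity.

```python
# Hedged sketch of the capacity arithmetic in the example above: a
# five-disk RAID 5 Group of 36-Gbyte disks keeps one disk's worth of
# parity, leaving 144 Gbytes to divide among LUNs.
def raid5_user_gb(disks: int, disk_gb: int) -> int:
    return (disks - 1) * disk_gb

group_gb = raid5_user_gb(5, 36)
luns_gb = [24, 60, 60]                     # temp, mail, customer files
print(group_gb, sum(luns_gb) <= group_gb)  # 144 True
```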
RAID Types
You can choose from the following RAID types: RAID 5, RAID 3,
RAID 1, RAID 0, RAID 1/0, individual disk unit, and hot spare.
RAID 5 Group (Individual Access Array)
A RAID 5 Group usually consists of five disks (but can have three to
sixteen). A RAID 5 Group uses disk striping. With a RAID 5 Group on
a full-fibre storage system, you can create up to 32 RAID 5 LUNs to
apportion disk space to different users, servers, and applications.
The storage system writes parity information that lets the group
continue operating if a disk fails. When you replace the failed disk,
the SP rebuilds the group using the information stored on the
working disks. Performance is degraded while the SP rebuilds the
group. However, the storage system continues to function and gives
users access to all data, including data stored on the failed disk.
The following figure shows user and parity data with the default
stripe element size of 128 sectors (65,536 bytes) in a five-disk RAID 5
Group. The stripe size comprises all stripe elements. Notice that the
disk block addresses in the stripe proceed sequentially from the first
disk to the second, third, and fourth, then back to the first, and so on.
RAID 5 Groups offer excellent read performance and good write
performance. Write performance benefits greatly from
storage-system caching.
RAID 3 Group (Parallel Access Array)
A RAID 3 Group consists of five or more disks. The hardware always
reads from or writes to all the disks. A RAID 3 Group uses disk
striping. To maintain the RAID 3 performance, you can create only
one LUN per RAID 3 Group.
The storage system writes parity information that lets the group
continue operating if a disk fails. When you replace the failed disk,
the SP rebuilds the group using the information stored on the
working disks. Performance is degraded while the SP rebuilds the
group. However, the storage system continues to function and gives
users access to all data, including data stored on the failed disk.
Figure 2-2 RAID 5 Group (user and parity data distributed across all
five disks)
The following figure shows user and parity data with a data block
size of 2 Kbytes in a RAID 3 Group. Notice that the byte addresses
proceed from the first disk to the second, third, and fourth, then the
first, and so on.
Figure 2-3 RAID 3 Group. User data is striped in 512-byte stripe
elements across the first four disks (bytes 0-511 on the first disk,
512-1023 on the second, and so on); the fifth disk holds the parity for
each stripe.
RAID 3 differs from RAID 5 in several important ways. First, in a
RAID 3 Group the hardware processes disk requests serially; whereas
in a RAID 5 Group the hardware can interleave disk requests. Second,
with a RAID 3 Group, the parity information is stored on one disk;
with a RAID 5 Group, it is stored on all disks. Finally, with a RAID 3
Group, the I/O occurs in small units (one sector) to each disk. A
RAID 3 Group works well for single-task applications that use I/Os
of blocks larger than 64 Kbytes.
Each RAID 3 Group requires some dedicated SP memory (6 Mbytes
recommended per group). This memory is allocated when you create
the group and becomes unavailable for storage-system caching. For
top performance, we suggest that you do not use RAID 3 Groups
with RAID 5, RAID 1/0, or RAID 0 Groups, since SP processing
power and memory are best devoted to the RAID 3 Groups. RAID 1
mirrored pairs and individual units require less SP processing power,
and therefore work well with RAID 3 Groups.
For each write to a RAID 3 Group, the storage system
1. Calculates the parity data.
2. Writes the new user and parity data.
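The parity calculation is, in essence, a bytewise XOR across the stripe elements. The short Python sketch below illustrates why a group can rebuild a lost element from the survivors plus parity; it is not the SP's actual firmware.

```python
# Hedged sketch: the parity idea behind RAID 3 (and RAID 5) is a
# bytewise XOR across the stripe elements, so any one lost element can
# be rebuilt from the survivors plus the parity.
def xor_parity(elements):
    parity = bytearray(len(elements[0]))
    for element in elements:
        for i, byte in enumerate(element):
            parity[i] ^= byte
    return bytes(parity)

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_parity(data)
# Lose the second element, then rebuild it from the rest plus parity:
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
```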
RAID 1 Mirrored Pair
A RAID 1 Group consists of two disks that are mirrored
automatically by the storage-system hardware.
RAID 1 hardware mirroring within the storage system is not the same
as software mirroring or hardware mirroring for other kinds of disks.
Functionally, the difference is that you cannot manually stop
mirroring on a RAID 1 mirrored pair, and then access one of the
images independently. If you want to use one of the disks in such a
mirror separately, you must unbind the mirror (losing all data on it),
rebind the disk in as the type you want, and software format the
newly bound LUN.
With a storage system, RAID 1 hardware mirroring has the following
advantages:
• automatic operation (you do not have to issue commands to
initiate it)
• physical duplication of images
• a rebuild period that you can select, during which the SP recreates
the second image after a failure
With a RAID 1 mirrored pair, the storage system writes the same data
to both disks, as follows.
Figure 2-4 RAID 1 Mirrored Pair. Each numbered block of user data
appears on both disks.
RAID 0 Group (Nonredundant Array)
A RAID 0 Group consists of three to a maximum of sixteen disks. A
RAID 0 Group uses disk striping, in which the hardware writes to or
reads from multiple disks simultaneously. In a full-fibre storage
system, you can create up to 32 LUNs per RAID Group.
Unlike the other RAID levels, with RAID 0 the hardware does not
maintain parity information on any disk; this type of group has no
inherent data redundancy. RAID 0 offers enhanced performance
through simultaneous I/O to different disks.
If the operating system supports software mirroring, you can use
software mirroring with the RAID 0 Group to provide high
availability. A desirable alternative to RAID 0 is RAID 1/0.
RAID 1/0 Group (Mirrored RAID 0 Group)
A RAID 1/0 Group consists of four, six, eight, ten, twelve, fourteen,
or sixteen disks. These disks make up two mirror images, with each
image including two to eight disks. The hardware automatically
mirrors the disks. A RAID 1/0 Group uses disk striping. It combines
the speed advantage of RAID 0 with the redundancy advantage of
mirroring. With a RAID 1/0 Group on a full-fibre storage system, you
can create up to 32 LUNs to apportion disk space to different
users, servers, and applications.
The following figure shows the distribution of user data with the
default stripe element size of 128 sectors (65,536 bytes) in a six-disk
RAID 1/0 Group. Notice that the disk block addresses in the stripe
proceed sequentially from the first mirrored disks (first and fourth
disks) to the second mirrored disks (second and fifth disks), to the
third mirrored disks (third and sixth disks), and then from the first
mirrored disks, and so on.
Figure 2-5 RAID 1/0 Group
A RAID 1/0 Group can survive the failure of multiple disks,
providing that one disk in each image pair survives.
Individual Disk Unit
An individual disk unit is a disk bound to be independent of any
other disk in the cabinet. An individual unit has no inherent high
availability, but you can make it highly available by using software
mirroring with another individual unit. You can create one LUN per
individual disk unit. If you want to apportion the disk space, you can
do so using partitions, file systems, or user directories.
Hot Spare
A hot spare is a dedicated replacement disk on which users cannot
store information. A hot spare is global: if any disk in a RAID 5
Group, RAID 3 Group, RAID 1 mirrored pair, or RAID 1/0 Group
fails, the SP automatically rebuilds the failed disk’s structure on the
hot spare. When the SP finishes rebuilding, the disk group functions
as usual, using the hot spare instead of the failed disk. When you
replace the failed disk, the SP copies the data from the former hot
spare onto the replacement disk.
When the copy is done, the disk group consists of disks in the original
slots, and the SP automatically frees the hot spare to serve as a hot
spare again. A hot spare is most useful when you need the highest
data availability. It eliminates the time and effort needed for someone
to notice that a disk has failed, find a suitable replacement disk, and
insert the disk.
When you plan to use a hot spare, make sure the disk has the capacity to
serve in any RAID Group in the storage-system chassis. A RAID Group
cannot use a hot spare that is smaller than a failed disk in the group.
You can have one or more hot spares per storage-system chassis. You
can make any disk in the chassis a hot spare, except for a disk that
serves for Core Software storage or the write cache vault. That is, a
hot spare can be any of the following disks:
• DPE or iDAE system without write caching: disks 3-119
• DPE system with write caching: disks 9-119
• iDAE system with write caching: disks 5-29
• 30-slot SCSI-disk system: disks A1-E1, A2-E2, B3-E3, A4-E4
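A hypothetical Python check based on these rules follows; the slot ranges are taken from the list above, the 30-slot SCSI system's lettered slots are omitted, and the names are ours.

```python
# Hedged sketch of the eligibility rules above: a hot spare must avoid
# the Core Software / write-cache vault disks and be at least as large
# as any disk it might replace. The 30-slot SCSI system's lettered
# slots are omitted.
HOT_SPARE_SLOTS = {
    "DPE or iDAE, no write caching": range(3, 120),   # disks 3-119
    "DPE with write caching":        range(9, 120),   # disks 9-119
    "iDAE with write caching":       range(5, 30),    # disks 5-29
}

def valid_hot_spare(system: str, slot: int, spare_gb: int, largest_disk_gb: int) -> bool:
    return slot in HOT_SPARE_SLOTS[system] and spare_gb >= largest_disk_gb

print(valid_hot_spare("DPE with write caching", 9, 36, 36))  # True
print(valid_hot_spare("DPE with write caching", 3, 36, 36))  # False: vault disk
```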
An example of hot spare usage for a deskside DPE storage system
follows.
1. RAID 5 group consists of disk modules 0-4; RAID 1 mirrored pair is
modules 5 and 6; hot spare is module 9.
2. Disk module 3 fails.
3. RAID 5 group becomes modules 0, 1, 2, 9, and 4; now no hot spare is
available.
4. System operator replaces failed module 3 with a functional module.
5. RAID 5 group once again is 0-4 and hot spare is 9.
Figure 2-6 How a Hot Spare Works
RAID Benefits and Tradeoffs
This section reviews RAID types and explains their benefits and
tradeoffs. You can create seven types of LUN:
• RAID 5 Group (individual access array)
• RAID 3 Group (parallel access array)
• RAID 1 mirrored pair
• RAID 1/0 Group (mirrored RAID 0 Group); a RAID 0 Group
mirrored by the storage-system hardware
• RAID 0 Group (nonredundant individual access array); no
inherent high-availability features, but can be software mirrored
if the operating system supports mirroring
• Individual unit; no inherent high-availability features but can be
software mirrored, if the operating system supports mirroring
• Hot spare; serves only as an automatic replacement for any disk
in a RAID type other than 0; does not store data during normal
system operations
Plan the disk unit configurations carefully. After a disk has been bound into a
LUN, you cannot change the RAID type of that LUN without unbinding it,
and this means losing all data on it.
The following table compares the read and write performance,
tolerance for disk failure, and relative cost per megabyte (Mbyte) of
the RAID types. Figures shown are theoretical maximums.
Table 2-1 Performance, Availability, and Cost of RAID Types (Individual Unit = 1.0)

Disk configuration      Relative read performance     Relative write performance    Relative cost
                        without cache                 without cache                 per Mbyte
RAID 5 Group with       Up to 5 with five disks       Up to 1.25 with five disks    1.25
five disks              (for small I/O requests,      (for small I/O requests,
                        2 to 8 Kbytes)                2 to 8 Kbytes)
RAID 3 Group with       Up to 4 (for large I/O        Up to 4 (for large I/O        1.25
five disks              requests)                     requests)
RAID 1 mirrored pair    Up to 2                       Up to 1                       2
RAID 1/0 Group with     Up to 10                      Up to 5                       2
10 disks
Individual unit         1                             1                             1

Notes: These performance numbers are not based on storage-system caching. With caching,
the performance numbers for RAID 5 writes improve significantly.
Performance multipliers vary with load on server and storage system.
RAID 5, with individual access, provides high read throughput for
small requests (blocks of 2 to 8 Kbytes) by allowing simultaneous
reads from each disk in the group. RAID 5 write throughput is
limited by the need to perform four I/Os per request (I/Os to read
and write data and parity information). However, write caching
improves RAID 5 write performance.
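As back-of-envelope math, the penalty looks like the Python sketch below; the workload numbers are illustrative.

```python
# Hedged sketch of the write penalty just described: each small random
# write that the cache cannot absorb costs four back-end I/Os (read old
# data, read old parity, write new data, write new parity).
def raid5_backend_ios(reads: int, writes: int, write_penalty: int = 4) -> int:
    return reads + writes * write_penalty

print(raid5_backend_ios(reads=600, writes=100))  # 1000 back-end disk I/Os
```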
RAID 3, with parallel access, provides high throughput for
sequential, large block-size requests (blocks of more than 64 Kbytes).
With RAID 3, the system accesses all five disks in each request but
need not read data and parity before writing – advantageous for large
requests but not for small ones. RAID 3 employs SP memory without
caching, which means you do not need the second SP and BBU that
caching requires.
Generally, the performance of a RAID 3 Group increases as the size of
the I/O request increases. Read performance increases rapidly with
read requests up to 1 Mbyte. Write performance increases greatly for
sequential write requests that are greater than 256 Kbytes. For
applications issuing very large I/O requests, a RAID 3 LUN provides
significantly better write performance than a RAID 5 LUN.
We do not recommend using RAID 3 in the same storage-system
chassis with RAID 5 or RAID 1/0.
A RAID 1 mirrored pair has its disks locked in synchronization, but
the SP can read data from the disk whose read/write heads are closer
to it. Therefore, RAID 1 read performance can be twice that of an
individual disk while write performance remains the same as that of
an individual disk.
A RAID 0 Group (nonredundant individual access array) or RAID
1/0 Group (mirrored RAID 0 Group) can have as many I/O
operations occurring simultaneously as there are disks in the group.
Since RAID 1/0 locks pairs of RAID 0 disks the same way as RAID 1
does, the performance of RAID 1/0 equals the number of disk pairs
times the RAID 1 performance number. If you want high throughput
for a specific LUN, use a RAID 1/0 or RAID 0 Group. A RAID 1/0
Group requires at least four disks; a RAID 0 Group, at least three disks.
An individual unit needs only one I/O operation per read or write
operation.
RAID types 5, 1, 1/0, and 0 allow multiple LUNs per RAID Group. If
you create multiple LUNs on a RAID Group, the LUNs share the
RAID Group disks, and the I/O demands of each LUN affect the I/O
service time to the other LUNs. For best performance, you may want
to use one LUN per RAID Group.
Storage Flexibility
Certain RAID Group types — RAID 5, RAID 1, RAID 1/0, and RAID
0 — let you create up to 32 LUNs in each group. This adds flexibility,
particularly with large disks, since it lets you apportion LUNs of
various sizes to different servers, applications, and users. Conversely,
with RAID 3, there can be only one LUN per RAID Group, and the
group must include five or nine disks — a sizable block of storage to
devote to one server, application, or user. However, the nature of
RAID 3 makes it ideal for that single-threaded type of application.
Data Availability and Disk Space Usage
If data availability is critical and you cannot afford to wait hours to
replace a disk, rebind it, make it accessible to the operating system,
and load its information from backup, then use a redundant RAID
Group: RAID 5, RAID 3, RAID 1 mirrored pair, or RAID 1/0. Or bind
a RAID 0 Group or individual disk unit that you will later mirror
with software mirroring. If data availability is not critical, or disk
space usage is critical, bind an individual unit or RAID 0 Group
without software mirroring.
A RAID 1 mirrored pair or RAID 1/0 Group provides very high data
availability. They are more expensive than RAID 5 or RAID 3 Groups,
since only 50 percent of the total disk capacity is available for user
data, as shown on page 2-13.
A RAID 5 or RAID 3 Group provides high data availability, but
requires more disks than a mirrored pair. In a RAID 5 or RAID 3
Group of five disks, 80 percent of the disk space is available for user
data. So RAID 5 and RAID 3 Groups use disk space much more
efficiently than a mirrored pair. A RAID 5 or RAID 3 Group is usually
more suitable than a RAID 1 mirrored pair for applications where
high data availability, good performance, and efficient disk space
usage are all of relatively equal importance.
Figure 2-7 Disk Space Usage in the RAID Configurations. In a
five-disk RAID 5 Group, all five disks hold user and parity data; in a
five-disk RAID 3 Group, four disks hold user data and the fifth holds
parity. Either way, 80% of the space is user data and 20% is parity. A
RAID 1 mirrored pair and a RAID 1/0 Group are 50% user data and
50% redundant data. A RAID 0 Group and an individual disk unit are
100% user data. A hot spare is reserved.
A RAID 0 Group (nonredundant individual access array) provides all
its disk space for user files, but does not provide any high availability
features.
A RAID 1/0 Group provides the best combination of performance
and availability, at the highest cost per Mbyte of disk space.
An individual unit, like a RAID 0 Group, provides no
high-availability features. All its disk space is available for user data,
as shown in the figure above.
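These usage figures can be summarized in a small Python helper; this is a sketch of the fractions shown in Figure 2-7, with group size as a parameter.

```python
# Hedged sketch summarizing the usage fractions in Figure 2-7; group
# size is a parameter, so a five-disk RAID 5 or RAID 3 Group yields 0.8.
def user_fraction(raid_type: str, disks: int = 5) -> float:
    if raid_type in ("RAID 5", "RAID 3"):
        return (disks - 1) / disks   # one disk's worth of parity
    if raid_type in ("RAID 1", "RAID 1/0"):
        return 0.5                   # mirrored: half the space is redundant
    if raid_type in ("RAID 0", "individual unit"):
        return 1.0                   # no redundancy
    if raid_type == "hot spare":
        return 0.0                   # reserved; stores no user data
    raise ValueError(raid_type)

print(user_fraction("RAID 5", 5), user_fraction("RAID 1/0", 10))  # 0.8 0.5
```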
To decide when to use a RAID 5 Group, RAID 3 Group, mirror (that
is, a RAID 1 mirrored pair or RAID 1/0 Group), a RAID 0 Group,
individual disk unit, or hot spare, you need to weigh these factors:
• Importance of data availability
• Importance of performance
• Amount of data stored
• Cost of disk space
The following guidelines will help you decide on RAID types.
Use a RAID 5 Group (individual access array) for applications
where
• Data availability is very important
• Large volumes of data will be stored
• Multitask applications use I/O transfers of different sizes
• Good read and moderate write performance are important (write
caching can improve RAID 5 write performance)
• You want the flexibility of multiple LUNs per RAID Group
Use a RAID 3 Group (parallel access array) for applications where
• Data availability is very important
• Large volumes of data will be stored
• A single-task application uses large I/O transfers (more than 64
Kbytes). The operating system must allow transfers aligned to
start at disk addresses that are multiples of 2 Kbytes from the start
of the LUN.
Use a RAID 1 mirrored pair for applications where
• Data availability is very important
• Speed of write access is important and write activity is heavy
Use a RAID 1/0 Group (mirrored RAID 0 Group) for
applications where
• Data availability is critically important
• Overall performance is very important
Use a RAID 0 Group (nonredundant individual access array) for
applications where
• High availability is not important
• Overall performance is very important
Use an individual unit for applications where
• High availability is not important
• Speed of write access is somewhat important
Use a hot spare where
• In any RAID 5, RAID 3, RAID 1/0, or RAID 1 Group, high
availability is so important that you want to regain data
redundancy quickly without human intervention if any disk in
the Group fails
• Minimizing the degraded performance caused by disk failure in a
RAID 5 or RAID 3 Group is important
Sample Applications for RAID Types
This section describes some types of applications in which you would
want to use a RAID 5 Group, RAID 3 Group, RAID 1 mirrored pair,
RAID 0 Group (nonredundant array), RAID 1/0 Group, or individual
unit.
RAID 5 Group (individual access array) — Useful as a database
repository or a database server that uses a normal or low percentage
of write operations (writes are 33 percent or less of all I/O
operations). Use a RAID 5 Group where multitask applications
perform I/O transfers of different sizes. Write caching can
significantly enhance the write performance of a RAID 5 Group.
For example, a RAID 5 Group is suitable for multitasking
applications that require a large history database with a high read
rate, such as a database of legal cases, medical records, or census
information. A RAID 5 Group also works well with transaction
processing applications, such as an airline reservations system, where
users typically read the information about several available flights
before making a reservation, which requires a write operation. You
could also use a RAID 5 Group in a retail environment, such as a
supermarket, to hold the price information accessed by the
point-of-sale terminals. Even though the price information may be
updated daily, requiring many write operations, it is read many more
times during the day.
RAID 3 Group — A RAID 3 Group (parallel access array) works well
with a single-task application that uses large I/O transfers (more than
64 Kbytes), aligned to start at a disk address that is a multiple of 2
Kbytes from the beginning of the logical disk. RAID 3 Groups can use
SP memory to great advantage without the second SP and battery
backup unit required for storage-system caching.
You might use a RAID 3 Group for a single-task application that does
large I/O transfers, like a weather tracking system, geologic charting
application, medical imaging system, or video storage application.
RAID 1 mirrored pair — A RAID 1 mirrored pair is useful for
logging or record-keeping applications because it requires fewer
disks than a RAID 0 Group (nonredundant array) and provides high
availability and fast write access. Or you could use it to store daily
updates to a database that resides on a RAID 5 Group, and then,
during off-peak hours, copy the updates to the database on the
RAID 5 Group.
RAID 0 Group (nonredundant individual access array) — Use a
RAID 0 Group where the best overall performance is important. In
terms of high availability, a RAID 0 Group is less available than an
individual unit. A RAID 0 Group (like a RAID 5 Group) requires a
minimum of three disks. A RAID 0 Group serves well for an
application that uses short-term data to which users need quick
access.
RAID 1/0 Group (mirrored RAID 0 Group) — A RAID 1/0 Group
provides the best balance of performance and availability. You can
use it very effectively for any of the RAID 5 applications. A RAID 1/0
Group requires a minimum of four disks.
Individual unit — An individual unit is useful for print spooling,
user file exchange areas, or other such applications, where high
availability is not important or where the information stored is easily
restorable from backup.
The performance of an individual unit is slightly less than a standard
disk not in a storage system. The slight degradation results from SP
overhead.
Hot spare — A hot spare provides no data storage but enhances the
availability of each RAID 5, RAID 3, RAID 1, and RAID 1/0 Group in
a storage system. Use a hot spare where you must regain high
availability quickly without human intervention if any disk in such a
RAID Group fails. A hot spare also minimizes the period of degraded
performance after a RAID 5 or RAID 3 disk fails.
What Next?
This chapter explained RAID Group types and tradeoffs. To plan
LUNs and file systems for shared storage, continue to Chapter 3; or
for unshared storage, skip to Chapter 4. For details on storage-system
hardware — shared and unshared — skip to Chapter 5.
For storage-system management utilities, skip to Chapter 6.
Planning File Systems and LUNs with Shared Switched Storage
This chapter shows a sample RAID, LUN, and Storage Group
configuration with shared storage, and then provides worksheets for
planning your own shared storage installation. Topics are
• Dual Paths to LUNs...........................................................................3-2
• Planning Applications, LUNs, and Storage Groups.....................3-6
Dual Paths to LUNs
A shared storage system includes two or more servers, one or two
Fibre Channel switches, and one or more storage systems, each with
two SPs and Access Logix software.
With shared storage, there are two paths to each LUN in the storage
system. The storage-system software, using optional software called
Application Transparent Failover (ATF), can automatically switch to
the other path if a device (such as a host-bus adapter or cable) fails.
With unshared storage, if the server has two adapters and the storage
system has two SPs, ATF software is available as an option. With two
adapters and two SPs, ATF can perform the same function as with
shared systems: automatically switch to the other path if a device
(such as host bus adapter or cable) fails.
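In spirit, the failover decision is simply routing I/O to the surviving path, as the toy Python sketch below illustrates; it is not ATF's actual interface or behavior.

```python
# Hedged sketch: failover means routing I/O to the surviving path.
# This toy selector is purely illustrative, not ATF's actual design.
def pick_path(paths: dict) -> str:
    """paths maps a path name to its health; return the first healthy one."""
    for name, healthy in paths.items():
        if healthy:
            return name
    raise RuntimeError("no surviving path to the LUN")

print(pick_path({"hba0 -> SP A": False, "hba1 -> SP B": True}))  # hba1 -> SP B
```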
Sample Shared Switched Installation
The following figure shows a sample shared switched
(high-availability) storage system connected to three servers: two
servers in a cluster and one server running a database management
program.
Figure 3-1 Sample Shared Switched High-Availability Installation. A
highly available cluster of a file server (FS) and mail server (MS), both
running operating system A, and a database server (DS) running
operating system B connect over two paths, through two switch
fabrics, to SP A and SP B of a storage system holding disks 0_0-0_9
through 4_0-4_9.
The storage-system disk IDs and server Storage Group LUNs are as
follows.
Clustered System LUNs

File Server LUNs (FS) - SP B
Disk IDs     RAID type, storage type
4_0-4_4      RAID 5, Files A
4_5-4_9      RAID 5, Files B

Mail Server LUNs (MS) - SP A
Disk IDs     RAID type, storage type
2_0-2_4      RAID 5, ISP A mail
2_5-2_9      RAID 5, ISP B mail
3_0-3_4      RAID 5, Users
3_5-3_9      RAID 5, Specs

6_0, 6_1 – Hot spare (automatically replaces a failed disk in any server’s LUN)
Mail Server — 576 Gbytes on four LUNs
MS R5 ISP A mail — Unit O on five disks bound as a RAID 5 Group for
144 Gbytes of storage; for the mail delivered via ISP A.
MS R5 ISP B mail — Unit P on five disks bound as a RAID 5 Group for
144 Gbytes of storage; for the mail delivered via ISP B.
MS R5 Users — Unit Q on five disks bound as a RAID 5 Group for
144 Gbytes of storage; for user directories and files.
MS R5 Specs — Unit R on five disks bound as a RAID 5 Group for
144 Gbytes of storage; for specifications.
Database Server — 504 Gbytes on four LUNs
DS R5 Users — Unit users on five disks bound as a RAID 5 Group for
144 Gbytes of storage; for user directories.
DS R5 Dbase2 — Unit dbase2 on five disks bound as a RAID 5 Group
for 144 Gbytes of storage; for the second database system.
DS R1 Logs — Unit logfiles on two disks bound as a RAID 1 mirrored
pair for 36 Gbytes of storage; for the database log files.
DS R5 Dbase1 — Unit dbase on six disks bound as a RAID 5 Group for
180 Gbytes of storage; for the primary database system.
Planning Applications, LUNs, and Storage Groups
This section helps you plan your shared storage use — the
applications to run, the LUNs that will hold them, and the Storage
Groups that will belong to each server. The worksheets to help you
do this include
• Application and LUN planning worksheet - lets you outline your
storage needs.
• LUN and Storage Group planning worksheet - lets you decide on
the disks to compose the LUNs and the LUNs to compose the
Storage Groups for each server.
• LUN details worksheet - lets you plan each LUN in detail.
Make as many copies of each blank worksheet as you need. You will
need this information later when you configure the shared storage
system.
Use the following worksheet to list the applications you will run and
the RAID type and size of LUN to hold them. For each application
that will run in the SAN, write the application name, file system (if
any), RAID type, LUN ID (ascending integers, starting with 0), disk
space required, and finally the name of the servers and operating
systems that will use the LUN.
Application and LUN Planning Worksheet
The worksheet has columns for: Application; File system, partition, or
drive; RAID type of LUN; LUN ID (hex); Disk space required (Gbytes);
and Server name and operating system.

A sample worksheet begins as follows:

Application      File system,         RAID type   LUN ID   Disk space          Server name and
                 partition, or drive  of LUN      (hex)    required (Gbytes)   operating system
Mail 1                                RAID 5      0        72 Gb               Server1, NT
Mail 2                                RAID 5      1        72 Gb               Server1, NT
Database index                        RAID 1      2        18 Gb               Server2, NT
Completing the Application and LUN Planning Worksheet
Application. Enter the application name or type.
File system, partition, or drive. Write the drive letter (for Windows
only) and the partition, file system, logical volume, or drive letter
name, if any.
With a Windows operating system, the LUNs are identified by drive
letter only. The letter does not help you identify the disk
configuration (such as RAID 5). We suggest that later, when you use
the operating system to create a partition on a LUN, you use the disk
administrator software to assign a volume label that describes the
RAID configuration. For example, for drive T, assign the volume ID
RAID5_T. The volume label will then identify the drive letter.
RAID type of LUN. This is the RAID Group type you want for this
partition, file system, or logical volume. The features of RAID types
are explained in Chapter 2. For a RAID 5, RAID 1, RAID 1/0, and
RAID 0 Group, you can create one or more LUNs on the RAID
Group. For other RAID types, you can create only one LUN per RAID
Group.
LUN ID. The LUN ID is a hexadecimal number assigned when you
bind the disks into a LUN. By default, the ID of the first LUN bound
is 0, the second 1, and so on. Each LUN ID must be unique within the
storage system, regardless of its Storage Group or RAID Group.
The maximum number of LUNs supported on one host-bus adapter
depends on the operating system.
Disk space required (Gbytes). Consider the largest amount of disk
space this application will need, then add a factor for growth.
Server hostname and operating system. Enter the server hostname
(or, if you don’t know the name, a short description that identifies the
server) and the operating system name, if you know it.
LUN and Storage Group Planning Worksheet
Use the following worksheet to select the disks that will make up the
LUNs and Storage Groups in the SAN. A shared storage system can
include up to 100 disks, numbered 0 through 99, left to right from the
bottom up.
LUN and Storage Group Planning Worksheet
11_0 11_1 11_2 11_3 11_4 11_5 11_6 11_7 11_8 11_9
10_0 10_1 10_2 10_3 10_4 10_5 10_6 10_7 10_8 10_9
9_0 9_1 9_2 9_3 9_4 9_5 9_6 9_7 9_8 9_9
8_0 8_1 8_2 8_3 8_4 8_5 8_6 8_7 8_8 8_9
7_0 7_1 7_2 7_3 7_4 7_5 7_6 7_7 7_8 7_9
6_0 6_1 6_2 6_3 6_4 6_5 6_6 6_7 6_8 6_9
5_0 5_1 5_2 5_3 5_4 5_5 5_6 5_7 5_8 5_9
4_0 4_1 4_2 4_3 4_4 4_5 4_6 4_7 4_8 4_9
3_0 3_1 3_2 3_3 3_4 3_5 3_6 3_7 3_8 3_9
2_0 2_1 2_2 2_3 2_4 2_5 2_6 2_7 2_8 2_9
1_0 1_1 1_2 1_3 1_4 1_5 1_6 1_7 1_8 1_9
0_0 0_1 0_2 0_3 0_4 0_5 0_6 0_7 0_8 0_9
Storage system number or name:_______________
Storage Group ID or name:______ Server hostname:_____________________ Dedicated Shared
LUN ID or name_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN ID or name_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN ID or name_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN ID or name_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
Storage Group ID or name:______ Server hostname:_____________________ Dedicated Shared
LUN ID or name_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN ID or name_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN ID or name_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN ID or name_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
Storage Group ID or name:______ Server hostname:_____________________ Dedicated Shared
LUN ID or name_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN ID or name_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN ID or name_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN ID or name_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
Part of a sample LUN and Storage Group worksheet follows.
3_0 3_1 3_2 3_3 3_4 3_5 3_6 3_7 3_8 3_9
2_0 2_1 2_2 2_3 2_4 2_5 2_6 2_7 2_8 2_9
1_0 1_1 1_2 1_3 1_4 1_5 1_6 1_7 1_8 1_9
0_0 0_1 0_2 0_3 0_4 0_5 0_6 0_7 0_8 0_9

LUN 0 (RAID 5): disks 0_0, 0_1, 0_2, 0_3, 0_4
LUN 1 (RAID 5): disks 0_5, 0_6, 0_7, 0_8, 0_9
LUN 2 (RAID 1): disks 1_0, 1_1

Storage system number or name: SS1
Storage Group ID or name: ______   Server hostname: Server1   Dedicated [X]  Shared [ ]
LUN ID or name: 0, Mail 1   RAID type: 5   Cap. (Gb): 72   Disk IDs: 0_0, 0_1, 0_2, 0_3, 0_4
LUN ID or name: 1, Mail 2   RAID type: 5   Cap. (Gb): 72   Disk IDs: 0_5, 0_6, 0_7, 0_8, 0_9
Storage Group ID or name: ______   Server hostname: Server2   Dedicated [X]  Shared [ ]
LUN ID or name: 2, Index1   RAID type: 1   Cap. (Gb): 18   Disk IDs: 1_0, 1_1
Completing the LUN and Storage Group Planning Worksheet
As shown, draw circles around the disks that will compose each
LUN, and within each circle specify the RAID type (for example,
RAID 5) and LUN ID. This is information you will use to bind the
disks into LUNs. For disk IDs, use the form shown. This form is
enclosure_diskID, where enclosure is the enclosure number (the bottom
one is 0, above it 1, and so on) and diskID is the disk position (left is 0,
next is 1, and so on).
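A small helper (ours, purely illustrative) shows the same ID convention in code:

def disk_id(enclosure: int, slot: int) -> str:
    """Build a disk ID such as '0_2': enclosure 0 (the bottom one), slot 2."""
    return f"{enclosure}_{slot}"

def parse_disk_id(disk: str) -> tuple[int, int]:
    """Split a disk ID such as '1_9' back into (enclosure, slot)."""
    enclosure, slot = disk.split("_")
    return int(enclosure), int(slot)

# The five disks of the sample RAID 5 LUN 0:
lun0 = [disk_id(0, slot) for slot in range(5)]   # ['0_0', '0_1', '0_2', '0_3', '0_4']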
None of the disks 0_0 through 0_8 may be used as a hot spare.
Next, complete as many of the Storage System sections as needed for
all the Storage Groups in the SAN. Copy the (blank) worksheet as
needed for all Storage Groups in each storage system.
A storage system is any group of enclosures connected to a DPE; it
can include up to 11 DAE enclosures for a total of 120 disks. If a
Storage Group will be dedicated (not accessible by another system in
a cluster), mark the Dedicated box at the end of its line; if the Storage
Group will be accessible to one or more other servers in a cluster,
write the hostnames of all servers and mark the Shared box.
Use the following LUN details worksheet to plan the individual
LUNs. Complete as many of these as needed for all LUNs in your
SAN.
LUN Details Worksheet
Storage system (complete this section once for each storage system)
Storage-system number or name:______
Storage-system installation type
❏ Unshared Direct  ❏ Shared-or-Clustered Direct  ❏ Shared Switched
SP FC-AL address ID (unshared only): SP A: _____  SP B: _____
SP FC-AL address ID. This does not apply to shared storage, in
which the switch determines the address of each device.
Use memory for caching. You can use SP memory for read/write
caching or RAID 3. (Using both caching and RAID 3 in the same
storage system is not recommended.) You can use different cache
settings for different times of day (for example, for user I/O
during the day, use more write cache; for sequential batch jobs at
night, use more read cache. You enable caching for specific LUNs
— allowing you to tailor your cache resources according to
priority. If you choose caching, check the box and continue to the
next step; for RAID 3, skip to the RAID Group ID entry.
Read cache size. If you want a read cache, it should generally be
about one third of the total available cache memory.
Write cache size. The write cache should be two thirds of the total
available. Some memory is required for system overhead, so you
cannot determine a precise figure at this time. For example, for
256 Mbytes of total memory, you might have 240 Mbytes
available, and you would specify 80 Mbytes for the read cache
and 160 Mbytes for the write cache.
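The arithmetic behind this example can be sketched as follows; this is a rough planning aid of our own, not an EMC utility, and it simply applies the one-third/two-thirds guideline:

def plan_cache_split(available_mb: int) -> tuple[int, int]:
    """Split the available SP cache memory: about one third for the
    read cache and the remaining two thirds for the write cache."""
    read_mb = available_mb // 3
    write_mb = available_mb - read_mb
    return read_mb, write_mb

# 240 Mbytes available (of 256 Mbytes total, after system overhead):
read_mb, write_mb = plan_cache_split(240)   # (80, 160)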
Cache page size. This applies to both read and write caches. It can
be 2, 4, 8, or 16 Kbytes. As a general guideline, we suggest
• For a general-purpose file server — 8 Kbytes
• For a database application — 2 or 4 Kbytes
The ideal cache page size depends on the operating system and
application.
Use memory for RAID 3. If you want to use the SP memory for
RAID 3, check the box.
RAID Group/LUN Entries
Complete a RAID Group/LUN entry for each LUN and hot spare.
LUN ID. The LUN ID is a hexadecimal number assigned when
you bind the disks into a LUN. By default, the ID of the first LUN
bound is 0, the second 1, and so on. Each LUN ID must be unique
within the storage system, regardless of its Storage Group or
RAID Group.
The maximum number of LUNs supported on one host-bus
adapter depends on the operating system.
RAID Group ID. This ID is a hexadecimal number assigned
when you create the RAID Group. By default, the number of the
first RAID Group in a storage system is 0, the second 1, and so on,
up to the maximum of 1F (31).
Size (RAID Group size). Enter the user-available capacity in
gigabytes (Gbytes) of the whole RAID Group. You can determine
the capacity as follows:
• A five-disk RAID 5 or RAID 3 Group of 36-Gbyte disks holds
144 Gbytes;
• An eight-disk RAID 1/0 Group of 36-Gbyte disks also holds
144 Gbytes;
• A RAID 1 mirrored pair of 36-Gbyte disks holds 36 Gbytes;
and
• An individual 36-Gbyte disk also holds 36 Gbytes.
Each disk in the RAID Group must have the same capacity;
otherwise, you will waste disk storage space.
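As a quick cross-check of these figures, the capacity rules can be written out in a few lines (an informal sketch; the type names are strings of our own choosing):

def raid_group_capacity_gb(raid_type: str, disk_gb: float, num_disks: int) -> float:
    """User-available capacity of a RAID Group, per the rules above."""
    if raid_type in ("RAID 5", "RAID 3"):
        return disk_gb * (num_disks - 1)   # one disk's worth of space holds parity
    if raid_type == "RAID 1/0":
        return disk_gb * num_disks / 2     # half the disks mirror the other half
    if raid_type in ("RAID 1", "individual disk"):
        return disk_gb                     # a mirrored pair holds one disk's worth
    raise ValueError(f"unknown RAID type: {raid_type}")

assert raid_group_capacity_gb("RAID 5", 36, 5) == 144
assert raid_group_capacity_gb("RAID 1/0", 36, 8) == 144
assert raid_group_capacity_gb("RAID 1", 36, 2) == 36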
LUN size. Enter the user-available capacity in gigabytes (Gbytes)
of the LUN. You can make this the same size as the RAID Group,
above. Or, for a RAID 5, RAID 1, RAID 1/0, or RAID 0 Group,
you can make the LUN smaller than the RAID Group. You might
do this if you wanted a RAID 5 Group with a large capacity and
wanted to place many smaller capacity LUNs on it; for example,
to specify a LUN for each user. However, having multiple LUNs
per RAID Group may adversely impact performance. If you want
multiple LUNs per RAID Group, then use a RAID Group/LUN
series of entries for each LUN.
Disk IDs. Enter the ID(s) of all disks that will make up the LUN
or hot spare. These are the same disk IDs you specified on the
previous worksheet. For example, for a RAID 5 Group in the DPE
(enclosure 0, disks 2 through 6), enter 0_2, 0_3, 0_4, 0_5, and 0_6.
SP. Specify the SP that will own the LUN: SP A or SP B. You can
let the management program automatically select the SP to
balance the workload between SPs; to do so, leave this entry
blank.
RAID type. Copy the RAID type from the previous worksheet.
For example, RAID 5 or hot spare. For a hot spare (not strictly
speaking a LUN at all), skip the rest of this LUN entry and
continue to the next LUN entry (if any).
If this is a RAID 3 Group, specify the amount of SP memory for
that group. To work efficiently, each RAID 3 Group needs at least
6 Mbytes of memory.
Caching. If you want to use caching (entry on page 3-14), you can
specify whether you want caching — read and write, read, or
write for this LUN. Generally, write caching improves
performance far more than read caching. The ability to specify
caching on a LUN basis provides additional flexibility, since you
can use caching for only the units that will benefit from it. Read
and write caching recommendations follow.
Table 3-1 Cache Recommendations for Different RAID Types

RAID 5              RAID 3       RAID 1       RAID 1/0     RAID 0       Individual Unit
Highly Recommended  Not allowed  Recommended  Recommended  Recommended  Recommended
Servers that can access this LUN. Enter the name of each server
(copied from the LUN and Storage Group worksheet).
Operating system information: Device name. Enter the
operating system device name, if this is important and if you
know it. Depending on your operating system, you may not be
able to complete this field now.
File system, partition, or drive. Write the name of the file system,
partition, or drive letter you will create on this LUN. This is the
same name you wrote on the application worksheet.
On the following line, write any pertinent notes; for example, the
file system mount- or graft-point directory pathname (from the
root directory). If this storage system’s chassis will be shared with
another server, and the other server is the primary owner of this
disk, write secondary. (As mentioned earlier, if the storage system
will be used by two servers, we suggest you complete one of
these worksheets for each server.)
What Next?
This chapter outlined the planning tasks for shared storage systems.
If you have completed the worksheets to your satisfaction, you are
ready to learn about the hardware needed for these systems as
explained in Chapter 5.
4
Planning LUNs and File Systems with Unshared Direct Storage
This chapter shows sample RAID and LUN configurations with
direct storage installations and then provides worksheets for
planning your own storage installation. Topics are
• Dual SPs and Paths to LUNs............................................................4-2
• Unshared Direct and Shared-or-Clustered Direct Storage...........4-2
• Planning Applications and LUNs ...................................................4-4
Dual SPs and Paths to LUNs
If a storage system has two SPs, there are two routes to its LUNs. If
the server has two adapters and the storage system has two SPs, you
can use Application Transparent Failover (ATF) software. ATF can
automatically switch to the other path, without disrupting
applications, if a device (such as a host-bus adapter, cable, or SP) fails.
Unshared Direct and Shared-or-Clustered Direct Storage
This section explains the direct (unswitched) options available for
connecting storage systems to servers. As needs change, you may
want to change a configuration. You can do so without changing your
LUN configuration or losing user data.
There are two types of installation:
•Unshared direct with one server is the simplest and least costly;
•Shared-or-clustered direct lets two clustered servers share
storage resources with high availability.
Sample Unshared Direct Installation

Figure 4-1 Unshared Direct Installation

[The figure shows one server with two adapters connected by Path 1 and Path 2 to SP A and SP B of a storage system whose LUNs include Database (RAID 5), Sys (RAID 1), Users (RAID 5), and Clients, mail (RAID 5).]
The storage system disk IDs and LUNs are as follows. The LUN
capacities shown assume 36-Gbyte disks.

If each disk holds 36 Gbytes, then the storage-system chassis provides
Server 1 with 256 Gbytes of disk storage, 220 Gbytes highly available;
it provides Server 2 with 216 Gbytes of storage, all highly available.
Each server has its own SP, which controls that server's LUNs; those
LUNs remain primary to that server. The LUNs are as follows.

Disk IDs   RAID type, storage type, capacity
1_0-1_7    RAID 5 (8 disks), Cust Accounts, 216 Gbytes
Planning Applications and LUNs
This section helps you plan your unshared (direct) storage use —
applications you want to run and the LUNs that will hold them. The
worksheets to help you do this include
•Application and file system planning worksheet - lets you outline
your storage needs.
•LUN planning worksheet - lets you decide on the disks that will
compose the LUNs.
•LUN details worksheet - lets you plan each LUN in detail.
Make as many copies of each blank worksheet as you need. You will
need this information later when you configure the storage system.
Sample file system, Storage Group, and LUN worksheets appear later
in this chapter.
Application and LUN Planning
Use the following worksheet to plan your file systems and RAID
types. For each application, write the application name, file system (if
any), RAID type, LUN ID (ascending integers, starting with 0), disk
space required, and finally the name of the servers and operating
systems that will use the LUN.
Application and LUN Planning Worksheet
The worksheet columns are: Application; File system (if any); RAID type of LUN; LUN ID (hex); Disk space required (Gbytes); and Server name and operating system.

A sample worksheet begins as follows:

Application      File system   RAID type   LUN ID   Disk space          Server name and
                 (if any)      of LUN      (hex)    required (Gbytes)   operating system
Mail 1                         RAID 5      0        72                  Server1, NT
Mail 2                         RAID 5      1        72                  Server1, NT
Database index                 RAID 1      2        18                  Server2, NT
Completing the Application and LUN Planning Worksheet
Application. Enter the application name or type.
File system, partition, or drive. Write the partition, file system,
logical volume, or drive letter (Windows only) name, if any.
With a system such as Windows NT, the LUNs are identified by drive
letter only. The letter does not help you identify the disk
configuration (such as RAID 5). We suggest that later, when you use
the operating system to create a partition on the unit, you use the disk
administrator software to assign a volume label that describes the
RAID configuration. For example, for drive T, assign the volume ID
RAID5_T. The volume label will then identify the drive letter.
RAID type of LUN is the RAID Group type you want for this
partition, file system, or logical volume. The features of RAID types
are explained in Chapter 2. For a RAID 5, RAID 1, RAID 1/0, and
RAID 0 Group, you can create one or more LUNs on the RAID
Group. For other RAID types, you can create only one LUN per RAID
Group.
LUN ID is a hexadecimal number assigned when you bind the disks
into a LUN. By default, the ID of the first LUN bound is 0, the second
1, and so on. Each LUN ID must be unique within the storage system,
regardless of its Storage Group or RAID Group.
The maximum number of LUNs supported on one host-bus adapter
depends on the operating system. Some systems allow only eight
LUNs (numbers 0 through 7). For an operating system with this
restriction, if you want a hot spare, assign the hot spare an ID above
7; for example, 8 or 9. The operating system never accesses a hot
spare, so the ID is irrelevant to it.
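A sketch of that numbering rule follows (the helper name is hypothetical; nothing here is an EMC tool):

def assign_ids(num_luns: int, num_hot_spares: int, os_lun_limit: int = 8):
    """Give LUNs the default ascending IDs 0, 1, 2, ...; on an operating
    system limited to eight LUNs (IDs 0-7), start hot-spare IDs above 7."""
    luns = list(range(num_luns))
    first_spare = max(num_luns, os_lun_limit)
    spares = list(range(first_spare, first_spare + num_hot_spares))
    return luns, spares

# Three LUNs and one hot spare on an eight-LUN operating system:
assert assign_ids(3, 1) == ([0, 1, 2], [8])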
Disk space required (Gbytes). Consider the largest amount of disk
space this application will need, then add a factor for growth.
Server hostname and operating system. Enter the server hostname
(or, if you don’t know the name, a short description that identifies the
server) and the operating system name, if you know it.
If this storage system will be used by two servers, provide a copy of
this worksheet to the other server. This is particularly important
where one server may take over the other’s LUNs. If a LUN will be
shared, on the Notes section of the LUN details worksheet, write
Primary to server-name or Secondary to server-name.
LUN Planning Worksheet

Use one of the following worksheets (Rackmount or Deskside) to
select the disks that will make up the LUNs. Depending on model, a
full-fibre rackmount storage system can include up to 100 disks,
numbered 0 through 99, left to right from the bottom up.
Again depending on model, a deskside storage system can hold ten,
20, or 30 disks.
LUN Planning Worksheet - Rackmount
Full-fibre storage system
11_0 11_1 11_2 11_3 11_4 11_5 11_6 11_7 11_8 11_9
10_0 10_1 10_2 10_3 10_4 10_5 10_6 10_7 10_8 10_9
9_0 9_1 9_2 9_3 9_4 9_5 9_6 9_7 9_8 9_9
8_0 8_1 8_2 8_3 8_4 8_5 8_6 8_7 8_8 8_9
7_0 7_1 7_2 7_3 7_4 7_5 7_6 7_7 7_8 7_9
6_0 6_1 6_2 6_3 6_4 6_5 6_6 6_7 6_8 6_9
5_0 5_1 5_2 5_3 5_4 5_5 5_6 5_7 5_8 5_9
4_0 4_1 4_2 4_3 4_4 4_5 4_6 4_7 4_8 4_9
3_0 3_1 3_2 3_3 3_4 3_5 3_6 3_7 3_8 3_9
2_0 2_1 2_2 2_3 2_4 2_5 2_6 2_7 2_8 2_9
1_0 1_1 1_2 1_3 1_4 1_5 1_6 1_7 1_8 1_9
0_0 0_1 0_2 0_3 0_4 0_5 0_6 0_7 0_8 0_9
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_________________________________________
LUN Planning Worksheet - Deskside
Full-fibre storage system
0_0 0_1 0_2 0_3 0_4 0_5 0_6 0_7 0_8 0_9
1_0 1_1 1_2 1_3 1_4 1_5 1_6 1_7 1_8 1_9
2_0 2_1 2_2 2_3 2_4 2_5 2_6 2_7 2_8 2_9
Storage system number_____
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_______________________________________
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_______________________________________
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_______________________________________
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_______________________________________
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_______________________________________
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_______________________________________
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_______________________________________
LUN number_______RAID type ___ Cap. (Gb) _____ Disk IDs_______________________________________
A sample LUN worksheet follows.
2_0 2_1 2_2 2_3 2_4 2_5 2_6 2_7 2_8 2_9
1_0 1_1 1_2 1_3 1_4 1_5 1_6 1_7 1_8 1_9
0_0 0_1 0_2 0_3 0_4 0_5 0_6 0_7 0_8 0_9

LUN 0 (RAID 5): disks 0_0, 0_1, 0_2, 0_3, 0_4
LUN 1 (RAID 5): disks 0_5, 0_6, 0_7, 0_8, 0_9
LUN 2 (RAID 1): disks 1_0, 1_1

LUN number: 0   RAID type: 5   Cap. (Gb): 144   Disk IDs: 0_0, 0_1, 0_2, 0_3, 0_4
LUN number: 1   RAID type: 5   Cap. (Gb): 144   Disk IDs: 0_5, 0_6, 0_7, 0_8, 0_9
LUN number: 2   RAID type: 1   Cap. (Gb): 36    Disk IDs: 1_0, 1_1
Completing the LUN Planning Worksheet
As shown, draw circles around the disks that will compose each
LUN, and within each circle specify the RAID type (for example,
RAID 5) and LUN ID. This is information you will use to bind the
disks into LUNs. For disk IDs, use the form shown. This form is
enclosure_diskID, where enclosure is the enclosure number (the bottom
one is 0, above it 1, and so on) and diskID is the disk position (left is 0,
next is 1, and so on).
None of the disks 0_0 through 0_8 may be used as a hot spare.
Next, complete as many of the LUN sections as needed for each
storage system. Copy the (blank) worksheet as needed for all LUNs in
each storage system. A storage system is any group of enclosures
connected to a DPE; a full-fibre system can include up to nine DAE
enclosures for a total of 100 disks.
LUN Details Worksheet
Use the following LUN details worksheet to plan the individual
LUNs. Complete as many of these as needed for all LUNs.
LUN Details Worksheet
Storage system (complete this section once for each storage system)
Storage-system number or name:______
Storage-system installation type
❏ Unshared Direct  ❏ Shared-or-Clustered Direct  ❏ Shared Switched
SP FC-AL address ID (unshared only): SP A: _____  SP B: _____
SP memory (Mbytes): SP A: ______  SP B: ______
❏ Use for caching   Read cache size: ___ MB   Write cache size: ___ MB   Cache page size: ___ KB

LUN ID: 1
RAID Group ID: 1   Size, GB: 144   LUN size, GB: 144   Disk IDs: 0_5, 0_6, 0_7, 0_8, 0_9   SP: ❏ A  ❏ B
RAID type: ☒ RAID 5  ❏ RAID 3 - Memory, MB: ___  ❏ RAID 1 mirrored pair  ❏ RAID 0  ❏ RAID 1/0  ❏ Individual disk  ❏ Hot spare
Caching: ☒ Read and write  ❏ Write  ❏ Read  ❏ None
Servers that can access this LUN: Server1
Operating system information: Device name: ___   File system, partition, or drive: T

LUN ID: 2
RAID Group ID: 2   Size, GB: 36   LUN size, GB: 18   Disk IDs: 1_0, 1_1   SP: ❏ A  ❏ B
RAID type: ❏ RAID 5  ❏ RAID 3 - Memory, MB: ___  ☒ RAID 1 mirrored pair  ❏ RAID 0  ❏ RAID 1/0  ❏ Individual disk  ❏ Hot spare
Caching: ☒ Read and write  ❏ Write  ❏ Read  ❏ None
Servers that can access this LUN: Server1
Operating system information: Device name: ___   File system, partition, or drive: U
Completing the LUN Details Worksheet
Complete the header portion of the worksheet for each storage
system as described below. Copy the blank worksheet as needed.
Sample completed LUN worksheets appear later.
Storage-System Entries
Storage-system configuration. Specify Unshared Direct (one server)
or Shared-or-Clustered Direct (two servers).
For any multiple-server configuration, each server will need cluster
software.
SP FC-AL address ID. For unshared storage, which uses FC-AL
addressing, each SP (and each other node) on a Fibre Channel loop
must have a unique FC-AL address ID. You set the SP FC-AL address
ID using switches on the back panel of the SP. The valid FC-AL
address ID range is a number 0 through 125 decimal, which is 0
through 7D hexadecimal. For any number above 9, we suggest
hexadecimal, since the switches are marked in hexadecimal.
If you have two FC-AL loops, we suggest a unique FC-AL address ID
for each SP on both loops.
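A one-line check (ours, for illustration) captures the valid range and the hexadecimal convention suggested above:

def fcal_address_ok(address_id: int) -> bool:
    """A valid FC-AL address ID is 0 through 125 decimal (0 through 7D hex)."""
    return 0 <= address_id <= 0x7D

# The back-panel switches are marked in hexadecimal:
for a in (0x00, 0x7D, 0x7E):
    print(f"{a:#04x}:", "valid" if fcal_address_ok(a) else "out of range")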
SP memory. Enter the amount of memory each SP has. If a storage
system has two SPs, they will generally have the same amount of
memory. You can allocate this memory to storage-system caching or
RAID 3 use.
Use memory for caching. You can use SP memory for read/write
caching or RAID 3. (Using both caching and RAID 3 in the same
storage system is not recommended.) You can use different cache
settings for different times of day (for example, for user I/O during
the day, use more write cache; for sequential batch jobs at night, use
more read cache). You enable caching for specific LUNs — allowing
you to tailor your cache resources according to priority. If you choose
caching, check the box and continue to the next step; for RAID 3, skip
to the RAID Group ID entry.
Read cache size. If you want a read cache, it should generally be
about one third of the total available cache memory.
Write cache size. The write cache should be two thirds of the total
available. Some memory is required for system overhead, so you
cannot determine a precise figure at this time. For example, for 256
Mbytes of total memory, you might have 240 Mbytes available, and
you would specify 80 Mbytes for the read cache and 160 Mbytes for
the write cache.
Cache page size. This applies to both read and write caches. It can be
2, 4, 8, or 16 Kbytes. As a general guideline, we suggest
• For a general-purpose file server — 8 Kbytes
• For a database application — 2 or 4 Kbytes
The ideal cache page size depends on the operating system and
application.
Use memory for RAID 3. If you want to use the SP memory for
RAID 3, check the box.
RAID Group/LUN Entries
Complete a RAID Group/LUN entry for each LUN and hot spare.
LUN ID. The LUN ID is a hexadecimal number assigned when you
bind the disks into a LUN. By default, the ID of the first LUN bound
is 0, the second 1, and so on. Each LUN ID must be unique within the
storage system, regardless of its Storage Group or RAID Group.
The maximum number of LUNs supported on one host-bus adapter
depends on the operating system. Some systems allow only eight
LUNs (numbers 0 through 7). For an operating system with this
restriction, if you want a hot spare, assign the hot spare an ID above
7; for example, 8 or 9. The operating system never accesses a hot
spare, so the ID is irrelevant to it.
RAID Group ID. This is a hexadecimal number assigned when you
create the RAID Group. By default, the number of the first RAID
Group in a storage system is 0, the second 1, and so on, up to the
maximum of 1F (31).
Size (RAID Group size). Enter the user-available capacity in gigabytes
(Gbytes) of the whole RAID Group. You can determine the capacity
as follows:
•RAID 5 or RAID 3 Group: disk-size * (number-of-disks - 1)
•RAID 1/0 Group: disk-size * number-of-disks / 2
•RAID 1 mirrored pair or individual disk: disk-size
For example,
•A five-disk RAID 5 or RAID 3 Group of 36-Gbyte disks holds
144 Gbytes;
•An eight-disk RAID 1/0 Group of 36-Gbyte disks also holds
144 Gbytes;
•A RAID 1 mirrored pair of 36-Gbyte disks holds 36 Gbytes; and
•An individual 36-Gbyte disk also holds 36 Gbytes.
Each disk in the RAID Group must have the same capacity;
otherwise, you will waste disk storage space.
LUN Size. Enter the user-available capacity in gigabytes (Gbytes) of
the LUN. You can make this the same size as the RAID Group, above.
Or, for a RAID 5, RAID 1, RAID 1/0, or RAID 0 Group, you can make
the LUN smaller than the RAID Group. You might do this if you
wanted a RAID 5 Group with a large capacity and wanted to place
many smaller capacity LUNs on it; for example, to specify a LUN for
each user. However, having multiple LUNs per RAID Group may
adversely impact performance. If you want multiple LUNs per RAID
Group, then use a RAID Group/LUN series of entries for each LUN.
Disk IDs. Enter the ID(s) of all disks that will make up the LUN or
hot spare. These are the same disk IDs you specified on the previous
worksheet. For example, for a RAID-5 Group in the DPE (enclosure 0,
disks 2 through 6), enter 0_2, 0_3, 0_4, 0_5, and 0_6.
SP. Specify the SP that will own the LUN: SP A or SP B. You can let
the management program automatically select the SP to balance the
workload between SPs; to do so, leave this entry blank.
RAID type. Copy the RAID type from the previous worksheet. For
example, RAID 5 or hot spare. For a hot spare (not strictly speaking a
LUN at all), skip the rest of this LUN entry and continue to the next
LUN entry (if any).
If this is a RAID 3 Group, specify the amount of SP memory for that
group. To work efficiently, each RAID 3 Group needs at least
6 Mbytes of memory.
Caching. If you want to use caching (entry on page 4-12), you can
specify whether you want caching — read and write, read, or write
for this LUN. Generally, write caching improves performance far
more than read caching. The ability to specify caching on a LUN basis
provides additional flexibility, since you can use caching for only the
units that will benefit from it. Read and write caching
recommendations follow.
Table 4-1 Cache Recommendations for Different RAID Types

RAID 5              RAID 3       RAID 1       RAID 1/0     RAID 0       Individual Unit
Highly Recommended  Not allowed  Recommended  Recommended  Recommended  Recommended
Servers that can access this LUN. Enter the name of each server that
will be able to use the LUN. Normally, you need to restrict access by
establishing SP ownership of LUNs when you bind them.
Operating system information: Device name. Enter the operating
system device name, if this is important and if you know it.
Depending on your operating system, you may not be able to
complete this field now.
File system, partition, or drive. Write the name of the file system,
partition, or drive letter you will create on this LUN. This is the same
name you wrote on the application worksheet.
On the following line, write any pertinent notes; for example, the file
system mount- or graft-point directory pathname (from the root
directory). If this storage system’s chassis will be shared with another
server, and the other server is the primary owner of this disk, write
secondary. (If the storage system will be used by two servers, we
suggest you complete one of these worksheets for each server.)
What Next?
This chapter outlined the planning tasks for unshared storage
systems. If you have completed the worksheets to your satisfaction,
you are ready to learn about the hardware needed for these systems
as explained in Chapter 5.
5
Storage System Hardware
This chapter describes the storage-system hardware components.
Topics are
• Hardware for Shared Storage...........................................................5-3
• Hardware for Unshared Storage......................................................5-6
• Planning Your Hardware Components........................................ 5-11
• Hardware Data Sheets.....................................................................5-14
• Cabinets for Rackmount Enclosures .............................................5-20
• Cable and Configuration Guidelines ............................................5-21
The storage systems attach to the server and the interconnect
components described in Chapter 1.
Figure 5-1 Shared and Unshared Storage

[The figure shows the server, interconnect, and storage components for three configurations: Unshared Direct (one server connected by two FC loops), Shared-or-Clustered Direct (two servers), and Shared Switched (multiple servers connected through two switch fabrics to the disk-array storage systems).]
Hardware for Shared Storage
The primary hardware component for shared storage is a ten-slot
Disk-array Processor Enclosure (DPE) with two storage processors
(SP). The DPE can support up to nine separate 10-slot enclosures
called Disk Array Enclosures (DAEs) for a total of 100 disks. Shared
storage requires two SPs and the Access Logix software option.
A DPE with a DAE is available as a deskside system, but with a
capacity of 20 disks this cannot provide the expandability and total
storage capacity needed for a SAN (storage area network). So this
section does not cover the deskside version.
Storage Hardware — Rackmount DPE-Based Storage Systems
The DPE rackmount enclosure is a sheet-metal housing with a front
door, a midplane, and slots for the storage processors (SPs), link
control cards (LCCs), disk modules, power supplies, and fan packs.
All components can be replaced under power. The DPE rackmount
model looks like the following figure.
Figure 5-2 DPE Storage-System Components – Rackmount Model

[The figure shows the front of the rackmount DPE (link control cards, disk modules with the front door removed for clarity, power supplies, and drive fan module) and the back (storage processors with FC ports and GBICs).]
A separate standby power supply (SPS) is required to support write
caching. All the shared storage components — rackmount DPE,
DAEs, SPSs, and cabinet — are shown in the following figure.
Figure 5-3 Rackmount System with DPE and DAEs

[The figure shows front and rear views of a cabinet holding several DAEs, a DPE with its SPs, and standby power supplies (SPSs).]
The disks — available in differing capacities — fit into slots in the
enclosure. Each module has a unique ID that you use when binding
or monitoring its operation. The ID is derived from the enclosure
address (always 0 for the DPE, settable on a DAE) and the disk
module slot numbers.
Disk Modules and Module IDs — Rackmount DPE-Based System

[The figure shows the disk modules and their slot numbers, 0 through 9 and 10 through 19.]

Storage Processor (SP)
The SP provides the intelligence of the storage system. Using its own
operating system (called Core Software), the SP processes the data
written to or read from the disk modules, and monitors the modules
themselves. An SP consists of a printed-circuit board with memory
modules (DIMMs), and status lights.
For high availability, a storage system can support a second SP. The
second SP provides a second route to a storage system and also lets
the storage system use write caching for enhanced write
performance. Two SPs are required for shared storage.
Figure 5-4 Shared Storage Systems

[The figure shows servers, each with two adapters, connected through two switch fabrics to SP A and SP B of the storage systems over Path 1 and Path 2.]

See Chapter 3 for more examples of shared storage.
Hardware for Unshared Storage
Unshared storage systems are less costly and less complex than
shared storage systems. They offer many shared storage system
features; for example, you can use multiple unshared storage systems
with multiple servers. However, with multiple servers, unshared
storage offers less flexibility and security than shared storage, since
any user with write access to privileged server files can enable access
to any storage system.
Types of Storage System for Unshared Storage
For unshared storage, there are four types of storage system, each
using the FC-AL protocol. Each type is available in a rackmount or
deskside (office) version.
•Disk-array Processor Enclosure (DPE) storage systems. A DPE is
a 10-slot enclosure with hardware RAID features provided by one
or two storage processors (SPs). In addition to its own disks, a
DPE can support up to 110 additional disks in 10-slot Disk Array
Enclosures (DAEs) for a total of 120 disks. This is the same kind of
storage system used for shared storage, but it uses a different
storage processor (SP).
•Intelligent Disk Array Enclosure (iDAE). An iDAE, like a DPE,
has SPs and thus all the features of a DPE, but is thinner and has a
limit of 30 disks.
•Disk Array Enclosure (DAE). A DAE does not have SPs. A DAE
can connect to a DPE or an iDAE, or you can use it without SPs. A
DAE used without an SP does not inherently include RAID, but
can operate as a RAID device using software running on the
server system. Such a DAE is also known as Just a Box of Disks, or
JBOD.
Figure 5-5 Storage System Types for Unshared Storage

[The figure shows the disk-array processor enclosure (DPE) systems: a deskside DPE with DAE and a rackmount DPE (one enclosure, supporting up to 9 DAEs); and the intelligent disk-array enclosure (iDAE) systems: 30-slot deskside, rackmount, and 10-slot deskside.]
The following figure shows some components of a deskside DPE.
Components for rackmount types are similar.
Figure 5-6 DPE Components - Deskside Model

[The figure shows a deskside DPE with DAE. Front: front doors covering the disk modules, and the SP fan cover covering the SP fan pack. Back (fans and cables omitted for clarity): DPE and DAE link control cards (LCCs), DPE and DAE power supplies, power distribution units, storage processors (SPs), FC ports, and SPS units.]
Disks
The disks — available in differing capacities — fit into slots in the
enclosure. Each disk has a unique ID that you use when binding it or
monitoring its operation. The ID is the enclosure address (always 0
for the DPE, settable on a DAE) and the disk slot number.
Figure 5-7 Disks and Disk IDs

[The figure shows the disk slots and their IDs for the deskside and rackmount models.]

Storage Processor (SP)
The SP provides the intelligence of the storage system. Using its own
operating system (called Core Software), the SP processes the data
written to or read from the disk modules, and monitors the modules
themselves. An SP consists of a printed-circuit board with memory
modules (DIMMs), status lights, and switches for setting FC-AL
addresses.
For high availability, a storage system can support a second SP. A
second SP provides a second route to a storage system, so both SPs
can connect to the same server or two different servers, as follows.
Figure 5-8 Storage System with Two SPs Connected to the Same Server

[The figure shows one server whose two adapters connect by cables over FC loop 1 and FC loop 2 to SP A and SP B of one storage system (a DPE with DAEs).]

Figure 5-9 Storage System with Two SPs Connected to Different Servers

[The figure shows a highly available cluster: Server 1 and Server 2, each with an adapter on FC loop 1 and FC loop 2, connect to SP A and SP B of the same storage system (a DPE with DAEs).]
Either SP can control any LUN in the storage system, but only one SP
at a time can control a LUN. If one SP cannot access a LUN it
controlled (because of a failure), you can transfer control of the LUN
to the other SP, manually or via software.
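The ownership rule can be pictured with a minimal model (illustrative only; real transfers are done with the storage-management software):

class Lun:
    """A LUN is controlled by exactly one SP at a time."""
    def __init__(self, lun_id: int, owner: str = "SP A"):
        self.lun_id = lun_id
        self.owner = owner

    def transfer(self) -> None:
        """Move control to the other SP, e.g., after the owning SP fails."""
        self.owner = "SP B" if self.owner == "SP A" else "SP A"

lun = Lun(0)
lun.transfer()            # SP A failed; SP B now controls LUN 0
assert lun.owner == "SP B"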
Storage-system caching provides significant performance
enhancement. Read caching is available with one or two SPs.
Mirrored write caching, particularly helpful with RAID 5 I/O,
requires two SPs (to mirror one another, for cache integrity) and a
Standby Power Supply (SPS) to enable the SPs to write their cached
data to disk if power fails.
Planning Your Hardware Components
This section helps you plan the hardware components — adapters,
switches or hubs, cables, storage systems, and site requirements —
for each server in your installation.
For shared storage, you must use a DPE rackmount system with two
SPs and high-availability options. We assume you have some idea of
how many servers, adapters, switches or hubs, storage systems, and
SPs you want. Skip to the component data sheets following.
For unshared storage, you can use one or two SPs and you can choose
among storage system configurations. This section assumes you have
examined the configurations shown starting on page 4-2 and have
some idea of how many servers, adapters, switches or hubs, storage
systems, and SPs you want. It ends with blank worksheets and
sample worksheets.
Configuration Tradeoffs - Shared Storage
The hardware configuration required for shared storage is very
specific: two host-bus adapters in each attached server, two Fibre
Channel switches, and two SPs per storage system. Choices you can
make with shared storage systems include the number of storage
systems (up to 15 are allowed), and for each storage system the cache
configuration (maximum or minimum), and one or two standby
power supplies (SPS units).
The number of storage systems in the SAN depends on the servers’
processing demands. For each system, the larger cache improves
write performance for very large processing loads; the redundant SPS
lets write caching continue if one SPS fails.
Configuration Tradeoffs - Unshared Storage
For each storage-system enclosure, you have two important areas of
choice: rackmount or deskside model, and high-availability options.
Generally, rackmount systems are more versatile; you can add
capacity in a cabinet without consuming more floor space. However,
rackmount systems require additional hardware, such as cabinets
and mounting rails, and someone must connect power cords and
cables within them. For large storage requirements, rackmount
systems may be more economical than deskside systems. Deskside
systems are more convenient; they ship with all internal cabling in
place and require only ac power and connection to the servers.
For high availability, there are many variations. The most important
high-availability features are a second SP/LCC pair, second power
supply, and standby power supply (SPS). The second SP/LCC and
SPS let you use write caching to enhance performance; the second SP
provides continuous access to storage-system disks if one SP or LCC
fails. Another high-availability option is a redundant SPS.
Yet another option, for a deskside system, is a second power
distribution unit (PDU), which lets you route ac power from an
independent source. Used this way, the second PDU protects against
failure in one of the two ac power sources. With a rackmount system,
you can acquire a cabinet with one or two ac inlet cords. The second
inlet cord, connected to a second ac power source, provides the same
advantage for all storage systems in the cabinet as the second PDU in
the deskside storage system.
For deskside systems, the optional high-availability hardware fits
into the deskside cabinet. Deskside high-availability options are as
follows.
For rackmount systems, the standby power supply or supplies (SPS
or BBU) must be placed in a tray directly beneath the storage system.
Typically, any hubs in the cabinet mount at the top or bottom of the
cabinet. Rackmount options are as follows.
Hardware Data Sheets

The hardware data sheets shown in this section provide the plant
requirements, including dimensions (footprint), weight, power
requirements, and cooling needs, for DPE, iDAE, DAE, and 30-slot
SCSI disk systems. Sections on cabinets and cables follow the data
sheets.
DPE Data Sheet

For shared storage, a rackmount DPE and one or more rackmount
DAEs are required. For unshared storage, you can use a rackmount or
deskside DPE and DAE(s). The DPE dimensions and requirements
are shown below.

DPE Dimensions and Requirements

Deskside model: height 68 cm (26.8 in); width 52.1 cm (20.6 in); depth 74.7 cm (30 in)
Rackmount model: height 28.6 cm (11.3 in), 6.5 U; width 44.5 cm (17.5 in); depth 70 cm (27.6 in)
SPS mounting tray: height 4.44 cm (1.75 in), 1 U; depth 54.1 cm (21.3 in)

Weight (without packaging)               Deskside          Rackmount
Maximum (max disks, SPs, LCCs, PSs):     144 kg (316 lb)   52 kg (115 lb)
With 2 SPSs:                             165 kg (364 lb)   74 kg (163 lb)
Power requirements
Voltage rating: 100 V ac to 240 V ac –10%/+15%, single-phase, 47 Hz to 63 Hz; power supplies are auto-ranging
Current draw: At 100 V ac input – Deskside DPE/DAE: 12.0 A; Rackmount DPE: 8.0 A max; SPS: 1.0 A max per unit during charge
Power consumption: Deskside DPE/DAE: 1200 VA; Rackmount DPE: 800 VA max

Power cables (single or dual)
ac inlet connector: IEC 320-C14 power inlet
Deskside power cord: USA: 1.8 m (6.0 ft); NEMA 6-15P plug
Outside USA: Specific to country

Operating environment
Temperature: 10°C to 40°C (50°F to 104°F)
Relative humidity: Noncondensing, 20% to 80%
Altitude: 40°C to 2,438 m (8,000 ft); 37°C to 3,050 m (10,000 ft)
Heat dissipation (max): Deskside DPE/DAE: 3931x10³ J/hr (2730 BTU/hr) max estimated; Rackmount DPE: 2520x10³ J/hr (2390 BTU/hr) max estimated
Air flow: Front to back

Service clearances
Front: 30.3 cm (1 ft)
Back: 60.6 cm (2 ft)
iDAE Data Sheet
You can use a rackmount or deskside iDAE for unshared
storage. The iDAE dimensions and requirements are shown in the
following figure.