EMC believes the information in this publication is accurate as of its publication date. However, the
information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS
PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR
FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication require an applicable
software license.
Trademark Information
EMC2, EMC, Navisphere, CLARiiON, MOSAIC:2000, and Symmetrix are registered trademarks and EMC Enterprise Storage, The Enterprise Storage
Company, The EMC Effect, Connectrix, EDM, SDMS, SRDF, Timefinder, PowerPath, InfoMover, FarPoint, EMC Enterprise Storage Network, EMC
Enterprise Storage Specialist, EMC Storage Logix, Universal Data Tone, E-Infostructure, Access Logix, Celerra, SnapView, and MirrorView are
trademarks of EMC Corporation.
All other trademarks mentioned herein are the property of their respective owners.
Preface

This planning guide provides an overview of Fibre Channel disk-array
storage-system models and offers useful background information and
worksheets to help you plan.

Audience for the Manual
Please read this guide

• if you are considering purchase of an EMC FC-series (Fibre Channel)
  FC4700 disk-array storage system and want to understand its features; or
• before you plan the installation of a storage system.

You should be familiar with the host servers that will use the storage
systems and with the operating systems of the servers. After reading
this guide, you will be able to

• determine the best storage-system components for your installation
• determine your site requirements
• configure storage systems correctly
Organization of the Manual

Chapter 1    Provides background information about Fibre Channel
             features and explains the major types of storage.
Chapter 2    Describes the RAID Groups and the different ways they
             store data.
Chapter 3    Describes the optional EMC MirrorView™ remote mirroring
             software.
Chapter 4    Describes the optional EMC SnapView™ snapshot copy
             software.
Chapter 5    Helps you plan your storage system software and LUNs.
Chapter 6    Explains the hardware components of storage systems.
Chapter 7    Describes storage-system management utilities.

Conventions Used in This Manual

A note presents information that is important, but not hazard-related.

Where to Get Help

Obtain technical support by calling your local sales office. If you are
located outside the USA, call the nearest EMC office for technical
assistance.

For service, call:

United States:  (800) 782-4362 (SVC-4EMC)
Canada:         (800) 543-4782 (543-4SVC)
Worldwide:      (508) 497-7901

and ask for Customer Service.

Your Comments

Your suggestions will help us continue to improve the accuracy,
organization, and overall quality of the user publications. Please
e-mail us at techpub_comments@emc.com to let us know your opinion of
this manual or to report any errors.
1   About Fibre Channel FC4700 Storage Systems and Storage Networks
This chapter introduces Fibre Channel FC4700 disk-array storage
systems and storage area networks (SANs). Major sections are

• Introducing Fibre Channel Storage Systems
• Fibre Channel Background
• Fibre Channel Storage Components
• Types of Storage-System Installations
• About Switched Shared Storage and SANs (Storage Area Networks)
Introducing Fibre Channel Storage Systems
EMC Fibre Channel FC4700 disk-array storage systems provide
terabytes of disk storage capacity, high transfer rates, flexible
configurations, and highly available data at low cost.
Figure 1-1  Cutaway View of FC4700 Storage System
A storage-system package includes a host bus adapter driver kit with
hardware and software to connect with a server, storage management
software, Fibre Channel interconnect hardware, and one or more
storage systems.
Fibre Channel Background
Fibre Channel is a high-performance serial protocol that allows
transmission of both network and I/O channel data. It is a low-level
protocol, independent of data types, and supports such formats as
SCSI and IP.
The Fibre Channel standard supports several physical topologies,
including switched fabric, point-to-point, and arbitrated loop (FC-AL).
The topologies used by the Fibre Channel storage systems described
in this manual are switched fabric and FC-AL.
A switch fabric is a set of point-to-point connections between nodes,
the connection being made through one or more Fibre Channel
switches. Each node may have its own unique address, but the path
between nodes is governed by a switch. The nodes are connected by
optical cable.
A Fibre Channel arbitrated loop is a circuit consisting of nodes. Each
node has a unique address, called a Fibre Channel arbitrated loop
address. The nodes are connected by optical cables. An optical cable
can transmit data over great distances for connections that span
entire enterprises and can support remote disaster recovery systems.
Each connected device in a switched fabric or arbitrated loop is a
server adapter (initiator) or a target (storage system). The switches
are not considered nodes.
Figure 1-2  Nodes - Initiator and Target

The figure shows a server adapter (initiator) node connected to a
storage system (target) node.
Fibre Channel Storage Components
A Fibre Channel storage system has three main components:

• Server component (host bus adapter driver package with adapter and
  software)
• Interconnect components (cables based on Fibre Channel standards, and
  switches)
• Storage component (storage system with storage processors — SPs — and
  power supply and cooling hardware)
Server Component (Host Bus Adapter Driver Package with Software)
The host bus adapter driver package includes a host bus adapter and
support software. The adapter is a printed-circuit board that slides
into an I/O slot in the server’s cabinet. It transfers data between
server memory and one or more disk-array storage systems over
Fibre Channel — as controlled by the support software (adapter
driver).
One or more servers can use a storage system. For high availability — in
the event of an adapter failure — a server can have two adapters.
Depending on your server type, you may have a choice of adapters.
The adapter is designed for a specific kind of bus; for example, a PCI
bus or SBUS. Any adapter you choose must support optical cable.
Interconnect Components

The interconnect components include the optical cables between
components and any Fibre Channel switches.
The maximum length of optical cable between server and switch or
storage system is 500 meters (1,640 feet) for 62.5-micron multimode
cable or 10 kilometers (6.2 miles) for 9-micron single-mode cable.
With extenders, optical cable can span up to 40 kilometers (25 miles)
or more. This ability to span great distances is a major advantage of
optical cable.

Details on cable lengths and rules appear later in this manual.

Fibre Channel Switches
A Fibre Channel switch, which is required for switched shared
storage (a storage area network, SAN), connects all the nodes cabled
to it using a fabric topology. A switch adds serviceability and
scalability to any installation; it allows on-line insertion and removal
of any device on the fabric and maintains integrity if any connected
device stops participating. A switch also provides
server-to-storage-system access control. A switch provides
point-to-point connections between its ports.
You can cascade switches (connect one switch port to another switch)
for additional port connections.
Figure 1-3  Switch Topology (Port to Port)

To illustrate the point-to-point quality of a switch, this figure shows
just one adapter per server and one switch. Normally, such installations
include two adapters per server and two switches.
Switch Zoning
Switch zoning lets an administrator define paths between connected
nodes based on the node’s unique World Wide Name. Each zone
encloses one or more server adapters and one or more SPs. A switch
can have as many zones as it has ports.
The current connection limits are four SP ports to one adapter port
(the SP ports fan in to the adapter) and 15 adapters to one SP (the SP
fans out to the adapters). There are several zone types, including the
single-initiator type, which is the recommended type for
FC4700-series systems.
In the following figure, Server 1 has access to one SP (SP A) in storage
systems 1 and 2; it has no access to any other SP.
Figure 1-4  A Switch Zone

To illustrate switch zoning, this figure shows just one HBA per server
and one switch. Normally, such installations will include two HBAs per
server and two switches.
If you do not define a zone in a switch, all adapter ports connected to
the switch can communicate with all SP ports connected to the
switch. However, access to an SP does not necessarily provide access
to the SP’s storage; access to storage is governed by the Storage
Groups you create (defined later).
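When you sketch out a zoning plan, the fan-in and fan-out limits noted
above (four SP ports to one adapter port, 15 adapters to one SP) are
easy to check with a short script. The following Python sketch is a
hypothetical planning aid only, not an EMC or switch-vendor utility;
the zone and port names are invented, and SP port names are assumed to
encode the SP before a dash.

    # Hypothetical zoning sanity check based on the limits described above:
    # at most 4 SP ports may fan in to one adapter (HBA) port, and at most
    # 15 adapters may fan out from one SP. Names and structure are illustrative.
    from collections import defaultdict

    def check_zoning(zones):
        """zones: list of (hba_ports, sp_ports) tuples, each a set of names.
        SP port names are assumed to look like 'SPA-0'; the SP is the part
        before the dash."""
        sp_ports_per_hba = defaultdict(set)   # HBA port -> SP ports it can reach
        hbas_per_sp = defaultdict(set)        # SP -> HBA ports that can reach it

        for hba_ports, sp_ports in zones:
            for hba in hba_ports:
                sp_ports_per_hba[hba].update(sp_ports)
            for sp_port in sp_ports:
                sp = sp_port.split("-")[0]    # e.g. 'SPA-0' -> 'SPA'
                hbas_per_sp[sp].update(hba_ports)

        problems = []
        for hba, sps in sp_ports_per_hba.items():
            if len(sps) > 4:
                problems.append(f"{hba} sees {len(sps)} SP ports (limit 4)")
        for sp, hbas in hbas_per_sp.items():
            if len(hbas) > 15:
                problems.append(f"{sp} is seen by {len(hbas)} adapters (limit 15)")
        return problems

    # Example: two single-initiator zones for one server with two HBAs.
    zones = [({"Server1-HBA0"}, {"SPA-0", "SPB-0"}),
             ({"Server1-HBA1"}, {"SPA-1", "SPB-1"})]
    print(check_zoning(zones) or "zoning within limits")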
Fibre Channel switches are available with 16 or 8 ports. They are
compact units that fit in 2 U (3.5 inches) for the 16-port switch or
1 U (1.75 inches) for the 8-port switch. They are available to fit into
a rackmount cabinet or as small deskside enclosures.
Figure 1-5  16-Port Switch, Back View
If your servers and storage systems will be far apart, you can place
the switches closer to the servers or the storage systems, as
convenient.
A switch is technically a repeater, not a node, in a Fibre Channel loop.
However, it is bound by the same cabling distance rules as a node.
Storage Component (Storage Systems, SPs, and Other Hardware)
EMC FC-series disk-array storage systems, with their storage
processors, power supplies, and cooling hardware, form the storage
component of a Fibre Channel system. The controlling unit, a Model
FC4700 disk-array processor enclosure (DPE), is shown in the following
figure.
Figure 1-6  Model 4700 DPE
DPE hardware details appear in a later chapter.
Types of Storage-System Installations
You can use a storage system in any of several types of installation:
• Unshared direct with one server is the simplest and least costly.
• Shared-or-clustered direct, with a limit of two servers, lets two
  servers share storage resources with high availability.
• Shared switched, with two switch fabrics, lets two to 15 servers
  share the resources of several storage systems in a storage area
  network (SAN). Shared switched installations are available in
  high-availability versions (two HBAs per server) or with one HBA per
  server. Shared switched storage systems can have multiple paths to
  each SP, providing multipath I/O for dynamic load sharing and greater
  throughput.
Figure 1-7  Types of Storage-System Installation

The figure shows the three installation types: Unshared Direct (one or
two servers), Shared or Clustered Direct (two servers), and Shared
Switched (multiple servers, multiple paths to SPs).
Storage systems for any shared installation require EMC Access Logix™
software to control server access to the storage-system LUNs. The
shared-or-clustered direct installation can be either shared (that is,
it uses Access Logix to control LUN access) or clustered (without
Access Logix, but with operating system cluster software controlling
LUN access), depending on the hardware model. FC4700 storage systems
are shared; they include Access Logix, which means the servers need not
use cluster software to control LUN access.
About Switched Shared Storage and SANs (Storage Area
Networks)
This section explains the features that let multiple servers share
disk-array storage systems on a SAN (storage area network).
A SAN is one or more storage devices connected to servers through
Fibre Channel switches to provide a central location for disk storage.
Centralizing disk storage among multiple servers has many advantages,
including

• highly available data
• flexible association between servers and storage capacity
• centralized management for fast, effective response to users’ data
  storage needs
• easier file backup and recovery
An EMC SAN is based on shared storage; that is, the SAN requires
EMC Access Logix to provide flexible access control to
storage-system LUNs. Within the SAN, a network connection to each
SP in the storage system lets you configure and manage it.
Figure 1-8  Components of a SAN

The figure shows servers, each with two adapters, connected over two
paths through two switch fabrics to SP A and SP B of the storage
systems.
Fibre Channel switches can control data access to storage systems
through the use of switch zoning. With zoning, an administrator can
specify groups (called zones) of Fibre Channel devices (such as
host-bus adapters, specified by worldwide name), and SPs between
which the switch fabric will allow communication.
However, switch zoning cannot selectively control data access to
LUNs in a storage system, because each SP appears as a single Fibre
Channel device to the switch fabric. So switch zoning can prevent or
allow communication with an SP, but not with specific disks or LUNs
attached to an SP. For access control with LUNs, a different solution is
required: Storage Groups.
Storage Groups
A Storage Group is one or more LUNs (logical units) within a storage
system that is reserved for one or more servers and is inaccessible to
other servers. Storage Groups are the central component of shared
storage; storage systems that are unshared do not use Storage
Groups.
When you configure shared storage, you specify servers and the
Storage Group(s) each server can read from and/or write to. The Base
Software running in each storage system enforces the
server-to-Storage Group permissions.
A Storage Group can be accessed by more than one server if all the
servers run cluster software. The cluster software enforces orderly
access to the shared Storage Group LUNs.
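Conceptually, the rule is simple: a server can reach a LUN only if the
LUN belongs to a Storage Group assigned to that server, and a group may
be assigned to several servers only when those servers are clustered.
The following Python sketch is a toy illustration of that rule, not the
Base Software or any EMC management interface; all names are
hypothetical.

    # Toy model of the Storage Group access rule described above. This is an
    # illustration only, not the Base Software or a Navisphere interface;
    # all names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class StorageGroup:
        name: str
        luns: frozenset          # LUN numbers reserved for this group
        servers: frozenset       # servers allowed to use the group
        clustered: bool = False  # True if the servers run cluster software

        def can_access(self, server, lun):
            """A server may read/write a LUN only if the LUN is in its Storage
            Group; several servers may share a group only when clustered."""
            if server not in self.servers or lun not in self.luns:
                return False
            return len(self.servers) == 1 or self.clustered

    # One group dedicated to a database server, one shared by a cluster.
    db_sg = StorageGroup("DatabaseSG", frozenset({3, 4}), frozenset({"db"}))
    cluster_sg = StorageGroup("ClusterSG", frozenset({0, 1, 2}),
                              frozenset({"mail", "file"}), clustered=True)

    print(cluster_sg.can_access("mail", 1))  # True - clustered servers share the group
    print(db_sg.can_access("mail", 3))       # False - LUN 3 is reserved for the db server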
The following figure shows a simple shared storage configuration
consisting of one storage system with two Storage Groups. One
Storage Group serves a cluster of two servers running the same
operating system, and the other Storage Group serves a UNIX®
database server. Each server is configured with two independent
paths to its data, including separate host bus adapters, switches, and
SPs, so there is no single point of failure for access to its data.
Figure 1-9  Sample Shared Storage Configuration

The figure shows a highly available cluster of a mail server and a file
server (both running operating system A) sharing the Cluster Storage
Group, and a database server (running operating system B) using the
Database Server Storage Group. Each server connects through two switch
fabrics to SP A and SP B of a physical storage system with up to 100
disks per system.
Access Control with Shared Storage
Access control permits or restricts a server’s access to shared storage.
Configuration access, the ability to configure storage systems, is
governed by username and password access to a configuration file on
each server.
Data access, the ability to read and write information to
storage-system LUNs, is provided by Storage Groups. During
storage-system configuration, using a management utility, the system
administrator associates a server with one or more LUNs. The
associated LUNs compose a Storage Group.
Each server sees its Storage Group as if it were an entire storage
system, and never sees the other LUNs on the storage system.
Therefore, it cannot access or modify data on LUNs that are not part
of its Storage Group. However, you can define a Storage Group to be
accessible by more than one server, if, as shown above in Figure 1-9,
the servers run cluster software.
The following figure shows access control through Storage Groups.
Each server has exclusive read and write access to its designated
Storage Group.
Figure 1-10  Data Access Control with Shared Storage

The figure shows four servers connected through two switch fabrics to
SP A and SP B of the storage system. The Admin Server (operating system
A, adapters 00 and 01) has dedicated data access to the Admin Storage
Group; the Inventory Server (operating system A, adapters 02 and 03)
has dedicated data access to the Inventory Storage Group; and the
E-mail Server and Web Server (operating system B, adapters 04 through
07) share data access to the E-mail and Web Server Storage Group.
Storage-System Hardware
A Fibre Channel storage system is based on a disk-array processor
enclosure (DPE).
A DPE is a 10-slot enclosure with hardware RAID features provided
by one or two storage processors (SPs). For high availability, two SPs
are required. In addition to its own disks, each DPE can support up to
nine 10-slot Disk Array Enclosures (DAEs) for a total of 100 disks per
storage system.
Figure 1-11  Storage System with DPE and Three DAEs

The figure shows a DPE, three DAEs, and a standby power supply (SPS).
What Next?

For information about RAID types and RAID tradeoffs, continue to the
next chapter.

For information on the MirrorView™ or SnapView™ software options, go to
Chapter 3 or 4.

To plan LUNs and file systems, skip to Chapter 5. For details on the
storage-system hardware, skip to Chapter 6.
2   RAID Types and Tradeoffs
This chapter explains RAID types you can choose for your storage-system LUNs. If you already know about RAID types and know
which ones you want, you can skip this background information and
go to the planning chapter (Chapter 5). Topics are
• RAID Benefits and Tradeoffs..........................................................2-12
• Guidelines for RAID Groups..........................................................2-17
• Sample Applications for RAID Types ...........................................2-19
Introducing RAID
The storage system uses RAID (redundant array of independent
disks) technology. RAID technology groups separate disks into one
logical unit (LUN) to improve reliability and/or performance.
The storage system supports five RAID levels and two other disk
configurations, the individual unit and the hot spare (global spare).
You group the disks into one RAID Group by binding them using a
storage-system management utility.
Four of the RAID types use disk striping and two use mirroring.

Disk Striping
Using disk stripes, the storage-system hardware can read from and
write to multiple disks simultaneously and independently. By
allowing several read/write heads to work on the same task at once,
disk striping can enhance performance. The amount of information
read from or written to each disk makes up the stripe element size.
The stripe size is the stripe element size multiplied by the number of
disks in a group. For example, assume a stripe element size of 128
sectors (the default) and a five-disk group. The group has five disks,
so you would multiply five by the stripe element size of 128 to yield a
stripe size of 640 sectors.
The storage system uses disk striping with most RAID types.
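As a worked example of the stripe-size arithmetic above (assuming
512-byte sectors, which matches the 128 sectors = 65,536 bytes figure
later in this chapter):

    # Worked example of the stripe-size arithmetic described above. A sector
    # size of 512 bytes is assumed (128 sectors = 65,536 bytes, as in the
    # RAID 5 figure later in this chapter).
    SECTOR_BYTES = 512

    def stripe_size_sectors(stripe_element, disks_in_group):
        """Stripe size = stripe element size multiplied by the number of disks."""
        return stripe_element * disks_in_group

    element = 128   # default stripe element size, in sectors
    disks = 5       # five-disk group
    size = stripe_size_sectors(element, disks)
    print(size, "sectors =", size * SECTOR_BYTES, "bytes")  # 640 sectors = 327680 bytes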
Mirroring

Mirroring maintains a copy of a logical disk image that provides
continuous access if the original image becomes inaccessible. The
system and user applications continue running on the good image
without interruption. There are two kinds of mirroring: hardware
mirroring, in which the SP synchronizes the disk images; and
software mirroring, in which the operating system synchronizes the
images. Software mirroring consumes server resources, since the
operating system must mirror the images, and has no offsetting
advantages; we mention it here only for historical completeness.
With a storage system, you can create a hardware mirror by binding
disks as a RAID 1 mirrored pair or a RAID 1/0 Group (a mirrored
RAID 0 Group); the hardware will then mirror the disks
automatically.
With a LUN of any RAID type, a storage system can maintain a
remote copy using the optional MirrorView software. MirrorView
remote mirroring, primarily useful for disaster recovery, is explained
in Chapter 3.

RAID Groups and LUNs
Some RAID types let you create multiple LUNs on one RAID Group.
You can then allot each LUN to a different user, server, or application.
For example, a five-disk RAID 5 Group that uses 36-Gbyte disks
offers 144 Gbytes of space. You could bind three LUNs, say with 24,
60, and 60 Gbytes of storage capacity, for temporary, mail, and
customer files.
One disadvantage of multiple LUNs on a RAID Group is that I/O to
each LUN may affect I/O to the others in the group; that is, if traffic
to one LUN is very heavy, I/O performance with other LUNs may
degrade. The main advantage of multiple LUNs per RAID Group is
the ability to divide the enormous amount of disk space provided by
RAID Groups on newer, high-capacity disks.
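One quick planning check is to total the planned LUN sizes against the
usable capacity of the RAID Group; in a five-disk RAID 5 Group, the
equivalent of one disk holds parity, so usable space is four disks'
worth. The Python sketch below simply restates the example above; the
LUN names and sizes come from the text.

    # Restating the capacity example above: a five-disk RAID 5 Group of
    # 36-Gbyte disks offers the space of four disks (one disk's worth of
    # space holds parity), and three LUNs are carved out of it.
    def raid5_usable_gb(disk_count, disk_gb):
        return (disk_count - 1) * disk_gb

    group_gb = raid5_usable_gb(5, 36)                 # 144 Gbytes
    luns = {"temp": 24, "mail": 60, "customers": 60}  # planned LUN sizes
    assert sum(luns.values()) <= group_gb, "LUNs exceed the RAID Group"
    print(group_gb, "Gbytes usable;", group_gb - sum(luns.values()), "Gbytes left")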
Figure 2-1  Multiple LUNs in a RAID Group

The figure shows a five-disk RAID Group divided into three LUNs: LUN 0
(temp), LUN 1 (mail), and LUN 2 (customers), each spanning all five
disks.
RAID Types
You can choose from the following RAID types: RAID 5, RAID 3,
RAID 1, RAID 0, RAID 1/0, individual disk unit, and hot spare.
You can choose an additional type of redundant disk — a remote
mirror — for any RAID type except a hot spare.
RAID 5 Group (Individual Access Array)
A RAID 5 Group usually consists of five disks (but can have three to
sixteen). A RAID 5 Group uses disk striping. With a RAID 5 group,
you can create up to 32 RAID 5 LUNs to apportion disk space to
different users, servers, and applications.
The storage system writes parity information that lets the Group
continue operating if a disk fails. When you replace the failed disk,
the SP rebuilds the group using the information stored on the
working disks. Performance is degraded while the SP rebuilds the
group. However, the storage system continues to function and gives
users access to all data, including data stored on the failed disk.
The following figure shows user and parity data with the default
stripe element size of 128 sectors (65,536 bytes) in a five-disk RAID 5
group. The stripe size comprises all stripe elements. Notice that the
disk block addresses in the stripe proceed sequentially from the first
disk to the second, third, and fourth, then back to the first, and so on.
              First disk   Second disk   Third disk   Fourth disk   Fifth disk
Stripe 1      0-127        128-255       256-383      384-511       Parity
Stripe 2      512-639      640-767       768-895      Parity        896-1023
Stripe 3      1024-1151    1152-1279     Parity       1280-1407     1408-1535
Stripe 4      1536-1663    Parity        1664-1791    1792-1919     1920-2047
Stripe 5      Parity       2048-2175     2176-2303    2304-2431     2432-2559
...

Cell values are disk block (sector) addresses; each cell is one stripe
element (128 sectors of user data or parity), and each row is one
stripe.

Figure 2-2  RAID 5 Group
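The rotating-parity pattern in Figure 2-2 can also be expressed
compactly. The sketch below maps a block address to its disk for the
five-disk, 128-sector-element layout shown above; it mirrors the figure
only and is not the SP's actual mapping code.

    # Mapping a block address to (data disk, parity disk) for the layout in
    # Figure 2-2 (five disks, 128-sector stripe elements, parity rotating
    # from the fifth disk toward the first). Illustration only; this is not
    # the SP's actual algorithm.
    ELEMENT = 128
    DISKS = 5

    def locate_raid5(block):
        data_element = block // ELEMENT
        stripe = data_element // (DISKS - 1)             # 4 data elements per stripe
        slot = data_element % (DISKS - 1)
        parity_disk = DISKS - 1 - (stripe % DISKS)       # disk 4, 3, 2, 1, 0, 4, ...
        disk = slot if slot < parity_disk else slot + 1  # data skips the parity disk
        return disk, parity_disk

    print(locate_raid5(0))      # (0, 4): blocks 0-127 on the first disk
    print(locate_raid5(896))    # (4, 3): blocks 896-1023 on the fifth disk
    print(locate_raid5(2048))   # (1, 0): blocks 2048-2175 on the second disk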
RAID 5 Groups offer excellent read performance and good write
performance. Write performance benefits greatly from
storage-system caching.
RAID 3 Group (Parallel Access Array)
A RAID 3 Group consists of five or more disks. The hardware always
reads from or writes to all the disks. A RAID 3 Group uses disk
striping. To maintain the RAID 3 performance, you can create only
one LUN per RAID 3 group.
The storage system writes parity information that lets the Group
continue operating if a disk fails. When you replace the failed disk,
the SP rebuilds the group using the information stored on the
working disks. Performance is degraded while the SP rebuilds the
group. However, the storage system continues to function and gives
users access to all data, including data stored on the failed disk.
The following figure shows user and parity data with a data block
size of 2 Kbytes in a RAID 3 Group. Notice that the byte addresses
proceed from the first disk to the second, third, and fourth, then the
first, and so on.
              First disk   Second disk   Third disk   Fourth disk   Fifth disk
Block 1       0-511        512-1023      1024-1535    1536-2047     Parity
Block 2       2048-2559    2560-3071     3072-3583    3584-4095     Parity
Block 3       4096-4607    4608-5119     5120-5631    5632-6143     Parity
Block 4       6144-6655    6656-7167     7168-7679    7680-8191     Parity
Block 5       8192-8703    8704-9215     9216-9727    9728-10239    Parity
...

Cell values are byte addresses; each 2-Kbyte data block is split evenly
across the four data disks (512 bytes per disk), and the fifth disk
holds only parity.

Figure 2-3  RAID 3 Group
RAID 3 differs from RAID 5 in several important ways. First, in a
RAID 3 Group the hardware processes disk requests serially; whereas
in a RAID 5 Group the hardware can interleave disk requests. Second,
with a RAID 3 Group, the parity information is stored on one disk;
with a RAID 5 Group, it is stored on all disks. Finally, with a RAID 3
Group, the I/O occurs in small units (one sector) to each disk. A
RAID 3 Group works well for single-task applications that use I/Os
of blocks larger than 64 Kbytes.
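To make the contrast concrete, the same kind of sketch for the RAID 3
layout in Figure 2-3 needs no parity rotation: data is interleaved
across the four data disks in one-sector units, and the fifth disk
always holds parity. Again, this is an illustration only, not the SP's
algorithm.

    # For contrast with RAID 5: in the RAID 3 layout of Figure 2-3, data is
    # interleaved across four data disks in 512-byte (one-sector) units and
    # the fifth disk always holds parity. Illustration only.
    SECTOR = 512
    DATA_DISKS = 4
    PARITY_DISK = 4      # the fifth disk, counting from 0

    def locate_raid3(byte_addr):
        disk = (byte_addr // SECTOR) % DATA_DISKS
        return disk, PARITY_DISK

    print(locate_raid3(0))      # (0, 4): bytes 0-511 on the first disk
    print(locate_raid3(1024))   # (2, 4): bytes 1024-1535 on the third disk
    print(locate_raid3(2048))   # (0, 4): a new 2-Kbyte block starts back on the first disk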
Each RAID 3 Group requires some dedicated SP memory (6 Mbytes
recommended per group). This memory is allocated when you create
the group, and becomes unavailable for storage-system caching. For
top performance, we suggest that you do not use RAID 3 Groups
with RAID 5, RAID 1/0, or RAID 0 Groups, since SP processing
power and memory are best devoted to the RAID 3 Groups. RAID 1
mirrored pairs and individual units require less SP processing power,
and therefore work well with RAID 3 Groups.