This product or document is distributed under licenses restricting its use, copying, distribution, and decompilation. No part of this product or
document may be reproduced in any form by any means without prior written authorization of Sun and its licensors, if any. Third-party
software, including font technology, is copyrighted and licensed from Sun suppliers.
Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in
the U.S. and other countries, exclusively licensed through X/Open Company, Ltd.
Sun, Sun Microsystems, the Sun logo, AnswerBook2, Solstice DiskSuite, docs.sun.com, OpenBoot, SunSolve, JumpStart, StorTools, Sun
Enterprise, Sun StorEdge, Sun Ultra, Sun Fire, Sun Blade, Solstice Backup, Netra, NFS, and Solaris are trademarks, registered trademarks, or
service marks of Sun Microsystems, Inc. in the U.S. and other countries. All SPARC trademarks are used under license and are trademarks or
registered trademarks of SPARC International, Inc. in the U.S. and other countries. Products bearing SPARC trademarks are based upon an
architecture developed by Sun Microsystems, Inc.
The OPEN LOOK and Sun™ Graphical User Interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun acknowledges
the pioneering efforts of Xerox in researching and developing the concept of visual or graphical user interfaces for the computer industry. Sun
holds a non-exclusive license from Xerox to the Xerox Graphical User Interface, which license also covers Sun’s licensees who implement OPEN
LOOK GUIs and otherwise comply with Sun’s written license agreements.
Federal Acquisitions: Commercial Software—Government Users Subject to Standard License Terms and Conditions.
DOCUMENTATION IS PROVIDED “AS IS” AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES,
INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT,
ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
Contents
Preface ix
1. Array Configuration Overview 1
Product Description 1
Controller Card 2
Interconnect Cards 4
Array Configurations 6
Configuration Guidelines and Restrictions 8
Configuration Recommendations 9
Supported Platforms 9
Supported Software 10
Sun Cluster Support 10
2. Configuring Global Parameters 13
Cache 13
Configuring Cache for Performance and Redundancy 14
Configuring Data Block Size 15
Selecting a Data Block Size 15
Enabling Mirrored Cache 16
Configuring Cache Allocation 16
Logical Volumes 16
Guidelines for Configuring Logical Volumes 17
Determining How Many Logical Volumes You Need 17
Determining Which RAID Level You Need 18
Determining Whether You Need a Hot Spare 18
Creating and Labeling a Logical Volume 19
Setting the LUN Reconstruction Rate 19
Using RAID Levels to Configure Redundancy 20
RAID 0 21
RAID 1 21
RAID 5 21
Configuring RAID Levels 22
3. Configuring Partner Groups 23
Understanding Partner Groups 23
How Partner Groups Work 25
Creating Partner Groups 26
4. Configuration Examples 27
Direct Host Connection 27
Single Host With One Controller Unit 28
Single Host With Two Controller Units Configured as a Partner Group 29
Host Multipathing Management Software 30
Single Host With Four Controller Units Configured as Two Partner Groups 31
Single Host With Eight Controller Units Configured as Four Partner Groups 32
Hub Host Connection 34
Single Host With Two Hubs and Four Controller Units Configured as Two Partner Groups 34
Single Host With Two Hubs and Eight Controller Units Configured as Four Partner Groups 36
Dual Hosts With Two Hubs and Four Controller Units 38
Dual Hosts With Two Hubs and Eight Controller Units 40
Dual Hosts With Two Hubs and Four Controller Units Configured as Two Partner Groups 42
Dual Hosts With Two Hubs and Eight Controller Units Configured as Four Partner Groups 44
Switch Host Connection 46
Dual Hosts With Two Switches and Two Controller Units 46
Dual Hosts With Two Switches and Eight Controller Units 48
5. Host Connections 51
Sun Enterprise SBus+ and Graphics+ I/O Boards 52
System Requirements 52
Sun StorEdge PCI FC-100 Host Bus Adapter 53
System Requirements 53
Sun StorEdge SBus FC-100 Host Bus Adapter 54
System Requirements 54
Sun StorEdge PCI Single Fibre Channel Network Adapter 55
System Requirements 55
Sun StorEdge PCI Dual Fibre Channel Network Adapter 56
System Requirements 56
Sun StorEdge CompactPCI Dual Fibre Channel Network Adapter 57
System Requirements 57
6. Array Cabling 59
Overview of Array Cabling 59
Data Path 59
Administration Path 60
Connecting Partner Groups 60
Workgroup Configurations 62
Enterprise Configurations 63
Glossary 65
Figures
FIGURE 1-1 Sun StorEdge T3 Array Controller Card and Ports 3
FIGURE 1-2 Sun StorEdge T3+ Array Controller Card and Ports 4
FIGURE 1-3 Interconnect Card and Ports 5
FIGURE 1-4 Workgroup Configuration 6
FIGURE 1-5 Enterprise Configuration 7
FIGURE 3-1 Sun StorEdge T3 Array Partner Group 24
FIGURE 4-1 Single Host Connected to One Controller Unit 28
FIGURE 4-2 Single Host With Two Controller Units Configured as a Partner Group 29
FIGURE 4-3 Failover Configuration 30
FIGURE 4-4 Single Host With Four Controller Units Configured as Two Partner Groups 31
FIGURE 4-5 Single Host With Eight Controller Units Configured as Four Partner Groups 33
FIGURE 4-6 Single Host With Two Hubs and Four Controller Units Configured as Two Partner Groups 35
FIGURE 4-7 Single Host With Two Hubs and Eight Controller Units Configured as Four Partner Groups 37
FIGURE 4-8 Dual Hosts With Two Hubs and Four Controller Units 39
FIGURE 4-9 Dual Hosts With Two Hubs and Eight Controller Units 41
FIGURE 4-10 Dual Hosts With Two Hubs and Four Controller Units Configured as Two Partner Groups 43
FIGURE 4-11 Dual Hosts With Two Hubs and Eight Controller Units Configured as Four Partner Groups 45
FIGURE 4-12 Dual Hosts With Two Switches and Two Controller Units 47
FIGURE 4-13 Dual Hosts With Two Switches and Eight Controller Units 49
Preface
The Sun StorEdge T3 and T3+ Array Configuration Guide describes the recommended
configurations for Sun StorEdge T3 and T3+ arrays for high availability, maximum
performance, and maximum storage capability. This guide is intended for Sun™
field sales and technical support personnel.
Before You Read This Book
Read the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual for
product overview information.
How This Book Is Organized
Chapter 1 describes the connection ports and Fibre Channel loops for the Sun
StorEdge T3 and T3+ array. It also describes basic rules and recommendations for
configuring the array.
Chapter 2 describes how to configure the array’s global parameters.
Chapter 3 describes how to configure arrays into partner groups to form redundant
storage systems.
Chapter 4 provides configuration examples for direct, hub, and switch host connections.
Chapter 5 describes host connections for the array.
Chapter 6 describes array cabling.
Using UNIX Commands
This document contains some information on basic UNIX® commands and
procedures such as booting the devices. For further information, see one or more of
the following:
■ AnswerBook2™ online documentation for the Solaris™ software environment
■ Other software documentation that you received with your system
Typographic Conventions
Typeface     Meaning                               Examples
AaBbCc123    The names of commands, files,         Edit your .login file.
             and directories; on-screen            Use ls -a to list all files.
             computer output                       % You have mail.
AaBbCc123    What you type, when                   % su
             contrasted with on-screen             Password:
             computer output
AaBbCc123    Book titles, new words or terms,      Read Chapter 6 in the User’s Guide.
             words to be emphasized                These are called class options.
                                                   You must be superuser to do this.
             Command-line variable; replace        To delete a file, type rm filename.
             with a real name or value
Shell Prompts
Shell                                    Prompt
C shell                                  machine_name%
C shell superuser                        machine_name#
Bourne shell and Korn shell              $
Bourne shell and Korn shell superuser    #
Sun StorEdge T3 and T3+ array            :/:
Related Documentation
Application                      Title                                                    Part Number
Latest array updates             Sun StorEdge T3 and T3+ Array Release Notes              816-1983
Installation overview            Sun StorEdge T3 and T3+ Array Start Here                 816-0772
Safety procedures                Sun StorEdge T3 and T3+ Array Regulatory and             816-0774
                                 Safety Compliance Manual
Site preparation                 Sun StorEdge T3 and T3+ Array Site Preparation Guide     816-0778
Installation and Service         Sun StorEdge T3 and T3+ Array Installation,              816-0773
                                 Operation, and Service Manual
Administration                   Sun StorEdge T3 and T3+ Array Administrator’s Guide      816-0776
Cabinet installation             Sun StorEdge T3 Array Cabinet Installation Guide         806-7979
Disk drive specifications        18 Gbyte, 1-inch, 10K rpm Disk Drive Specifications      806-1493
                                 36 Gbyte, 10K rpm Disk Drive Specifications              806-6383
                                 73 Gbyte, 10K rpm, 1.6 Inch Disk Drive Specifications    806-4800
Sun StorEdge Component           Sun StorEdge Component Manager Installation              806-6645
Manager installation             Guide - Solaris
                                 Sun StorEdge Component Manager Installation              806-6646
                                 Guide - Windows NT
Using Sun StorEdge Component     Sun StorEdge Component Manager User’s Guide              806-6647
Manager software
Latest Sun StorEdge Component    Sun StorEdge Component Manager Release Notes             806-6648
Manager Updates
Accessing Sun Documentation Online
You can find the Sun StorEdge T3 and T3+ array documentation and other select
product documentation for Network Storage Solutions at:
Sun is interested in improving its documentation and welcomes your comments and
suggestions. You can email your comments to Sun at:
docfeedback@sun.com
Please include the part number (816-0777-10) of your document in the subject line of
your email.
CHAPTER 1
Array Configuration Overview
This chapter describes the Sun StorEdge T3 and T3+ arrays, the connection ports,
and Fibre Channel connections. It also describes basic rules and recommendations
for configuring the array, and it lists supported hardware and software platforms.
Note – For installation and cabling information, refer to the Sun StorEdge T3 and T3+
Array Installation, Operation, and Service Manual. For software configuration
information, refer to the Sun StorEdge T3 and T3+ Array Administrator’s Guide.
This chapter is organized as follows:
■ “Product Description” on page 1
■ “Configuration Guidelines and Restrictions” on page 8
■ “Configuration Recommendations” on page 9
■ “Supported Platforms” on page 9
■ “Supported Software” on page 10
■ “Sun Cluster Support” on page 10
Product Description
The Sun StorEdge T3 array is a high-performance, modular, scalable storage device
that contains an internal RAID controller and nine disk drives with Fibre Channel
connectivity to the data host. Extensive reliability, availability, and serviceability (RAS)
features include redundant components, notification of failed components, and the
ability to replace components while the unit is online. The Sun StorEdge T3+ array
provides the same features as the Sun StorEdge T3 array, and includes an updated
controller card with direct fiber-optic connectivity and additional memory for data
cache. The controller cards of both array models are described in more detail later in
this chapter.
The array can be used either as a standalone storage unit or as a building block,
interconnected with other arrays of the same type and configured in various ways to
provide a storage solution optimized to the host application. The array can be placed
on a table top or rackmounted in a server cabinet or expansion cabinet.
The array is sometimes called a controller unit, which refers to the internal RAID
controller on the controller card. Arrays without the controller card are called
expansion units. When connected to a controller unit, the expansion unit enables you
to increase your storage capacity without the cost of an additional controller. An
expansion unit must be connected to a controller unit to operate because it does not
have its own controller.
In this document, the Sun StorEdge T3 array and Sun StorEdge T3+ array are
referred to as the array, except when necessary to distinguish between models.
Note – The Sun StorEdge T3 and T3+ arrays are similar in appearance. In this
document, all illustrations labeled Sun StorEdge T3 array also apply to the Sun
StorEdge T3+ array, except when necessary to distinguish specific model features.
In these instances, the array model is specified.
Refer to the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual
for an illustrated breakdown of the array and its component parts.
Controller Card
There are two controller card versions that are specific to the array model. Both
controller cards provide the connection ports to cable the array to data and
management hosts, but the types of connectors vary between models.
The Sun StorEdge T3 array controller card contains:
■ One Fibre Channel-Arbitrated Loop (FC-AL) port, which provides data path
connectivity to the application host system. This connector on the Sun StorEdge
T3 array requires a media interface adapter (MIA) to connect a fiber-optic cable.
■ One 10BASE-T Ethernet host interface port (RJ-45). This port provides the
interface between the controller card and the management host system. An
unshielded twisted-pair Ethernet cable (category 3) connects the controller to the
site’s network hub. This interface enables the administration and management of
the array via the Sun StorEdge Component Manager software or the command-line interface (CLI).
■ One RJ-11 serial port. This serial port is reserved for diagnostic procedures that
can only be performed by qualified service personnel.
FIGURE 1-1 shows the location of the controller card and the connector ports on the
Sun StorEdge T3 array.
FIGURE 1-1 Sun StorEdge T3 Array Controller Card and Ports
[Callouts: serial port (RJ-11), 10BASE-T Ethernet port (RJ-45), FC-AL data connection port; the FC-AL port requires an MIA for cable connection.]
The Sun StorEdge T3+ array controller card contains:
■ One Fibre Channel-Arbitrated Loop (FC-AL) port using an LC small-form factor
(SFF) connector. The fiber-optic cable that provides data channel connectivity to
the array has an LC-SFF connector that attaches directly to the port on the
controller card. The other end of the fiber-optic cable has a standard connector
(SC) that attaches to a host bus adapter (HBA), hub, or switch.
■ One 10/100BASE-T Ethernet host interface port (RJ-45). This port provides the
interface between the controller card and the management host system. A
shielded Ethernet cable (category 5) connects the controller to the site’s network
hub. This interface enables the administration and management of the array via
the Sun StorEdge Component Manager software or the command-line interface
(CLI).
■ One RJ-45 serial port. This serial port is reserved for diagnostic procedures that
can only be performed by qualified service personnel.
FIGURE 1-2 shows the Sun StorEdge T3+ array controller card and connector ports.
FIGURE 1-2 Sun StorEdge T3+ Array Controller Card and Ports
[Callouts: serial port (RJ-45), 10/100BASE-T Ethernet port (RJ-45), FC-AL data connection port (LC-SFF).]
Interconnect Cards
The interconnect cards are alike on both array models. There are two interconnect
ports on each card: one input and one output for interconnecting multiple arrays.
The interconnect card provides switch and failover capabilities, as well as an
environmental monitor for the array. Each array contains two interconnect cards for
redundancy (thus providing a total of four interconnect ports).
FIGURE 1-3 shows the interconnect cards in a Sun StorEdge T3+ array.
FIGURE 1-3 Interconnect Card and Ports
[Callouts: interconnect cards, output port, input port.]
Array Configurations
Each array uses Fibre Channel-Arbitrated Loop (FC-AL) connections to connect to
the application host. An FC-AL connection is a 100-Mbyte/second serial channel
that enables multiple devices, such as disk drives and controllers, to be connected.
Two array configurations are supported:
■ Workgroup. This standalone array is a high-performance, high-RAS configuration
with a single hardware RAID cached controller. The unit is fully populated with
redundant hot-swap components and nine disk drives (FIGURE 1-4).
FIGURE 1-4 Workgroup Configuration
[Callouts: application host, FC-AL connection, Ethernet connection, LAN, management host, Ethernet port.]
Caution – In a workgroup configuration, use a host-based mirroring solution to
protect data. This configuration does not offer the redundancy to provide cache
mirroring, and operating without a host-based mirroring solution could lead to data
loss in the event of a controller failure.
■ Enterprise. Also called a partner group, this is a configuration of two controller
units paired using interconnect cables for back-end data and administrative
connections. The enterprise configuration provides all the RAS of single controller
units, plus redundant hardware RAID controllers with mirrored caches, and
redundant host channels for continuous data availability for host applications.
In this document, the terms enterprise configuration and partner group are used
interchangeably, and apply to the same type of configuration shown in FIGURE 1-5.
FIGURE 1-5 Enterprise Configuration
[Callouts: master controller unit, alternate master controller unit, interconnect cables, FC-AL connections, Ethernet connections, LAN, application host with host bus adapters, management host, Ethernet port.]
Note – Sun StorEdge T3 array workgroup and enterprise configurations require a
media-interface adapter (MIA) connected to the Fibre Channel port to connect the
fiber-optic cable. Sun StorEdge T3+ array configurations support direct FC-AL
connections. Refer to the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual for specific information on cabling the arrays.
In an enterprise configuration, there is a master controller unit and an alternate master controller unit. In all default enterprise configurations, the master controller unit is
the array positioned at the bottom of an array stack in either a rackmounted or
tabletop installation. The alternate master controller unit is positioned on top of the
master controller unit. The positioning of the master and alternate master controller
units is important for cabling the units together correctly, understanding IP address
assignments, interpreting array command-line screen output, and determining
controller failover and failback conditions.
Note – In an enterprise configuration, you can only interconnect array models of the
same type. For example, you can connect a Sun StorEdge T3+ array to another Sun
StorEdge T3+ array, but you cannot connect it to a Sun StorEdge T3 array.
Configuration Guidelines and
Restrictions
Workgroup Configurations
■ The media access control (MAC) address is required to assign an IP address to the
controller unit. The MAC address uniquely identifies each node of a network. The
MAC address is available on the pull-out tab on the front left side of the array.
■ A host-based mirroring solution is necessary to protect data in cache.
■ Sun StorEdge T3 array workgroup configurations are supported in Sun Cluster
2.2 environments. Sun StorEdge T3 and T3+ array workgroup configurations are
supported in Sun Cluster 3.0 environments.
Enterprise Configurations
■ Partner groups can be connected to more than one host only if the following
conditions exist:
■ The partner group must be connected to the hosts through a hub.
■ The configuration must be using Sun StorEdge Traffic Manager software for
multipathing support.
■ The configuration must be a cluster configuration using Sun Cluster 3.0
software.
■ You cannot use a daisy-chain configuration to link more than two controller units
together.
■ You can only connect arrays of the same model in a partner group.
■ In a cluster configuration, partner groups are supported using only Sun Cluster
3.0 software. They are not supported with Sun Cluster 2.2 software.
Caution – In an enterprise configuration, make sure you use the MAC address of
the master controller unit.
Configuration Recommendations
■ Use enterprise configurations for controller redundancy.
■ Use host-based software such as VERITAS Volume Manager (VxVM), Sun
Enterprise™ Server Alternate Pathing (AP) software, or Sun StorEdge Traffic
Manager for multipathing support.
■ Connect redundant paths to separate host adapters, I/O cards, and system buses.
■ Configure active paths over separate system buses to maximize bandwidth.
Caution – The array and its global parameters must be tailored to match the I/O
workload for optimum performance. Within a partner group, both units will share
the same volume configuration, block size, and cache mode. That is, all cache
parameter settings are common to both units within a partner group.
Supported Platforms
Sun StorEdge T3 and T3+ arrays are supported on the following host platforms:
■ Sun Ultra™ 60 and Ultra 80 workstations
■ Sun Blade™ 1000 workstation
■ Sun Enterprise 10000, 6x00, 5x00, 4x00, and 3x00 servers
■ Sun Workgroup 450, 420R, 250, and 220R servers
■ Sun Fire™ F6x00, F4x10, F4x00, F3x00, and F280R servers
■ Netra™ t 1405 server
Tip – For the latest information on supported platforms, refer to the storage
solutions web site at http://www.sun.com/storage and look for details on the
Sun StorEdge T3 array product family.
Supported Software
The following software is supported on Sun StorEdge T3 and T3+ arrays:
■ Solaris 2.6, Solaris 7, and Solaris 8 operating environments
■ VERITAS Volume Manager 3.04 and later with DMP
■ Sun Enterprise Server Alternate Pathing (AP) 2.3.1
■ Sun StorEdge Component Manager 2.1 and later
■ StorTools™ 3.3 Diagnostics
■ Sun Cluster 2.2 and 3.0 software (see “Sun Cluster Support” on page 10)
■ Sun StorEdge Data Management Center 3.0
■ Sun StorEdge Instant Image 2.0
■ Sun StorEdge Network Data Replicator (SNDR) 2.0
■ Solstice Backup™ 5.5.1
■ Solstice DiskSuite™ 4.2 and 4.2.1
Tip – For the latest information on supported software, refer to the storage solutions
web site at http://www.sun.com/storage and look for details on the Sun
StorEdge T3 array product family.
Sun Cluster Support
Sun StorEdge T3 and T3+ arrays are supported in Sun Cluster configurations with
the following restrictions:
■ Array controller firmware version 1.17b or later is required on each Sun StorEdge
T3 array.
■ Array controller firmware version 2.0 or later is required on each Sun StorEdge
T3+ array.
■ Workgroup configurations are supported in Sun Cluster 2.2 for the Sun StorEdge
T3 array only. Sun Cluster 3.0 environments support both Sun StorEdge T3 and
T3+ array models.
■ Enterprise configurations are supported only in Sun Cluster 3.0 environments.
■ Partner groups in a Sun Cluster environment must use Sun StorEdge Traffic
Manager software for multipathing support.
■ Switches are not supported.
■ Hubs must be used.
■ The Sun StorEdge SBus FC-100 (SOC+) HBA and the onboard SOC+ interface in
Sun Fire™ systems are supported.
■ On Sun Enterprise 6x00/5x00/4x00/3x00 systems, a maximum of 64 arrays are
supported per cluster.
■ On Sun Enterprise 10000 systems, a maximum of 256 arrays are supported per
cluster.
■ To ensure full redundancy, host-based mirroring software such as Solstice
DiskSuite (SDS) 4.2 or SDS 4.2.1 must be used.
■ Solaris 2.6 and Solaris 8 are the only supported operating systems.
Note – Refer to the latest Sun Cluster documentation for more information on Sun
Cluster supported array configurations and restrictions.
CHAPTER 2
Configuring Global Parameters
When an array is shipped, the global parameters are set to default values. This
chapter describes how to reconfigure your array by changing these default values.
Caution – If you are planning an enterprise configuration using new factory units,
be sure to install and set up the units as a partner group before you power on, change
any parameters, or create or change any logical volumes. Refer to the Sun StorEdge T3
and T3+ Array Installation, Operation, and Service Manual for more information.
Note – For more information on changing array global parameters, refer to the Sun
StorEdge T3 and T3+ Array Administrator’s Guide.
The following parameters are described in this chapter:
■ “Cache” on page 13
■ “Logical Volumes” on page 16
■ “Using RAID Levels to Configure Redundancy” on page 20
Cache
Each Sun StorEdge T3 array controller unit has 256 Mbytes of data cache; each Sun
StorEdge T3+ array controller unit has 1 Gbyte of data cache. Writing to cache
improves write performance by staging data in cache, assembling the data into data
stripes, and then destaging the data from cache to disk, when appropriate. This
method frees the data host for other operations while cache data is being destaged,
and it eliminates the read-modify-write delays seen in non-cache systems. Read cache
improves performance by determining which data will be requested for the next
read operation and prestaging this data into cache. RAID 5 performance is also
improved by coalescing writes.
Configuring Cache for Performance and
Redundancy
Cache mode can be set to the following values:
■ Auto. The cache mode is determined as either write-behind or write-through,
based on the I/O profile. If the array has full redundancy available, then caching
operates in write-behind mode. If any array component is non-redundant, the
caching mode is set to write-through. Read caching is always performed. Auto
caching mode provides the best performance while retaining full redundancy
protection.
Auto is the default cache mode for Sun StorEdge T3 and T3+ arrays.
■ Write-behind. All read and write operations are written to cache. An algorithm
determines when the data is destaged or moved from cache to disk. Write-behind
cache improves performance, because a write to a high-speed cache is faster than
a write to a normal disk.
Use write-behind cache mode with a workgroup configuration when you want to
force write-behind caching to be used.
Caution – In a workgroup configuration, use a host-based mirroring solution to
protect data. This configuration does not offer the redundancy to provide cache
mirroring, and operating without a host-based mirroring solution could lead to data
loss in the event of a controller failure.
■ Write-through. This cache mode forces write-through caching to be used. In
write-through cache mode, data is written through cache in a serial manner and is
then written to the disk. Write-through caching does not improve write
performance. However, if a subsequent read operation needs the same data, the
read performance is improved, because the data is already in cache.
■ None. No reads or writes are cached.
Note – For full redundancy in an enterprise configuration, set the cache mode and
the mirror variable to Auto. This ensures that the cache is mirrored between
controllers and that write-behind cache mode is in effect. If a failure occurs, the data
is synchronized to disk, and then write-through mode takes effect. Once the problem
has been corrected and all internal components are again optimal, the system will
revert to operating in write-behind cache mode.
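As an illustration only, the following lines sketch how the cache mode and cache mirroring might be set from the array command line (the :/: prompt); the command names follow the sys command described in the Sun StorEdge T3 and T3+ Array Administrator’s Guide, so verify the exact syntax there before use.

    :/: sys cache auto      (set the cache mode; writebehind, writethrough, and off are the other values)
    :/: sys mirror auto     (mirror cache between the controllers of a partner group)
    :/: sys list            (display the current settings to confirm the change)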
Configuring Data Block Size
The data block size is the amount of data written to each drive when striping data
across drives. (The block size is also known as the stripe unit size.) The block size
can be changed only when there are no volumes defined. The block size can be
configured as 16 Kbytes, 32 Kbytes, or 64 Kbytes. The default block size is 64 Kbytes.
A cache segment is the amount of data being read into cache. A cache segment is
1/8 of a data block. Therefore, cache segments can be 2 Kbytes, 4 Kbytes, or
8 Kbytes. Because the default block size is 64 Kbytes, the default cache segment size
is 8 Kbytes.
Note – The array data block size is independent of I/O block size. Alignment of the
two is not required.
Selecting a Data Block Size
If the I/O initiated from the host is 4 Kbytes, a data block size of 64 Kbytes would
force 8 Kbytes of internal disk I/O, wasting 4 Kbytes of the cache segment.
Therefore, it would be best to configure 32-Kbyte block sizes, causing 4-Kbyte
physical I/O from the disk. If sequential activity occurs, full block writes (32 Kbytes)
will take place. For 8-Kbyte I/O or greater from the host, use 64-Kbyte blocks.
Applications benefit from the following data block or stripe unit sizes:
■ 16-Kbyte data block size
■ Online Transaction Processing (OLTP)
■ Internet service provider (ISP)
■ Enterprise Resource Planning (ERP)
■ 32-Kbyte data block size
■ NFS™ file system, version 2
■ Attribute-intensive NFS file system, version 3
■ 64-Kbyte data block size
■ Data-intensive NFS file system, version 3
■ Decision Support Systems (DSS)
■ Data Warehouse (DW)
■ High Performance Computing (HPC)
Note – The data block size must be configured before any logical volumes are
created on the units. Remember, this block size is used for every logical volume
created on the unit. Therefore it is important to have similar application data
configured per unit.
Data block size is universal throughout a partner group. Therefore, you cannot
change it after you have created a volume. To change the data block size, you must
first delete the volume(s), change the data block size, and then create new volume(s).
Caution – Unless you back up and restore the data on these volumes, it will be lost.
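The sequence below is a hedged sketch of that procedure from the array command line, assuming an existing volume named v0 and the vol and sys commands documented in the Sun StorEdge T3 and T3+ Array Administrator’s Guide; back up the data first and verify the syntax against that guide.

    :/: vol list                                       (identify the existing volume)
    :/: vol unmount v0                                 (unmount the volume before removing it)
    :/: vol remove v0                                  (delete the volume; its data is destroyed)
    :/: sys blocksize 32k                              (set the new data block size: 16k, 32k, or 64k)
    :/: vol add v0 data u1d1-8 raid 5 standby u1d9     (recreate the volume)
    :/: vol init v0 data                               (initialize the new volume)
    :/: vol mount v0                                   (mount the volume so the host can see it)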
Enabling Mirrored Cache
By enabling mirrored cache, you can safeguard cached data if a controller fails.
Note – Mirrored cache is possible only in a redundant enterprise configuration.
Configuring Cache Allocation
Cache is allocated based on the read/write mix and it is dynamically adjusted by the
controller firmware, based on the I/O profile of the application. If the application
profile is configured for a 100% read environment, then 100% of the cache is used for
reads. If the application profile has a high number of writes, then the upper limit for
writes is set to 80%.
Logical Volumes
Also called a logical unit number (LUN), a logical volume is one or more disk drives
that are grouped together to form a single unit. Each logical volume is represented to
the host as a logical unit number. Using the format utility on the application host,
you can view the logical volumes presented by the array. You can use this disk space
as you would any physical disk, for example, to perform the following operations:
■ Install a file system
■ Use the device as a raw device (without any file system structure)
■ Partition the device
Note – Individual physical disk drives are not visible from the application host.
Refer to the Sun StorEdge T3 and T3+ Array Administrator’s Guide for more
information on creating logical volumes.
Guidelines for Configuring Logical Volumes
Use the following guidelines when configuring logical volumes:
■ The array’s native volume management can support a maximum of two volumes
per array unit.
■ The minimum number of drives is based on the RAID level, as follows:
■ RAID 0 and RAID 1 require a minimum of two drives.
■ RAID 5 requires a minimum of three drives.
■ Drive number 9 can be designated as a hot spare. If designated, drive number 9
will be the hot spare for all volumes in the array.
■ A partial drive configuration is not allowed.
■ Volumes cannot span array units.
Consider the following questions when configuring logical volumes:
■ How many logical volumes do you need (one or two)?
■ What RAID level do you require?
■ Do you need a hot spare?
Determining How Many Logical Volumes You Need
You can configure a volume into up to seven partitions (also known as slices) using the
format(1M) utility. Alternatively, you can configure a virtually unlimited number of
partitions (also known as subdisks) using VERITAS Volume Manager. Therefore,
arrays are best configured as one large volume.
Applications benefit from the following logical volume or LUN configurations:
■ Two LUNs per array
■ OLTP
■ ISP
■ ERP
■ NFS, version 2
■ Attribute-intensive NFS, version 3
■ One LUN per array
■ Data-intensive NFS, version 3
■ DSS
■ DW
■ HPC
Note – If you are creating new volumes or changing the volume configuration, you
must first manually rewrite the label of the previous volume using the autoconfigure
option of the
format(1M) UNIX host command. For more information on this
procedure, refer to the Sun StorEdge T3 and T3+ Array Administrator’s Guide.
Caution – Removing and reconfiguring the volume will destroy all data previously
stored there.
Determining Which RAID Level You Need
For a new array installation, the default configuration is 8+1 RAID 5, without a hot
spare.
In general, RAID 5 is efficiently managed by the RAID controller hardware. This
efficiency is apparent when compared to RAID 5 software solutions such as
VERITAS Volume Manager.
The following applications benefit most from the RAID controller hardware of the
array:
■ Data-intensive NFS file system, version 3
■ DSS
■ DW
■ HPC
Note – For more information about RAID levels, see “Using RAID Levels to
Configure Redundancy” later in this chapter.
Determining Whether You Need a Hot Spare
If you choose to include a hot-spare disk drive in your configuration, you must
specify it when you create the first volume in the array. If you want to add a hot
spare at a later date, you must remove the existing volume(s) and recreate the
configuration.
Note – Only one hot spare is allowed per array and it is only usable for the array in
which it is configured. The hot spare must be configured as drive 9.
Drive 9 will be the hot spare in the unit. So, for example, should a drive failure occur
on drive 7, drive 9 is synchronized automatically with the entire LUN to reflect the
data on drive 7. Once the failed drive (7) is replaced, the controller unit will
automatically copy the data from drive 9 to the new drive, and drive 9 will become
a hot spare again.
Tip – Although they are not required, hot spares are always recommended for
mission-critical configurations because they allow the controller unit to reconstruct
the data from the RAID group and only take a performance hit while the
reconstruction is taking place. If a hot spare is not used, the controller unit remains
in write-through cache mode until the failed drive is replaced and reconstruction is
complete (which could take an extended period of time). During this time, the array
is operating in degraded mode.
If there is no hot spare, the reconstruction of the data will begin when the failed
drive is replaced, provided RAID 1 or RAID 5 is used.
Creating and Labeling a Logical Volume
You must set the RAID level and the hot-spare disk when creating a logical volume.
For the Solaris operating system to recognize a volume, it must be labeled with the
format or fmthard command.
Caution – Removing and reconfiguring a logical volume will destroy all data
previously stored there.
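As a minimal sketch, assuming a 7+1 RAID 5 volume named v0 with drive 9 as the hot spare and a host device name of c2t1d0 (all placeholder names; confirm the real syntax and device paths in the Administrator’s Guide and on your host):

    :/: vol add v0 data u1d1-8 raid 5 standby u1d9     (set the RAID level and hot spare when the volume is created)
    :/: vol init v0 data                               (initialize the volume)
    :/: vol mount v0                                   (make the volume available to the host)

    # format                                           (on the Solaris host, select c2t1d0 and label the new volume)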
Setting the LUN Reconstruction Rate
Note – When a failed drive is disabled, the volume is operating without further
redundancy protection, so the failed drive needs to be replaced as soon as possible.
If the volume has a hot spare configured and that drive is available, the data on the
disabled drive is reconstructed on the hot-spare drive. When this operation is
complete, the volume is operating with full redundancy protection, so another drive
in the volume may fail without loss of data.
After a drive has been replaced, the original data is automatically reconstructed on
the new drive. If no hot spare was used, the data is regenerated using the RAID
redundancy data in the volume. If the failed drive data has been reconstructed onto
a hot spare, once the reconstruction has completed, a copy-back operation begins
where the hot spare data is copied to the newly replaced drive.
You can also configure the rate at which data is reconstructed, so as not to interfere
with application performance. Reconstruction rate values are low, medium, and high
as follows:
■ Low is the slowest and has the lowest impact on performance
■ Medium is the default
■ High is the fastest and has the highest impact on performance
Note – Reconstruction rates can be changed while a reconstruction operation is in
process. However, the changes don’t take effect until the current reconstruction has
completed.
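A short, hedged example of setting and checking the rate from the array command line (verify the parameter name and values in the Administrator’s Guide):

    :/: sys recon_rate low     (minimize the impact of reconstruction on application I/O)
    :/: sys list               (confirm the current recon_rate setting)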
Using RAID Levels to Configure
Redundancy
The RAID level determines how the controller reads and writes data and parity on
the drives. The Sun StorEdge T3 and T3+ arrays can be configured with RAID level
0, RAID level 1 (1+0) or RAID level 5. The factory-configured LUN is a RAID 5 LUN.
Note – The default RAID level (5) can result in very large volumes; for example, 128
Gbytes in a configuration of a single 7+1 RAID 5 LUN plus a hot spare, with 18-Gbyte
drives. Some applications cannot use such large volumes effectively. The following
two solutions can be used separately or in combination:
■ First, use the partitioning utility available on the data host’s operating system. In
the Solaris environment, use the format utility, which can create up to seven
distinct partitions per volume. Note that in the case of the configuration described
above, if each partition is equal in size, this will result in 18-Gbyte partitions,
which still may be too large to be used efficiently by legacy applications.
■ Second, you can use third-party software on the host system to create as many
partitions as desired from a given volume. In the Solaris environment, you can
use VERITAS Volume Manager or Solaris Logical Volume Management (SLVM),
formerly known as Solstice DiskSuite (SDS), for this purpose.
Note – For information on using the format utility, refer to the format (1M) man
page. For more information on third-party software or VERITAS Volume Manager,
refer to the documentation for that product.
RAID 0
Data blocks in a RAID 0 volume are striped across all the drives in the volume in
order. There is no parity data, so RAID 0 uses the full capacity of the drives. There is,
however, no redundancy. If a single drive fails, all data on the volume is lost.
RAID 1
Each data block in a RAID 1 volume is mirrored on two drives. If one of the
mirrored pair fails, the data from the other drive is used. Because the data is
mirrored in a RAID 1 configuration, the volume has only half the capacity of the
assigned drives. For example, if you create a 4-drive RAID 1+0 volume with
18-Gbyte drives, the resulting data capacity is 4 x 18 / 2 = 36 Gbytes.
RAID 5
In a RAID 5 configuration, data is striped across the drives in the volumes in
segments, with parity information being striped across the drives, as well. Because
of this parity, if a single drive fails, data can be recovered from the remaining drives.
Two drive failures cause all data to be lost. A RAID 5 volume has the data capacity
of all the drives in the logical unit, less one. For example, a 5-drive RAID 5 volume
with 18-Gbyte drives has a capacity of (5 - 1) x 18 = 72 Gbytes.
Configuring RAID Levels
The Sun StorEdge T3 and T3+ arrays are preconfigured at the factory with a single
LUN, RAID level 5 redundancy and no hot spare. Once a volume has been
configured, you cannot reconfigure it to change its size, RAID level, or hot spare
configuration. You must first delete the volume and create a new one with the
configuration values you want.
CHAPTER 3
Configuring Partner Groups
Sun StorEdge T3 and T3+ arrays can be interconnected in partner groups to form a
redundant and larger storage system.
Note – The terms partner group and enterprise configuration refer to the same type of
configuration and are used interchangeably in this document.
Note – Partner groups are not supported in Sun Cluster 2.2 configurations.
This chapter describes how to configure array partner groups, and it includes the
following sections:
■ “Understanding Partner Groups” on page 23
■ “How Partner Groups Work” on page 25
■ “Creating Partner Groups” on page 26
Understanding Partner Groups
In a partner group, there is a master controller unit and an alternate master controllerunit. The master controller unit is the array positioned at the bottom of an array
stack in either a rackmounted or tabletop installation. The alternate master controller
unit is positioned on top of the master controller unit. Array units are connected
using the interconnect cards and interconnect cables. A partner group is shown in
FIGURE 3-1.
23
FIGURE 3-1 Sun StorEdge T3 Array Partner Group
[Callouts: master controller unit, alternate master controller unit, interconnect cables, FC-AL connections, Ethernet connections, LAN, application host with host bus adapters, management host, Ethernet port.]
Note – Sun StorEdge T3 arrays require a media-interface adapter (MIA) connected
to the Fibre Channel port on the controller card to connect the fiber-optic cable. Sun
StorEdge T3+ array configurations support direct FC-AL connections.
When two units are connected together, they form a redundant partner group. This
group provides controller redundancy. Because the controller is a single point of
failure in a standalone configuration, this redundancy allows an application host to
access data even if a controller fails. This configuration offers multipath and LUN
failover features.
The partner group connection also allows for a single point of control. The bottom
unit will assume the role of the master, and from its Ethernet connections, it will be
used to monitor and administer the unit installed above it.
The master controller unit will set the global variables within this storage system,
including cache block size, cache mode, and cache mirroring.
Note – For information about setting or changing these parameters, refer to the Sun
StorEdge T3 and T3+ Array Administrator’s Guide.
Any controller unit will boot from the master controller unit’s drives. All
configuration data, including syslog information, is located on the master
controller unit’s drives.
How Partner Groups Work
If the master controller unit fails and the “heartbeat” between it and the alternate
master stops, this failure causes a controller failover, where the alternate master
assumes the role of the master controller unit. The new master (formerly the
alternate master) takes the IP address and the MAC address from the old master and
begins to function as the administrator of the storage system. It will also be able to
access the former master controller unit’s drives. The former master controller unit’s
drives will still be used to store syslog information, system configuration
information, and bootcode. Should it become necessary to reboot the storage system
while the master controller unit is inactive, the alternate master will use the former
master controller unit’s drives to boot.
Note – After the failed master controller is back online, it remains the alternate
master controller and, as a result, the original configuration has been modified from
its original state.
In a redundant partner group configuration, the units can be set to do a path failover
operation. Normally the volumes or LUNs that are controlled by one unit are not
accessible to the controller of the other. The units can be set so that if a failure in one
controller occurs, the remaining one will accept I/O for the devices that were
running on the failed controller. To enable this controller failover operation,
multipathing software, such as VERITAS Volume Manager, Sun StorEdge Traffic
Manager software, or Solaris Alternate Pathing (AP) software must be installed on
the data application host.
Note – In order for a feature such as VERITAS DMP to access a LUN through both
controllers in a redundant partner group, the mp_support parameter must be set to
rw to enable this feature. If you are using Sun StorEdge Traffic Manager, the
mp_support parameter must be set to mpxio. For information on setting the
mp_support parameter and options, refer to the Sun StorEdge T3 and T3+ Array
Administrator’s Guide.
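A hedged sketch of setting this parameter on the master controller unit (syntax per the Administrator’s Guide; verify before use):

    :/: sys mp_support rw       (lets VERITAS DMP reach a LUN through both controllers)
    :/: sys mp_support mpxio    (use this value instead when Sun StorEdge Traffic Manager provides multipathing)
    :/: sys list                (verify the setting)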
Creating Partner Groups
Partner groups can be created in two ways:
■ From new units
■ From existing standalone units
Instructions for installing new array units and connecting them to create partner
groups can be found in the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
To configure existing standalone arrays with data into a partner group, you must go
through a qualified service provider. Contact your SunService representative for
more information.
Caution – The procedure to reconfigure the arrays into a partner group involves
deleting all data from array disks and restoring the data after completing the
reconfiguration. There is the potential risk of data loss or data corruption if the
procedure is not performed properly.
CHAPTER 4
Configuration Examples
This chapter includes sample reference configurations for Sun StorEdge T3 and T3+
arrays. Although there are many supported configurations, these reference
configurations provide the best solution for many installations:
■ “Direct Host Connection” on page 27
■ “Hub Host Connection” on page 34
■ “Switch Host Connection” on page 46
Direct Host Connection
This section contains examples of the following configurations:
■ “Single Host With One Controller Unit” on page 28
■ “Single Host With Two Controller Units Configured as a Partner Group” on
page 29
■ “Single Host With Four Controller Units Configured as Two Partner Groups” on
page 31
■ “Single Host With Eight Controller Units Configured as Four Partner Groups” on
page 32
Single Host With One Controller Unit
FIGURE 4-1 shows one application host connected through an FC-AL cable to one
array controller unit. The Ethernet cable connects the controller to a management
host via a LAN on a public or separate network, and requires an IP address.
Note – This configuration is not recommended for RAS functionality because the
controller is a single point of failure. In this type of configuration, use a host-based
mirroring solution to protect data in cache.
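One way to provide that protection is to mirror the array LUN against a LUN on a second, independent device with host-based software such as Solstice DiskSuite. The following sketch is illustrative only; the metadevice and slice names are placeholders for the LUNs actually seen by your host.

    # metadb -a -f -c 3 c0t0d0s7     (create state database replicas on a local disk)
    # metainit d11 1 1 c1t1d0s2      (submirror on the LUN from this array)
    # metainit d12 1 1 c2t1d0s2      (submirror on a LUN from a second, independent device)
    # metainit d10 -m d11            (create the mirror from the first submirror)
    # metattach d10 d12              (attach the second submirror to complete the mirror)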
FIGURE 4-1 Single Host Connected to One Controller Unit
[Callouts: controller unit, application host with HBA, FC-AL connection, Ethernet connection, LAN, management host, Ethernet port.]
Note – For the Sun StorEdge T3 array, you must insert a media interface adapter
(MIA) into the FC-AL connection port on the array controller card to connect the
fiber-optic cable. This is detailed in the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
Single Host With Two Controller Units
Configured as a Partner Group
FIGURE 4-2 shows one application host connected through FC-AL cables to one array
partner group, which consists of two Sun StorEdge T3+ arrays. The Ethernet
connection from the master controller unit is on a public or separate network and
requires an IP address for the partner group. In the event of a failover, the alternate
master controller unit will use the master controller unit’s IP address and MAC
address.
FIGURE 4-2 Single Host With Two Controller Units Configured as a Partner Group
[Callouts: master controller unit, alternate master controller unit, interconnect cables, FC-AL connections, Ethernet connections, LAN, application host with HBAs, management host, Ethernet port.]
This configuration is a recommended enterprise configuration for RAS functionality
because there is no single point of failure. This configuration supports Dynamic
Multi-Pathing (DMP) by VERITAS Volume Manager, the Alternate Pathing (AP)
software in the Solaris operating environment, or Sun StorEdge Traffic Manager
software for failover only.
The following three global parameters must be set on the master controller unit, as
follows:
■ mp_support = rw or mpxio
■ cache mode = auto
■ cache mirroring = auto
For information on setting these parameters, refer to the Sun StorEdge T3 and T3+
Array Administrator’s Guide.
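As a consolidated, hedged sketch (exact syntax in that guide), the settings might be applied once on the master controller unit:

    :/: sys mp_support rw     (or mpxio when Sun StorEdge Traffic Manager is used)
    :/: sys cache auto
    :/: sys mirror auto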
Host Multipathing Management Software
While Sun StorEdge T3 and T3+ arrays are redundant devices that automatically
reconfigure whenever a failure occurs on any internal component, a host-based
solution is needed for a redundant data path. Supported multipathing solutions
include:
■ The DMP feature in VERITAS Volume Manager
■ Sun Enterprise Server Alternate Pathing software
■ Sun StorEdge Traffic Manager software
During normal operation, I/O moves on the host channel connected to the controller
that owns the LUNs. This path is a primary path. During failover operation, the
multipathing software directs all I/O to the alternate channel’s controller. This path
is the failover path.
When a controller in the master controller unit fails, the alternate master controller
unit becomes the master. When the failed controller is repaired, the new controller
immediately boots, goes online and becomes the alternate master controller unit. The
former alternate master controller unit remains the master controller unit.
Note – The multipathing software solution must be installed on the application host
to achieve a fully redundant configuration.
FIGURE 4-3 shows a failover configuration.
FIGURE 4-3 Failover Configuration
[Callouts: master controller unit (LUN 0), alternate master controller unit (LUN 1), interconnect cables, FC-AL connections, Ethernet connection, LAN, application host with two HBAs (primary LUN 0 / failover LUN 1 and primary LUN 1 / failover LUN 0), management host, Ethernet port.]
Single Host With Four Controller Units
Configured as Two Partner Groups
FIGURE 4-4 shows one application host connected through FC-AL cables to four
arrays configured as two separate partner groups. This configuration can be used for
capacity and I/O throughput requirements. Host-based Alternate Pathing software
is required for this configuration.
Note – This configuration is a recommended enterprise configuration for RAS
functionality because the controller is not a single point of failure.
The following three parameters must be set on the master controller unit, as follows:
■ mp_support = rw or mpxio
■ cache mode = auto
■ cache mirroring = auto
For information on setting these parameters, refer to the Sun StorEdge T3 and T3+
Array Administrator’s Guide.
FIGURE 4-4 Single Host With Four Controller Units Configured as Two Partner Groups
[Callouts: two partner groups, each with a master controller unit, an alternate master controller unit, and interconnect cables; FC-AL connections; Ethernet connections; LAN; application host with four HBAs; Ethernet port; management host.]
Single Host With Eight Controller Units
Configured as Four Partner Groups
FIGURE 4-5 shows one application host connected through FC-AL cables to eight Sun
StorEdge T3+ arrays, forming four partner groups. This configuration is the
maximum allowed in a 72-inch cabinet. This configuration can be used for footprint
and I/O throughput.
Note – This configuration is a recommended enterprise configuration for RAS
functionality because the controller is not a single point of failure.
The following three parameters must be set on the master controller unit, as follows:
■ mp_support = rw or mpxio
■ cache mode = auto
■ cache mirroring = auto
Note – For information on setting these parameters, refer to the Sun StorEdge T3 and
T3+ Array Administrator’s Guide.
Host-based multipathing software is required for this configuration.
FIGURE 4-5 Single Host With Eight Controller Units Configured as Four Partner Groups
[Callouts: four partner groups, each with a master controller unit, an alternate master controller unit, and interconnect cables; FC-AL connections; Ethernet connections; LAN; application host with eight HBAs; Ethernet port; management host.]
Hub Host Connection
The following sample configurations are included in this section:
■ “Single Host With Two Hubs and Four Controller Units Configured as Two
Partner Groups” on page 34
■ “Single Host With Two Hubs and Eight Controller Units Configured as Four
Partner Groups” on page 36
■ “Dual Hosts With Two Hubs and Four Controller Units” on page 38
■ “Dual Hosts With Two Hubs and Eight Controller Units” on page 40
■ “Dual Hosts With Two Hubs and Four Controller Units Configured as Two
Partner Groups” on page 42
■ “Dual Hosts With Two Hubs and Eight Controller Units Configured as Four
Partner Groups” on page 44
Single Host With Two Hubs and Four Controller
Units Configured as Two Partner Groups
FIGURE 4-6 shows one application host connected through FC-AL cables to two hubs
and two array partner groups. The Ethernet connection on the master controller unit
is on a public or separate network and requires an IP address for the partner group.
In the event of a failover, the alternate master controller unit will use the master
controller unit’s IP address and MAC address.
Note – This configuration is a recommended enterprise configuration for RAS
functionality because the controller is not a single point of failure.
Note – There are no hub port position dependencies when connecting arrays to a
hub. Arrays can be connected to any available port on the hub.
Each array needs to be assigned a unique target address using the port set
command. These target addresses can be any number between 1 and 125. At the
factory, the array target addresses are set starting with target address 1 for the
bottom array and continuing to the top array. Use the port list command to
verify that all arrays have a unique target address. Refer to Appendix A of the Sun StorEdge T3 and T3+ Array Administrator’s Guide for further details.
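A hedged example follows; the unit and port names and the target IDs are illustrative, so confirm the port command syntax in that appendix before use.

    :/: port list                     (display the current target address of each array on the loop)
    :/: port set u1p1 targetid 2      (assign a unique target address, between 1 and 125, to this array’s host port)
    :/: port list                     (verify that every array now has a unique target address)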
The following three parameters must be set on the master controller unit, as follows:
■ mp_support = rw or mpxio
■ cache mode = auto
■ cache mirroring = auto
Note – For information on setting these parameters, refer to the Sun StorEdge T3 and
T3+ Array Administrator’s Guide.
Host-based multipathing software is required for this configuration.
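Host-based multipathing can be provided, for example, by Sun StorEdge Traffic Manager (MPxIO) or by VERITAS Dynamic Multi-Pathing. The following host-side sketch assumes the MPxIO approach on a Solaris host; the file location and property name are assumptions and should be verified against the multipathing software documentation.

    # /kernel/drv/scsi_vhci.conf (assumed location of the MPxIO configuration file)
    mpxio-disable="no";    # "no" enables MPxIO multipathing for supported devices

    # A reconfiguration boot is then needed so the multipathed device nodes are built:
    # ok boot -r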
FIGURE 4-6 Single Host With Two Hubs and Four Controller Units Configured as Two
Partner Groups
Single Host With Two Hubs and Eight Controller
Units Configured as Four Partner Groups
FIGURE 4-7 shows one application host connected through FC-AL cables to two hubs
and to eight Sun StorEdge T3+ arrays, forming four partner groups. This
configuration is the maximum allowed in a 72-inch cabinet. This configuration can
be used for footprint and I/O throughput.
Note – This configuration is a recommended enterprise configuration for RAS
functionality because the controller is not a single point of failure.
Note – There are no hub port position dependencies when connecting arrays to a
hub. An array can be connected to any available port on the hub.
Each array needs to be assigned a unique target address using the port set
command. These target addresses can be any number between 1 and 125. At the
factory, the array target addresses are set starting with target address 1 for the
bottom array and continuing to the top array. Use the port list command to
verify that all arrays have a unique target address. Refer to Appendix A of the Sun StorEdge T3 and T3+ Array Administrator’s Guide for further details.
The following three parameters must be set on the master controller unit, as follows:
■ mp_support = rw or mpxio
■ cache mode = auto
■ cache mirroring = auto
Note – For information on setting these parameters, refer to the Sun StorEdge T3 and
T3+ Array Administrator’s Guide.
Host-based multipathing software is required for this configuration.
FIGURE 4-7 Single Host With Two Hubs and Eight Controller Units Configured as Four
Partner Groups
Dual Hosts With Two Hubs and Four Controller
Units
FIGURE 4-8 shows two application hosts connected through FC-AL cables to two hubs
and four Sun StorEdge T3+ arrays. This configuration, also known as a multi-initiator configuration, can be used for footprint and I/O throughput. The following
limitations should be evaluated when proceeding with this configuration:
■ Avoid the risk caused by any array or data path single point of failure by using
host-based mirroring software such as VERITAS Volume Manager or Solaris Volume
Manager (see the sketch below).
■ When configuring more than a single array to share a single FC-AL loop, as with
a hub, array target addresses need to be set to unique values.
This configuration is not recommended for RAS functionality because the
controller is a single point of failure.
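As a minimal host-based mirroring sketch, one LUN from each array can be mirrored with Solstice DiskSuite-style commands so that a single array is not a single point of failure for the data. The metadevice names and disk paths below (such as c2t1d0s0) are hypothetical; VERITAS Volume Manager provides equivalent mirroring through its own commands.

    # create state database replicas (one-time setup; slices are examples)
    metadb -a -f c2t1d0s7 c3t1d0s7
    # build one submirror on a LUN from each array
    metainit d11 1 1 c2t1d0s0
    metainit d12 1 1 c3t1d0s0
    # create the mirror from the first submirror, then attach the second
    metainit d10 -m d11
    metattach d10 d12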
Note – There are no hub port position dependencies when connecting arrays to a
hub. An array can be connected to any available port on the hub.
Each array needs to be assigned a unique target address using the port set
command. These target addresses can be any number between 1 and 125. At the
factory, the array target addresses are set starting with target address 1 for the
bottom array and continuing to the top array. Use the port list command to
verify that all arrays have a unique target address. Refer to Appendix A of the Sun StorEdge T3 and T3+ Array Administrator’s Guide for further details.
The following two parameters must be set on the master controller unit, as follows:
■ cache mode = auto
■ cache mirroring = auto
Note – For information on setting these parameters, refer to the Sun StorEdge T3 and
T3+ Array Administrator’s Guide.
FIGURE 4-8 Dual Hosts With Two Hubs and Four Controller Units
Dual Hosts With Two Hubs and Eight Controller
Units
FIGURE 4-9 shows two application hosts connected through FC-AL cables to two hubs
and eight Sun StorEdge T3+ arrays. This configuration, also known as a multi-initiator configuration, can be used for footprint and I/O throughput. The following
limitations should be evaluated when proceeding with this configuration:
■ Avoid the risk caused by any array or data path single point of failure by using
host-based mirroring software such as VERITAS Volume Manager or Solaris Volume
Manager.
Note – With host-based mirroring features from VERITAS Volume Manager or Solaris
Logical Volume Manager, this configuration mirrors the data on four arrays to the
other four arrays.
■ When configuring more than a single array to share a single FC-AL loop, as with
a hub, array target addresses need to be set to unique values.
This configuration is not recommended for RAS functionality because the
controller is a single point of failure.
Note – There are no hub port position dependencies when connecting arrays to a
hub. An array can be connected to any available port on the hub.
Each array needs to be assigned a unique target address using the port set
command. These target addresses can be any number between 1 and 125. At the
factory, the array target addresses are set starting with target address 1 for the
bottom array and continuing to the top array. Use the port list command to
verify that all arrays have a unique target address. Refer to Appendix A of the Sun StorEdge T3 and T3+ Array Administrator’s Guide for further details.
The following two parameters must be set on the master controller unit, as follows:
■ cache mode = auto
■ cache mirroring = auto
Note – For information on setting these parameters, refer to the Sun StorEdge T3 and
T3+ Array Administrator’s Guide.
FIGURE 4-9 Dual Hosts With Two Hubs and Eight Controller Units
Dual Hosts With Two Hubs and Four Controller
Units Configured as Two Partner Groups
FIGURE 4-10 shows two application hosts connected through FC-AL cables to two hubs
and four Sun StorEdge T3+ arrays forming two partner groups. This multi-initiator
configuration can be used for footprint and I/O throughput.
Note – This configuration is a recommended enterprise configuration for RAS
functionality because the controller is not a single point of failure.
Note – There are no hub port position dependencies when connecting arrays to a
hub. An array can be connected to any available port on the hub.
Each array needs to be assigned a unique target address using the port set
command. These target addresses can be any number between 1 and 125. At the
factory, the array target addresses are set starting with target address 1 for the
bottom array and continuing to the top array. Use the port list command to
verify that all arrays have a unique target address. Refer to Appendix A of the Sun StorEdge T3 and T3+ Array Administrator’s Guide for further details.
The following three parameters must be set on the master controller unit, as follows:
■ mp_support = rw or mpxio
■ cache mode = auto
■ cache mirroring = auto
Note – For information on setting these parameters, refer to the Sun StorEdge T3 and
T3+ Array Administrator’s Guide.
Host-based multipathing software is required for this configuration.
FIGURE 4-10 Dual Hosts With Two Hubs and Four Controller Units Configured as Two
Partner Groups
Dual Hosts With Two Hubs and Eight Controller
Units Configured as Four Partner Groups
FIGURE 4-11 shows two application hosts connected through FC-AL cables to two hubs
and eight Sun StorEdge T3+ arrays forming four partner groups. This multi-initiator
configuration can be used for footprint and I/O throughput.
This configuration is a recommended enterprise configuration for RAS functionality
because the controller is not a single point of failure.
Note – There are no hub port position dependencies when connecting Sun StorEdge
T3 and T3+ arrays to a hub. An array can be connected to any available port on the
hub.
When configuring more than one partner group or a single array to share a single
FC-AL loop, as with a hub, array target addresses need to be set to unique values.
Assign the array target address using the port set command. These target
addresses can be any number between 1 and 125. At the factory, the array target
addresses are set starting with target address 1 for the bottom array and continuing
to the top array. Use the port list command to verify that all arrays have a
unique target address. Refer to Appendix A of the Sun StorEdge T3 and T3+ Array Administrator’s Guide for further details.
The following three parameters must be set on the master controller unit, as follows:
■ mp_support = rw or mpxio
■ cache mode = auto
■ cache mirroring = auto
Note – For information on setting these parameters, refer to the Sun StorEdge T3 and
T3+ Array Administrator’s Guide.
Host-based multipathing software is required for this configuration.
FIGURE 4-11 Dual Hosts With Two Hubs and Eight Controller Units Configured as Four
Partner Groups
Switch Host Connection
This section contains the following example configurations:
■ “Dual Hosts With Two Switches and Two Controller Units” on page 46
■ “Dual Hosts With Two Switches and Eight Controller Units” on page 48
Dual Hosts With Two Switches and Two
Controller Units
FIGURE 4-12 shows two application hosts connected through FC-AL cables to two
switches and two Sun StorEdge T3+ arrays. This multi-initiator configuration can be
used for footprint and I/O throughput.
Note – This configuration is not recommended for RAS functionality because the
controller is a single point of failure.
Evaluate the following limitations before proceeding with this configuration:
■ Avoid the risk caused by any array or data path single point of failure by using
host-based mirroring software such as VERITAS Volume Manager or Solaris Volume
Manager.
■ When configuring more than a single array to share a single FC-AL loop, as with
a hub, array target addresses need to be set to unique values.
Each array needs to be assigned a unique target address using the port set
command. These target addresses can be any number between 1 and 125. At the
factory, the array target addresses are set starting with target address 1 for the
bottom array and continuing to the top array. Use the port list command to
verify that all arrays have a unique target address. Refer to Appendix A of the Sun StorEdge T3 and T3+ Array Administrator’s Guide for further details.
The following two parameters must be set on the master controller unit, as follows:
■ cache mode = auto
■ cache mirroring = auto
Note – For information on setting these parameters, refer to the Sun StorEdge T3 and
T3+ Array Administrator’s Guide.
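After cabling, each application host can confirm that the arrays are visible through the switches. The following host-side sketch uses standard Solaris commands and is an illustration only; it is not a procedure from this guide.

    # probe for Fibre Channel devices reachable from this host
    luxadm probe
    # list all disks, including the array LUNs, without entering the format menu
    format < /dev/null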
FIGURE 4-12 Dual Hosts With Two Switches and Two Controller Units
Dual Hosts With Two Switches and Eight
Controller Units
FIGURE 4-13 shows two application hosts connected through FC-AL cables to two
switches and eight Sun StorEdge T3+ arrays. This multi-initiator configuration can
be used for footprint and I/O throughput.
Note – This configuration is not recommended for RAS functionality because the
controller is a single point of failure.
The following limitations should be evaluated when proceeding with this
configuration:
■ Avoid the risk caused by any array or data path single point of failure by using
host-based mirroring software such as VERITAS Volume Manager or Solaris Logical
Volume Manager.
■ When configuring more than a single array to share a single FC-AL loop, as with
a hub, array target addresses need to be set to unique values.
Each array needs to be assigned a unique target address using the port set
command. These target addresses can be any number between 1 and 125. At the
factory, the array target addresses are set starting with target address 1 for the
bottom array and continuing to the top array. Use the port list command to
verify that all arrays have a unique target address. Refer to Appendix A of the Sun StorEdge T3 and T3+ Array Administrator’s Guide for further details.
The following two parameters must be set on the master controller unit, as follows:
■ cache mode = auto
■ cache mirroring = auto
Note – For information on setting these parameters, refer to the Sun StorEdge T3 and
T3+ Array Administrator’s Guide.
FIGURE 4-13 Dual Hosts With Two Switches and Eight Controller Units
CHAPTER 5
Host Connections
This chapter describes the host bus adapters (HBAs) that are supported by Sun
StorEdge T3 and T3+ arrays:
■ “Sun Enterprise SBus+ and Graphics+ I/O Boards” on page 52
■ “Sun StorEdge PCI FC-100 Host Bus Adapter” on page 53
■ “Sun StorEdge SBus FC-100 Host Bus Adapter” on page 54
■ “Sun StorEdge PCI Single Fibre Channel Network Adapter” on page 55
■ “Sun StorEdge PCI Dual Fibre Channel Network Adapter” on page 56
■ “Sun StorEdge CompactPCI Dual Fibre Channel Network Adapter” on page 57
Sun Enterprise SBus+ and Graphics+ I/O Boards
The SBus+ and Graphics+ I/O boards each provide mounting for two Gigabit Interface
Converters (GBICs). For more detailed information about these I/O boards, refer to the
Sun Enterprise 6x00/5x00/4x00/3x00 Systems SBus+ and Graphics+ I/O Boards
Installation Guide, part number 805-2704.
FIGURE 5-1 shows an Enterprise 6x00/5x00/4x00/3x00 SBus+ I/O board.
FIGURE 5-1 Sun Enterprise 6x00/5x00/4x00/3x00 SBus+ I/O Board
System Requirements
Your system must meet the following hardware and software requirements:
■ Sun Enterprise 6x00/5x00/4x00/3x00 system
■ An available I/O board slot
■ OpenBoot™ PROM, version 3.2.10 or later
■ A release of the Solaris operating environment that supports this board. The first
release that supports this board is the Solaris 2.6 operating environment.
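As a quick host-side sketch for checking requirements such as these, the installed Solaris release and OpenBoot PROM version can be displayed with standard Solaris commands; this is an illustration only and is not specific to this board.

    # report the Solaris release (5.6 corresponds to the Solaris 2.6 operating environment)
    uname -r
    # report the OpenBoot PROM version
    prtconf -V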
Sun StorEdge PCI FC-100 Host Bus
Adapter
The Sun StorEdge PCI FC-100 host bus adapter is a 33-MHz, 100 Mbytes/second,
single-loop Fibre Channel PCI host bus adapter with an onboard GBIC. This host bus
adapter is PCI Version 2.1-compliant. For more detailed information about this
product, refer to the Sun StorEdge PCI FC-100 Host Adapter Installation Manual, part
number 805-3682.
FIGURE 5-2 shows a Sun StorEdge PCI FC-100 host bus adapter.
FIGURE 5-2 Sun StorEdge PCI FC-100 Host Bus Adapter
System Requirements
Your system must meet the following hardware and software requirements:
■ An available PCI port
■ A release of the Solaris operating environment that supports this board. The first
release that supports this board is the Solaris 2.6 operating environment.
Sun StorEdge SBus FC-100 Host Bus
Adapter
The Sun StorEdge SBus FC-100 host bus adapter is a single-width Fibre Channel
SBus card with a Sun Serial Optical Channel (SOC+) ASIC (application-specific
integrated circuit). You can connect up to two loops to each card, using hot-pluggable GBICs. For more detailed information about this product, refer to the Sun StorEdge SBus FC-100 Host Adapter Installation and Service Manual, part number
802-7572.
FIGURE 5-3 shows a Sun StorEdge SBus FC-100 host bus adapter.
FIGURE 5-3 Sun StorEdge SBus FC-100 Host Bus Adapter
System Requirements
Your system must meet the following hardware and software requirements:
■ An available SBus port
■ A release of the Solaris operating environment that supports this board. The first
release that supports this board is the Solaris 2.6 operating environment.
Sun StorEdge PCI Single Fibre Channel
Network Adapter
The Sun StorEdge PCI Single Fibre Channel network adapter is a Fibre Channel PCI
card with one onboard optical receiver. This network adapter is PCI Version
2.1-compliant. For more detailed information about this product, refer to the Sun
StorEdge PCI Single Fibre Channel Network Adapter Installation Guide, part number
806-7532-xx.
FIGURE 5-4 shows a Sun StorEdge PCI Single Fibre Channel network adapter.
FIGURE 5-4 Sun StorEdge PCI Single Fibre Channel Network Adapter
System Requirements
Your system must meet the following hardware and software requirements:
■ An available PCI port
■ A release of the Solaris operating environment that supports this board. The first
release that supports this board is the Solaris 7 11/99 operating environment.
Sun StorEdge PCI Dual Fibre Channel
Network Adapter
The Sun StorEdge PCI Dual Fibre Channel network adapter is a Fibre Channel PCI
card with two onboard optical transceivers. This network adapter is PCI Version
2.1-compliant. For more detailed information about this product, refer to the Sun
StorEdge PCI Dual Fibre Channel Network Adapter Installation Guide, part number
806-4199-xx.
FIGURE 5-5 shows a Sun StorEdge PCI Dual Fibre Channel network adapter.
FIGURE 5-5 Sun StorEdge PCI Dual Fibre Channel Network Adapter
System Requirements
Your system must meet the following hardware and software requirements:
■ An available PCI slot
■ A release of the Solaris operating environment that supports this board. The first
release that supports this board is the Solaris 7 11/99 operating environment.
Sun StorEdge CompactPCI Dual Fibre
Channel Network Adapter
The Sun StorEdge CompactPCI Dual Fibre Channel network adapter has two 1-Gbit
Fibre Channel ports on a cPCI card. For more detailed information about this
product, refer to the Sun StorEdge CompactPCI Dual Fibre Channel Network Adapter
Installation Guide, part number 816-0241-xx.
FIGURE 5-6 shows a Sun StorEdge CompactPCI Dual Fibre Channel network adapter.
FIGURE 5-6 Sun StorEdge CompactPCI Dual Fibre Channel Network Adapter
System Requirements
Your system must meet the following hardware and software requirements:
■ An available cPCI port
■ OpenBoot PROM version 5.1 or later
■ Solaris 8 operating environment
CHAPTER 6
Array Cabling
This chapter describes the array configurations supported by the Sun StorEdge T3
and T3+ arrays, and it includes the following sections:
■ “Overview of Array Cabling” on page 59
■ “Workgroup Configurations” on page 62
■ “Enterprise Configurations” on page 63
Overview of Array Cabling
Sun StorEdge T3 and T3+ arrays have the following connections:
■ One FC-AL interface to the application host
■ One Ethernet interface to the management host (via a LAN) for administration
purposes
■ One serial interface to be used for service tasks by qualified service personnel
only
■ Interconnect ports for configuring arrays into partner groups
Data Path
For the data path (FC-AL) connection, there are three ways that the array can
connect to the host:
■ Direct attached mode to the data host
■ Hub connection, where the FC-AL from the array is connected to a hub on the
same network as the data host
■ Switch connection, where the FC-AL from the array is connected to a switch on
the same network as the data host.
Administration Path
For the administration path, each controller unit has an Ethernet connector. For each
installed controller, an Ethernet connection and IP address are required. The
administration server uses this link to set up and manage the arrays using Sun
StorEdge Component Manager software.
Note – In a partner group, only one of the two Ethernet connections is active at any
time. The second Ethernet connection is used for redundancy.
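As an illustrative sketch of setting up the administration path, the array can obtain its IP address through RARP from a host on the same subnet (see the glossary). The host name, MAC address, and IP address below are hypothetical, and the exact procedure is described in the array installation documentation.

    # /etc/ethers on the RARP server: the array's MAC address and a name (example values)
    8:0:20:7d:93:7e    t3-array-1
    # /etc/hosts: the IP address assigned to that name (example value)
    192.168.1.20       t3-array-1
    # confirm that the RARP daemon is running so the array obtains its address at boot
    ps -ef | grep in.rarpd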
Connecting Partner Groups
The array also has two interconnect cards that are used to connect the array in a
partner group. These interconnect cards have two ports (in and out).
Note – Partner groups are not supported in Sun Cluster 2.2.
Note – In a workgroup (standalone) configuration, these interconnect cards cannot
be used to connect to the administrative console or to the application host. These
interconnect cards are used solely for ensuring redundancy and failover mechanisms
in partner groups.
FIGURE 6-1 and FIGURE 6-2 show a Sun StorEdge T3 and T3+ array with a controller
card and interconnect cards.
FIGURE 6-1 Sun StorEdge T3 Array Controller Card and Interconnect Cards
(The figure shows the controller card, with its serial port (RJ-11), 10BASE-T Ethernet
port (RJ-45), and FC-AL data connection port, and the two interconnect cards. Note:
the FC-AL port requires an MIA for cable connection.)
FIGURE 6-2 Sun StorEdge T3+ Array Controller Card and Interconnect Cards
(The figure shows the controller card, with its serial port (RJ-45), 10/100BASE-T
Ethernet port (RJ-45), and FC-AL data connection port (LC-SFF), and the two
interconnect cards.)
Workgroup Configurations
The following configuration rules apply to array workgroup configurations
(FIGURE 6-3):
■ The interconnect ports, which are used only in partner group configurations,
cannot be used for host connections.
■ The FC-AL connection provides a data path to the application host.
■ The Ethernet connection provides a link to the management host.
■ The serial port is used for diagnostics and service by qualified service personnel
only.
■ Fiber-optic cable lengths between 2 and 500 meters, using short-wave laser and
50-micron fiber-optic cable, are supported.
FIGURE 6-3 Array Workgroup Configuration
Enterprise Configurations
The following configuration rules apply to enterprise (partner group)
configurations (FIGURE 6-4):
■ The interconnect ports, which are used only in enterprise configurations, cannot
be used for host connections.
■ The FC-AL connection provides a data path to the application host.
■ The Ethernet connection provides a link to the management host.
■ The serial port is used for diagnostics and service by qualified service personnel
only.
■ Fiber-optic cable lengths between 2 and 500 meters, using short-wave laser and
50-micron fiber-optic cable, are supported.
This configuration is optimal because it provides full redundancy to the application
hosts. Failover mechanisms are provided within the arrays, but the application host
has to provide data-path failover mechanisms, such as Dynamic Multi-Pathing (DMP)
from VERITAS Volume Manager or Sun Enterprise Server Alternate Pathing (AP).
FIGURE 6-4 Enterprise Configuration
Glossary
A
administrative domain
Partner groups (interconnected controller units) that share common administration
through a master controller.
alternate master controller unit
Also called “alternate master unit,” the secondary array unit in a partner group
that provides failover capability from the master controller unit.
Alternate Pathing (AP)
A mechanism that reroutes data to the other array controller in a partner group
upon failure in the host data path. Alternate Pathing requires special software
to perform this function.
auto cache mode
The default cache mode for the Sun StorEdge T3 and T3+ array. In a fully
redundant configuration, cache is set to write-behind mode. In a nonredundant
configuration, cache is set to write-through mode. Read caching is always
performed.
auto disable
The Sun StorEdge T3 and T3+ array default that automatically disables a disk
drive that has failed.
B
buffering
Data that is being transferred between the host and the drives.
C
command-line interface (CLI)
The interface between the Sun StorEdge T3 and T3+ array’s pSOS operating
system and the user, in which the user types commands to administer the array.
controller unit
A Sun StorEdge T3 and T3+ array that includes a controller card. It can be used
as a standalone unit or configured with other Sun StorEdge T3 and T3+ arrays.
D
Dynamic Multi-Pathing (DMP)
A VERITAS Volume Manager feature that provides an Alternate Pathing
mechanism for rerouting data in the event of a controller failover.
E
enterprise configuration
One or more partner groups (pair of interconnected controller units) in a
system configuration.
erasable programmable read-only memory (EPROM)
Memory stored on the controller card; useful for stable storage for long periods
without electricity while still allowing reprogramming.
expansion unit
A Sun StorEdge T3 and T3+ array without a controller card. It must be
connected to a controller unit to be operational.
F
Fibre Channel Arbitrated Loop (FC-AL)
A 100 Mbyte/s serial channel that enables connection of multiple devices (disk
drives and controllers).
field-replaceable unit (FRU)
A component that is easily removed and replaced by a field service engineer or
a system administrator.
FLASH memory device (FMD)
A device on the controller card that stores EPROM firmware.
G
Gigabit Interface Converter (GBIC)
An adapter used on an SBus card to convert fiber-optic signal to copper.
gigabyte (GB or Gbyte)
One gigabyte is equal to one billion bytes (1 × 10^9).
graphical user interface (GUI)
A software interface that enables configuration and administration of the Sun
StorEdge T3 and T3+ array using a graphic application.
H
host bus adapter (HBA)
An adapter that resides on the host.
hot spare
A drive in a RAID 1 or RAID 5 configuration that contains no data and acts as
a standby in case another drive fails.
hot-swappable
The characteristic of a field-replaceable unit (FRU) to be removed and replaced
while the system remains powered on and operational.
I
input/output operations per second (IOPS)
A performance measurement of the transaction rate.
interconnect cable
An FC-AL cable with a unique switched-loop architecture that is used to
interconnect multiple Sun StorEdge T3 and T3+ arrays.
interconnect card
An array component that contains the interface circuitry and two connectors
for interconnecting multiple Sun StorEdge T3 and T3+ arrays.
L
LC
An industry standard name used to describe a connector standard. The Sun
StorEdge T3+ array uses an LC-SFF connector for the host FC-AL connection.
light-emitting diode (LED)
A device that converts electrical energy into light that is used to display
activity.
logical unit number (LUN)
One or more drives that can be grouped into a unit; also called a volume.
M
master controller unit
Also called a “master unit,” the main controller unit in a partner-group
configuration.
media access control (MAC) address
A unique address that identifies a storage location or a device.
media interface adapter (MIA)
An adapter that converts fiber-optic light signals to copper.
megabyte (MB or Mbyte)
One megabyte is equal to one million bytes (1 × 10^6).
megabytes per second (MB/s)
A performance measurement of the sustained data transfer rate.
multi-initiator configuration
A supported array configuration that connects two hosts to one or more array
administrative domains through hub or switch connections.
P
parity
Additional information stored with data on a disk that enables the controller to
rebuild data after a drive failure.
partner group
A pair of interconnected controller units. Expansion units interconnected to the
pair of controller units can also be part of the partner group.
power and cooling unit (PCU)
A FRU component in the Sun StorEdge T3 and T3+ array. It contains a power
supply, cooling fans, and an integrated UPS battery. There are two power and
cooling units in a Sun StorEdge T3 and T3+ array.
pSOS
An operating system built into the Sun StorEdge T3 and T3+ array RAID
Controller firmware, which provides interfaces between the mounted RAID
volumes and the database environment.
Q
quiesce
To halt all drive activity.
R
read caching
Data held for future retrieval, to reduce disk I/O as much as possible.
redundant array of independent disks (RAID)
A configuration in which multiple drives are combined into a single virtual
drive to improve performance and reliability.
reliability, availability, serviceability (RAS)
A term to describe product features that include high availability, easily
serviced components, and dependable operation.
reverse address resolution protocol (RARP)
A utility in the Solaris operating environment that enables automatic
assignment of the array IP address from the host.
S
SC
An industry standard name used to describe a connector standard.
Simple Network Management Protocol (SNMP)
A network management protocol designed to give a user the capability to
remotely manage a computer network.
small form factor (SFF)
An industry standard describing a type of connector. An LC-SFF connector is
used for the host FC-AL connection to the Sun StorEdge T3+ array.
synchronous dynamic random access memory (SDRAM)
A form of dynamic random access memory (DRAM) that can run at higher
clock speeds than conventional DRAM.
system area
Located on the disk drive label, the space that contains configuration data, boot
firmware, and file-system information.
U
uninterruptible power source (UPS)
A component within the power and cooling unit. It supplies power from a
battery in the case of an AC power failure.
V
volume
Also called a logical unit or LUN, a volume is one or more drives that can be
grouped into a unit for data storage.
W
workgroup configuration
A standalone array connected to a host system.
world wide name (WWN)
A number used to identify array volumes in both the array system and Solaris
environment.
write caching
Data used to build up stripes of data, eliminating the read-modify-write
overhead. Write caching improves performance for applications that are
writing to disk.
Index
A
administration path, 60
alternate master controller unit
in a partner group, 7
Alternate Pathing (AP)
in configuration recommendations, 9
in partner group configuration, 30
auto cache mode, 14
C
cabling overview, 59
cache
allocation, configuring, 16
for improving performance, 13
mirrored, enabling, 16
setting cache modes, 14
cache segment, 15
cluster support
See Sun Cluster 2.2 support
configurations
direct host
single host with eight controllers, 32 to 33
single host with four controllers, 31
single host with one controller, 28
single host with two controllers, 29
hot spare, 19
hub host
dual hosts with eight controllers, 40 to 41, 44
to 45
dual hosts with four controllers, 38 to 39, 42 to
43
single host with eight controllers, 36 to 37
single host with four controllers, 34 to 35
restrictions and recommendations, 8
switch host
dual hosts with eight controllers, 48 to 49
dual hosts with two controllers, 46 to 47
connections
Ethernet, 2, 3
FC-AL, 3, 6
serial, 2, 3
controller card
Sun StorEdge T3 array controller, 3
Sun StorEdge T3+ array controller, 4
controller units, 2
D
data block size
definition, 15
data path, 59
Dynamic Multi-Pathing (DMP)
in configuration recommendations, 9
in partner group configuration, 30
E
enterprise configuration
configuration rules, 63
description, 6
See partner group
Ethernet
administration path, 60
connection, 2, 3
expansion units, 2
F
FC-AL
connections, 6
data path, 59
Fibre Channel-Arbitrated Loop (FC-AL)
See FC-AL
H
HBA
SOC+, 54
Sun StorEdge CompactPCI Dual Fibre Channel
network adapter, 57
Sun StorEdge PCI Dual Fibre Channel network
adapter, 56
Sun StorEdge PCI FC-100, 53
Sun StorEdge PCI Single Fibre Channel network
adapter, 55
Sun StorEdge SBus FC-100, 54
hot spare
default value, 22
determining whether to use, 18
I
I/O boards
Sun Enterprise SBus+ and Graphics+, 52
interconnect cards
description, 4 to 5
in partner groups, 60
L
logical unit (LUN)
See LUNs
LUNs
and applications, 17
creating and labeling, 19
default value, 22
definition, 16
determining how many are needed, 17
guidelines for configuring, 17
reconstruction rate, setting, 19
viewing, 16
M
MAC address, 8
master controller unit
in a partner group, 7, 25
parameters controlled by, 9
media access control (MAC) address
See MAC address
N
network adapter
See HBA
P
parameters, tailored to I/O workload, 9
partner groups
configuration rules, 63
creating, 26
description, 6
direct host
single host with eight controllers, 32 to 33
single host with four controllers, 31
single host with two controllers, 29
how they work, 25
hub host
dual hosts with eight controllers, 44 to 45
dual hosts with four controllers, 42 to 43
single host with eight controllers, 36 to 37
single host with four controllers, 34 to 35
multipathing software, 25
sharing parameter settings, 9
using AP, 30
using DMP, 30
using multipathing software, 30
platforms supported, 9
R
RAID
and applications, 18
configuring for redundancy, 20
default level, 22
determining level required, 18
levels, defined, 21
S
single controller configuration, 6
SOC+ HBA, 54
software supported, 10
stripe unit size
See data block size
Sun Cluster 2.2 support, 10
Sun Enterprise SBus+ and Graphics+ I/O boards
See I/O boards
Sun StorEdge CompactPCI Dual Fibre Channel
network adapter, 57
Sun StorEdge PCI Dual Fibre Channel network
adapter, 56
Sun StorEdge PCI FC-100 HBA, 53
Sun StorEdge PCI Single Fibre Channel network
adapter, 55
Sun StorEdge SBus FC-100 HBA, 54
Sun StorEdge T3 array controller card and ports, 3
Sun StorEdge T3 array overview, 1 to 6
Sun StorEdge T3+ array controller card and