IBM TotalStorage DS6000
Host Systems Attachment Guide
GC26-7680-03

Note: Before using this information and the product it supports, read the information in the Safety and environmental notices and Notices sections.
Fourth Edition (May 2005)
This edition replaces GC26-7680-02 and all previous versions of GC26-7680.
© Copyright International Business Machines Corporation 2004, 2005. All rights reserved.
US Government Users Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . .xi
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
About this guide . . . . . . . . . . . . . . . . . . . . . . . .xv
Safety and environmental notices . . . . . . . . . . . . . . . . xvii
Safety notices . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Environmental notices . . . . . . . . . . . . . . . . . . . . . . xvii
Product recycling . . . . . . . . . . . . . . . . . . . . . . . xvii
Disposing of products . . . . . . . . . . . . . . . . . . . . . xvii
Conventions used in this guide . . . . . . . . . . . . . . . . . . . xvii
Related information . . . . . . . . . . . . . . . . . . . . . . . xviii
DS6000 series library . . . . . . . . . . . . . . . . . . . . . xviii
Other IBM publications . . . . . . . . . . . . . . . . . . . . . xix
Ordering IBM publications . . . . . . . . . . . . . . . . . . . xxiv
Web sites . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
How to send your comments . . . . . . . . . . . . . . . . . . . xxv
Summary of Changes for GC26-7680-03 IBM TotalStorage DS6000 Host
Systems Attachment Guide . . . . . . . . . . . . . . . . . . xxvii
Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . .1
Overview of the DS6000 series models . . . . . . . . . . . . . . . .1
DS6800 (Model 1750-511) . . . . . . . . . . . . . . . . . . . .2
DS6000 expansion enclosure (Model 1750-EX1) . . . . . . . . . . . .3
Performance features . . . . . . . . . . . . . . . . . . . . . . .4
Data availability features . . . . . . . . . . . . . . . . . . . . . .5
RAID implementation . . . . . . . . . . . . . . . . . . . . . .5
Overview of Copy Services . . . . . . . . . . . . . . . . . . . .6
FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . .6
Subsystem Device Driver for open-systems . . . . . . . . . . . . . .8
Multiple allegiance for FICON hosts . . . . . . . . . . . . . . . . .8
DS6000 Interfaces . . . . . . . . . . . . . . . . . . . . . . . .8
IBM TotalStorage DS Storage Manager . . . . . . . . . . . . . . .8
DS Open application programming interface . . . . . . . . . . . . .9
DS command-line interface . . . . . . . . . . . . . . . . . . . .9
Software Requirements . . . . . . . . . . . . . . . . . . . . . .10
Host systems that DS6000 series supports . . . . . . . . . . . . . .10
Fibre channel host attachments . . . . . . . . . . . . . . . . . .10
Attaching a DS6000 series to an open-systems host with fibre channel
adapters . . . . . . . . . . . . . . . . . . . . . . . . .11
FICON-attached S/390 and zSeries hosts that the storage unit supports . . .11
General information about attaching to an open-systems host with fibre-channel
adapters . . . . . . . . . . . . . . . . . . . . . . . . . .12
Fibre-channel architecture . . . . . . . . . . . . . . . . . . . .12
Fibre-channel cables and adapter types . . . . . . . . . . . . . . .15
Fibre-channel node-to-node distances . . . . . . . . . . . . . . .15
LUN affinity for fibre-channel attachment . . . . . . . . . . . . . .16
Targets and LUNs for fibre-channel attachment . . . . . . . . . . . .16
LUN access modes for fibre-channel attachment . . . . . . . . . . .16
Fibre-channel storage area networks . . . . . . . . . . . . . . . .17
Chapter 2. Attaching to an Apple Macintosh server . . . . . . . . . . .19
Supported fibre-channel adapters for the Apple Macintosh server . . . . . .19
Chapter 3. Attaching to a Fujitsu PRIMEPOWER host system . . . . . .21
Supported fibre-channel adapters for PRIMEPOWER . . . . . . . . . . .21
Fibre-channel attachment requirements for PRIMEPOWER . . . . . . . .21
Installing the Emulex adapter card for a PRIMEPOWER host system . . . . .21
Downloading the Emulex adapter driver for a PRIMEPOWER host system . . .22
Installing the Emulex adapter driver for a PRIMEPOWER host system . . . .22
Configuring host device drivers for PRIMEPOWER . . . . . . . . . . .22
Parameter settings for the Emulex LP9002L adapter . . . . . . . . . . .24
Setting parameters for Emulex adapters . . . . . . . . . . . . . . .26
Chapter 4. Attaching to a Hewlett-Packard AlphaServer Tru64 UNIX host 27
Attaching to an HP AlphaServer Tru64 UNIX host system with fibre-channel
adapters . . . . . . . . . . . . . . . . . . . . . . . . . .27
Supported fibre-channel adapters for the HP AlphaServer Tru64 UNIX host
system . . . . . . . . . . . . . . . . . . . . . . . . . .27
Supported operating system levels for fibre-channel attachment to an HP
Tru64 UNIX host . . . . . . . . . . . . . . . . . . . . . .27
Fibre-channel Tru64 UNIX attachment requirements . . . . . . . . . .27
Fibre-channel Tru64 UNIX attachment considerations . . . . . . . . . .28
Supporting the AlphaServer Console for Tru64 UNIX . . . . . . . . . .28
Attaching the HP AlphaServer Tru64 UNIX host to a storage unit using
fibre-channel adapters . . . . . . . . . . . . . . . . . . . .29
Installing the KGPSA-xx adapter card in a Tru64 UNIX host system . . . .30
Setting the mode for the KGPSA-xx host adapter . . . . . . . . . . .31
Setting up the storage unit to attach to an HP AlphaServer Tru64 UNIX host
system with fibre-channel adapters . . . . . . . . . . . . . . . .32
Configuring the storage for fibre-channel Tru64 UNIX hosts . . . . . . .36
Removing persistent reserves for Tru64 UNIX 5.x . . . . . . . . . . . .37
Limitations for Tru64 UNIX . . . . . . . . . . . . . . . . . . . .40
Chapter 5. Attaching to a Hewlett-Packard AlphaServer OpenVMS host 41
Supported fibre-channel adapters for the HP AlphaServer OpenVMS host
system . . . . . . . . . . . . . . . . . . . . . . . . . . .41
Fibre-channel OpenVMS attachment requirements . . . . . . . . . . . .41
Fibre-channel OpenVMS attachment considerations . . . . . . . . . . .41
Supported OpenVMS feature codes . . . . . . . . . . . . . . . . .42
Supported microcode levels for the HP OpenVMS host system . . . . . . .42
Supported switches for the HP OpenVMS host system . . . . . . . . . .42
Attaching the HP AlphaServer OpenVMS host to a storage unit using
fibre-channel adapters . . . . . . . . . . . . . . . . . . . . .42
Supported operating system levels for fibre-channel attachment to an HP
OpenVMS host . . . . . . . . . . . . . . . . . . . . . . .42
Confirming the installation of the OpenVMS operating system . . . . . . .43
Installing the KGPSA-xx adapter card in an OpenVMS host system . . . . .44
Setting the mode for the KGPSA-xx host adapter in an OpenVMS host system 44
Setting up the storage unit to attach to an HP AlphaServer OpenVMS host
system with fibre-channel adapters . . . . . . . . . . . . . . . . .45
Adding or modifying AlphaServer fibre-channel connections for the OpenVMS
host . . . . . . . . . . . . . . . . . . . . . . . . . . .45
Defining OpenVMS fibre-channel adapters to the storage unit . . . . . . .45
Configuring fibre-channel host adapter ports for OpenVMS . . . . . . . .46
OpenVMS fibre-channel considerations . . . . . . . . . . . . . . .46
OpenVMS UDID Support . . . . . . . . . . . . . . . . . . . .46
OpenVMS LUN 0 - Command Control LUN . . . . . . . . . . . . .48
Confirming fibre-channel switch connectivity for OpenVMS . . . . . . . .48
Confirming fibre-channel storage connectivity for OpenVMS . . . . . . .49
OpenVMS World Wide Node Name hexadecimal representations . . . . .50
Verifying the fibre-channel attachment of the storage unit volumes for
OpenVMS . . . . . . . . . . . . . . . . . . . . . . . . .50
Configuring the storage for fibre-channel OpenVMS hosts . . . . . . . . .51
OpenVMS fibre-channel restrictions . . . . . . . . . . . . . . . . .51
Troubleshooting fibre-channel attached volumes for the OpenVMS host system 52
Chapter 6. Attaching to a Hewlett-Packard Servers (HP-UX) host . . . . .55
Attaching with fibre-channel adapters . . . . . . . . . . . . . . . . .55
Supported fibre-channel adapters for HP-UX hosts . . . . . . . . . . .55
Fibre-channel attachment requirements for HP-UX hosts . . . . . . . .55
Installing the fibre-channel adapter drivers for HP-UX 11i and HP-UX 11iv2 56
Setting the queue depth for the HP-UX operating system with fibre-channel
adapters . . . . . . . . . . . . . . . . . . . . . . . . .56
Configuring the storage unit for clustering on the HP-UX 11iv2 operating system 56
Chapter 7. Attaching to an IBM iSeries host . . . . . . . . . . . . .59
Attaching with fibre-channel adapters to the IBM iSeries host system . . . . .59
Supported fibre-channel adapter cards for IBM iSeries hosts . . . . . . .59
Fibre-channel attachment requirements for IBM iSeries hosts . . . . . . .59
Fibre-channel attachment considerations for IBM iSeries hosts . . . . . .59
Host limitations for IBM iSeries hosts . . . . . . . . . . . . . . . .60
IBM iSeries hardware . . . . . . . . . . . . . . . . . . . . .61
IBM iSeries software . . . . . . . . . . . . . . . . . . . . . .61
General information for configurations for IBM iSeries hosts . . . . . . .61
Recommended configurations for IBM iSeries hosts . . . . . . . . . .62
Running the Linux operating system on an IBM i5 server . . . . . . . . .64
Supported fibre-channel adapters for IBM i5 servers running the Linux
operating system . . . . . . . . . . . . . . . . . . . . . .64
Running the Linux operating system in a guest partition on an IBM i5 server 65
Planning to run Linux in a hosted or nonhosted guest partition . . . . . .65
Creating a guest partition to run Linux . . . . . . . . . . . . . . .66
Managing Linux in a guest partition . . . . . . . . . . . . . . . .67
Ordering a new server or upgrading an existing server to run a guest partition 68
Chapter 8. Attaching to an IBM NAS Gateway 500 host . . . . . . . . .69
Supported adapter cards for IBM NAS Gateway 500 hosts . . . . . . . . .69
Finding the worldwide port name . . . . . . . . . . . . . . . . . .69
Obtaining WWPNs using a Web browser . . . . . . . . . . . . . .69
Obtaining WWPNs through the command-line interface . . . . . . . . .69
Multipathing support for NAS Gateway 500 . . . . . . . . . . . . . .71
Multipath I/O and SDD considerations for NAS Gateway 500 . . . . . . .71
Host attachment multipathing scripts . . . . . . . . . . . . . . . .71
Multipathing with the Subsystem Device Driver . . . . . . . . . . . .73
Chapter 9. Attaching to an IBM RS/6000 or IBM eServer pSeries host . . .75
Installing the 1750 host attachment package on IBM pSeries AIX hosts . . . .75
Preparing for installation of the host attachment package on IBM pSeries AIX
hosts . . . . . . . . . . . . . . . . . . . . . . . . . . .75
Installing the host attachment package on IBM pSeries AIX hosts . . . . .75
Upgrading the host attachment package on IBM pSeries AIX hosts . . . . .76
Attaching with fibre-channel adapters . . . . . . . . . . . . . . . . .76
Supported fibre-channel adapter cards for IBM pSeries hosts . . . . . . .76
Fibre-channel attachment requirements for IBM pSeries hosts . . . . . .77
Fibre-channel attachment considerations for IBM pSeries hosts . . . . . .77
Verifying the configuration of the storage unit for fibre-channel adapters on
the AIX host system . . . . . . . . . . . . . . . . . . . . .77
Making SAN changes for IBM pSeries hosts . . . . . . . . . . . . .78
Support for fibre-channel boot . . . . . . . . . . . . . . . . . . .79
Prerequisites for setting up the IBM pSeries host as a fibre-channel boot
device . . . . . . . . . . . . . . . . . . . . . . . . . .79
Fibre-channel boot considerations for IBM pSeries hosts . . . . . . . .79
Supported IBM RS/6000 or IBM pSeries hosts for fibre-channel boot . . . .79
Supported levels of firmware for fibre-channel boot on IBM pSeries hosts 80
Supported levels of fibre-channel adapter microcode on IBM pSeries hosts 80
Installation mechanisms that PSSP supports for boot install from
fibre-channel SAN DASD on IBM pSeries hosts . . . . . . . . . . .80
Support for disk configurations for RS/6000 for a fibre-channel boot install 80
Support for fibre-channel boot when a disk subsystem is attached on IBM
pSeries hosts . . . . . . . . . . . . . . . . . . . . . . .80
Attaching to multiple RS/6000 or pSeries hosts without the HACMP host system 81
Considerations for attaching to multiple RS/6000 or pSeries hosts without the
HACMP host system . . . . . . . . . . . . . . . . . . . . .81
Saving data on the storage unit when attaching multiple RS/6000 or pSeries
host systems to the storage unit . . . . . . . . . . . . . . . . .81
Restoring data on the storage unit when attaching multiple RS/6000 or
pSeries host systems to the storage unit . . . . . . . . . . . . .82
Running the Linux operating system on an IBM pSeries host . . . . . . . .82
Attachment considerations for running the Linux operating system on an IBM
pSeries host . . . . . . . . . . . . . . . . . . . . . . . .82
Hardware requirements for the Linux operating system on the pSeries host 83
Software requirement for the Linux operating system on the pSeries host 83
Preparing to install the Subsystem Device Driver for the Linux operating
system on the pSeries host . . . . . . . . . . . . . . . . . .84
Installing the Subsystem Device Driver on the pSeries host running the Linux
operating system . . . . . . . . . . . . . . . . . . . . . .84
Upgrading the Subsystem Device Driver for the Linux operating system on
the pSeries host . . . . . . . . . . . . . . . . . . . . . .85
Verifying the Subsystem Device Driver for the Linux operating system on the
pSeries host . . . . . . . . . . . . . . . . . . . . . . . .85
Configuring the Subsystem Device Driver . . . . . . . . . . . . . .86
Migrating with AIX 5L Version 5.2 . . . . . . . . . . . . . . . . . .86
Storage unit migration issues when you upgrade to AIX 5L Version 5.2
maintenance release 5200-01 . . . . . . . . . . . . . . . . .86
Storage unit migration issues when you remove the AIX 5L Version 5.2 with
the 5200-01 Recommended Maintenance package support . . . . . . .88
Chapter 10. Attaching to an IBM S/390 or IBM eServer zSeries host . . . .89
Migrating from a FICON bridge to a native FICON attachment . . . . . . .89
Migrating from a FICON bridge to a native FICON attachment on zSeries
hosts: FICON bridge overview . . . . . . . . . . . . . . . . .89
Migrating from a FICON bridge to a native FICON attachment on zSeries
hosts: FICON bridge configuration . . . . . . . . . . . . . . . .89
Migrating from a FICON bridge to a native FICON attachment on zSeries
hosts: Mixed configuration . . . . . . . . . . . . . . . . . . .90
Migrating from a FICON bridge to a native FICON attachment on zSeries
hosts: Native FICON configuration . . . . . . . . . . . . . . . .91
Attaching with FICON adapters . . . . . . . . . . . . . . . . . . .92
Configuring the storage unit for FICON attachment on zSeries hosts . . . .92
Attachment considerations for attaching with FICON adapters on zSeries
hosts . . . . . . . . . . . . . . . . . . . . . . . . . . .92
Attaching to a FICON channel on a S/390 or zSeries host . . . . . . . .94
Registered state-change notifications (RSCNs) on zSeries hosts . . . . . .95
Linux for S/390 and zSeries . . . . . . . . . . . . . . . . . . . .96
Attaching a storage unit to an S/390 or zSeries host running Linux . . . . .96
Running Linux on an S/390 or zSeries host . . . . . . . . . . . . .96
Attaching FCP adapters on zSeries hosts . . . . . . . . . . . . . .97
Chapter 11. Attaching to an Intel host running Linux . . . . . . . . . 103
Supported adapter cards for an Intel host running Linux . . . . . . . . . 103
Attaching with fibre-channel adapters . . . . . . . . . . . . . . . . 103
Attachment requirements for an Intel host running Linux . . . . . . . . 103
Attachment considerations for an Intel host running Linux . . . . . . . . 104
Installing the Emulex adapter card for an Intel host running Linux . . . . . 105
Downloading the Emulex adapter driver for an Intel host running Linux 105
Installing the Emulex adapter driver for an Intel host running Linux . . . . 105
Installing the QLogic adapter card for an Intel host running Linux . . . . . 106
Downloading the current QLogic adapter driver for an Intel host running
Linux . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Installing the QLogic adapter driver for an Intel host running Linux . . . . 107
Defining the number of disk devices on Linux . . . . . . . . . . . . . 108
Configuring the storage unit . . . . . . . . . . . . . . . . . . . . 109
Configuring the storage unit for an Intel host running Linux . . . . . . . 109
Partitioning storage unit disks for an Intel host running Linux . . . . . . 109
Assigning the system ID to the partition for an Intel host running Linux 110
Creating and using file systems on the storage unit for an Intel host running
Linux . . . . . . . . . . . . . . . . . . . . . . . . . . 111
SCSI disk considerations for an Intel host running Linux . . . . . . . . .112
Manually adding and removing SCSI disks . . . . . . . . . . . . .112
LUN identification for the Linux host system . . . . . . . . . . . . .112
SCSI disk problem identification and resolution . . . . . . . . . . . .114
Support for fibre-channel boot . . . . . . . . . . . . . . . . . . .114
Creating a modules disk for SUSE LINUX Enterprise Server 9.0 . . . . .114
Installing Linux over the SAN without an IBM Subsystem Device Driver 114
Updating to a more recent module without an IBM Subsystem Device Driver 116
Installing Linux over the SAN with an IBM Subsystem Device Driver . . . .116
Chapter 12. Attaching to an Intel host running VMware ESX server . . . 123
Supported adapter cards for an Intel host running VMware ESX Server . . . 123
Attaching with fibre-channel adapters . . . . . . . . . . . . . . . . 123
Attachment requirements for an Intel host running VMware ESX server 123
Attachment considerations for an Intel host running VMware ESX Server 124
Installing the Emulex adapter card for an Intel host running VMware ESX
Server . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Installing the QLogic adapter card for an Intel host running VMware ESX
Server . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Defining the number of disk devices on VMware ESX Server . . . . . . . 126
SCSI disk considerations for an Intel host running VMware ESX server . . . 126
LUN identification for the VMware ESX console . . . . . . . . . . . 126
Disk device discovery on VMware ESX . . . . . . . . . . . . . . 129
Persistent binding . . . . . . . . . . . . . . . . . . . . . . 129
Configuring the storage unit . . . . . . . . . . . . . . . . . . . . 130
Configuring the storage unit for an Intel host running VMware ESX Server 130
Partitioning storage unit disks for an Intel host running VMware ESX Server 130
Creating and using VMFS on the storage unit for an Intel host running
VMware ESX Server . . . . . . . . . . . . . . . . . . . . 132
Copy Services considerations . . . . . . . . . . . . . . . . . . . 132
Chapter 13. Attaching to a Novell NetWare host . . . . . . . . . . . 135
Attaching with fibre-channel adapters . . . . . . . . . . . . . . . . 135
Attaching a Novell NetWare host with fibre-channel adapters . . . . . . 135
Fibre-channel attachment considerations for a Novell NetWare host . . . . 135
Installing the Emulex adapter card for a Novell NetWare host . . . . . . 135
Downloading the current Emulex adapter driver for a Novell NetWare host 136
Installing the Emulex adapter driver for a Novell NetWare host . . . . . . 136
Downloading the current QLogic adapter driver for a Novell NetWare host 136
Installing the QLogic QLA23xx adapter card for a Novell NetWare host 137
Installing the adapter drivers for a Novell NetWare host . . . . . . . . 138
Chapter 14. Attaching to an iSCSI Gateway host . . . . . . . . . . . 141
Attachment considerations . . . . . . . . . . . . . . . . . . . . 141
Attachment overview of the iSCSI Gateway host . . . . . . . . . . . 141
Attachment requirements for the iSCSI Gateway host . . . . . . . . . 141
Ethernet adapter attachment considerations for the iSCSI Gateway host 142
Configuring storage for the iSCSI Gateway host . . . . . . . . . . . . 142
iSCSI Gateway operation through the IP Service Module . . . . . . . . . 143
Chapter 15. Attaching to an IBM SAN File System . . . . . . . . . . 145
Attaching to an IBM SAN File System metadata server with fibre-channel
adapters . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Configuring a storage unit for attachment to the SAN File System metadata
node . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Chapter 16. Attaching to an IBM SAN Volume Controller host . . . . . . 147
Attaching to an IBM SAN Volume Controller host with fibre-channel adapters 147
Configuring the storage unit for attachment to the SAN Volume Controller host
system . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Chapter 17. Attaching an SGI host system . . . . . . . . . . . . . 149
Attaching an SGI host system with fibre-channel adapters . . . . . . . . 149
Fibre-channel attachment considerations for the SGI host system . . . . . . 149
Fibre-channel attachment requirements for the SGI host system . . . . . . 149
Checking the version of the IRIX operating system for the SGI host system 150
Installing a fibre-channel adapter card for the SGI host system . . . . . . . 150
Verifying the installation of a fibre-channel adapter card for SGI . . . . . . 150
Configuring the fibre-channel adapter drivers for SGI . . . . . . . . . . 151
Installing an optical cable for SGI in a switched-fabric topology . . . . . . . 151
Installing an optical cable for SGI in an arbitrated loop topology . . . . . . 151
Confirming switch connectivity for SGI . . . . . . . . . . . . . . . . 151
Displaying zoning information for the switch . . . . . . . . . . . . . . 152
Confirming storage connectivity . . . . . . . . . . . . . . . . . . 153
Confirming storage connectivity for SGI in a switched-fabric topology . . . 153
Confirming storage connectivity for SGI in a fibre-channel arbitrated loop
topology . . . . . . . . . . . . . . . . . . . . . . . . . 154
Configuring the storage unit for host failover . . . . . . . . . . . . . 154
Configuring the storage unit for host failover . . . . . . . . . . . . 154
Confirming the availability of failover . . . . . . . . . . . . . . . 155
Making a connection through a switched-fabric topology . . . . . . . . 155
Making a connection through an arbitrated-loop topology . . . . . . . . 156
Switching I/O operations between the primary and secondary paths . . . . 156
Configuring storage . . . . . . . . . . . . . . . . . . . . . . . 156
Configuration considerations . . . . . . . . . . . . . . . . . . 156
Configuring storage in a switched fabric topology . . . . . . . . . . . 157
Configuring storage in an arbitrated loop topology . . . . . . . . . . 159
Chapter 18. Attaching to a Sun host . . . . . . . . . . . . . . . . 163
Attaching with fibre-channel adapters . . . . . . . . . . . . . . . . 163
Supported fibre-channel adapters for Sun . . . . . . . . . . . . . 163
Fibre-channel attachment requirements for Sun . . . . . . . . . . . 163
Installing the Emulex adapter card for a Sun host system . . . . . . . . 164
Downloading the Emulex adapter driver for a Sun host system . . . . . . 165
Installing the Emulex adapter driver for a Sun host system . . . . . . . 165
Installing the AMCC PCI adapter card for Sun . . . . . . . . . . . . 166
Downloading the current AMCC PCI adapter driver for Sun . . . . . . . 166
Installing the AMCC PCI adapter driver for Sun . . . . . . . . . . . 166
Installing the AMCC SBUS adapter card for Sun . . . . . . . . . . . 166
Downloading the current AMCC SBUS adapter driver for Sun . . . . . . 166
Installing the AMCC SBUS adapter driver for Sun . . . . . . . . . . 167
Installing the QLogic adapter card in the Sun host . . . . . . . . . . 167
Downloading the QLogic adapter driver for Sun . . . . . . . . . . . 167
Installing the QLogic adapter driver package for Sun . . . . . . . . . 167
Configuring host device drivers . . . . . . . . . . . . . . . . . . 168
Configuring host device drivers for Sun . . . . . . . . . . . . . . 168
Parameter settings for the Emulex adapters for the Sun host system . . . 170
Parameter settings for the AMCC adapters for the Sun host system . . . . 171
Parameter settings for the QLogic QLA23xxF adapters . . . . . . . . . 174
Parameter settings for the QLogic QLA23xx adapters for San Surf
configuration (4.06+ driver) . . . . . . . . . . . . . . . . . . 175
Setting the Sun host system parameters . . . . . . . . . . . . . . . 177
Setting parameters for AMCC adapters . . . . . . . . . . . . . . 177
Setting parameters for Emulex or QLogic adapters . . . . . . . . . . 178
Installing the Subsystem Device Driver . . . . . . . . . . . . . . . 179
Attaching a storage unit to a Sun host using Storage Traffic Manager System 179
Configuring the Sun STMS host settings . . . . . . . . . . . . . . 179
Attaching a storage unit to a Sun host using Sun Cluster . . . . . . . . . 180
Chapter 19. Attaching to a Windows 2000 host . . . . . . . . . . . 183
Attaching with fibre-channel adapters . . . . . . . . . . . . . . . . 183
Supported fibre-channel adapters for Windows 2000 . . . . . . . . . 183
Fibre-channel attachment requirements for the Windows 2000 host system 183
Fibre-channel attachment considerations for Windows 2000 . . . . . . . 184
Installing and configuring the Emulex adapter card . . . . . . . . . . 184
Installing and configuring the Netfinity adapter card for Windows 2000 187
Updating the Windows 2000 device driver . . . . . . . . . . . . . 189
Installing and configuring the QLogic adapter cards in Windows 2000 . . . 189
Verifying that Windows 2000 is configured for storage . . . . . . . . . 192
Configuring for availability and recoverability . . . . . . . . . . . . . 192
Configuration considerations . . . . . . . . . . . . . . . . . . 192
Setting the TimeOutValue registry for Windows 2000 . . . . . . . . . 192
Installing remote fibre-channel boot support for a Windows 2000 host system 193
Configure zoning and obtain storage . . . . . . . . . . . . . . . 193
Flash QLogic host adapter . . . . . . . . . . . . . . . . . . . 194
Configure QLogic host adapters . . . . . . . . . . . . . . . . . 194
Windows 2000 Installation . . . . . . . . . . . . . . . . . . . 194
Windows 2000 Post Installation . . . . . . . . . . . . . . . . . 195
Chapter 20. Attaching to a Windows Server 2003 host . . . . . . . . . 197
Attaching with fibre-channel adapters . . . . . . . . . . . . . . . . 197
Supported fibre-channel adapters for Windows Server 2003 . . . . . . . 197
Fibre-channel attachment requirements for the Windows Server 2003 host
system . . . . . . . . . . . . . . . . . . . . . . . . . 197
Fibre-channel attachment considerations for Windows Server 2003 . . . . 198
Installing and configuring the Emulex adapter card . . . . . . . . . . 198
Installing and configuring the Netfinity adapter card . . . . . . . . . . 201
Updating the Windows Server 2003 device driver . . . . . . . . . . . 203
Installing and configuring the QLogic adapter cards . . . . . . . . . . 203
Verifying that Windows Server 2003 is configured for storage . . . . . . 206
Configuring for availability and recoverability . . . . . . . . . . . . . 206
Configuration considerations . . . . . . . . . . . . . . . . . . 206
Setting the TimeOutValue registry for Windows Server 2003 . . . . . . . 206
Installing remote fibre-channel boot support for a Windows Server 2003 host
system . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Configure zoning and obtain storage . . . . . . . . . . . . . . . 207
Flash QLogic host adapter . . . . . . . . . . . . . . . . . . . 208
Configure QLogic host adapters . . . . . . . . . . . . . . . . . 208
Windows 2003 Installation . . . . . . . . . . . . . . . . . . . 208
Windows 2003 Post Installation . . . . . . . . . . . . . . . . . 209
Appendix. Locating the worldwide port name (WWPN) . . . . . . . .211
Fibre-channel port name identification . . . . . . . . . . . . . . . .211
Locating the WWPN for a Fujitsu PRIMEPOWER host . . . . . . . . . .211
Locating the WWPN for a Hewlett-Packard AlphaServer host . . . . . . . 212
Locating the WWPN for a Hewlett-Packard host . . . . . . . . . . . . 212
Locating the WWPN for an IBM eServer iSeries host . . . . . . . . . . 213
Locating the WWPN for an IBM eServer pSeries or an RS/6000 host . . . . 213
Locating the WWPN for a Linux host . . . . . . . . . . . . . . . . 214
Locating the WWPN for a Novell NetWare host . . . . . . . . . . . . 215
Locating the WWPN for an SGI host . . . . . . . . . . . . . . . . 215
Locating the WWPN for a Sun host . . . . . . . . . . . . . . . . . 216
Locating the WWPN for a Windows 2000 host . . . . . . . . . . . . . 216
Locating the WWPN for a Windows Server 2003 host . . . . . . . . . . 217
Accessibility . . . . . . . . . . . . . . . . . . . . . . . . . 219
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Terms and conditions for downloading and printing publications . . . . . . 222
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Electronic emission notices . . . . . . . . . . . . . . . . . . . . 224
Federal Communications Commission (FCC) statement . . . . . . . . 224
Industry Canada compliance statement . . . . . . . . . . . . . . 224
European community compliance statement . . . . . . . . . . . . . 224
Japanese Voluntary Control Council for Interference (VCCI) class A
statement . . . . . . . . . . . . . . . . . . . . . . . . 225
Korean Ministry of Information and Communication (MIC) statement . . . . 225
Taiwan class A compliance statement . . . . . . . . . . . . . . . 226
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Figures
1. Point-to-point topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . .13
2. Switched-fabric topology . . . . . . . . . . . . . . . . . . . . . . . . . . . .14
3. Arbitrated loop topology . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
4. Example of sd.conf file entries for fibre-channel . . . . . . . . . . . . . . . . . . .23
5. Example of a start lpfc auto-generated configuration . . . . . . . . . . . . . . . . . .24
6. Confirming the storage unit licensed machine code on an HP AlphaServer through the telnet
command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .29
7. Example of the sizer -v command . . . . . . . . . . . . . . . . . . . . . . . .30
8. Example of the set mode diag command and the wwidmgr -show adapter command . . . . . .31
9. Example results of the wwidmgr command. . . . . . . . . . . . . . . . . . . . . .31
10. Example of the switchshow command . . . . . . . . . . . . . . . . . . . . . . .33
11. Example of storage unit volumes on the AlphaServer console . . . . . . . . . . . . . .33
12. Example of a hex string for a storage unit volume on an AlphaServer Tru64 UNIX console 34
13. Example of a hex string that identifies the decimal volume number for a storage unit volume on
an AlphaServer console or Tru64 UNIX . . . . . . . . . . . . . . . . . . . . . .34
14. Example of hex representation of last 5 characters of a storage unit volume serial number on an
AlphaServer console . . . . . . . . . . . . . . . . . . . . . . . . . . . . .34
15. Example of the hwmgr command to verify attachment . . . . . . . . . . . . . . . . .35
16. Example of a Korn shell script to display a summary of storage unit volumes . . . . . . . .36
17. Example of the Korn shell script output . . . . . . . . . . . . . . . . . . . . . . .36
18. Example of the essvol script . . . . . . . . . . . . . . . . . . . . . . . . . .38
19. Example of the scu command . . . . . . . . . . . . . . . . . . . . . . . . . .38
20. Example of the scu command . . . . . . . . . . . . . . . . . . . . . . . . . .39
21. Example of the scu clear command . . . . . . . . . . . . . . . . . . . . . . . .39
22. Example of the scu command to show persistent reserves . . . . . . . . . . . . . . .40
23. Example of the show system command on the OpenVMS operating system 43
24. Example of the product show history command to check the versions of patches already installed 43
25. Example of the set mode diag command and the wwidmgr -show adapter command . . . . . .44
26. Example results of the wwidmgr command. . . . . . . . . . . . . . . . . . . . . .45
27. Example of the switchshow command . . . . . . . . . . . . . . . . . . . . . . .49
28. Example of storage unit volumes on the AlphaServer console . . . . . . . . . . . . . .49
29. Example of a World Wide Node Name for the storage unit volume on an AlphaServer console 50
30. Example of a volume number for the storage unit volume on an AlphaServer console . . . . .50
31. Example of what is displayed when you use OpenVMS storage configuration utilities . . . . . .51
32. Example of the display for the auxiliary storage hardware resource detail for a 2766 or 2787
adapter card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60
33. Example of the logical hardware resources associated with an IOP . . . . . . . . . . . .62
34. Example of the display for the auxiliary storage hardware resource detail for the storage unit 63
35. Example output from the lscfg -vpl “fcs*” |grep Network command. . . . . . . . . . . . .70
36. Example output saved to a text file . . . . . . . . . . . . . . . . . . . . . . . .70
37. Example of installed file set . . . . . . . . . . . . . . . . . . . . . . . . . . .72
38. Example environment where all MPIO devices are removed . . . . . . . . . . . . . . .72
39. Example of installed host attachment scripts . . . . . . . . . . . . . . . . . . . . .73
40. Example of extracting the SDD archive . . . . . . . . . . . . . . . . . . . . . . .74
41. Example of a list of devices displayed when you use the lsdev -Cc disk | grep 1750 command for
fibre-channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .78
42. Example of a list of other devices displayed when you use the lsdisk command for fibre-channel. 78
43. Example of how to configure a FICON bridge from an S/390 or zSeries host system to a storage
unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90
44. Example of how to add a FICON director and a FICON host adapter . . . . . . . . . . .91
45. Example of the configuration after the FICON bridge is removed . . . . . . . . . . . . .92
46. Example of prerequisite information for FCP Linux on zSeries . . . . . . . . . . . . . .98
47. Example of prerequisite information for FCP Linux on zSeries . . . . . . . . . . . . . .99
48. Example of prerequisite information for FCP Linux on zSeries . . . . . . . . . . . . . . 100
49. Example of prerequisite information for FCP Linux on zSeries . . . . . . . . . . . . . . 101
50. Example of a script to add more than one device . . . . . . . . . . . . . . . . . . 101
51. Example of how to add SCSI devices through the add_map command . . . . . . . . . . 102
52. Saving the module parameters in the /etc/zfcp.conf file . . . . . . . . . . . . . . . . 102
53. Example of Logical Volume Manager Multipathing . . . . . . . . . . . . . . . . . . 102
54. Example of range of devices for a Linux host . . . . . . . . . . . . . . . . . . . . 109
55. Example of different options for the fdisk utility . . . . . . . . . . . . . . . . . . .110
56. Example of a primary partition on the disk /dev/sdb . . . . . . . . . . . . . . . . . .110
57. Example of assigning a Linux system ID to the partition . . . . . . . . . . . . . . . . 111
58. Example of creating a file with the mke2fs command . . . . . . . . . . . . . . . . . 111
59. Example of creating a file with the mkfs command . . . . . . . . . . . . . . . . . .112
60. Example output for a Linux host that only configures LUN 0 . . . . . . . . . . . . . .113
61. Example output for a Linux host with configured LUNs . . . . . . . . . . . . . . . .113
62. Example of a complete linuxrc file for Red Hat . . . . . . . . . . . . . . . . . . . 121
63. Example of a complete linuxrc file for SUSE . . . . . . . . . . . . . . . . . . . . 122
64. Example of QLogic output . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
65. Example of Emulex output . . . . . . . . . . . . . . . . . . . . . . . . . . 128
66. Example listing of a Vmhba directory . . . . . . . . . . . . . . . . . . . . . . . 128
67. Example of Vmhba entries . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
68. Example of the different options for the fdisk utility . . . . . . . . . . . . . . . . . 131
69. Example of primary partition on the disk /dev/vsd71 . . . . . . . . . . . . . . . . . 131
70. Example of PCI bus slots . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
71. Example results for the switchshow command . . . . . . . . . . . . . . . . . . . 152
72. Example results for the cfgShow command . . . . . . . . . . . . . . . . . . . . . 153
73. Example of commands to turn failover on . . . . . . . . . . . . . . . . . . . . . 155
74. Example of an edited /etc/failover.conf file . . . . . . . . . . . . . . . . . . . . . 156
75. Example of an edited /etc/failover.conf file for an arbitrated loop connection . . . . . . . . . 156
76. Example commands for the IRIX switched fabric storage configuration utility . . . . . . . . 158
77. Example commands for the IRIX switched fabric storage configuration utility, part 2 . . . . . . 159
78. Example commands for the IRIX arbitrated loop storage configuration utility . . . . . . . . 160
79. Example commands for the IRIX arbitrated loop storage configuration utility, part 2 . . . . . . 161
80. Example of sd.conf file entries for fibre-channel . . . . . . . . . . . . . . . . . . . 169
81. Example of a start lpfc auto-generated configuration . . . . . . . . . . . . . . . . . 169
82. Example binding inserts for qlaxxxx.conf . . . . . . . . . . . . . . . . . . . . . . 177
83. Example of what is displayed when you start the Windows 2000 host . . . . . . . . . . 188
84. Example of what the system displays when you start the Windows Server 2003 host . . . . . 202
85. Example of the output from the Hewlett-Packard AlphaServer wwidmgr -show command 212
86. Example of the output from the Hewlett-Packard #fgrep wwn /var/adm/messages command 212
87. Example of the output from the Hewlett-Packard fcmsutil /dev/td1 | grep world command. 213
88. Example of what is displayed in the /var/log/messages file . . . . . . . . . . . . . . . 215
89. Example of the scsiha bus_number device | command . . . . . . . . . . . . . . . 215
Tables
1. Recommended configuration file parameters for the Emulex LP9002L adapter . . . . . . . .25
2. Maximum number of adapters you can use for an AlphaServer . . . . . . . . . . . . . .28
3. Maximum number of adapters you can use for an AlphaServer . . . . . . . . . . . . . .41
4. Host system limitations for the IBM iSeries host system . . . . . . . . . . . . . . . .60
5. Capacity and models of disk volumes for IBM iSeries . . . . . . . . . . . . . . . . .63
6. Recommended settings for the QLogic adapter card for an Intel host running Linux . . . . . . 106
7. Recommended settings for the QLogic adapter card for an Intel host running VMware ESX
Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
8. Recommended settings for the QLogic QLA23xx adapter card for a Novell NetWare host 137
9. Solaris 8 minimum revision level patches for fibre-channel . . . . . . . . . . . . . . . 164
10. Recommended configuration file parameters for the Emulex LP9002DC, LP9002L, LP9002S,
LP9402DC and LP9802 adapters . . . . . . . . . . . . . . . . . . . . . . . . 170
11. Recommended configuration file parameters for an AMCC FCX-6562, AMCC FCX2-6562, AMCC
FCE-6460, or an AMCC FCE-1473 adapter . . . . . . . . . . . . . . . . . . . . . 172
12. Recommended configuration file parameters for the QLogic QLA2310F, QLA2340, and QLA2342
adapters with driver level 4.03 . . . . . . . . . . . . . . . . . . . . . . . . . 174
13. Parameter settings for the QLogic QLA23xx host adapters for San Surf Configuration (4.06+) 175
14. Recommended configuration file parameters for the Emulex LP9002L, LP9002DC, LP9402DC,
LP9802, LP10000, and LP10000DC adapters . . . . . . . . . . . . . . . . . . . . 186
15. Recommended settings for the QLogic QLA23xx adapter card for Windows 2000 . . . . . . 189
16. Recommended configuration file parameters for the Emulex LP9002L, LP9002DC, LP9402DC,
LP9802, LP10000, and LP10000DC adapters . . . . . . . . . . . . . . . . . . . . 200
17. Recommended settings for the QLogic QLA23xx adapter card for a Windows Server 2003 host 204
About this guide
This guide provides information about the following host attachment issues:
v Attaching the storage unit to an open-systems host with fibre-channel adapters
v Connecting IBM fibre-channel connection (FICON) cables to your S/390 and
zSeries host systems
You can attach the following host systems to a storage unit:
v Apple Macintosh
v Fujitsu PRIMEPOWER
v Hewlett-Packard
v IBM eServer iSeries
v IBM NAS Gateway 500
v IBM RS/6000 and IBM eServer pSeries
v IBM S/390 and IBM eServer zSeries
v IBM SAN File System
v IBM SAN Volume Controller
v Linux
v Microsoft Windows 2000
v Microsoft Windows Server 2003
v Novell NetWare
v Silicon Graphics
v Sun
For a list of open systems hosts, operating systems, adapters and switches that IBM supports, see the Interoperability Matrix at http://www.ibm.com/servers/storage/disk/ds6000/interop.html.
Finding attachment information in the Host Systems Attachment Guide
The following list shows where to find the fibre-channel attachment information for each host in this guide:
v Apple Macintosh Servers: “Supported fibre-channel adapters for the Apple Macintosh server” on page 19
v Fujitsu PRIMEPOWER: “Supported fibre-channel adapters for PRIMEPOWER” on page 21
v Hewlett-Packard AlphaServer Tru64 UNIX®: “Supported fibre-channel adapters for the HP AlphaServer Tru64 UNIX host system” on page 27
v Hewlett-Packard AlphaServer OpenVMS: “Supported fibre-channel adapters for the HP AlphaServer OpenVMS host system” on page 41
v Hewlett-Packard Servers (HP-UX): “Attaching with fibre-channel adapters” on page 55
v iSCSI Gateway: “Attachment overview of the iSCSI Gateway host” on page 141
v IBM® eServer iSeries: “Attaching with fibre-channel adapters to the IBM iSeries host system” on page 59
v IBM NAS Gateway 500: “Supported adapter cards for IBM NAS Gateway 500 hosts” on page 69
v IBM eServer pSeries™ or IBM RS/6000®: “Attaching with fibre-channel adapters” on page 76
v IBM eServer zSeries® or S/390®: “Attaching FCP adapters on zSeries hosts running Linux” on page 97 (for zSeries only)
v Linux: “Attaching with fibre-channel adapters” on page 103
v Microsoft® Windows® 2000: “Attaching with fibre-channel adapters” on page 183
v Microsoft Windows Server 2003: “Attaching with fibre-channel adapters” on page 197
v Novell NetWare: “Attaching with fibre-channel adapters” on page 135
v IBM SAN Volume Controller: “Attaching to an IBM SAN Volume Controller host with fibre-channel adapters” on page 147
v Sun: “Supported fibre-channel adapters for Sun” on page 163
Safety and environmental notices
This section contains information about safety notices that are used in this guide and environmental notices for this product.
Safety notices
Use this process to find information about safety notices.
To find the translated text for a danger or caution notice:
1. Look for the identification number at the end of each danger notice or each caution notice. In the following examples, the numbers 1000 and 1001 are the identification numbers.
DANGER
A danger notice indicates the presence of a hazard that has the potential of causing death or serious personal injury.
1000
CAUTION:
A caution notice indicates the presence of a hazard that has the potential of causing moderate or minor personal injury.
1001
2. Find the number that matches in the IBM TotalStorage Solutions Safety Notices for IBM Versatile Storage Server and IBM TotalStorage Enterprise Storage Server, GC26-7229.
Environmental notices
This section identifies the environmental guidelines that pertain to this product.
Product recycling
This unit contains recyclable materials.
Recycle these materials at your local recycling sites. Recycle the materials according to local regulations. In some areas, IBM provides a product take-back program that ensures proper handling of the product. Contact your IBM representative for more information.
Disposing of products
This topic contains information about how to dispose of products.
This unit might contain batteries. Remove and discard these batteries, or recycle them, according to local regulations.
Conventions used in this guide
The following typefaces are used to show emphasis:
boldface
Text in boldface represents menu items and lowercase or mixed-case command names.
italics Text in italics is used to emphasize a word. In command syntax, it is used
for variables for which you supply actual values.
monospace
Text in monospace identifies the data or commands that you type, samples of command output, or examples of program code or messages from the system.
Related information
The tables in this section list and describe the following publications:
v The publications that make up the IBM® TotalStorage DS6000 series library
v Other IBM publications that relate to the DS6000 series
v Non-IBM publications that relate to the DS6000 series
See “Ordering IBM publications” on page xxiv for information about how to order publications in the IBM TotalStorage DS6000 series publication library. See “How to send your comments” on page xxv for information about how to send comments about the publications.
DS6000 series library
These customer publications make up the DS6000 series library.
Unless otherwise noted, these publications are available in Adobe portable document format (PDF) on a compact disc (CD) that comes with the storage unit. If you need additional copies of this CD, the order number is SK2T-8803. These publications are also available as PDF files by clicking on the Documentation link on the following Web site:
http://www-1.ibm.com/servers/storage/support/disk/ds6800/index.html
See “Ordering IBM publications” on page xxiv for information about ordering these and other IBM publications.
Title Description
Order Number
IBM TotalStorage® DS: Command-Line Interface User’s Guide
This guide describes the commands that you can use from the command-line interface (CLI) for managing your DS6000 configuration and Copy Services relationships. The CLI application provides a set of commands that you can use to write customized scripts for a host system. The scripts initiate predefined tasks in a Copy Services server application. You can use the CLI commands to indirectly control Remote Mirror and Copy and FlashCopy® configuration tasks within a Copy Services server group. (A brief illustrative example follows this table.)
GC26-7681 (See Note.)
IBM TotalStorage DS6000: Host Systems Attachment Guide
This guide provides guidelines for attaching the DS6000 to your host system and for migrating to fibre-channel attachment from a small computer system interface.
GC26-7680 (See Note.)
IBM TotalStorage DS6000: Introduction and Planning Guide
This guide introduces the DS6000 product and lists the features you can order. It also provides guidelines for planning the installation and configuration of the storage unit.
GC26-7679
IBM TotalStorage Multipath Subsystem Device Driver User’s Guide
This publication describes how to use the IBM Subsystem Device Driver (SDD) on open-systems hosts to enhance performance and availability on the DS6000. SDD creates redundant paths for shared logical unit numbers. SDD permits applications to run without interruption when path errors occur. It balances the workload across paths, and it transparently integrates with applications.
SC30-4096
IBM TotalStorage DS Application Programming Interface Reference
This publication provides reference information for the IBM TotalStorage DS application programming interface (API) and provides instructions for installing the Common Information Model Agent, which implements the API.
GC35-0493
IBM TotalStorage DS6000 Messages Reference
This publication provides explanations of error, information, and warning messages that are issued from the DS6000 user interfaces.
GC26-7682
IBM TotalStorage DS6000 Installation, Troubleshooting, and Recovery Guide
This publication provides reference information for installing and troubleshooting the DS6000. It also discusses disaster recovery using Copy Services.
GC26-7678
IBM TotalStorage DS6000 Quick Start Card
This is a quick start guide for use in installing and configuring the DS6000 series.
GC26-7685
Note: No hardcopy book is produced for this publication. However, a PDF file is available from the following Web
site: http://www-1.ibm.com/servers/storage/support/disk/ds6800/index.html
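The Command-Line Interface User’s Guide entry above mentions writing customized scripts that initiate Copy Services tasks. As a minimal, hypothetical sketch only (the storage image ID IBM.1750-1300861, the volume IDs 0100 and 0200, the user name, and the management-console address are invented placeholders; that guide documents the actual command syntax), such a session might establish and then list a FlashCopy relationship:

   dscli -hmc1 192.168.1.10 -user admin -passwd mypassword
   dscli> mkflash -dev IBM.1750-1300861 0100:0200
   dscli> lsflash -dev IBM.1750-1300861 0100:0200
   dscli> quit

The same commands can also be collected in a file and run noninteractively, which is the pattern the guide describes for automating Copy Services tasks from a host system.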
Other IBM publications
Other IBM publications contain additional information that is related to the DS product library.
The following list is divided into categories to help you find publications that are related to specific topics. Some of the publications are listed under more than one category. See “Ordering IBM publications” on page xxiv for information about ordering these and other IBM publications.
Title Description
Order Number
Data-copy services
z/OS DFSMS Advanced Copy Services
This publication helps you understand and use IBM Advanced Copy Services functions. It describes three dynamic copy functions and several point-in-time copy functions. These functions provide backup and recovery of data if a disaster occurs to your data center. The dynamic copy functions are peer-to-peer remote copy, extended remote copy, and coupled extended remote copy. Collectively, these functions are known as remote copy. FlashCopy, SnapShot, and concurrent copy are the point-in-time copy functions.
SC35-0428
IBM Enterprise Storage Server
This publication, from the IBM International Technical Support Organization, introduces the Enterprise Storage Server and provides an understanding of its benefits. It also describes in detail the architecture, hardware, and functions, including the advanced copy functions, of the Enterprise Storage Server.
SG24-5465
Implementing Copy Services on S/390
This publication, from the IBM International Technical Support Organization, tells you how to install, customize, and configure Copy Services on an Enterprise Storage Server that is attached to an S/390 or zSeries host system. Copy Services functions include peer-to-peer remote copy, extended remote copy, FlashCopy®, and concurrent copy. This publication describes the functions, prerequisites, and corequisites and describes how to implement each function into your environment.
SG24-5680
IBM TotalStorage ESS Implementing Copy Services in an Open Environment
This publication, from the IBM International Technical Support Organization, tells you how to install, customize, and configure Copy Services on UNIX, Windows NT®, Windows 2000, Sun Solaris, HP-UX, Tru64, OpenVMS, and iSeries host systems. The Copy Services functions that are described include peer-to-peer remote copy and FlashCopy. This publication describes the functions and shows you how to implement them into your environment. It also shows you how to implement these functions in a high-availability cluster multiprocessing environment.
SG24-5757
Fibre channel
Fibre Channel Connection (FICON) I/O Interface: Physical Layer
This publication provides information about the fibre-channel I/O interface. This book is also available as a PDF file from the following Web site:
http://www.ibm.com/servers/resourcelink/
SA24-7172
Fibre Transport Services (FTS): Physical and Configuration Planning Guide
This publication provides information about fibre-optic and ESCON-trunking systems.
GA22-7234
IBM SAN Fibre Channel Switch: 2109 Model S08 Installation and Service Guide
This guide describes how to install and maintain the IBM SAN Fibre Channel Switch 2109 Model S08.
SC26-7350
IBM SAN Fibre Channel Switch: 2109 Model S08 User’s Guide
This guide describes the IBM SAN Fibre Channel Switch and the IBM TotalStorage ESS Specialist. It provides information about the commands and how to manage the switch with Telnet and the Simple Network Management Protocol.
SC26-7349
IBM SAN Fibre Channel Switch: 2109 Model S16 Installation and Service Guide
This publication describes how to install and maintain the IBM SAN Fibre Channel Switch 2109 Model S16. It is intended for trained service representatives and service providers.
SC26-7352
IBM SAN Fibre Channel Switch: 2109 Model S16 User’s Guide
This guide introduces the IBM SAN Fibre Channel Switch 2109 Model S16 and tells you how to manage and monitor the switch using zoning and how to manage the switch remotely.
SC26-7351
Implementing Fibre Channel Attachment on the ESS
This publication, from the IBM International Technical Support Organization, helps you install, tailor, and configure fibre-channel attachment of open-systems hosts to the Enterprise Storage Server. It provides you with a broad understanding of the procedures that are involved and describes the prerequisites and requirements. It also shows you how to implement fibre-channel attachment.
SG24-6113
Open-systems hosts
ESS Solutions for Open Systems Storage: Compaq AlphaServer, HP, and Sun
This publication, from the IBM International Technical Support Organization, helps you install, tailor, and configure the Enterprise Storage Server when you attach Compaq AlphaServer (running Tru64 UNIX), HP, and Sun hosts. This book does not cover Compaq AlphaServer that is running the OpenVMS operating system. This book also focuses on the settings that are required to give optimal performance and on the settings for device driver levels. This book is for the experienced UNIX professional who has a broad understanding of storage concepts.
SG24-6119
IBM TotalStorage ESS Implementing Copy Services in an Open Environment
This publication, from the IBM International Technical Support Organization, tells you how to install, customize, and configure Copy Services on UNIX or Windows 2000 host systems. The Copy Services functions that are described include peer-to-peer remote copy and FlashCopy. This publication describes the functions and shows you how to implement them into your environment. It also shows you how to implement these functions in a high-availability cluster multiprocessing environment.
SG24-5757
Implementing Fibre Channel Attachment on the ESS
This publication, from the IBM International Technical Support Organization, helps you install, tailor, and configure fibre-channel attachment of open-systems hosts to the Enterprise Storage Server. It gives you a broad understanding of the procedures that are involved and describes the prerequisites and requirements. It also shows you how to implement fibre-channel attachment.
SG24-6113
S/390 and zSeries hosts
Device Support Facilities: User’s Guide and Reference
This publication describes the IBM Device Support Facilities (ICKDSF) product that is used with IBM direct access storage device (DASD) subsystems. ICKDSF is a program that you can use to perform functions that are needed for the installation, the use, and the maintenance of IBM DASD. You can also use it to perform service functions, error detection, and media maintenance.
GC35-0033
z/OS Advanced Copy Services
This publication helps you understand and use IBM Advanced Copy Services functions. It describes three dynamic copy functions and several point-in-time copy functions. These functions provide backup and recovery of data if a disaster occurs to your data center. The dynamic copy functions are peer-to-peer remote copy, extended remote copy, and coupled extended remote copy. Collectively, these functions are known as remote copy. FlashCopy, SnapShot, and concurrent copy are the point-in-time copy functions.
SC35-0428
DFSMS/MVS V1: Remote Copy Guide and Reference
This publication provides guidelines for using remote copy functions with S/390 and zSeries hosts.
SC35-0169
Fibre Transport Services (FTS): Physical and Configuration Planning Guide
This publication provides information about fibre-optic and ESCON-trunking systems.
GA22-7234
Implementing ESS Copy Services on S/390
This publication, from the IBM International Technical Support Organization, tells you how to install, customize, and configure Copy Services on an Enterprise Storage Server that is attached to an S/390 or zSeries host system. Copy Services functions include peer-to-peer remote copy, extended remote copy, FlashCopy, and concurrent copy. This publication describes the functions, prerequisites, and corequisites and describes how to implement each function into your environment.
SG24-5680
ES/9000, ES/3090: IOCP User Guide Volume A04
This publication describes the Input/Output Configuration Program that supports the Enterprise Systems Connection (ESCON) architecture. It describes how to define, install, and configure the channels or channel paths, control units, and I/O devices on the ES/9000 processors and the IBM ES/3090 Processor Complex.
GC38-0097
IOCP User’s Guide, IBM eServer zSeries 800 and 900
This publication describes the Input/Output Configuration Program that supports the zSeries 800 and 900 servers. This publication is available in PDF format by accessing ResourceLink at the following Web site:
www.ibm.com/servers/resourcelink/
SB10-7029
IOCP User’s Guide, IBM eServer zSeries
This publication describes the Input/Output Configuration Program that supports the zSeries server. This publication is available in PDF format by accessing ResourceLink at the following Web site:
www.ibm.com/servers/resourcelink/
SB10-7037
S/390: Input/Output Configuration Program User’s Guide and ESCON Channel-to-Channel Reference
This publication describes the Input/Output Configuration Program that supports ESCON architecture and the ESCON multiple image facility.
GC38-0401
IBM z/OS Hardware Configuration Definition User’s Guide
This guide provides conceptual and procedural information to help you use the z/OS Hardware Configuration Definition (HCD) application. It also explains:
v How to migrate existing IOCP/MVSCP definitions
v How to use HCD to dynamically activate a new configuration
v How to resolve problems in conjunction with MVS/ESA HCD
SC33-7988
OS/390: Hardware Configuration Definition User’s Guide
This guide provides detailed information about the input/output definition file and about how to configure parallel access volumes. This guide discusses how to use Hardware Configuration Definition for both OS/390® and z/OS V1R1.
SC28-1848
OS/390 V2R10.0: MVS System Messages Volume 1 (ABA - ASA)
This publication lists OS/390 MVS system messages ABA to ASA.
GC28-1784
Using IBM 3390 Direct Access Storage in a VM Environment
This publication provides device-specific information for the various models of the 3390 and describes methods you can use to manage storage efficiently using the VM operating system. It provides guidance on managing system performance, availability, and space through effective use of the direct access storage subsystem.
GC26-4575
Using IBM 3390 Direct Access Storage in a VSE Environment
This publication helps you use the 3390 in a VSE environment. It includes planning information for adding new 3390 units and instructions for installing devices, migrating data, and performing ongoing storage management activities.
GC26-4576
Using IBM 3390 Direct Access Storage in an MVS Environment
This publication helps you use the 3390 in an MVS environment. It includes device-specific information for the various models of the 3390 and illustrates techniques for more efficient storage management. It also offers guidance on managing system performance, availability, and space utilization through effective use of the direct access storage subsystem.
GC26-4574
z/Architecture Principles of Operation
This publication provides a detailed definition of the z/Architecture™. It is written as a reference for use primarily by assembler language programmers and describes each function at the level of detail needed to prepare an assembler language program that relies on a particular function. However, anyone concerned with the functional details of z/Architecture will find this publication useful.
SA22-7832
SAN
IBM OS/390 Hardware Configuration Definition User’s Guide
This guide explains how to use the Hardware Configuration Definition (HCD) application to perform the following tasks:
v Define new hardware configurations
v View and modify existing hardware configurations
v Activate configurations
v Query supported hardware
v Maintain input/output definition files (IODFs)
v Compare two IODFs or compare an IODF with an actual configuration
v Print reports of configurations
v Create graphical reports of a configuration
v Migrate existing configuration data
SC28-1848
IBM SAN Fibre Channel Switch: 2109 Model S08 Installation and Service Guide
This guide describes how to install and maintain the IBM SAN Fibre Channel Switch 2109 Model S08.
SC26-7350
IBM SAN Fibre Channel Switch: 2109 Model S08 User’s Guide
This guide describes the IBM SAN Fibre Channel Switch and the IBM TotalStorage ESS Specialist. It provides information about the commands and how to manage the switch with Telnet and the Simple Network Management Protocol (SNMP).
SC26-7349
IBM SAN Fibre Channel Switch: 2109 Model S16 Installation and Service Guide
This publication describes how to install and maintain the IBM SAN Fibre Channel Switch 2109 Model S16. It is intended for trained service representatives and service providers.
SC26-7352
IBM SAN Fibre Channel Switch: 2109 Model S16 User’s Guide
This guide introduces the IBM SAN Fibre Channel Switch 2109 Model S16 and tells you how to manage and monitor the switch using zoning and how to manage the switch remotely.
SC26-7351
Implementing Fibre Channel Attachment on the ESS
This publication, from the IBM International Technical Support Organization, helps you install, tailor, and configure fibre-channel attachment of open-systems hosts to the Enterprise Storage Server. It provides you with a broad understanding of the procedures that are involved and describes the prerequisites and requirements. It also shows you how to implement fibre-channel attachment.
SG24-6113
Storage management
Device Support Facilities: User’s Guide and Reference
This publication describes the IBM Device Support Facilities (ICKDSF) product used with IBM direct access storage device (DASD) subsystems. ICKDSF is a program that you can use to perform functions that are needed for the installation, the use, and the maintenance of IBM DASD. You can also use it to perform service functions, error detection, and media maintenance.
GC35-0033
IBM TotalStorage Solutions Handbook
This handbook, from the IBM International Technical Support Organization, helps you understand what makes up enterprise storage management. The concepts include the key technologies that you must know and the IBM subsystems, software, and solutions that are available today. It also provides guidelines for implementing various enterprise storage administration tasks so that you can establish your own enterprise storage management environment.
SG24-5250
Ordering IBM publications
This section tells you how to order copies of IBM publications and how to set up a profile to receive notifications about new or changed publications.
IBM publications center
The publications center is a worldwide central repository for IBM product publications and marketing material.
The IBM publications center offers customized search functions to help you find the publications that you need. Some publications are available for you to view or download free of charge. You can also order publications. The publications center displays prices in your local currency. You can access the IBM publications center through the following Web site:
http://www.ibm.com/shop/publications/order
Publications notification system
The IBM publications center Web site offers you a notification system for IBM publications.
If you register, you can create your own profile of publications that interest you. The publications notification system sends you a daily e-mail that contains information about new or revised publications that are based on your profile.
If you want to subscribe, you can access the publications notification system from the IBM publications center at the following Web site:
http://www.ibm.com/shop/publications/order
Web sites
The following Web sites provide information about the IBM TotalStorage DS6000 series and other IBM storage products.
Type of Storage Information Web Site
Concurrent Copy for S/390 and zSeries host systems
http://www.storage.ibm.com/software/sms/sdm/
Copy Services command-line interface (CLI)
http://www-1.ibm.com/servers/storage/support/software/cscli.html
DS6000 series publications http://www-1.ibm.com/servers/storage/support/disk/ds6800/index.html
Click Documentation.
FlashCopy for S/390 and zSeries host systems
http://www.storage.ibm.com/software/sms/sdm/
Host system models, operating systems, and adapters that the storage unit supports
http://www.ibm.com/servers/storage/disk/ds6000/interop.html
Click Interoperability matrix.
IBM Disk Storage Feature Activation (DSFA)
http://www.ibm.com/storage/dsfa
IBM storage products http://www.storage.ibm.com/
IBM TotalStorage DS6000 series http://www-1.ibm.com/servers/storage/disk/ds6000
IBM version of the Java Runtime Environment (JRE) that is often required for IBM products
http://www-106.ibm.com/developerworks/java/jdk/
Multiple Device Manager (MDM) http://www.ibm.com/servers/storage/support/
Click Storage Virtualization.
Remote Mirror and Copy (formerly PPRC) for S/390 and zSeries host systems
http://www.storage.ibm.com/software/sms/sdm/
SAN fibre channel switches http://www.ibm.com/storage/fcswitch/
Storage Area Network Gateway and Router
http://www-1.ibm.com/servers/storage/support/san/index.html
Subsystem Device Driver (SDD) http://www-1.ibm.com/servers/storage/support/software/sdd.html
z/OS Global Mirror (formerly XRC) for S/390 and zSeries host systems
http://www.storage.ibm.com/software/sms/sdm/
How to send your comments
Your feedback is important to help us provide the highest quality information. If you have any comments about this information or any other DS6000 series documentation, you can submit them in the following ways:
v e-mail
Submit your comments electronically to the following e-mail address:
starpubs@us.ibm.com
Be sure to include the name and order number of the book and, if applicable, the specific location of the text you are commenting on, such as a page number or table number.
v Mail
Fill out the Readers’ Comments form (RCF) at the back of this book. Return it by mail or give it to an IBM representative. If the RCF has been removed, you can address your comments to:
International Business Machines Corporation
RCF Processing Department
Department 61C
9032 South Rita Road
Tucson, AZ 85775-4401
Summary of Changes for GC26-7680-03 IBM TotalStorage DS6000 Host Systems Attachment Guide
This document contains terminology, maintenance, and editorial changes. Technical changes or additions to the text and illustrations are indicated by a vertical line to the left of the change. This summary of changes describes new functions that have been added to this release.
Changed Information
This edition includes the following changed information:
v Support is added for SUSE SLES 9 and Red Hat Enterprise Linux 4.0 for select hosts.
v The Hewlett-Packard OpenVMS chapter DS CLI command example was updated.
v The URL to locate the most current information on host attachment firmware and device driver information, including driver downloads, changed to:
http://knowledge.storage.ibm.com/servers/storage/support/hbasearch/interop/hbaSearch.do
Chapter 1. Introduction
The IBM® TotalStorage® DS6000 series is a member of the family of DS products and is built upon 2 Gbps fibre channel technology that provides RAID-protected storage with advanced functionality, scalability, and increased addressing capabilities.
The DS6000 series offers a high-reliability, high-performance midrange storage solution through the use of hot-swappable, redundant RAID controllers in a space-efficient modular design. The DS6000 series provides storage sharing and consolidation for a wide variety of operating systems and mixed server environments.
The DS6000 series offers high scalability while maintaining excellent performance. With the DS6800 (Model 1750-511), you can install up to 16 disk drive modules (DDMs). The minimum storage capability with 8 DDMs is 584 GB. The maximum storage capability with 16 DDMs for the DS6800 model is 4.8 TB.
If you want to connect more than 16 disks, you can use the optional DS6000 expansion enclosures (Model 1750-EX1), which allow a maximum of 224 DDMs per storage system and provide a maximum storage capability of 67.2 TB.
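As a quick check, these capacity figures follow from the DDM sizes that are listed later in this chapter (the exact usable capacity depends on the RAID format and sparing):

8 DDMs × 73 GB = 584 GB (minimum configuration)
16 DDMs × 300 GB = 4.8 TB (maximum for the DS6800 base enclosure)
224 DDMs × 300 GB = 67.2 TB (maximum with expansion enclosures)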
The DS6800 measures 5.25-in. high and is available in a 19-in. rack mountable package with an optional modular expansion enclosure of the same size to add capacity to help meet your growing business needs.
The DS6000 series addresses business efficiency needs through its heterogeneous connectivity, high performance and manageability functions, thereby helping to reduce total cost of ownership.
The DS6000 series offers the following major features:
v PowerPC 750GX processors
v Dual active controllers that provide continuous operations through the use of two processors that form a pair, each backing up the other
v A selection of 2 Gb fibre channel (FC) disk drives, including 73 GB, 146 GB, and 300 GB sizes with speeds of 10 000 or 15 000 revolutions per minute (RPM)
v 2 Gb fibre channel and FICON host attachments of up to 8 ports, which can be configured with an intermix of Fibre Channel Protocol (FCP) and FICON
v Fibre channel arbitrated loop (FC-AL) switched device attachment of up to 2 dual loops
v Storage virtualization
v Battery-backed mirrored cache
v Fully redundant power and cooling system
v Disaster recovery and Copy Services solutions
Overview of the DS6000 series models
The DS6000 series offers a base enclosure model with storage and optional expansion enclosures.
DS6800 (Model 1750-511)
The DS6800 offers the following features:
v Two FC controller cards
v PowerPC 750GX 1 GHz processor
v 4 GB of cache
v Two battery backup units (one per controller card)
v Two ac/dc power supplies with imbedded enclosure cooling units
v Eight 2 Gb/sec. device ports
v Connectivity with the availability of two to eight fibre channel/FICON host ports. The host ports auto-negotiate to either 2 Gbps or 1 Gbps link speeds.
v Attachment of up to 13 DS6000 expansion enclosures.
The DS6800 is a self-contained 3U enclosure that can be mounted in a standard 19-inch rack. The DS6800 comes with authorization for up to 16 internal FC DDMs, offering up to 4.8 TB of storage capability. The DS6800 allows up to 13 DS6000 expansion enclosures to be attached. A storage system supports up to 224 disk drives for a total of up to 67.2 TB of storage.
The DS6800 system offers connectivity with the availability of two to eight Fibre Channel/FICON host ports. The 2 GB fibre channel/FICON host ports, which are offered in long-wave and shortwave, auto-negotiate to either 2 Gbps or 1 Gbps link speeds. This flexibility supports the ability to exploit the potential benefits offered by higher performance, 2 Gbps SAN-based solutions, while also maintaining compatibility with existing 1 Gbps infrastructures. In addition, with the maximum of eight host ports enabled, the DS6800 system can be configured with an intermix of
Fibre Channel Protocol (FCP) and FICON. This can help protect your investment in fibre channel adapters, and increase your ability to migrate to new servers.
The DS6800 system offers connectivity support across a broad range of server environments, including IBM eServer™, zSeries®, iSeries™, and pSeries® servers, as well as servers from Sun Microsystems, Hewlett-Packard, and other Intel-based providers. This rich support of heterogeneous environments and attachments, along with the flexibility to easily partition the DS6800 system storage capacity among the attached environments, helps support storage consolidation requirements and dynamic, changing environments.
DS6000 expansion enclosure (Model 1750-EX1)
The DS6000 series expansion enclosure contains the following features:
v Two expansion controller cards. Each controller card provides the following:
– 2 inbound ports (2 Gb/sec.)
– 2 outbound ports (2 Gb/sec.)
– 1 FC switch per controller card
v Controller disk enclosure that holds up to 16 FC DDMs
v Two ac/dc power supplies with imbedded enclosure cooling units
v Supports attachment to DS6800
The 3U DS6000 expansion enclosure can be mounted in a standard 19-inch rack. The front of the enclosure contains the docking sites where you can install up to 16 DDMs.
The DDMs are installed in a horizontal position with a locking handle. The rear of the enclosure provides the docking sites for the power supplies and the controller cards.
You can attach the DS6800 and expansion enclosure by using the controller card interfaces at the rear of the enclosure. A system display panel is also located at the rear of the enclosure.
Performance features
The DS6000 series is built upon 2 Gbps fibre channel technology that can help bring high availability RAID-protected storage with scalable capacity, increased addressing capabilities, and connectivity to a wide range of storage area network (SAN) applications.
The DS6000 series provides the following technology and hardware to meet today’s on demand business environments:
Integrated RAID controller technology
The DS6000 series features IBM’s 32-bit PowerPC microprocessor, a fourth generation processing technology.
High availability
The DS6000 series is designed with component redundancy to eliminate single points of hardware failure, and no single point of repair other than the enclosure.
Industry standard fibre channel disk drives
The DS6000 series offers a selection of 2 Gb fibre channel disk drives, including 300 GB drives, allowing the DS6000 series to scale up to 67 TB of capacity.
LUN and volume management
LUN and volume creation and deletion is nondisruptive. When you delete a LUN or volume, the capacity can immediately be reused. You can configure LUNs and volumes to span arrays, which allows larger LUNs and volumes.
Addressing capabilities
The DS6000 series allows:
v Up to 32 logical subsystems
v Up to 8192 logical volumes
v Up to 1040 volume groups
Simplified storage management for zSeries with z/OS
The DS6000 series supports a new 65 520 cylinder 3390 volume. This volume option has a capacity of approximately 55.7 GB. It helps relieve addressing constraints, improve disk resource utilization, and improve storage administrator productivity by providing the ability to consolidate multiple disk volumes into a single address.
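For context, the 55.7 GB figure follows from standard 3390 track geometry (these constants come from the 3390 architecture and are not stated in this guide): 65 520 cylinders × 15 tracks per cylinder × 56 664 bytes per track ≈ 55.7 GB.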
System management
The DS6000 series provides online and offline configuration capability features and a graphical user interface (GUI) designed to offer increased ease of use.
A single command line interface (CLI) supports both logical configuration and copy services.
Online Information Center
The online Information Center is an information database that provides you
the opportunity to quickly familiarize yourself with the major aspects of the DS6000 series and to easily recognize the topics for which you might require more information. It provides information regarding user assistance for tasks, concepts, reference, user scenarios, tutorials, and other types of user information. Because the information is in one place, rather than across multiple publications, and because of the integrated search tool, you can access the information that you need more efficiently and effectively.
Data availability features
This section provides information about data availability features that are supported by DS6000 series.
The DS6000 series provides the following features:
v RAID implementation
v Copy services
v Availability support for open systems, iSeries, zSeries, and pSeries hosts
v Component redundancy to eliminate single points of hardware failure, and no single point of repair other than the enclosure
RAID implementation
RAID implementation improves data storage reliability and performance.
With RAID implementation, the DS6000 series offers fault-tolerant data storage by storing the same data in different places on multiple disk drive modules (DDMs). By placing data on multiple disks, input/output operations can overlap in a balanced way to improve the basic reliability and performance of the attached storage devices.
Physical capacity for the DS6000 series can be configured as RAID 5, RAID 10, or a combination of both. RAID 5 can offer excellent performance for most applications, while RAID 10 can offer better performance for selected applications, in particular, high random write content applications in the open systems environment. Each array in the DS6000 series is composed of four drives.
RAID 5 overview
RAID 5 is a method of spreading volume data across multiple disk drives. The DS6000 series supports RAID 5 arrays.
RAID 5 provides faster performance by striping data across a defined set of DDMs. Data protection is provided by parity, which is distributed across the same set of DDMs so that the data can be reconstructed if a DDM fails.
RAID 10 overview
RAID 10 provides high availability by combining features of RAID 0 and RAID 1. The DS6000 series supports RAID 10 arrays.
RAID 0 optimizes performance by striping volume data across multiple disk drives at a time. RAID 1 provides disk mirroring which duplicates data between two disk drives. By combining the features of RAID 0 and RAID 1, RAID 10 provides a second optimization for fault tolerance. Data is striped across half of the disk drives in the RAID 1 array, and the other half of the array mirrors the first set of disk drives. Access to data is preserved if one disk in each mirrored pair remains available.
RAID 10 offers faster data reads and writes than RAID 5 because it does not need to manage parity. However, with half of the DDMs in the group used for data and the other half used to mirror that data, RAID 10 disk groups have less usable capacity than RAID 5 disk groups.
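As an illustration, assuming a four-drive array of 146 GB DDMs and ignoring sparing and formatting overhead: RAID 10 yields roughly 2 × 146 GB = 292 GB of usable capacity, while RAID 5 in a three-plus-parity layout yields roughly 3 × 146 GB = 438 GB.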
Overview of Copy Services
Copy Services is a collection of functions that provide disaster recovery, data migration, and data duplication functions. Copy Services runs on the DS6000 series and supports open systems and zSeries environments.
Many design characteristics of the DS6000 series and data copying and mirroring capabilities of Copy Services features contribute to the protection of your data, 24 hours a day and seven days a week. A brief description of each of these licensed features is provided below.
You can manage Copy Services functions through a command-line interface called IBM TotalStorage DS CLI and a Web-based interface called IBM TotalStorage DS Storage Manager. The DS Storage Manager allows you to set up and manage the following types of data-copy features from any point from which network access is available:
Point-in-time copy
The point-in-time copy feature, which includes FlashCopy, enables you to create full volume copies of data using source and target volumes that span logical subsystems within a single storage unit. After the FlashCopy function completes, you can immediately access both the source and target copies.
Remote Mirror and Copy
The remote mirror and copy feature copies data between volumes on two or more storage units. When your host system performs I/O operations, the source volume is copied or mirrored to the target volume automatically. After you create a remote mirror and copy relationship between a source volume and a target volume, the target volume continues to be updated with changes from the source volume until you remove the relationship between the volumes.
The following functions of remote mirror and copy are available:
v Metro Mirror
v Global Copy
v Global Mirror
Consider the following Copy Services information:
v The feature activation codes for Copy Services features (remote mirror and copy and point-in-time copy) must be obtained and enabled on the DS Storage Manager before you can begin using the features.
FlashCopy
The IBM TotalStorage FlashCopy feature provides a point-in-time copy capability for logical volumes. FlashCopy creates a physical point-in-time copy of the data, with minimal interruption to applications, and makes it possible to access immediately both the source and target copies.
The primary objective of FlashCopy is to create a copy of a source volume on the target volume. This copy is called a point-in-time copy. When you initiate a FlashCopy operation, a FlashCopy relationship is created between the source
volume and target volume. A FlashCopy relationship is a mapping of a FlashCopy source volume and a FlashCopy target volume. This mapping allows a point-in-time copy of the source volume to be copied to the target volume. The FlashCopy relationship exists between the volume pair from the time that you initiate a FlashCopy operation until the DS6000 copies all data from the source volume to the target volume or until you delete the FlashCopy relationship, if it is a persistent FlashCopy.
The point-in-time copy that is created by FlashCopy is typically used when you need a copy of the production data to be produced with minimal application downtime. It can be used for online backup, testing of new applications, or for creating a database for data-mining purposes. The copy looks exactly like the original source volume and is an instantly available, binary copy.
FlashCopy supports the following copy options:
Data Set FlashCopy
Data Set FlashCopy allows a FlashCopy of a data set in a zSeries environment.
Multiple relationship FlashCopy
Multiple relationship FlashCopy allows a source to have FlashCopy relationships with multiple targets simultaneously. This flexibility allows you to establish up to 12 FlashCopy relationships on a given logical unit number (LUN), volume, or data set, without needing to first wait for or cause previous relationships to end.
Refresh target volume (also known as incremental FlashCopy)
Refresh target volume provides the ability to refresh a LUN or volume involved in a FlashCopy relationship. When a subsequent FlashCopy operation is initiated, only data that updates the target and the source to the same point-in-time state is copied. The direction of the refresh can also be reversed. The LUN or volume that was defined as the target can now become the source for the LUN or the volume that was defined as the source (now the target).
Persistent FlashCopy
Persistent FlashCopy allows the FlashCopy relationship to remain even after the FlashCopy operation completes. You must explicitly delete the relationship.
Establish FlashCopy on existing Remote Mirror and Copy source
The establish FlashCopy on an existing Remote Mirror and Copy source volume option allows you to establish a FlashCopy relationship where the target volume is also the source of an existing remote mirror and copy source volume. This enables you to create full or incremental point-in-time copies at a local site and then use remote mirroring commands to copy the data to the remote site.
This feature is represented by the Establish target on existing Metro Mirror source selection in the GUI.
Consistency group commands
Consistency group commands allow the DS6000 to freeze I/O activity to a LUN or volume until you issue the FlashCopy consistency group command. Consistency groups help create a consistent point-in-time copy across multiple LUNs or volumes, and even across multiple DS6000 systems. This function is available through the use of command-line interface commands.
Inband commands over remote mirror link
In a remote mirror environment, inband commands are issued to a source volume of a remote mirror and copy volume pair on a local storage unit and sent across paths (acting as a conduit) to a remote storage unit to enable a FlashCopy pair to be established at the remote site. This eliminates the need for a network connection to the remote site solely for the management of FlashCopy. This function is available through the use of command-line interface commands.
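As a brief, illustrative sketch, FlashCopy relationships such as these are typically managed with DS CLI commands like mkflash, lsflash, and rmflash. In the following example, the storage image ID and the volume IDs are placeholders:

dscli> mkflash -dev IBM.1750-1300861 -persist 0100:0200
dscli> lsflash -dev IBM.1750-1300861 0100:0200
dscli> rmflash -dev IBM.1750-1300861 0100:0200

The -persist parameter keeps the relationship after the background copy completes, which corresponds to the persistent FlashCopy option that is described above.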
Subsystem Device Driver for open-systems
The IBM TotalStorage Multi-path Subsystem Device Driver (SDD) supports open-systems hosts.
The Subsystem Device Driver (SDD) resides in the host server with the native disk device driver for the storage unit. It uses redundant connections between the host server and disk storage in the DS6000 series to provide enhanced performance and data availability.
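For example, after you install SDD on an open-systems host, you can typically display the adapters and the redundant paths that SDD manages with its datapath utility (output varies by host platform and SDD level):

datapath query adapter
datapath query device

Each storage unit LUN appears to the host as a single SDD virtual path (vpath) device with multiple underlying paths.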
Multiple allegiance for FICON hosts
The DS6000 series provides multiple allegiance facility support for FICON hosts.
The multiple allegiance facility enables the storage unit to accept concurrent I/O requests for a volume from multiple channel paths. This enables the storage unit to process requests from separate FICON hosts in parallel. Parallel processing of requests improves throughput and performance. The multiple allegiance facility does not require any user action.
DS6000 Interfaces
This section describes the following interfaces:
v IBM TotalStorage DS Storage Manager
v DS Open application programming interface
v DS Command-Line Interface (CLI)
IBM TotalStorage DS Storage Manager
The IBM TotalStorage DS Storage Manager is a program interface that is used to perform logical configurations and Copy Services management functions.
The DS Storage Manager program is installed as a GUI (graphical mode) or as an unattended (silent mode) installation for the supported operating systems. It can be accessed from any location that has network access using a Web browser. It offers you the following choices:
Simulated configuration
You install this component on your PC or the Master Console, which provides the ability to create or modify logical configurations when your storage unit is disconnected from the network. After creating the configuration, you can save it and then apply it to a network-attached storage unit at a later time.
Real-time configuration
This component is preinstalled on your HMC. It provides you the ability to create logical configurations and use Copy Services features when your storage unit is attached to the network. This component provides you with real-time (online) configuration support.
DS Open application programming interface
The DS Open application programming interface (API) is a nonproprietary storage management client application that supports routine LUN management activities, such as LUN creation, mapping and masking, and the creation or deletion of RAID 5 and RAID 10 volume spaces. The DS Open API also enables Copy Services functions such as FlashCopy and Remote Mirror and Copy (formerly known as peer-to-peer remote copy).
The IBM TotalStorage DS Open API helps integrate DS configuration management support into storage resource management (SRM) applications, which allow customers to benefit from existing SRM applications and infrastructures. The DS Open API also enables the automation of configuration management through customer-written applications. Either way, the DS Open API presents another option for managing storage units by complementing the use of the IBM TotalStorage DS Storage Manager web-based interface and the DS command-line interface.
You must implement the DS Open API through the IBM TotalStorage Common Information Model (CIM) agent, a middleware application that provides a CIM-compliant interface. The DS Open API uses the CIM technology to manage proprietary devices as open system devices through storage management applications. The DS Open API allows these storage management applications to communicate with a storage unit.
DS command-line interface
The IBM TotalStorage DS Command-Line Interface (CLI) enables open systems hosts to invoke and manage FlashCopy and Metro and Global Mirror functions through batch processes and scripts.
The command-line interface provides a full-function command set that allows you to check your storage unit configuration and perform specific application functions when necessary.
Note: Before you can use the DS CLI commands, you must ensure the following:
v Your Storage Management Console must be equipped with the DS Storage Manager graphical user interface (GUI).
v The GUI must have been installed as a Full Management Console installation management type.
v Your storage unit must be configured (part of DS Storage Manager postinstallation instructions).
v You must activate your license activation codes (part of DS Storage Manager postinstallation instructions) before you can use the CLI commands associated with Copy Services functions.
v You cannot install the DS CLI on a Windows 64-bit operating system.
The following list highlights a few of the specific types of functions that you can perform with the DS command-line interface:
v Check and verify your storage unit configuration
v Check the current Copy Services configuration that is used by the storage unit
v Create new logical storage and Copy Services configuration settings
v Modify or delete logical storage and Copy Services configuration settings
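As a minimal sketch of such a session (the console IP address, user ID, password, and storage image ID shown here are placeholders), you might verify a configuration as follows:

dscli -hmc1 10.0.0.1 -user admin -passwd mypassword
dscli> lssi
dscli> lsarray -dev IBM.1750-1300861
dscli> lsfbvol -dev IBM.1750-1300861

The lssi command lists the storage images, and lsarray and lsfbvol list the arrays and fixed-block volumes that are defined on the unit.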
Software Requirements
To see current information on servers, operating systems, I/O adapters, and connectivity products supported by the DS6000 series, click Interoperability Matrix at the following DS6000 series Web site:
http://www.ibm.com/servers/storage/disk/ds6000/interop.html
Host systems that DS6000 series supports
The DS6000 series provides a variety of host attachments so that you can consolidate storage capacity and workloads for open-systems hosts and zSeries hosts. The storage unit can be configured with fibre-channel adapters that support the fibre-channel protocol (FCP) and the fibre connection (FICON) protocol.
For fibre channel attachments, you can establish zones. Each zone must contain a single port that is attached to a system adapter and the desired number of ports that are attached to the storage unit. By establishing zones, you reduce the possibility of interactions between system adapters in switched configurations. You can establish the zones by using either of two zoning methods:
v Port number
v Worldwide port name (WWPN)

You can configure switch ports and hub ports that are attached to the storage unit in more than one zone. This enables multiple system adapters to share access to the storage unit fibre channel ports. Shared access to a storage unit fibre channel port might come from host platforms that support a combination of bus adapter types and operating systems. For information about host systems, operating system levels, host bus adapters, cables, and fabric support that IBM supports, see the DS6000 series Interoperability Matrix at:
http://www.ibm.com/servers/storage/disk/ds6000/interop.html.
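For example, on the 2109-class switches that are described in the related publications, a WWPN-based zone is typically created with Telnet commands similar to the following. The zone name, configuration name, and WWPNs here are illustrative placeholders, and the exact syntax depends on the switch firmware level:

zoneCreate "ds6000_host1", "10:00:00:00:c9:2b:4c:5d; 50:05:07:63:0e:01:23:45"
cfgCreate "san_cfg", "ds6000_host1"
cfgEnable "san_cfg"
cfgSave

The first WWPN represents the host adapter port and the second represents a storage unit port, so the zone contains exactly one host port, as recommended above.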
Fibre channel host attachments
Fibre channel technology supports increased performance, scalability, availability, and distance for attaching storage subsystems to network servers. Fibre channel technology supports applications that require large amounts of disk storage that is shared by two or more servers. You can use fibre channel to connect large amounts of disk storage to a server or cluster of servers.
The DS6000 series provides a fibre channel connection when you install a fibre-channel SFP (shortwave or longwave) in the DS6800 model.
Fibre channel architecture provides a variety of communication protocols on the storage server. The servers that are interconnected are referred to as nodes. Each node has one or more ports.
A storage unit is a node in a fibre channel network. Each port on a DS6800 fibre channel SFP is a fibre channel port. A host is also a node in a fibre channel network. Each port attaches to a serial-transmission medium that provides duplex communication with the node at the other end of the medium.
There are three basic topologies supported by fibre channel interconnection architecture:
Point-to-point
You can use the point-to-point topology to interconnect ports directly.
Switched fabric
The switched-fabric topology provides the necessary switching functions to support communication between multiple nodes.
Arbitrated loop
A fibre channel arbitrated loop (FC-AL) is a ring topology where two or more ports can be interconnected. You can use FC-AL to interconnect up to 127 hosts on a loop. An arbitrated loop that is connected to a fabric is known as a public loop. When the loop is not connected to a fabric, it is referred to as a private loop.
Attaching a DS6000 series to an open-systems host with fibre channel adapters
You can attach a DS6000 series to an open-systems host with fibre-channel adapters.
Fibre channel is a 1 Gbps or 2 Gbps, full-duplex, serial communications technology to interconnect I/O devices and host systems that are separated by tens of kilometers.
The IBM TotalStorage DS6000 series supports SAN connections at 1 Gbps to 4 Gbps with 2 Gbps host bus adapters. The DS6000 series negotiates automatically, determines whether it is best to run at a 1 Gbps or a 2 Gbps link speed, and operates at the higher supported link speed.
Fibre channel transfers information between the sources and the users of the information. This information can include commands, controls, files, graphics, video, and sound. Fibre-channel connections are established between fibre-channel ports that reside in I/O devices, host systems, and the network that interconnects them. The network consists of elements like switches, bridges, and repeaters that are used to interconnect the fibre-channel ports.
FICON-attached S/390 and zSeries hosts that the storage unit supports
You can attach the DS6000 storage unit to FICON-attached S/390 and zSeries hosts.
Each storage unit fibre-channel adapter has four ports. Each port has a unique worldwide port name (WWPN). You can configure the port to operate with the FICON® upper-layer protocol. When configured for FICON, the fibre-channel port supports connections to a maximum of 128 FICON hosts. On FICON, the fibre-channel adapter can operate with fabric or point-to-point topologies. With fibre-channel adapters that are configured for FICON, the storage unit provides the following configurations:
v Either fabric or point-to-point topologies
v A maximum of 509 channel connections per fibre-channel port
v A maximum of 2000 logical paths on each fibre-channel port
v A maximum of 2000 N-port logins per storage image
v Access to all 64 control-unit images (16 384 CKD devices) over each FICON port
v For the 1750 Model 511: 32 logical subsystems, 8192 logical volumes, and 1040 volume groups
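The 16 384 CKD device figure follows directly from the control-unit image limit: 64 control-unit images × 256 devices per image = 16 384 devices per FICON port.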
Note: FICON host channels support more devices than the 4096 possible devices on a storage unit. This enables you to attach other control units or other storage units to the same host channel, up to the limit that the host supports.
The storage unit supports the following operating systems for S/390 and zSeries hosts:
v Transaction Processing Facility (TPF)
v Virtual Storage Extended/Enterprise Storage Architecture (VSE/ESA™)
v z/OS®
v z/VM®
v Linux
For details about models, versions of operating systems, and releases that the storage unit supports for these host systems, see the Interoperability Matrix at http://www.ibm.com/servers/storage/disk/ds6000/interop.html.
General information about attaching to open-systems host with fibre-channel adapters
You can attach a storage unit to host systems with fibre-channel adapters.
Fibre-channel architecture
Fibre-channel architecture provides communications protocols on the storage unit.
The storage unit provides a fibre-channel connection in the storage unit. For more information about hosts and operating systems that the storage unit supports on the fibre-channel adapters, see the Interoperability Matrix at http://www.ibm.com/servers/storage/disk/ds6000/interop.html.
Fibre-channel architecture provides a variety of communication protocols on the storage unit. The units that are interconnected are referred to as nodes. Each node has one or more ports.
A storage unit is a node in a fibre-channel network. Each port on a storage unit fibre-channel host adapter is a fibre-channel port. A host is also a node in a fibre-channel network. Each port attaches to a serial-transmission medium that provides duplex communication with the node at the other end of the medium.
Storage unit architecture supports these basic interconnection topologies (network structure):
v Point-to-point
v Switched fabric
Point-to-point topology
The point-to-point topology, also known as direct connect, enables you to interconnect ports directly. Figure 1 on page 13 shows an illustration of a point-to-point topology.
The storage unit supports direct point-to-point topology at a maximum distance of 500 m (1640 ft) at 1 Gb and 300 m (984 ft) at 2 Gb with the shortwave adapter. The storage unit supports direct point-to-point topology at a maximum distance of 10 km (6.2 mi) with the longwave adapter.
Switched-fabric topology
The switched-fabric topology provides the underlying structure that enables you to interconnect multiple nodes. You can use a fabric that provides the necessary switching functions to support communication between multiple nodes.
You can extend the distance that the storage unit supports up to 300 km (186.3 miles) with a storage area network (SAN) or other fabric components.
The storage unit supports increased connectivity with the use of fibre-channel (SCSI-FCP and FICON) directors. Specific details on status, availability, and configuration options that are supported by the storage unit are available on http://www-1.ibm.com/servers/storage/disk/ds6000.
The storage unit supports the switched-fabric topology with point-to-point protocol. You should configure the storage unit fibre-channel adapter to operate in point-to-point mode when you connect it to a fabric topology. See Figure 2 on page 14.

Figure 1. Point-to-point topology
Legend:
v 1– is the host system.
v 2– is the storage unit.
Arbitrated loop topology
Fibre Channel-Arbitrated Loop (FC-AL) is a ring topology that enables you to interconnect a set of nodes. The maximum number of ports that you can have on an FC-AL is 127. See Figure 3 on page 15.
The storage unit supports FC-AL as a private loop. It does not support the fabric-switching functions in FC-AL.
The storage unit supports up to 127 hosts or devices on a loop. However, the loop goes through a loop initialization process (LIP) whenever you add or remove a host or device from the loop. LIP disrupts any I/O operations currently in progress. For this reason, you should only have a single host and a single storage unit on any loop.
Note: The storage unit does not support FC-AL topology on adapters that are configured for FICON protocol.
Figure 2. Switched-fabric topology
Legend:
v 1– is the host system.
v 2– is the storage unit.
v 3– is a switch.
Note: IBM supports only the topologies for point-to-point and arbitrated loop. Unconfigure the port to change the topology.
Fibre-channel cables and adapter types
This section provides information about fibre-channel cables and adapter types.
A storage unit fibre-channel adapter and FICON host adapter provide four ports with a standard connector. The cables include a standard connector for attachment to the host system:
v DS6000 adapter: the four-port FC card for the DS6000 provides four ports, each using a duplex LC (Lucent) connector.

See the IBM TotalStorage DS6000 Introduction and Planning Guide for detailed information about fibre-channel cables and adapter types. That document also includes information about cable features and optional cables.
Fibre-channel node-to-node distances
The storage unit supports fibre-channel adapter for extended node-to-node distances.
See IBM TotalStorage DS6000 Introduction and Planning Guide for a list of longwave and shortwave adapter cables and their distances.
For fibre-channel, the maximum distance between the following items is 11 km (6.8 mi):
v Fabric switches
v Link extenders
v Host fibre-channel port
v Storage unit fibre-channel port

Figure 3. Arbitrated loop topology
Legend:
v 1– is the host system.
v 2– is the storage unit.
The maximum distance might be greater than 11 km (6.8 mi) when a link extender provides target initiator functions or controller emulation functions. You should not use link extenders with emulation functions on links over which Remote Mirror and Copy operations are performed, because of the additional path delay that these units introduce.
LUN affinity for fibre-channel attachment
You can use a WWPN to associate a LUN for fibre-channel attachment.
For fibre-channel attachment, LUNs have an affinity to the host’s fibre-channel adapter through the worldwide port name (WWPN) for the host adapter. In a switched fabric configuration, a single fibre-channel host adapter could have physical access to multiple fibre-channel ports on the storage unit. In this case, you can configure the storage unit to allow the host to use either some or all of the physically accessible fibre-channel ports on the storage unit.
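As an illustrative sketch, this WWPN-based mapping is typically defined with DS CLI commands such as mkvolgrp and mkhostconnect. The storage image ID, volume IDs, WWPN, volume group ID, names, and host type value below are placeholders, and the available parameter values vary by code level:

dscli> mkvolgrp -dev IBM.1750-1300861 -type scsimask -volume 0100-0103 host1_volgrp
dscli> mkhostconnect -dev IBM.1750-1300861 -wwname 10000000C92B4C5D -hosttype Sun -volgrp V0 host1

The host connection ties the WWPN of the host adapter to a volume group, so the host can access only the LUNs in that group.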
Targets and LUNs for fibre-channel attachment
You can attach a fibre-channel host adapter to a LUN.
For fibre-channel attachment, each fibre-channel host adapter can architecturally attach up to 2⁶⁴ LUNs. The storage unit supports 64K LUNs with a maximum LUN size of 2 TB, divided into a maximum of 16 logical subsystems, each with up to 256 LUNs. You can configure no more than 256 of the LUNs in the storage unit to be accessible by that host.
LUN access modes for fibre-channel attachment
Several modes allow LUN access for fibre-channel attachment.
The following sections describe the LUN access modes for fibre-channel.
Fibre-channel access modes
The fibre-channel architecture allows any fibre-channel initiator to access any fibre-channel device, without access restrictions. However, in some environments this kind of flexibility can represent a security exposure. Therefore, the IBM TotalStorage DS6000 allows you to restrict this type of access when IBM sets the access mode for your storage unit during initial configuration. There are two types of LUN access modes:
1. Access-any mode
The access-any mode allows all fibre-channel attached host systems that do not have an access profile to access all non-iSeries open system logical volumes that you have defined in the storage unit.
Note: If you connect the storage unit to more than one host system with multiple platforms and use the access-any mode without setting up an access profile for the hosts, the data in the LUN used by one open-systems host might be inadvertently corrupted by a second open-systems host. Certain host operating systems insist on overwriting specific LUN tracks during the LUN discovery phase of the operating system start process.
2. Access-restricted mode
The access-restricted mode prevents all fibre-channel-attached host systems that do not have an access profile from accessing volumes that you defined in the storage unit. This is the default mode.

Your IBM service support representative (SSR) can change the logical unit number (LUN) access mode. Changing the access mode is a disruptive process; it requires that both clusters of the storage unit be shut down and restarted.
Access profiles
Any fibre-channel-attached host system that has an access profile can access only those volumes that are defined in the profile. Depending on the capability of the particular host system, an access profile can contain up to 256 or up to 4096 volumes.
The setup of an access profile is transparent to you when you use the IBM TotalStorage DS Storage Manager to configure the hosts and volumes in the storage unit. Configuration actions that affect the access profile are as follows:
v When you define a new fibre-channel-attached host system in the IBM TotalStorage DS Storage Manager by specifying its worldwide port name (WWPN), the access profile for that host system is automatically created. Initially the profile is empty. That is, it contains no volumes. In this state, the host cannot access any logical volumes that are already defined in the storage unit.
v When you add new logical volumes to the storage unit and assign them to the host, the new volumes are created and automatically added to the access profile.
v When you assign volumes to fibre-channel-attached hosts, the volumes are added to the access profile.
v When you remove a fibre-channel-attached host system from the IBM TotalStorage DS Storage Manager, you delete the host and its access profile.
The anonymous host
When you run the storage unit in access-any mode, the IBM TotalStorage DS Storage Manager displays a dynamically created pseudo-host called anonymous. This is not a real host system connected to the storage server. It is intended to represent all fibre-channel-attached host systems that are connected to the storage unit that do not have an access profile defined. This is a reminder that logical volumes defined in the storage unit can be accessed by hosts which have not been identified to the storage unit.
Fibre-channel storage area networks
Fibre-channel storage area networks connect servers and storage devices.
A fibre-channel storage area network (SAN) is a specialized, high-speed network that attaches servers and storage devices. With a SAN, you can perform an any-to-any connection across the network using interconnect elements such as routers, gateways, and switches. With a SAN, you can eliminate the dedicated connection between a server and its storage, and with it the concept that the server effectively owns and manages the storage devices.

The SAN also eliminates any restriction on the amount of data that a server can access, which is otherwise limited by the number of storage devices that can be attached to the individual server. Instead, a SAN introduces the flexibility of networking to enable one server or many heterogeneous servers to share a common storage utility. This utility might comprise many storage devices, including disk, tape, and optical storage. You can locate the storage utility far from the servers that use it.
Fibre-channel SANs, however, provide the capability to interconnect open systems and storage in the same network as S/390 and zSeries host systems and storage. You can map the protocols for attaching open systems and S/390 and zSeries host systems to the FC-4 layer of the fibre-channel architecture.
Chapter 2. Attaching to an Apple Macintosh server
This chapter provides information about attaching an Apple Macintosh server to a storage unit.
Supported fibre-channel adapters for the Apple Macintosh server
You can use these fibre-channel adapters with the Apple Macintosh server.
You can attach a storage unit to an Apple Macintosh server with the following fibre-channel adapters:
v Apple fibre channel PCI-X card, 065-5136
v ATTO 3300
v ATTO 3321
For information about servers, operating systems, adapters, and switches that IBM supports, see the Interoperability Matrix at http://www.ibm.com/servers/storage/disk/ds6000/interop.html.
Chapter 3. Attaching to a Fujitsu PRIMEPOWER host system
This chapter describes how to attach a Fujitsu PRIMEPOWER host to a storage unit.
Supported fibre-channel adapters for PRIMEPOWER
The following adapter card is supported for the PRIMEPOWER host system:
v Emulex LP9002L adapter card
Fibre-channel attachment requirements for PRIMEPOWER
This section lists the requirements for attaching the storage unit to your PRIMEPOWER host system.
v Ensure that there are enough fibre-channel adapters installed in the server to manage the total number of LUNs that you want to attach.
v Ensure that you can reference the documentation for your host system and the IBM TotalStorage DS6000 Information Center that is integrated with the IBM TotalStorage DS Storage Manager.
v Review device driver installation documents and configuration utility documents for any PRIMEPOWER patches that you might need.
v See the Interoperability Matrix at http://www.ibm.com/servers/storage/disk/ds6000/interop.html for details about the release level for your operating system.
Either you or an IBM service support representative must perform the following tasks to install and configure a storage unit:
1. Install the storage unit by using the procedures in the IBM TotalStorage DS6000 Installation, Troubleshooting, and Recovery Guide.
2. Define the fibre-channel host system with the worldwide port name identifiers. For the list of worldwide port names, see “Locating the worldwide port name (WWPN),” on page 211.
3. Define the fibre-port configuration if you did not do it when you installed the storage unit or fibre-channel adapters.
4. Configure the host system for the storage unit by using the instructions in your host system publications.
Installing the Emulex adapter card for a PRIMEPOWER host system
This section tells you how to attach a storage unit to a PRIMEPOWER host system with an Emulex adapter card.
Single- and dual-port, fibre-channel interfaces with an Emulex adapter card support the following public and private loop modes:
v Target
v Public initiator
v Private initiator
v Target and public initiator
v Target and private initiator
1. Record the IEEE number that is printed on the card. You can use the IEEE number to determine the WWPN.
2. Refer to the installation instructions that your host adapter vendor provides. See http://www.emulex.com for the latest documentation.
Downloading the Emulex adapter driver for a PRIMEPOWER host system
This section provides instructions to download the current Emulex fibre-channel adapter driver.
1. Restart your host system.
2. Go to http://www.ibm.com/servers/storage/disk/ds6000/interop.html.
3. Click Interoperability matrix.
4. Click DS6000 interoperability matrix.
5. Find the section for the current version of the driver and firmware that you want.
6. Go to http://www.emulex.com.
7. Click Driver, downloads, and documentation from the left navigation pane. Click OEM software and documentation.
8. Click IBM.
9. Click the link for the adapter that corresponds to the firmware, driver, and documentation that you need to install and download the adapter driver.
Installing the Emulex adapter driver for a PRIMEPOWER host system
This section provides instructions to install the Emulex fibre-channel adapter driver.
1. Go to http://www.emulex.com.
2. From the left navigation pane, click Drivers, Software, and Manuals.
3. Click IBM.
4. Determine the driver you want, and click the link for the adapter driver.
5. Determine the operating system for the driver that you want to download and click Version x-x.xxxx, where x-x.xxxx is the version number for the adapter driver.
Note: If you are installing the fibre-channel adapter for the first time, you must specify the correct topology. You must also select the appropriate device mapping driver.
Configuring host device drivers for PRIMEPOWER
Perform the following steps to update the PRIMEPOWER driver configuration file. This procedure gives you access to target and LUN pairs that are configured on the storage unit.
Note: Do not change or remove entries in /kernel/drv/sd.conf for preexisting devices. Doing so can cause your system to become inoperable.
1. Change to the directory by typing: cd /kernel/drv
2. Back up the sd.conf file in this subdirectory.
3. Edit the sd.conf file to add support for the target and LUN pairs that are configured on the host system.
Note: Do not add duplicate target and LUN pairs.
Figure 4 shows the lines that you must add to the file to access LUNs 0 - 49 on target 0 for fibre-channel.
Figure 5 on page 24 shows the start of the lpfc auto-generated configuration.
Note: Anything that you put within this auto-generated section is deleted if you issue the pkgrm command to remove the lpfc driver package. You might want to add lines to probe for additional LUNs or targets. Delete any lines that represent lpfc targets or LUNs that are not used.
name="sd" class="scsi"
target=0 lun=0;
name="sd" class="scsi"
target=0 lun=1;
name="sd" class="scsi"
target=0 lun=2;
name="sd" class="scsi"
target=0 lun=3;
name="sd" class="scsi"
target=0 lun=4;
name="sd" class="scsi"
target=0 lun=5;
name="sd" class="scsi"
target=0 lun=6;
name="sd" class="scsi"
target=0 lun=7;
name="sd" class="scsi"
target=0 lun=8;
name="sd" class="scsi"
target=0 lun=9;
name="sd" class="scsi"
target=0 lun=10; . . . name="sd" class="scsi"
target=0 lun=48; name="sd" class="scsi"
target=0 lun=49;
Figure 4. Example of sd.conf file entries for fibre-channel
4. Type one of the following commands:
a. reboot -- -r from the Open Windows window to shut down and restart the PRIMEPOWER host system with the kernel reconfiguration option
b. boot -r from the OK prompt after you shut down the system
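The following is a minimal sketch of the complete procedure; the backup file name is illustrative:

# cd /kernel/drv
# cp sd.conf sd.conf.orig
# vi sd.conf          (append the new target and LUN entries)
# reboot -- -r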
The fibre-channel adapters that are supported for attaching the storage unit to a PRIMEPOWER host are capable of full-fabric support. Ensure that all fibre-channel driver configurations include worldwide port name, worldwide node name, port ID, or host adapter binding of target LUN pairs.
Binding of target LUN pairs implements the PRIMEPOWER fibre-channel host adapter configuration file that is installed by the adapter software package. Refer to the manufacturer’s adapter documentation and utilities for detailed configuration instructions.
You can tune fibre-channel host adapter configuration files for host system reliability and performance.
Parameter settings for the Emulex LP9002L adapter
You can use these recommended configuration settings for your Emulex adapter on a PRIMEPOWER host system.
Table 1 on page 25 provides configuration settings that are recommended for the Emulex LP9002L adapter. For the most current information about fibre-channel adapter parameter settings, see:
http://knowledge.storage.ibm.com/servers/storage/support/hbasearch/interop/hbaSearch.do
name="sd" parent="lpfc" target=0 lun=0; name="sd" parent="lpfc" target=1 lun=0; name="sd" parent="lpfc" target=2 lun=0; name="sd" parent="lpfc" target=3 lun=0; name="sd" parent="lpfc" target=4 lun=0; name="sd" parent="lpfc" target=5 lun=0; name="sd" parent="lpfc" target=6 lun=0; name="sd" parent="lpfc" target=7 lun=0; name="sd" parent="lpfc" target=8 lun=0; name="sd" parent="lpfc" target=9 lun=0; name="sd" parent="lpfc" target=10 lun=0; name="sd" parent="lpfc" target=11 lun=0; name="sd" parent="lpfc" target=12 lun=0; name="sd" parent="lpfc" target=13 lun=0; name="sd" parent="lpfc" target=14 lun=0; name="sd" parent="lpfc" target=15 lun=0; name="sd" parent="lpfc" target=16 lun=0; name="sd" parent="lpfc" target=17 lun=0; name="sd" parent="lpfc" target=17 lun=1; name="sd" parent="lpfc" target=17 lun=2; name="sd" parent="lpfc" target=17 lun=3;
Figure 5. Example of a start lpfc auto-generated configuration
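To probe one target beyond those shown in Figure 5, you might append a line such as the following inside the lpfc section; the target number 18 is illustrative:

name="sd" parent="lpfc" target=18 lun=0;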
Table 1. Recommended configuration file parameters for the Emulex LP9002L adapter
Parameter    Recommended settings

automap
1: Default. SCSI IDs for all FCP nodes without persistent bindings are automatically generated. If new FCP devices are added to the network when the system is down, there is no guarantee that these SCSI IDs will remain the same when the system is restarted. If one of the FCP binding methods is specified, then automap devices use the same mapping method to preserve SCSI IDs between link down and link up. If no bindings are specified, a value of 1 forces WWNN binding, a value of 2 forces WWPN binding, and a value of 3 forces DID binding. If automap is 0, only devices with persistent bindings are recognized by the system.

fcp-on
1: Default. Turn on FCP.

lun-queue-depth
30: The default value that the driver uses to limit the number of outstanding commands per FCP LUN. This value is global, affecting each LUN recognized by the driver, but can be overridden on a per-LUN basis. You might have to configure RAID using the per-LUN tunable throttles.

no-device-delay
0: Default. Implies no delay whatsoever.
1: Recommended.
2: Setting a long delay value might permit I/O operations to build up, each with a pending time-out, which could result in the exhaustion of critical PRIMEPOWER kernel resources. In this case, you might see a fatal message such as: PANIC: Timeout table overflow.

network-on
0: Default. Recommended for fabric. Do not turn on IP networking.
1: Turn on IP networking.

scan-down
0: Recommended. Causes the driver to use an inverted ALPA map, effectively scanning ALPAs from high to low as specified in the FC-AL annex.
2: Arbitrated loop topology.

tgt-queue-depth
0: Recommended. The default value that the driver uses to limit the number of outstanding commands per FCP target. This value is global, affecting each target recognized by the driver, but can be overridden on a per-target basis. You might have to configure RAID using the per-target tunable throttles.

topology
2: Recommended for fabric. Point-to-point topology only.
4: Recommended for nonfabric. Arbitrated-loop topology only.

xmt-que-size
256: Default. Size of the transmit queue for mbufs (128 - 10240).

zone-rscn
0: Default.
1: Recommended for fabric. Check the name server for RSCNs. Setting zone-rscn to 1 causes the driver to check with the name server to see if an N_Port ID received from an RSCN applies. If soft zoning is used with Brocade fabrics, set this parameter to 1.
Setting parameters for Emulex adapters
This section provides instructions for setting parameters for Emulex adapters for your PRIMEPOWER host system.
1. Type cd /etc to change to the /etc subdirectory.
2. Back up the system file in this subdirectory.
3. Edit the system file and set the following parameters for servers with configurations that only use Emulex or QLogic adapters.
sd_io_time
This parameter specifies the time-out value for disk operations. Add the following line to the /etc/system file to set the sd_io_time parameter for the storage unit LUNs:
set sd:sd_io_time=0x78

sd_retry_count
This parameter specifies the retry count for disk operations. Add the following line to the /etc/system file to set the sd_retry_count parameter for the storage unit LUNs:
set sd:sd_retry_count=5

maxphys
This parameter specifies the maximum number of bytes that you can transfer for each transaction. The default value is 126976 (124 KB). If the I/O block size that you request exceeds the default value, the request is broken into more than one request. Tune the value to the application requirements. For maximum bandwidth, set the maxphys parameter by adding the following line to the /etc/system file:
set maxphys=1048576 (1 MB)
Note: Do not set the value for maxphys greater than 1048576 (1 MB). Doing so can cause the system to hang.
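Taken together, the additions to the /etc/system file described in this section might look like the following sketch, using the recommended values from the steps above:

set sd:sd_io_time=0x78
set sd:sd_retry_count=5
set maxphys=1048576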
Chapter 4. Attaching to a Hewlett-Packard AlphaServer Tru64 UNIX host
This chapter describes how to attach a Hewlett-Packard (HP) AlphaServer Tru64 UNIX host to a storage unit.
Attaching to an HP AlphaServer Tru64 UNIX host system with fibre-channel adapters
This section describes the host system requirements and provides the procedures to attach a storage unit to an HP AlphaServer with fibre-channel adapters.
Supported fibre-channel adapters for the HP AlphaServer Tru64 UNIX host system
This section lists the fibre-channel adapters that are supported for storage unit attachment to an HP AlphaServer Tru64 UNIX host system.
You can attach a storage unit to an HP AlphaServer Tru64 UNIX host system with the following fibre-channel adapters:
v KGPSA-CA adapter card
v KGPSA-DA adapter card
v KGPSA-EA adapter card
Note: You do not need the Subsystem Device Driver because Tru64 UNIX manages multipathing.
For information about adapters that you can use to attach the storage unit to the HP host and the HP AlphaServer models, see the Interoperability Matrix at http://www.ibm.com/servers/storage/disk/ds6000/interop.html.
Supported operating system levels for fibre-channel attachment to an HP Tru64 UNIX host
This section lists the supported operating system levels for storage unit fibre-channel attachment to an HP AlphaServer Tru64 UNIX host system.
You can attach a clustered HP AlphaServer Tru64 host system at level 5.0A, 5.1, 5.1A, or 5.1B with the following topologies:
v Switched fabric
v Point-to-point
Fibre-channel Tru64 UNIX attachment requirements
This section lists the attachment requirements to attach a storage unit to a Tru64 UNIX host system with fibre-channel adapters.
You must comply with the following requirements when attaching the storage unit to your host system:
v Ensure that you can reference the documentation for your host system and the
IBM TotalStorage DS6000 Information Center that is integrated with the IBM TotalStorage DS Storage Manager.
v See the Interoperability Matrix at
http://www.ibm.com/servers/storage/disk/ds6000/interop.html for details about the release level for your operating system.
v You must install the storage unit by using the procedures in the IBM TotalStorage
DS6000 Installation, Troubleshooting, and Recovery Guide.
Fibre-channel Tru64 UNIX attachment considerations
This section lists the attachment considerations when attaching a storage unit to an HP AlphaServer Tru64 UNIX host system.
Table 2 lists the maximum number of adapters you can have for an AlphaServer.
Note: For a list of open systems hosts, operating systems, adapters and switches
that IBM supports, see the Interoperability Matrix at http://www.ibm.com/servers/storage/disk/ds6000/interop.html.
Table 2. Maximum number of adapters you can use for an AlphaServer
AlphaServer name Maximum number of adapters
800 2
1200 4
2100 4
4000, 4000a 4
4100 4
8200, 8400 8
DS10, DS20, DS20E 2
ES40, ES45, ES47 4
GS60, GS60E, GS140 8
GS80, GS160, GS320 8
Supporting the AlphaServer Console for Tru64 UNIX
The following sections describe how to support the AlphaServer Console for Tru64 UNIX when you are attaching to a storage unit with fibre-channel adapters.
Supported microcode levels for the HP Tru64 UNIX host system
This topic lists the supported storage unit microcode levels when you attach to an HP AlphaServer Tru64 UNIX host system.
Support for an AlphaServer console that recognizes storage unit LUNs over fibre channel is available with the current version of the storage unit licensed machine code (LMC). To determine which level of LMC is running on the storage unit, type telnet xxxxx, where xxxxx is the storage unit cluster name.
Figure 6 on page 29 shows an example of the output from the telnet command.
Supported switches for the HP Tru64 UNIX host system
This section lists the switches that are supported when you attach a storage unit to an HP AlphaServer Tru64 UNIX host system.
For more information on supported switches, see the Interoperability Matrix at http://www.ibm.com/servers/storage/disk/ds6000/interop.html
IBM supports cascaded switches in configurations up to a maximum of 8 switches with a maximum of 3 interswitch hops for any path. Use two hops for normal operations with the third hop reserved for backup paths.
Attaching the HP AlphaServer Tru64 UNIX host to a storage unit using fibre-channel adapters
To attach a storage unit to a Tru64 UNIX operating system, perform the following tasks:
v Confirm the installation of the operating system
v Confirm the installation of the host adapter
v Set up the storage unit
v Confirm switch connectivity
v Confirm storage connectivity
v Configure storage
v Review tuning recommendations
v Review unsupported utilities
Confirming the installation of the Tru64 UNIX operating system
This section lists the steps you must perform to confirm the installation of Tru64 UNIX.
telnet xxxxx (where xxxxx is the cluster name)
IBM TotalStorage DS6000
Model 921 SN 75-99999 Cluster Enclosure 1
OS Level 4.3.2.15 Code EC 1.5.0.107 EC Installed on: Jan 10 2005
tbld1212 SEA.rte level = 2.6.402.678 SEA.ras level = 2.6.402.678
licensed machine code - Property of IBM.
1750 licensed machine code
(C) IBM Corporation 1997, 2004. All rights reserved.
US Government Users Restricted Rights - Use, duplication or disclosure
Restricted by GSA ADP Schedule Contract with IBM Corporation.
Login:
Figure 6. Confirming the storage unit licensed machine code on an HP AlphaServer through the telnet command
If you use storage unit volumes as member boot disks for a clustered or nonclustered configuration, install the operating system from the console level. You can use a storage unit LUN as a boot disk only for the Tru64 5.x operating system.
1. Confirm the installation of the appropriate version of Tru64 UNIX. For the Tru64 UNIX 5.x operating system, use the sizer -v command to confirm installation. Figure 7 shows an example of what is displayed when you use the sizer -v command.
2. Ensure that you are at patch level 7 and that all kernel options are active. For Tru64 UNIX cluster configurations, you must install patch 399.00 with patch security (SSRT0700U). For information about other patches for this version, refer to the documentation from HP.
Installing the KGPSA-xx adapter card in a Tru64 UNIX host system
This section lists the steps you must perform to install the KGPSA-xx adapter card.
1. Shut down the HP AlphaServer host system.
2. Use the procedures provided by the manufacturer of the adapter card to install the KGPSA-xx host adapter.
3. Restart the host (nonclustered configurations) or each cluster member (clustered configurations).
4. Bring each host system to a halt condition at the console level.
5. Type set mode diag at the HP AlphaServer console to place the console in diagnostic mode.
6. Type wwidmgr -show adapter to confirm that you installed each adapter properly.
7. If necessary, update the adapter firmware.
See Figure 8 on page 31 for an example of what you see when you type set mode diag and wwidmgr -show adapter.
# sizer -v
Compaq Tru64 UNIX V5.1A (Rev. 1885); Tue Sept 24 14:12:40 PDT 2002
Figure 7. Example of the sizer -v command
Figure 8 shows the worldwide node name (WWNN). You need the worldwide port name (WWPN) to configure the storage unit host attachment. To determine the WWPN for the KGPSA adapters, replace the “2” in the WWNN with a “1”.
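For example, the adapter that reports WWNN 2000-0000-c922-69bf in Figure 8 has the WWPN 1000-0000-c922-69bf.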
Setting the mode for the KGPSA-xx host adapter
This task describes setting the mode for the KGPSA-xx host adapter.
You must install the KGPSA-xx host adapter before you can set the mode.
The default KGPSA mode setting is FABRIC, so directly attaching the AlphaServer to the storage unit using fibre-channel KGPSA-xx adapters does not work without modification. You must change the mode setting to LOOP mode.
1. Type # shutdown -h now to shut down the operating system.
2. Type init to initialize the system.
3. Type wwidmgr -show adapter to check the mode. Figure 9 shows example
output from the wwidmgr command.
4. Type one of the following commands to set the mode of the KGPSA host adapter:
a. For FABRIC mode, type wwidmgr -set adapter -item 9999 -topo fabric
b. For LOOP mode, type wwidmgr -set adapter -item 9999 -topo loop
5. Type init to initialize the system.
6. Type wwidmgr -show adapter to check the mode.
7. Use the IBM TotalStorage DS Storage Manager to set your port attributes to match your host settings:
a. For arbitrated loop: Set your port attribute to Direct Connect.
b. For point-to-point: Set your port attribute to Switched Fabric.
P00>>>set mode diag
Console is in diagnostic mode
P00>>>wwidmgr -show adapter
polling for units on kgpsa0, slot 9, bus 0, hose0...
kgpsaa0.0.0.9.0 PGA0 WWN 2000-0000-c922-69bf
polling for units on kgpsa1, slot 10, bus 0, hose0...
kgpsab0.0.0.10.0 PGB0 WWN 2000-0000-c921-df4b
item     adapter             WWN                  Cur. Topo  Next Topo
[ 0]     kgpsab0.0.0.10.0    2000-0000-c921-df4b  FABRIC     FABRIC
[ 1]     kgpsaa0.0.0.9.0     2000-0000-c922-69bf  FABRIC     FABRIC
[9999]   All of the above.
P00>>>
Figure 8. Example of the set mode diag command and the wwidmgr -show adapter command
item     adapter        WWN                  Cur. Topo  Next Topo
[ 0]     pga0.0.4.1     2000-0000-C923-1765  FABRIC     FABRIC
Figure 9. Example results of the wwidmgr command.
Setting up the storage unit to attach to an HP AlphaServer Tru64 UNIX host system with fibre-channel adapters
The following sections tell you how to set up a storage unit to attach to an HP AlphaServer Tru64 UNIX host.
Adding or modifying AlphaServer fibre-channel connections
This section provides the reference you need to add or modify AlphaServer fibre-channel connections.
To add, remove, or modify the AlphaServer connections, use the IBM TotalStorage DS Storage Manager. When you add a connection, it is necessary to specify the worldwide port name of the host connection. See “Locating the worldwide port name (WWPN),” on page 211 for procedures on how to locate the WWPN for each KGPSA adapter card.
Configuring fibre-channel host adapter ports for the Tru64 UNIX host system
This section provides the reference you need to configure host adapter ports for fibre-channel storage unit connectivity to an HP AlphaServer Tru64 UNIX host system.
To configure the host adapter ports, use the IBM TotalStorage DS Storage Manager or see the IBM TotalStorage DS6000 Command-Line Interface User’s Guide.
Adding and assigning volumes to the Tru64 UNIX host system
This section provides information you need to add and assign volumes to the Tru64 UNIX host system.
To set up disk groups, create volumes and assign them to the Tru64 connections, use the IBM TotalStorage DS Storage Manager or see the IBM TotalStorage DS6000 Command-Line Interface User’s Guide.
Confirming fibre-channel switch connectivity for Tru64 UNIX
This section lists the steps you need to perform to confirm fibre-channel switch connectivity for Tru64 UNIX.
1. Open a telnet session and log in to the switch as an administrator.
2. Confirm that each host adapter has performed a fabric login to the switch.
3. Type switchshow to confirm that each storage unit host adapter has performed a
fabric login to the switch. Figure 10 on page 33 shows an example of what displays when you type the switchshow command.
Confirming fibre-channel storage connectivity for Tru64 UNIX
This section lists the steps you must perform to confirm fibre-channel storage connectivity for an HP Tru64 UNIX host system.
1. Reset the host (nonclustered configurations) or each cluster member (clustered configurations).
2. Bring each host system to a halt condition at the console level.
3. Type set mode diag at the HP AlphaServer console (if required by the host) to place the console in diagnostic mode.
Type wwidmgr -show wwid to display the information about the storage unit volume at the console level. You can use this information to identify the volumes that are attached to an AlphaServer. Figure 11 shows an example of information about the storage unit volumes that you can see at the AlphaServer console.
4. Type wwidmgr -show adapter to confirm storage attachment.
See “Tru64 UNIX UDID hexadecimal representations” on page 34 for an explanation of the UDID.
snj2109f16h4:osl> switchshow
switchName:     snj2109f16h4
switchType:     9.1
switchState:    Online
switchRole:     Principal
switchDomain:   1
switchId:       fffc01
switchWwn:      10:00:00:60:69:50:0c:3e
switchBeacon:   OFF
port 0: id N1 Online F-Port 50:05:07:63:00:c9:91:62
port 1: id N1 Online F-Port 10:00:00:00:c9:22:d2:08
port 2: id N1 Online F-Port 10:00:00:00:c9:22:6a:63
port 3: id N1 Online F-Port 50:00:1f:e1:00:00:2b:11
port 4: id N2 No_Light
port 5: id N1 Online F-Port 10:00:00:00:c9:22:d4:69
port 6: id N1 Online F-Port 10:00:00:00:c9:22:67:38
port 7: id N1 Online L-Port 1 private, 3 phantom
port 8: id N2 No_Light
port 9: id N1 Online F-Port 10:00:00:00:c9:22:69:bf
port 10: id N1 Online F-Port 10:00:00:00:c9:21:df:4b
port 11: id N1 Online F-Port 50:05:07:63:00:cf:8d:7e
port 12: id N2 No_Light
port 13: id N1 Online F-Port 50:05:07:63:00:c7:91:1c
port 14: id N1 Online F-Port 50:05:07:63:00:cd:91:62
port 15: -- N2 No_Module
snj2109f16h4:osl>
Figure 10. Example of the switchshow command
P00>>>set mode diag Console is in diagnostic mode P00>>>wwidmgr -show wwid [0] UDID: -1 WWID:01000010:6000-1fe1-4942-4d20-0000-0000-28b1-5660 (ev:none) [1] UDID: -1 WWID:01000010:6000-1fe1-4942-4d20-0000-0000-2881-5660 (ev:none) [2] UDID: -1 WWID:01000010:6000-1fe1-4942-4d20-0000-0000-2821-5660 (ev:none) P00>>>
Figure 11. Example of storage unit volumes on the AlphaServer console
Tru64 UNIX UDID hexadecimal representations
This section describes UDID representations for storage unit volumes.
The UDID for each volume appears as -1, signifying that the UDID is undefined. With the supported storage unit LMC, all UDIDs for storage unit volumes are undefined.
The underscore in Figure 12 highlights the hex string that identifies a storage unit volume that is attached to an AlphaServer.
The third and fourth quartets of the UDID number always contain the value “4942-4d20”. This is the string IBM in hex and represents a storage unit volume.
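Reading the value byte by byte, hex 49, 42, 4d, and 20 are the ASCII codes for the characters I, B, M, and a trailing space.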
The underscore in Figure 13 highlights an example of a hex string that identifies the decimal volume number of the storage unit volume. The first three characters of the next-to-last quartet of numbers are the hex string representation. Figure 13 shows that the storage unit volume number is decimal 282.
Figure 14 shows a hex representation of the last 5 characters of the storage unit volume serial number.
Preparing to boot from the storage unit for the Tru64 UNIX host system
This section lists the steps you must perform to prepare to boot from the storage unit.
Use the wwidmgr command to set up each device that you use for booting or dumping. After you set up a device, the console retains the information that is needed to access the device in nonvolatile memory. Rerun the wwidmgr command if the system configuration changes and the nonvolatile information is no longer valid.
1. Display the WWIDs of all assigned storage unit volumes with the wwidmgr -show wwid command.
01000010:6000-1fe1-4942-4d20-0000-0000-2821-5660
Figure 12. Example of a hex string for a storage unit volume on an AlphaServer Tru64 UNIX console
01000010:6000-1fe1-4942-4d20-0000-0000-2821-5660
Figure 13. Example of a hex string that identifies the decimal volume number for a storage unit volume on an AlphaServer console or Tru64 UNIX
01000010:6000-1fe1-4942-4d20-0000-0000-2821-5660
Figure 14. Example of hex representation of last 5 characters of a storage unit volume serial number on an AlphaServer console
2. Determine which storage unit volume that you want to use as a boot or dump device by decoding the serial number as described in “Tru64 UNIX UDID hexadecimal representations” on page 34.
3. Assign a unit number with the wwidmgr -quickset -item i -unit u command, where i is the wwidmgr item number and u is the unit number you choose. You can find the item number inside the square brackets of the output from the
wwidmgr -show wwid command.
4. Reinitialize the server with the init command.
When you make changes with the wwidmgr command, they do not take effect until the next system initialization.
5. Use the show device command to verify that the system displays the disk as console device DGAu, with the unit number that you defined.
After the initialization, the console show device command displays each fibre-channel adapter followed by the paths through that adapter to each of the defined fibre-channel disks. The path-independent OpenVMS device name for each fibre-channel disk is displayed in the second column.
6. Set the default boot device console variable, bootdef_dev, to match the console device name of the boot disk. In a multipath environment, use the asterisk (*) as a wildcard to make all boot paths available. An example of the multipath command is set bootdef_dev DGAu.*, where u is the unit number and * denotes all possible paths.
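For example, assuming that you chose unit number 100 for item 1 in step 3 (both values are illustrative), the console sequence might look like this:

P00>>>wwidmgr -quickset -item 1 -unit 100
P00>>>init
P00>>>show device
P00>>>set bootdef_dev DGA100.*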
Verifying the fibre-channel attachment of the storage unit volumes to Tru64 UNIX
This section contains steps you must perform to verify the fibre-channel attachment of the storage unit volumes to the Tru64 UNIX host system.
For Tru64 UNIX 5.x:
1. Use the hwmgr command to verify the attachment of the storage unit volumes for Tru64 5.x. Figure 15 shows an example of the commands you can use to verify the attachment of the storage unit volumes.
# hwmgr -view dev -cat disk
HWID: Device Name Mfg Model Location
------------------------------------------------------------------------------
 54: /dev/disk/floppy0c           3.55in floppy      fdi0-unit-0
 60: /dev/disk/dsk1c      DEC     RZ2DD-LS (C) DEC   bus-2-targ-0-lun-0
 63: /dev/disk/cdrom0c    COMPAQ  CDR-8435           bus-5-targ-0-lun-0
 66: /dev/disk/dsk5c      IBM     2105F20            bus-0-targ-253-lun-0
 67: /dev/disk/dsk6c      IBM     2105F20            bus-0-targ-253-lun-1
 68: /dev/disk/dsk7c      IBM     2105F20            bus-0-targ-253-lun-2
  :
# hwmgr -get attributes -id 66
66:
  name = SCSI-WWID:01000010:6000-1fe1-0000-2b10-0009-9010-0323-0046
  category = disk
  sub_category = generic
  architecture = SCSI
  :
Figure 15. Example of the hwmgr command to verify attachment
2. Use the example Korn shell script, called dsvol, shown in Figure 16, to display a summary that includes information for all the storage unit volumes that are attached.
Figure 17 shows an example of what displays when you execute the dsvol Korn shell script.
Note: You can see storage unit volumes 282, 283, and 284 as LUNs 0, 1, and 2 respectively. You can access the LUNs in Tru64 UNIX by using the following special device files:
v /dev/rdisk/dsk3
v /dev/rdisk/dsk4
v /dev/rdisk/dsk5
Configuring the storage for fibre-channel Tru64 UNIX hosts
This section lists the steps you must perform to configure the storage for fibre-channel Tru64 UNIX host systems.
You can use the standard Tru64 storage configuration utilities to partition and prepare storage unit LUNs and create and mount file systems.
Perform one of the following sets of steps to configure the storage for a Tru64 5.x file system:
1. Perform the following steps to configure an AdvFS file system:
a. Type: # disklabel -wr /dev/rdisk/dsk6c
b. Type: # mkfdmn /dev/disk/dsk6c adomain
echo "Extracting DS volume information..."
for ID in `hwmgr -view dev -cat disk | grep ibm1750 | awk '{ print $1 }'`
do
    echo; echo "DS vol, H/W ID $ID"
    hwmgr -get attrib -id $ID | awk '/phys_loc|dev_base|capacity|serial/'
done
Figure 16. Example of a Korn shell script to display a summary of storage unit volumes
# ./dsvol | more
Extracting DS volume information...
DS vol, H/W ID 38:
  phys_location = bus-2-targ-0-lun-0
  dev_base_name = dsk3
  capacity = 5859392
  serial_number = SCSI-WWID:01000010:6000-1fe1-4942-4d20-0000-0000-2821-5660
DS vol, H/W ID 39:
  phys_location = bus-2-targ-0-lun-1
  dev_base_name = dsk4
  capacity = 5859392
  serial_number = SCSI-WWID:01000010:6000-1fe1-4942-4d20-0000-0000-2831-5660
DS vol, H/W ID 40:
  phys_location = bus-2-targ-0-lun-2
  dev_base_name = dsk5
  capacity = 5859392
  serial_number = SCSI-WWID:01000010:6000-1fe1-4942-4d20-0000-0000-2841-5660
#
Figure 17. Example of the Korn shell script output
c. Type: # mkfset adomain afs
d. Type: # mkdir /fs
e. Type: # mount -t advfs adomain#afs /fs
2. Perform the following steps to configure a UFS file system:
a. Type: # disklabel -wr /dev/rdisk/dsk6c
b. Type: # newfs /dev/disk/dsk6c
c. Type: # mkdir /fs
d. Type: # mount -t ufs /dev/disk/dsk6c /fs
Removing persistent reserves for Tru64 UNIX 5.x
This section explains how to remove persistent reserves for the Tru64 UNIX 5.x host system.
In the clustered environment, Tru64 5.x hosts place a persistent reserve whenever you assign a LUN. If you perform a FlashCopy or Remote Mirror and Copy operation on a LUN that has persistent reserve, it will fail. If you want to perform a FlashCopy or Remote Mirror and Copy operation, remove the persistent reserve on the target LUN before you assign the LUN.
For example, assume that there are two Tru64 5.x hosts, Alpha1 and Alpha2. The following connections are available for accessing storage unit LUNs with fibre channel:
v Fibre connection: One fibre connection goes from the KGPSA-xx card in Alpha1
to a switch. One fibre connection goes from the Alpha2 to the switch and another fibre connection goes from the switch to the storage unit.
Use a storage unit volume, for example 10a-21380, as a target LUN to perform a FlashCopy. There are two hosts and four connections: two from Alpha1 and two from Alpha2. The storage unit volume 10a-21380 has four registrants. One registrant is reserved. Use the essvol script to find the devices that are associated with storage unit volume 10a-21380 on each Trucluster node. Figure 18 on page 38 shows an example of how to remove a persistent reserve when you use the essvol script.
Use the scu command to see the reservations on these devices. Figure 19 and Figure 20 on page 39 show examples of what you see when you use the scu command. You can associate dsk47 on Alpha1 and Alpha2 with storage unit volume 10a-21380.
alpha1> essvol
DS vol, H/W ID 176:
  phys_location = bus-9-targ-0-lun-0
  dev_base_name = dsk43
  capacity = 3906304
  serial_number = SCSI-WWID:01000010:6000-1fe1-4942-4d20-0000-0000-1042-1380
DS vol, H/W ID 225:
  phys_location = bus-9-targ-0-lun-7
  dev_base_name = dsk47
  capacity = 3906304
  serial_number = SCSI-WWID:01000010:6000-1fe1-4942-4d20-0000-0000-10a2-1380

alpha2> essvol
DS vol, H/W ID 176:
  phys_location = bus-4-targ-0-lun-0
  dev_base_name = dsk43
  capacity = 3906304
  serial_number = SCSI-WWID:01000010:6000-1fe1-4942-4d20-0000-0000-1042-1380
DS vol, H/W ID 225:
  phys_location = bus-3-targ-0-lun-1
  dev_base_name = dsk47
  capacity = 3906304
  serial_number = SCSI-WWID:01000010:6000-1fe1-4942-4d20-0000-0000-10a2-1380
Figure 18. Example of the essvol script
alpha1> scu -f /dev/rdisk/dsk47c show reservations
Persistent Reservation Header:
Generation Value: 49
Additional Length: 16
Reservation Descriptors:
Reservation Key: 0x30001
Scope-Specific Address: 0
Reservation Type: 0x5 (Write Exclusive Registrants Only)
Reservation Scope: 0 (LU - full logical unit)
Extent Length: 0
Figure 19. Example of the scu command
Each device shows 0x30001 as a reservation key. You must issue the scu command again to remove the persistent reserve using reservation key 0x30001 on each node. Tru64 places a unique reservation key on each LUN whenever the storage unit assigns the LUN. The reservation key can be removed only from the specific host to which it is assigned. Because it is not possible to tell exactly which registrant on the host holds the reservation, you must issue an scu clear command on each node in the cluster. Figure 21 shows an example of what you see when you use the scu clear command.
One of the two commands that you see in Figure 21 clears the persistent reserve on storage unit volume 10a-21380.
Use the scu command again to check the reservations. Figure 22 on page 40 shows an example of what you see when you use the scu command; the empty reservation descriptors confirm that the persistent reserve on storage unit volume 10a-21380 has been cleared.
alpha2> scu -f /dev/rdisk/dsk47c show reservations
Persistent Reservation Header:
Generation Value: 49
Additional Length: 16
Reservation Descriptors:
Reservation Key: 0x30001
Scope-Specific Address: 0
Reservation Type: 0x5 (Write Exclusive Registrants Only)
Reservation Scope: 0 (LU - full logical unit)
Extent Length: 0
Figure 20. Example of the scu command
alpha1> scu -f /dev/rdisk/dsk47c press clear key 0x30001
alpha2> scu -f /dev/rdisk/dsk47c press clear key 0x30001
Figure 21. Example of the scu clear command
After removing the persistent reserve from a storage unit volume, you can use it as a target LUN for FlashCopy or Remote Mirror and Copy.
Limitations for Tru64 UNIX
The following is a list of limitations for Tru64 UNIX for fibre-channel connections:
Boot Volumes
IBM does not support FlashCopy or Remote Mirror and Copy on Tru64 boot volumes (cluster boot volumes). Do not attempt to clear persistent reserves on these LUNs.
UFS file system
The data will be inconsistent if you perform a FlashCopy on a target LUN that is online. Take the LUN offline before you perform a FlashCopy.
AdvFS file system
It is not possible to access a FlashCopy target volume on the same host as the source because of the AdvFS domain and fileset concepts. You must unmount the source volume before you can access the target volume.
From the command line, type the following commands:
1. # umount /source
2. # mkdir /etc/fdmns/t_domain (target domain)
3. # cd /etc/fdmns/t_domain
4. # ln -s /dev/disk/dsk47c dsk47c (target volume)
5. # mkdir /target
6. # mount -t advfs t_domain#source /target (source is the fileset of the source volume)
alpha1> scu -f /dev/rdisk/dsk47c show reservations
Persistent Reservation Header:
  Generation Value: 50
  Additional Length: 0
Reservation Descriptors:

alpha2> scu -f /dev/rdisk/dsk47c show reservations
Persistent Reservation Header:
  Generation Value: 50
  Additional Length: 0
Reservation Descriptors:
Figure 22. Example of the scu command to show persistent reserves
Chapter 5. Attaching to a Hewlett-Packard AlphaServer OpenVMS host
This chapter describes how to attach a Hewlett-Packard (HP) AlphaServer OpenVMS host to a storage unit with fibre-channel adapters.
Supported fibre-channel adapters for the HP AlphaServer OpenVMS host system
You can attach a storage unit to an HP AlphaServer that is running the OpenVMS operating system with the following fibre-channel adapters:
v KGPSA-CA adapter card
v KGPSA-DA (FCA-2354) adapter card
v KGPSA-EA (FCA-2384) adapter card
Note:
You do not need the Subsystem Device Driver because OpenVMS manages
multipathing.
For information about adapters that you can use to attach the storage unit to the HP host and the HP AlphaServer models, see http://www-1.ibm.com/servers/storage/disk/ds6000.
Fibre-channel OpenVMS attachment requirements
Review the following requirements before you attach the storage unit to your host system:
v Ensure that you can reference the documentation for your host system and the
IBM TotalStorage DS6000 Information Center that is integrated with the IBM TotalStorage DS Storage Manager.
v See the Interoperability Matrix at
http://www.ibm.com/servers/storage/disk/ds6000/interop.html for details about the release level for your operating system.
v Either you or an IBM service support representative must install and configure an
IBM storage unit by using the procedures in the IBM TotalStorage DS6000 Service Guide.
Fibre-channel OpenVMS attachment considerations
Table 3 lists the maximum number of adapters you can have for an AlphaServer.
Table 3. Maximum number of adapters you can use for an AlphaServer
AlphaServer name Maximum number of adapters
800 2
1200 4
4000, 4000a 4
4100 4
8200, 8400 8
DS10, DS20, DS20E 2
Table 3. Maximum number of adapters you can use for an AlphaServer (continued)
AlphaServer name Maximum number of adapters
ES40, ES45, ES47 4
GS60, GS60E, GS140 8
GS80, GS160, GS320 8
Supported OpenVMS feature codes
For more information about the feature codes for the storage unit and the distances that are supported by fibre-channel cables for the storage unit, see IBM TotalStorage DS6000 Introduction and Planning Guide.
Supported microcode levels for the HP OpenVMS host system
For more information about the microcode levels for the storage unit, see IBM TotalStorage DS6000 Introduction and Planning Guide.
Supported switches for the HP OpenVMS host system
IBM supports cascaded switches in configurations up to a maximum of 8 switches with a maximum of 3 interswitch hops for any path. Use two hops for normal operations with the third hop reserved for backup paths.
For more information on supported switches, see the Interoperability Matrix at http://www.ibm.com/servers/storage/disk/ds6000/interop.html
Attaching the HP AlphaServer OpenVMS host to a storage unit using fibre-channel adapters
The following sections describe attaching the HP AlphaServer OpenVMS host to a storage unit using fibre-channel adapters:
v Confirm the installation of the operating system
v Installing the KGPSA-xx adapter card
v Set up the storage unit
v Confirm switch connectivity
v Confirm storage connectivity
v Configure storage
v Restrictions
v Troubleshooting fibre-attached volumes
Supported operating system levels for fibre-channel attachment to an HP OpenVMS host
This section lists the supported operating system levels for fibre-channel attachment to an OpenVMS host system.
You can attach a storage unit to an OpenVMS host with the following system levels:
v 7.3
v 7.3-1
v 7.3-2
See the HP document, Guidelines for OpenVMS Cluster Configurations, for a discussion about working with fibre-channel devices and system parameters such as MSCP_CMD_TMO, MVTIMEOUT, and MPDEV_LCRETRIES.
Confirming the installation of the OpenVMS operating system
This section provides steps to confirm the installation of the OpenVMS operating system.
Ensure that the storage unit supports attachment to the OpenVMS operating system level that you are installing. See “Supported operating system levels for fibre-channel attachment to an HP OpenVMS host” on page 42 for a list of supported OpenVMS operating system levels.
1. Use the show system command to show the current version of the OpenVMS operating system that you have installed. Figure 23 shows an example of what is displayed when you use the show system command.
2. Ensure that you have installed the most current version of patches and HP-recommended remedial kits. Some kits are dependent on other kits, so you must install some kits before you install another kit. See the documentation from HP for more information.
3. Use the product show history command to check the patches that are installed on your system. Figure 24 shows an example of what you see when you use the product show history command.
$ show system
OpenVMS V7.3-2 on node DS20 5-FEB-2004 15:15:16.88 Uptime 1 01:45:09
Figure 23. Example of the show system command on the OpenVMS operating system
$ product show history
----------------------------------- ----------- ----------- --------------------
PRODUCT                             KIT TYPE    OPERATION   DATE AND TIME
----------------------------------- ----------- ----------- --------------------
DEC AXPVMS VMS731_LAN V6.0          Patch       Install     04-AUG-2004 15:34:30
DEC AXPVMS VMS731_UPDATE V1.0       Patch       Install     04-AUG-2004 15:28:37
DEC AXPVMS VMS731_PCSI V1.0         Patch       Install     04-AUG-2004 15:22:10
CPQ AXPVMS CDSA V1.0-2              Full LP     Install     04-AUG-2004 13:59:00
DEC AXPVMS DWMOTIF V1.2-6           Full LP     Install     04-AUG-2004 13:59:00
DEC AXPVMS OPENVMS V7.3-1           Platform    Install     04-AUG-2004 13:59:00
DEC AXPVMS TCPIP V5.3-18            Full LP     Install     04-AUG-2004 13:59:00
DEC AXPVMS VMS V7.3-1               Oper System Install     04-AUG-2004 13:59:00
----------------------------------- ----------- ----------- --------------------
8 items found
$
Figure 24. Example of the product show history command to check the versions of patches already installed
Installing the KGPSA-xx adapter card in an OpenVMS host system
This section provides the steps you must follow to install the KGPSA-xx adapter card.
1. Shut down the Hewlett-Packard AlphaServer host system.
2. Use the procedures that are provided by the manufacturer of the adapter card to install the KGPSA-xx host adapter.
3. Restart the host (nonclustered configurations) or each cluster member (clustered configurations).
4. Bring each host system to a halt condition at the console level.
5. Type set mode diag at the Hewlett-Packard AlphaServer console to place the console in diagnostic mode.
Note: This step is only for certain AlphaServer models.
6. Type wwidmgr -show adapter to confirm that you installed each adapter properly.
7. If necessary, update the adapter firmware.
Figure 25 shows an example of what you see when you type set mode diag and wwidmgr -show adapter. Figure 25 shows the worldwide node name (WWNN). You also need the worldwide port name (WWPN) to configure the storage unit host attachment. To determine the WWPN for the KGPSA adapters, replace the “2” in the WWNN with a “1”.
Setting the mode for the KGPSA-xx host adapter in an OpenVMS host system
This task describes setting the mode for the KGPSA-xx host adapter for an HP OpenVMS host system.
You must install the KGPSA-xx host adapter before you can set the mode.
The KGPSA-xx fibre-channel adapter card must be set to FABRIC. Use the
wwidmgr -show adapter console command to see the setting for the Cur. Topo
and Next Topo variables. Figure 26 on page 45 shows example output from the
wwidmgr command.
1. Type # shutdown -h now to shutdown the operating system.
P00>>>set mode diag
Console is in diagnostic mode
P00>>>wwidmgr -show adapter
polling for units on kgpsa0, slot 9, bus 0, hose0...
kgpsaa0.0.0.9.0 PGA0 WWN 2000-0000-c922-69bf
polling for units on kgpsa1, slot 10, bus 0, hose0...
kgpsab0.0.0.10.0 PGB0 WWN 2000-0000-c921-df4b
item     adapter             WWN                  Cur. Topo  Next Topo
[ 0]     kgpsab0.0.0.10.0    2000-0000-c921-df4b  FABRIC     FABRIC
[ 1]     kgpsaa0.0.0.9.0     2000-0000-c922-69bf  FABRIC     FABRIC
[9999]   All of the above.
P00>>>
Figure 25. Example of the set mode diag command and the wwidmgr -show adapter command
2. Place the AlphaServer into console mode.
3. Type wwidmgr -show adapter to check the mode. Figure 26 shows example
output from the wwidmgr command.
4. Type wwidmgr -set adapter -item 9999 -topo fabric to set the KGPSA host
adapter mode to FABRIC.
5. Type init to initialize the system.
6. Type wwidmgr -show adapter to check the mode.
Setting up the storage unit to attach to an HP AlphaServer OpenVMS host system with fibre-channel adapters
This section describes setting up the storage unit to attach to an HP AlphaServer OpenVMS host system with fibre-channel adapters.
Adding or modifying AlphaServer fibre-channel connections for the OpenVMS host
To create, remove, or modify the AlphaServer connections, use the IBM TotalStorage DS Storage Manager. When you create a connection, it is necessary to specify the worldwide port name of the host connection. See “Locating the worldwide port name (WWPN),” on page 211 for procedures on how to locate the WWPN.
Defining OpenVMS fibre-channel adapters to the storage unit
This section lists the steps you must perform to define OpenVMS fibre-channel adapters to the storage unit.
You must create a storage complex before defining each host adapter to the storage unit. Use the IBM TotalStorage DS Storage Manager to create a storage complex. You can use the following menu path: click Real-time manager, click Manage hardware, click Storage complexes.
To define each host adapter to the storage unit, perform the following steps using the IBM TotalStorage DS Storage Manager.
Note: These instructions assume that you are familiar with the IBM TotalStorage
DS Storage Manager.
1. Log on to the IBM TotalStorage DS Storage Manager and click Real-time manager, click Manage hardware, then click Host systems.
2. From the Select action box, click Create, then click Go.
3. From the General host information screen, complete the following fields for each fibre-channel host adapter. When finished, click OK.
v Host type
v Nickname
item     adapter        WWN                  Cur. Topo  Next Topo
[ 0]     pga0.0.4.1     2000-0000-C923-1765  FABRIC     FABRIC
Figure 26. Example results of the wwidmgr command.
v Description
4. From the Define host ports screen, specify the host ports for this host. Click Add to add each host port to the defined host ports table.
5. From the Define host WWPN screen, specify the world-wide port names for the selected hosts. When finished, click Next.
6. From the Select storage images screen, specify the storage image for attachment. Click Add to add the storage image to the selected storage images table. When finished, click Next.
7. From the Specify storage image parameters screen, specify the following parameters for each storage image.
v Host attachment identifier
v Volume Group for host attachment
v Type of I/O port (any valid storage image I/O port or a specific I/O port)
8. Click the Apply assignment button to apply the current attachment assignment. Use this button to go through this page repeatedly for each host attachment identifier that you want to assign to the storage image. When finished, click OK.
9. From the Verification screen, verify the data that represents the new host attachments. When finished, click Finish.
Configuring fibre-channel host adapter ports for OpenVMS
To configure the host adapter ports, use the IBM TotalStorage DS Storage Manager.
OpenVMS fibre-channel considerations
When you assign an OpenVMS host to a specific storage unit adapter port, ensure that you do not assign any other host type to the same storage unit adapter. Do not allow any other host that is not assigned to that adapter to access the OpenVMS-specific storage unit adapter port.
Using fabric zoning, you must create a zone strictly for OpenVMS hosts. The storage unit fibre-channel adapter port to which you assign an OpenVMS host must reside exclusively within this OpenVMS zone. Failure to ensure this exclusivity might cause an OpenVMS host to hang. If this occurs, you must reboot the OpenVMS host.
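On a Brocade switch, for example, such a zone might be created with the following commands. The zone and configuration names are illustrative, and the two WWPNs (one OpenVMS host adapter and one storage unit port, taken from the example switch output in this chapter) will differ in your fabric:

zonecreate "vms_zone", "10:00:00:00:c9:22:69:bf; 50:05:07:63:00:c9:91:62"
cfgadd "my_cfg", "vms_zone"
cfgenable "my_cfg"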
OpenVMS UDID Support
Each OpenVMS fibre-attached volume requires a user-defined identifier or unit device identifier (UDID). This is a nonnegative integer that is used in the creation of the OpenVMS device name. All fibre-attached volumes have an allocation class of $1$, followed by the letters DGA, followed by the UDID. All storage unit LUNs that you assign to an OpenVMS system must have a UDID so that the operating system can detect and name the device. LUN 0 also must have a UDID; however, the system displays LUN 0 as $1$GGA<UDID>, not as $1$DGA<UDID>. See the HP document,
Guidelines for OpenVMS Cluster Configurations for more information about
fibre-attached storage devices.
You can use the IBM TotalStorage DS Storage Manager or the DS6000 command line interface (DS CLI) to set a value in a storage unit volume name field that is used by AlphaServer systems as the UDID for the volume. (In this document, we provide DS CLI examples.) You can find the DS CLI on the CD that you receive with the storage unit. See the IBM TotalStorage DS6000 Command-Line Interface
User’s Guide for more information.
The DS CLI is a general purpose utility that supports various storage unit functions. The DS CLI allows 16 alphanumeric characters as input when you complete the storage unit volume name field. OpenVMS UDID values must be an integer within the range of 0 to 32767. Therefore, you must ensure that the input is valid for UDID support. The utility does not enforce UDID rules. It accepts values, such as AaBbCcDd, that are not valid for OpenVMS. It is possible to assign the same UDID value to multiple storage unit volumes. However, each volume that you assign to an OpenVMS system must have a value that is unique for that system or throughout the OpenVMS cluster to which that system belongs. Review the HP OpenVMS documentation for UDID rules, and verify that your input is valid.
Note: Volumes with UDIDs greater than 9999 cannot be MSCP-served in an
OpenVMS cluster to other systems.
The following example uses the DS CLI to add or change a name to an existing DS volume. In the example the DS CLI is in interactive mode and a configuration profile file has been defined. The final command uses the AlphaServer console to list fibre-attached volumes.
1. Use the chfbvol command to change the name of a fixed block volume. For example, to set the UDID value to 21, type: chfbvol -name 21 0001
The value for the name parameter in the DS CLI command is the UDID field for the HP AlphaServer.
Note: The first volume, LUN 0, will be reported as a CCL device, and not as a disk volume.
2. To make a volume group called “VMS_A0” and add a volume to it, type:
mkvolgrp -type scsimap256 -volume 0001 VMS_A0
This command returns the volume group ID. The following is example output:
CMUC00030I mkvolgrp: Volume group V0 successfully created.
3. To create an OpenVMS host with the DS CLI and associate a volume group with it, type: mkhostconnect -wwname 10000000ABCDEF98 -hosttype HpVms -volgrp v0 ES40_A
This command returns the host connection ID. The following is example output:
CMUC00012I mkhostconnect: Host connection 0005 successfully created.
4. To display the defined attributes for a host connection, type: showhostconnect 0005
The following is example output:
Name          ES40_A
ID            0005
WWPN          10000000ABCDEF98
HostType      HpVms
LBS           512
addrDiscovery LUNPolling
Profile       HP - Open VMS
portgrp       0
volgrpID      V0
atchtopo      -
ESSIOport     all
5. To display the volumes in a volume group and its attributes, type: showvolgrp v0
The following is example output:
Name VMS_A0
ID   V0
Type SCSI Map 256
Vols 002A 0000F 0001
6. Use the wwidmgr -show wwid command at the AlphaServer console to list
fibre-attached volumes that have been detected by its fibre-channel host adapters. If a volume has no UDID or has an invalid UDID, the volume UDID number is minus one (-1) or zero (0). When it is booted, OpenVMS does not detect a volume with -1 or 0 as a UDID number.
OpenVMS LUN 0 - Command Control LUN
The storage unit assigns LUN numbers using the lowest available number. The first storage unit volume that is assigned to a host is LUN 0, the next volume is LUN 1, and so on. When a storage unit volume is unassigned, the system reuses the LUN number that it had when assigning another volume to that host.
To be compatible with OpenVMS, the storage unit volume that becomes LUN 0 for an OpenVMS system is interpreted as the Command Console LUN (CCL), or pass-through LUN. The storage unit does not support CCL command functions. This storage unit volume (LUN 0 for OpenVMS) does not display when you issue the wwidmgr -show wwid AlphaServer console command. When OpenVMS is running and a UDID has been set for that storage unit volume, the storage unit volume LUN 0 displays as a GGA device type, not as a DGA device. Although OpenVMS does not strictly require a UDID for the CCL, the SHOW DEVICE command displays CCL device creation if you set a UDID. You can display the multiple paths and diagnose failed paths to the storage controller using the SHOW DEVICE/FULL command.
Guidelines: The first storage unit volume that you assign to an OpenVMS system can become LUN 0. However, that volume must be used by the system only for support capability and must be of the minimum size. A host cannot use the volume for any other purpose. Multiple OpenVMS hosts, even in different clusters, that access the same storage unit can share the same storage unit volume as LUN 0, because there will be no other activity to this volume.
At the AlphaServer console, when you issue the wwidmgr -show wwid command, LUN 0 does not display. Only fibre-channel storage devices are listed in the output. The storage unit LUN 0 is presented as a CCL device, and therefore, is not shown.
Confirming fibre-channel switch connectivity for OpenVMS
This section lists the steps you must perform to confirm fibre-channel switch connectivity for the OpenVMS host system.
1. Open a telnet session and log in to the switch as an administrator.
2. Confirm that each host adapter has performed a fabric login to the switch. Figure 27 on page 49 shows an example of what displays when you type the
switchshow command.
3. Confirm that each storage unit host adapter has performed a fabric login to the switch. Figure 27 on page 49 shows an example of what displays when you type the switchshow command.
Confirming fibre-channel storage connectivity for OpenVMS
Perform the following steps to confirm the fibre-channel storage connectivity for the OpenVMS host system.
1. Reset the host (nonclustered configurations) or each cluster member (clustered configurations).
2. Bring each host system to a halt condition at the console level.
3. If required by the host, type set mode diag at the Hewlett-Packard AlphaServer console to place the console in diagnostic mode.
Type wwidmgr -show wwid to display the information about the storage unit volume at the console level. You can use this information to identify the volumes that are attached to an AlphaServer. Figure 28 shows an example of information about the storage unit volumes that you can see at the AlphaServer console.
4. Type wwidmgr -show adapter to confirm storage attachment.
snj2109f16h4:osl> switchshow
switchName:     snj2109f16h4
switchType:     9.1
switchState:    Online
switchRole:     Principal
switchDomain:   1
switchId:       fffc01
switchWwn:      10:00:00:60:69:50:0c:3e
switchBeacon:   OFF
port 0: id N1 Online F-Port 50:05:07:63:00:c9:91:62
port 1: id N1 Online F-Port 10:00:00:00:c9:22:d2:08
port 2: id N1 Online F-Port 10:00:00:00:c9:22:6a:63
port 3: id N1 Online F-Port 50:00:1f:e1:00:00:2b:11
port 4: id N2 No_Light
port 5: id N1 Online F-Port 10:00:00:00:c9:22:d4:69
port 6: id N1 Online F-Port 10:00:00:00:c9:22:67:38
port 7: id N1 Online L-Port 1 private, 3 phantom
port 8: id N2 No_Light
port 9: id N1 Online F-Port 10:00:00:00:c9:22:69:bf
port 10: id N1 Online F-Port 10:00:00:00:c9:21:df:4b
port 11: id N1 Online F-Port 50:05:07:63:00:cf:8d:7e
port 12: id N2 No_Light
port 13: id N1 Online F-Port 50:05:07:63:00:c7:91:1c
port 14: id N1 Online F-Port 50:05:07:63:00:cd:91:62
port 15: -- N2 No_Module
snj2109f16h4:osl>
Figure 27. Example of the switchshow command
P00>>>set mode diag
Console is in diagnostic mode
P00>>>wwidmgr -show wwid
[0] UDID:20 WWID:01000010:6005-0763-03ff-c0a4-0000-0000-0000-000f (ev:none)
[1] UDID:21 WWID:01000010:6005-0763-03ff-c0a4-0000-0000-0000-0001 (ev:none)
P00>>>
Figure 28. Example of storage unit volumes on the AlphaServer console
OpenVMS World Wide Node Name hexadecimal representations
This section explains OpenVMS World Wide Node Name hexadecimal representations.
The UDID for each volume appears as -1, which signifies that the UDID is undefined. With the supported storage unit LMC, all UDIDs for storage unit volumes are undefined.
The underscore in Figure 29 highlights the World Wide Node Name that identifies the storage unit volume that is attached to an AlphaServer.
The underscore in Figure 30 highlights the hex string that identifies the volume number of the storage unit volume.
Verifying the fibre-channel attachment of the storage unit volumes for OpenVMS
Follow these steps to verify the fibre-channel attachment of the storage unit volumes for OpenVMS.
1. Start the operating system.
2. Use the standard OpenVMS storage configuration utilities to prepare the storage unit LUNs and create and mount devices. Figure 31 on page 51 shows an example of what is displayed when you use the standard OpenVMS storage configuration utilities.
01000010:6005-0763-03ff-c0a4-0000-0000-0000-0001
Figure 29. Example of a World Wide Node Name for the storage unit volume on an AlphaServer console
01000010:6005-0763-03ff-c0a4-0000-0000-0000-0001
Figure 30. Example of a volume number for the storage unit volume on an AlphaServer console
Configuring the storage for fibre-channel OpenVMS hosts
Perform the following steps to configure the storage for fibre-channel OpenVMS hosts.
1. Start the operating system.
2. Initialize the storage unit volumes.
On the OpenVMS platform, you can initialize storage unit volumes as ODS-2 or ODS-5 volumes. You can use the volumes to create volume sets. Volume sets are concatenated volumes that form a larger volume. See the HP document, OpenVMS System Manager’s Manual, Volume 1: Essentials.
3. Mount the storage unit volumes.
For OpenVMS shadow sets, the storage unit does not support READL or WRITEL commands. Therefore, the volume does not support the shadowing data repair (disk bad block errors) capability that some other disks provide. Add the /OVERRIDE=NO_FORCED_ERROR qualifier to the MOUNT command when you use storage unit volumes as a shadow set; a sketch follows this list. This qualifier suppresses bad block handling by OpenVMS shadowing data repair. See the HP document, Volume Shadowing for OpenVMS, for more information.
4. Access the storage unit volumes.
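The following is a minimal sketch of the shadow-set mount in step 3. The virtual unit DSA1, the member devices $1$DGA20 and $1$DGA21, and the label SHADOW1 are illustrative assumptions, not values defined elsewhere in this guide:

$ ! Mount a two-member shadow set, suppressing forced-error handling
$ MOUNT/SYSTEM/OVERRIDE=NO_FORCED_ERROR DSA1: /SHADOW=($1$DGA20:,$1$DGA21:) SHADOW1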
OpenVMS fibre-channel restrictions
The following restrictions apply for the storage unit host adapter to maintain compatibility with the OpenVMS host system. Compatibility is enabled on a storage unit adapter port after a defined host establishes fibre-channel connectivity to one or more storage unit volumes.
v You must dedicate storage unit adapter ports to the OpenVMS host type only. It is recommended that each OpenVMS host adapter be in a fabric zone with one or more storage unit adapter ports.
v All storage unit adapter ports that are in a fabric zone with an OpenVMS host adapter must have at least one storage unit volume assigned to them. This can be the LUN 0 volume.
$ SHOW DEVICES DG

Device                  Device          Error   Volume        Free   Trans  Mnt
 Name                   Status          Count   Label         Blocks Count  Cnt
$1$DGA20:      (HANK)   Online              0
$1$DGA21:      (HANK)   Online              0
$
$ INITIALIZE/SYSTEM $1$DGA20 ESS001
$ MOUNT/SYSTEM $1$DGA20 ESS001
$ DIRECTORY $1$DGA20:[000000]
$ DISMOUNT $1$DGA20
Figure 31. Example of what is displayed when you use OpenVMS storage configuration utilities
v Multiple OpenVMS systems can access the same storage unit adapter port. However, you must define each system for that specific storage unit port and assign at least one storage unit volume.
v To re-enable compatibility, you can force the defined OpenVMS host to reestablish connectivity to the storage unit adapter by disabling the switch port that it is connected to, and then enabling the switch port again; a sketch follows this list.
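A minimal sketch of that switch-port bounce on a Brocade switch such as the one shown in Figure 27; the port number 1 is an illustrative assumption for the port that the OpenVMS host adapter uses:

snj2109f16h4:admin> portdisable 1
snj2109f16h4:admin> portenable 1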
Troubleshooting fibre-channel attached volumes for the OpenVMS host system
This section contains a few examples of issues that can occur with your OpenVMS host system and some suggestions to correct the issues.
Problem
The following communication problems might occur when you attach a storage unit to an HP AlphaServer OpenVMS host system.
Investigation
If the system hangs during boot
Your system might be attempting to access a host adapter port that is not configured for it. To correct the problem, ensure that the following items are true:
1. Zoning is enabled on the fibre-channel switch.
2. The zones for the OpenVMS host connect it to only those storage unit ports that are correctly configured to support it.
3. Other fabric zones do not include the OpenVMS host adapters by mistake.
If the system reaches the mount verification timeout
All pending and future I/O requests to the volume fail. You must dismount and remount the disk before you can access it again; a sketch follows this description.
The mount verification process for fibre-channel-attached volumes might not complete until the system reaches the mount verification timeout. This can be caused by the same scenario that causes the system to hang during boot. Verify that the path to the affected volumes is available, and then follow the preceding list.
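A minimal sketch of the dismount-and-remount recovery, reusing the device and label conventions of Figure 31 ($1$DGA20 and ESS001 are illustrative):

$ SHOW DEVICE $1$DGA20:/FULL      ! confirm the path and mount state
$ DISMOUNT/ABORT $1$DGA20:
$ MOUNT/SYSTEM $1$DGA20: ESS001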
If another host system gains access to a storage unit host adapter port that is dedicated for OpenVMS
The port will have its compatibility mode disabled. This can occur when the fabric switch has disabled all zoning. After you re-enable zoning, the storage unit port compatibility remains disabled. Some internal storage unit processes cause a momentary loss of connectivity to volumes with active I/O. OpenVMS tolerates this loss without problems when the storage unit adapter port is in compatibility mode. If the mode is disabled, disk I/O could fail a read or write and display the following message:
-SYSTEM-F-TOOMANYRED, too many redirects
Force the host adapter to reestablish connectivity
Forcing the OpenVMS host adapter to reestablish fibre-channel connectivity with the storage unit adapter port enables compatibility mode. You can force the connection only by disconnecting and then reconnecting one end of the physical fibre-channel cable between the host and the affected
storage unit adapter port. You can also reestablish connectivity by accessing the fibre-channel switch and disabling one of the switch ports, and then enabling it again.
Chapter 6. Attaching to a Hewlett-Packard (HP-UX) host
This chapter provides instructions, requirements, and considerations for attaching a Hewlett-Packard (HP-UX) host system to a storage unit.
Attaching with fibre-channel adapters
This section describes the host system requirements and provides procedures to attach a storage unit to a Hewlett-Packard 9000 host system with fibre-channel adapters.
Supported fibre-channel adapters for HP-UX hosts
This section lists the supported fibre-channel adapters for HP-UX host systems.
This section describes how to attach a storage unit to a Hewlett-Packard host system with the following fibre-channel adapter cards:
v A5158A
v A6684A
v A6685A
v A6795A
v A6826A
v A9782A
Fibre-channel attachment requirements for HP-UX hosts
This section lists the requirements for attaching the storage unit to your HP-UX host system:
v Check the LUN limitations for your host system.
v Ensure that you can reference the documentation for your host system and the IBM TotalStorage DS6000 Information Center that is integrated with the IBM TotalStorage DS Storage Manager.
v See the Interoperability Matrix at http://www.ibm.com/servers/storage/disk/ds6000/interop.html for details about the release levels for your operating system, open-systems hosts, adapters, and switches that IBM supports.
Either you or an IBM service support representative (SSR) must perform the following tasks to install and configure a storage unit:
1. Install the storage unit by using the procedures in the IBM TotalStorage DS6000 Installation, Troubleshooting, and Recovery Guide.
2. Define the fibre-channel host system with the worldwide port name identifiers. For the list of worldwide port names, see “Locating the worldwide port name (WWPN),” on page 211. A DS CLI sketch follows this list.
3. Define the fibre-port configuration if you did not do it when you installed the storage unit or fibre-channel adapters.
4. Configure the host system for the storage unit by using the instructions in your host system publications.
Note: To have failover protection on an open system, SDD requires at least two paths, and SDD allows a maximum of 32 paths.
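As an illustrative sketch of step 2, assuming the DS CLI mkhostconnect command; the storage image ID, WWPN, and host connection name are placeholder values, so verify the exact syntax against your DS CLI release:

dscli> mkhostconnect -dev IBM.1750-1300861 -hosttype HP -wwname 10000000C9226A63 HP_host_1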
Installing the fibre-channel adapter drivers for HP-UX 11i and HP-UX 11iv2
This section tells you how to download and configure the following fibre-channel adapter drivers for HP-UX 11i and HP-UX 11iv2:
v A5158A
v A6684A
v A6685A
v A6795A
v A9782A
1. Go to: http://knowledge.storage.ibm.com/servers/storage/support/hbasearch/interop/hbaSearch.do
2. Select the appropriate options for your product and operating system.
3. Find the section for the current version of the firmware and driver that you want.
4. Click on View Details.
5. Under FC HBA drivers download, click on the driver type that you need for your system.
6. Find the driver that you need for your system and click Download.
7. Follow the system instructions to install the driver.
Setting the queue depth for the HP-UX operating system with fibre-channel adapters
Before you set the queue depth, you must connect the host system to the storage unit. See “General information about attaching to open-systems host with fibre-channel adapters” on page 12.
1. Use the following formula to set the queue depth for all classes of HP-UX:
256 ÷ maximum number of LUNs = queue depth
Note: Although this algorithm implies that the upper limit for the number of LUNs on an adapter is 256, HP-UX supports up to 1024 LUNs.
2. You must monitor configurations with more than 256 LUNs and adjust the queue depth for optimum performance.
3. To update the queue depth at the device level, use the following command:
scsictl -m queue_depth=21 /dev/rdsk/$dsksf
where /dev/rdsk/$dsksf is the device node.
4. To make a global change to the queue depth, use the HP System Administration Manager (SAM) to edit the scsi_max_qdepth kernel parameter. A sketch follows this list.
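A worked example of the formula in step 1, together with a command-line alternative to SAM; the LUN count of eight, the device node, and the use of the kmtune utility are illustrative assumptions:

# 8 LUNs on the adapter: 256 / 8 = queue depth of 32
scsictl -m queue_depth=32 /dev/rdsk/c2t0d0   # per-device setting; c2t0d0 is illustrative
kmtune -s scsi_max_qdepth=32                 # global kernel parameter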
Configuring the storage unit for clustering on the HP-UX 11iv2 operating system
This section describes how to configure a storage unit for clustering on the HP-UX 11iv2 operating system with MC/ServiceGuard 11.14.
The steps to configure MC/ServiceGuard with the storage unit are the same as the steps in the Hewlett-Packard high availability documentation. You can find that documentation at www.docs.hp.com/hpux/ha/index.html.
After you configure your host for normal operating system access, the storage unit acts as a normal disk device in the MC/ServiceGuard configuration. Create volume groups that contain the volumes by using the Hewlett-Packard logical volume manager. This method of disk management is more reliable, easier, and more flexible than whole-disk management techniques.
When you create volume groups, you can implement PV-Links, Hewlett-Packard’s built-in multipathing software for highly available disks such as the storage unit.
1. Create the volume group, using the path to the volumes that you want as the primary path to the data.
2. Extend the volume group with the path to the volumes that are intended as alternate paths.
The logical volume manager reads the label on the disk and knows that it is an alternate path to one of the volumes in the group. The logical volume manager labels the volume.
As an example, assume that you have a host that has access to a volume on a storage unit with the device nodes c2t0d0 and c3t0d0. You can use the c2 path as the primary path and create the volume group that uses only the c2t0d0 path.
3. Extend the volume group to include the c3t0d0 path. When you issue a vgdisplay -v command on the volume group, the command lists c3t0d0 as an alternate link to the data. A sketch of these steps follows.
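A minimal sketch of the PV-Links sequence with the device nodes from the example; the volume group name vg01, its minor number, and the pvcreate step are illustrative assumptions:

pvcreate /dev/rdsk/c2t0d0                  # prepare the physical volume, if not already done
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000        # volume group device file
vgcreate /dev/vg01 /dev/dsk/c2t0d0         # primary path
vgextend /dev/vg01 /dev/dsk/c3t0d0         # alternate path (PV-Link)
vgdisplay -v /dev/vg01                     # lists c3t0d0 as an alternate link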
Chapter 7. Attaching to an IBM iSeries host
This topic describes the host system requirements to attach a storage unit to an IBM iSeries host system. This topic also describes the procedures for attaching to an IBM iSeries host system.
Attaching with fibre-channel adapters to the IBM iSeries host system
This section describes the host system requirements and provides the procedure to attach a storage unit to your IBM iSeries host system with fibre-channel adapters.
Supported fibre-channel adapter cards for IBM iSeries hosts
This section lists the fibre-channel adapter cards that are supported when you attach a storage unit to an IBM iSeries host system.
v Feature code 2766
v Feature code 2787
Fibre-channel attachment requirements for IBM iSeries hosts
This topic provides fibre-channel attachment requirements for IBM iSeries hosts.
Use the following requirements when attaching the storage unit to your host system:
1. You can obtain documentation for the IBM iSeries host system from publib.boulder.ibm.com/pubs/html/as400/infocenter.htm.
2. Check the LUN limitations for your host system. See Table 4 on page 60.
3. Either you or an IBM service support representative must install the storage unit by using the procedures in the IBM TotalStorage DS6000 Installation, Troubleshooting, and Recovery Guide.
Note:
1. You cannot use SDD on the IBM iSeries host system.
2. With i5/OS Version 5 Release 3, you can assign storage unit LUNs to multiple iSeries fibre-channel adapters through switches, direct connection, or through a fabric. These multiple adapters must all be within the same i5/OS LPAR.
Fibre-channel attachment considerations for IBM iSeries hosts
This section provides fibre-channel attachment considerations for IBM iSeries hosts.
Note: For a list of open-systems hosts, operating systems, adapters, switches, and
fabric connections that IBM supports, see the Interoperability Matrix at http://www.ibm.com/servers/storage/disk/ds6000/interop.html.
The storage unit creates LUN serial numbers that are eight characters in the format 0LLLLNNN, where:
LLLL A unique volume number that the storage unit assigns when it creates the LUN.
NNN The low-order three characters of the storage unit serial number, or a unique three-character value that is entered by using the menu option on a storage unit service panel.
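As a worked example of this format, the disk unit in Figure 34 on page 63 reports serial number 75-1409194; reading that as volume number LLLL=1409 and low-order serial characters NNN=194 gives the LUN serial number 01409194. This decomposition is our illustration of the format, not a value stated elsewhere in this guide.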
Note:
1. You can specify 1 - 32 LUNs for each attachment to an IBM iSeries fibre-channel adapter.
2. Fibre-channel attached LUNs are identified as the storage unit device type of 1750, on the IBM iSeries host system.
3. You can place the IBM iSeries volumes in the storage unit storage arrays according to the selected host system attachment type.
4. With a pre-i5/OS Version 5 Release 3 iSeries system, you cannot share an IBM iSeries volume with more than one fibre-channel system attachment.
5. With an i5/OS Version 5 Release 3 iSeries system, you can share an IBM iSeries volume with multiple fibre-channel system attachments. All fibre-channel adapters must be within the same i5/OS LPAR.
6. The attachment type and the available storage unit storage array capacity determine the number of volumes that you can create.
You can create 1 - 32 LUNs for a fibre-channel attachment.
Figure 32 shows an example of the display for the hardware service manager (HSM) auxiliary storage hardware resource detail for the 2766 adapter card. The same information is displayed for the 2787 adapter card.
Host limitations for IBM iSeries hosts
This topic lists the host limitations for IBM iSeries hosts.
See Table 4 for a description of the LUN assignments for the IBM iSeries host system.
Table 4. Host system limitations for the IBM iSeries host system

Host system                                                LUN limitation assignments per target
IBM iSeries (fibre-channel). You can attach the            0 - 32
IBM iSeries through fibre-channel adapter feature 2766.
IBM iSeries (fibre-channel). You can attach the            0 - 32
IBM iSeries through fibre-channel adapter feature 2787.
Description........................: Multiple Function IOA
Type-Model.........................: 2766-001
Status.............................: Operational
Serial number......................: 10-22036
Part number........................: 0000003N2454
Resource name......................: DC18
Port worldwide name................: 10000000C922D223
PCI bus............................:
System bus.......................: 35
System board.....................: 0
System card......................: 32
Storage............................:
I/O adapter......................: 6
I/O bus..........................:
Controller.......................:
Device...........................:
Figure 32. Example of the display for the auxiliary storage hardware resource detail for a 2766 or 2787 adapter card
IBM iSeries hardware
This section describes the hardware that you can use with IBM iSeries hosts.
The DS6000 supports the following models for the IBM iSeries hosts:
v Models 270, 800, 810, 820, 825, 830, 840, 870, 890
v The IBM eServer i5 product line, which consists of the model 520, model 550, and model 570
For more information on supported switches, see the Interoperability Matrix at http://www.ibm.com/servers/storage/disk/ds6000/interop.html.
The IBM iSeries fibre-channel adapter automatically detects the attachment protocol. You do not need to manually perform a system configuration.
The iSeries fibre-channel adapter operates at 2 Gb on OS/400® Version 5 Release 2 or i5/OS Version 5 Release 3.
IBM iSeries software
This section describes the software that you can use with IBM iSeries hosts.
Before you attach a storage unit to the IBM iSeries host using a fibre-channel adapter, you must install one of the following operating systems:
v i5/OS Version 5 Release 3
v OS/400 Version 5 Release 2
v Red Hat Enterprise Linux 3.0
v SUSE SLES 9
For more information on supported operating systems, see the Interoperability Matrix at http://www.ibm.com/servers/storage/disk/ds6000/interop.html.
General information for configurations for IBM iSeries hosts
This section contains general configuration information for IBM iSeries hosts.
The following list identifies some general information about connecting the IBM iSeries host through a switch:
v The IBM iSeries supports only a homogeneous environment (only IBM iSeries initiators). You can establish a homogeneous environment by using the logical zoning of the switch. All host systems within an IBM iSeries zone must be IBM iSeries systems.
v CNT: A list of supported environments (servers, operating systems, and adapters) and hardware and software prerequisites for the CNT FC/9000 is available at http://www.storage.ibm.com/ibmsan/products/directors/index.html.
v Brocade: A list of supported environments (servers, operating systems, and adapters) and hardware and software prerequisites for the Brocade 2109 M12 and 2109 M14 Fibre Channel Director is available at http://www.brocade.com.
v McDATA: A list of supported environments (servers, operating systems, and adapters) and hardware and software prerequisites for the McDATA ED-6064 is available at http://www.storage.ibm.com/ibmsan/products/2032/6064/.
v Cisco: A list of supported environments (servers, operating systems, and adapters) and hardware and software prerequisites for Cisco directors is available at http://www.cisco.com/ibmsan/cisco/index.html.
Recommended configurations for IBM iSeries hosts
This topic provides recommended configurations for IBM iSeries hosts.
You can use the following configurations for each feature code:
v For feature code 2766:
– Install feature code 2766, which is an I/O adapter card, in the IBM iSeries system unit or in the high-speed link (HSL) PCI I/O towers.
– Install only one 2766 adapter per I/O processor (IOP) because it requires a dedicated IOP. No other I/O adapters are supported under the same IOP.
– Install only two 2766 adapters per multiadapter bridge.
v For feature code 2787:
– Install feature code 2787, which is an I/O adapter card, in the IBM iSeries system unit or in the high-speed link (HSL) PCI I/O towers.
– Install only one 2787 adapter per I/O processor (IOP) because it requires a dedicated IOP. No other I/O adapters are supported under the same IOP.
– Install only two 2787 adapters per multiadapter bridge.
Figure 33 shows an example of the display for the HSM logical hardware resources associated with the IOP.
Figure 34 on page 63 shows an example of the display for the HSM auxiliary storage hardware resource detail for the storage unit.
Opt  Description            Type-Model  Status       Resource Name
     Combined Function IOP  2843-001    Operational  CMB04
     Storage IOA            2766-001    Operational  DC18
     Disk Unit              1750-A82    Operational  DD143
     Disk Unit              1750-A81    Operational  DD140
     Disk Unit              1750-A81    Operational  DD101
Figure 33. Example of the logical hardware resources associated with an IOP
You can define the storage unit LUNs as either protected or unprotected. From a storage unit physical-configuration viewpoint, all IBM iSeries volumes are RAID-5 or RAID-10 volumes and are protected within the storage unit. When you create the IBM iSeries LUNs by using the IBM TotalStorage DS Storage Manager, you can create them as logically protected or unprotected. Table 5 shows the disk capacity for the protected and unprotected models.
To logically unprotect a storage LUN, allow the iSeries host to perform remote load source mirroring to that device. Because the load source is mirrored on an external LUN, the storage unit can copy or transfer this load source as a disaster recovery backup. When you use the iSeries tools kit, an iSeries host in a remote location can use a copy of the original load source to recover the load source and start running as if the recovery system were the original source host.
Table 5. Capacity and models of disk volumes for IBM iSeries

Size      Type  Protected model  Unprotected model  Release support
8.5 GB    1750  A01              A81                Version 5 Release 2 or later
17.5 GB   1750  A02              A82                Version 5 Release 2 or later
35.1 GB   1750  A05              A85                Version 5 Release 2 or later
70.5 GB   1750  A04              A84                Version 5 Release 2 or later
141.1 GB  1750  A06              A86                Version 5 Release 3 or later
282.2 GB  1750  A07              A87                Version 5 Release 3 or later
Description........................: Disk unit
Type-Model.........................: 1750-A82
Status.............................: Operational
Serial number......................: 75-1409194
Part number........................:
Resource name......................: DD143
Licensed machine code.............: FFFFFFFF
Level..............................:
PCI bus............................:
System bus.......................: 35
System board.....................: 0
System card......................: 32
Storage............................:
I/O adapter......................: 6
I/O bus..........................: 0
Controller.......................: 1
Device...........................: 1
Figure 34. Example of the display for the auxiliary storage hardware resource detail for the storage unit
Starting with i5/OS Version 5 Release 3, IBM iSeries supports multipath attachment through fibre-channel as part of the base i5/OS support. Version 5 Release 3 uses the existing HBAs (feature codes 2766 and 2787). New paths are automatically detected, configured by the system, and immediately used. If a disk is initially set up as a single path and a second path is added, the resource name of the disk unit is modified from DDxxx to DMPxxx to reflect that it now has more than one path to the disk unit. No changes are needed by the user on the iSeries to make use of the new path. Multipath connections can be made through a direct connection or through a fabric.
To activate multipath on an iSeries host, use the IBM TotalStorage DS Storage Manager.
To improve the availability of fibre-channel attached disks (when there is more than one 2766 or 2787 I/O adapter), place each I/O adapter and its I/O processor as far as possible from the other fibre-channel disk I/O adapters or I/O processors in a system. If possible, place them on different HSL loops, in different towers, or on different multiadapter bridges.
With i5/OS Version 5 Release 3, path information is available for the Disk Unit hardware configuration from iSeries Navigator.
Running the Linux operating system on an IBM i5 server
The Linux operating system can run on an IBM i5 server.
Disabling automatic system updates
Many Linux distributions give administrators the ability to configure their systems for automatic system updates. Red Hat provides this ability in the form of a program called up2date, and SUSE provides a program called YaST Online Update. You can configure these features to query for updates that are available for each host and to automatically install any new updates that they find, which can include updates to the kernel.
If your host is attached to the DS6000 series and is running the IBM Multipath Subsystem Device Driver (SDD), consider turning off this automatic update feature because some drivers supplied by IBM, such as SDD, depend on a specific kernel and cannot function in the presence of a new kernel. Similarly, host bus adapter drivers must be compiled against specific kernels to function optimally. By allowing automatic update of the kernel, you risk an unexpected impact to your host system. A sketch follows.
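A minimal sketch of turning the feature off on a Red Hat host, assuming the Red Hat Network daemon rhnsd drives the up2date checks; on SUSE you would instead disable the YaST Online Update automation:

service rhnsd stop      # stop the running update daemon
chkconfig rhnsd off     # keep it from starting at boot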
Supported fibre-channel adapters for IBM i5 servers running the Linux operating system
Fibre-channel adapters are supported for IBM i5 servers running the Linux operating system.
The following adapter cards are supported for the IBM i5 server when it is running the Linux operating system:
v Feature code 0612, which designates a 2766 assigned to Linux
v Feature code 0626, which designates a 5704 assigned to Linux
v Feature code 0646, which is used on IBM i5 servers
Running the Linux operating system in a guest partition on an IBM i5 server
This topic provides instructions for running the Linux operating system on IBM i5 servers.
IBM and a variety of Linux distributors have partnered to integrate the Linux operating system with the reliability of the i5 server. Linux brings a new generation of Web-based applications to the i5 server. IBM has modified the Linux PowerPC® kernel to run in a secondary logical partition and has contributed the kernel back to the Linux community. This section contains an overview of the types of tasks you must perform to operate Linux on an i5 server:
v Plan to run Linux as a hosted or nonhosted guest partition
Find out what you need to do before you install Linux on the i5 server. Understand the software and hardware requirements that you need to support Linux. Find out the configuration options that are available and which options fit your company’s needs. Find out if the system that you own requires you to disable the multitasking function of your server’s processor.
v Create a guest partition to run Linux
Understand how to configure a guest partition by using system service tools (SST) and how to install Linux on the i5 server. You can also find information about supported I/O adapters (IOAs) and how to configure a network server description (NWSD).
v Manage Linux in a guest partition
Find the information you need to manage a guest partition running Linux. Use the information to understand what IPL types you can use and how Linux partitions can communicate with other partitions on the server.
v Ordering a new server or upgrading an existing server
Use the LPAR Validation tool to validate that your planned partitions are valid. You can also find the contact information that you need to order a new server.
Go to the following Web sites for more information about operating with Linux on an i5 server:
v http://www.ibm.com/servers/storage/disk/ds6000/interop.html
v http://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm
Planning to run Linux in a hosted or nonhosted guest partition
This section provides the planning information you need to be able to run Linux in a hosted or nonhosted guest partition.
Find out what you need to do before you install Linux on the i5 server. Understand what software and hardware requirements are needed to support Linux. Find out the configuration options that are available and which options fit your company’s needs.
Linux support on iSeries servers
Evaluate each i5 server to determine if your hardware supports Linux. To successfully partition an i5 server to run Linux, the server requires specific hardware and software. The primary partition must run i5/OS Version 5 Release 3 and be updated with the latest program temporary fixes (PTFs). You can find the latest Linux-related i5/OS program temporary fixes at http://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm.
Linux is not supported on a primary partition.
Selected models can run Linux by using the shared processor pool configuration. Other models require the use of dedicated processors for a Linux partition. Those same models also require you to disable processor multitasking for the whole system, including the primary partition.
The Linux operating system supports single processors or multiple processors. You make this choice when you create the guest partition. If a Linux kernel that is built for a single processor is loaded into a partition with multiple processors, the partition functions correctly, but only one processor is used. If you assign multiple processors to a partition, you must use Linux built for Symmetric Multiprocessors (SMP). You can assign a number of available processors to a guest partition.
To determine whether or not your system will support Linux in a guest partition, go to http://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm.
Creating a guest partition to run Linux
This section helps you understand how to configure a guest partition and how to install Linux on the i5 server.
You can also find information about supported input/output (I/O) adapters and configuring a network server description.
Hosted versus nonhosted guest partition running Linux
A hosted guest partition is dependent on a hosting i5/OS partition for I/O resources. The hosting i5/OS partition can either be a primary or a secondary partition. The I/O resources a guest partition can use from a hosting partition include disk, CD, and tape devices.
You must start the hosted guest partition from the hosting i5/OS partition by using a network server description (NWSD). You can use the NWSD to control the guest partition. The guest partition can only be active when the hosting partition is active and out of restricted state. When i5/OS is in restricted state, all NWSDs are automatically varied off. An i5/OS partition can host multiple guest partitions. Ensure that the hosting partition can support guest partitions.
You can IPL a hosted guest partition by varying on an NWSD object; a sketch follows. You must not power on a hosted guest partition by using the Work with Partitions Status display. If you power on the guest partition running Linux by using the Work with Partitions Status display, all of the virtual I/O devices will be unavailable.
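An illustrative i5/OS CL sketch of varying an NWSD on and off; the NWSD name LINUXNWS is an assumption:

VRYCFG CFGOBJ(LINUXNWS) CFGTYPE(*NWS) STATUS(*ON)    /* IPL the guest partition */
VRYCFG CFGOBJ(LINUXNWS) CFGTYPE(*NWS) STATUS(*OFF)   /* shut it down */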
A nonhosted guest partition is not dependent on a hosting i5/OS partition for any I/O resources. The guest partition has its own disk units, or the partition makes use of networking support to do a network boot. You can start a nonhosted guest partition even if the primary partition is not fully active. You can start a nonhosted guest partition from the Work with Partitions Status display.
Obtaining Linux for i5 servers
Linux is an open source operating system. You can obtain Linux in source format and build Linux for one person or a business. The open source code encourages
feedback and further development by programmers. Linux developers are encouraged to design their own specialized distribution of the operating system to meet their specific needs.
All Linux distributions share a similar Linux kernel and development library. Linux distributors provide custom components that ease the installation and maintenance of Linux systems. Before you install another distributor’s version of Linux, verify that the kernel has been compiled for the PowerPC and the hardware for the i5 server. Otherwise, your system might be misconfigured and might not run Linux in a guest partition.
You can download different versions of Linux through the Internet. However, not all the versions of Linux have been tested for use with the storage unit. Please see your Linux distributor for information regarding how to obtain the latest maintenance updates.
iSeries I/O adapters (IOAs) supported by Linux
You can assign IOAs to a guest partition. See publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm for a list of adapters that the i5 server supports in a guest partition running Linux.
Managing Linux in a guest partition
The following sections provide the information that you need to manage Linux in a guest partition.
Virtual I/O in a guest partition running Linux
Virtual I/O resources are devices that are owned by the hosting i5/OS partition. The i5 Linux kernel and i5/OS support several different kinds of virtual I/O resources. They are virtual console, virtual disk unit, virtual CD, virtual tape, and virtual Ethernet.
Virtual console provides console function for the guest partition through an i5/OS partition. The virtual console can be established to the hosting partition or to the primary partition. The use of the virtual console allows the installation program to communicate with the user before the networking resources are configured. You can use the virtual console to troubleshoot system errors.
When you use virtual disk for a Linux partition, the i5/OS partitions control connectivity to the real disk storage. In this configuration, the hosting i5/OS partition and its operating system version solely control the storage unit connectivity. For more information about i5/OS connectivity to the storage unit, see http://www-1.ibm.com/servers/storage/disk/ds6000.
A hosting partition can provide only virtual disk units. Virtual DASD provides access to NWSSTG virtual disks from Linux. By default, the CRTNWSSTG command creates a disk environment with one disk partition that is formatted with the FAT16 file system. The Linux installation program reformats the disk for Linux, or you can use Linux commands such as fdisk and mke2fs to format the disk for Linux. A sketch follows.
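An illustrative sketch of creating a network server storage space and then formatting it from Linux; the storage space name, the size, and the Linux device name are assumptions:

CRTNWSSTG NWSSTG(LNXDISK1) NWSSIZE(2048) FORMAT(*OPEN)   /* 2 GB virtual disk for Linux */

Then, from the Linux guest:
fdisk /dev/sdb      # partition the virtual disk; the device name is illustrative
mke2fs /dev/sdb1    # create an ext2 file system on the new partition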
For more detailed information about how your company might use a guest partition with I/O resources go to http://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm.
Directly attached I/O in a guest partition running Linux
With directly attached I/O, Linux manages the hardware resources directly, and all I/O resources are under the control of the Linux operating system. You can allocate disk units, tape devices, optical devices, and LAN adapters to a guest partition running Linux.
You must have an NWSD to install Linux in a guest partition. After you install Linux, you can configure the partition to start independently.
For directly attached hardware, all failure and diagnostic messages are displayed within the guest partition.
Connectivity to the storage unit from i5 Linux is solely through fibre-channel adapters. For more information about the adapters and the Linux device, see http://www-1.ibm.com/servers/eserver/iseries/linux/fibre_channel.html.
For more detailed information about how your company might use a guest partition with I/O resources go to http://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm or contact your IBM marketing representative or IBM Business Partner for further assistance on using directly attached I/Os in a guest partition.
Ordering a new server or upgrading an existing server to run a guest partition
This section tells you who to contact to order a new server or upgrade an existing server to run a guest partition.
The LPAR Validation tool emulates an LPAR configuration and validates that the planned partitions are valid. In addition, the LPAR Validation Tool allows you to test the placement of i5/OS and Linux hardware within the system to ensure that the placement is valid. Refer to Logical Partitions for information on the LPAR Validation Tool (LVT).
Contact your IBM marketing representative or IBM Business Partner to enter the order. You can enter the order by using the iSeries configurator. The configurator has been enhanced to support ordering IOAs without IOPs when you define a Linux partition.
Chapter 8. Attaching to an IBM NAS Gateway 500 host
This chapter provides instructions, requirements, and considerations for attaching an IBM NAS Gateway 500 host to a storage unit.
Supported adapter cards for IBM NAS Gateway 500 hosts
This topic lists the supported host adapters for the IBM NAS Gateway 500 host.
The following fibre-channel adapter cards are supported for the IBM NAS Gateway 500:
v Feature code 6239
v Feature code 6240
Feature code 6239 is a 1-port, 2-gigabit fibre-channel host adapter card. Feature code 6240 is a 2-port, 2-gigabit fibre-channel host adapter card.
Finding the worldwide port name
You can obtain the worldwide port names (WWPN) of the fibre-channel adapters installed on NAS Gateway 500 through a Web browser or through the DS CLI. The following sections provide instructions for using these methods.
Obtaining WWPNs using a Web browser
This section provides instructions for obtaining WWPNs using a web browser.
If your external storage requires you to enter a worldwide port name (WWPN) for the fibre-channel host adapters that are installed in your NAS Gateway 500, you can obtain these WWPNs by using the Internet.
1. Open a Web browser.
2. Enter the following Web address: http://hostname/NAS500GetWWN.html
where hostname is the host name or IP address of your NAS Gateway 500 system. If your NAS Gateway 500 is not within the same IP subnet, use the fully qualified domain name that you use with the DNS name resolution. For example: nasgateway500.servers.mycompany.com.
Obtaining WWPNs through the command-line interface
This section provides instructions for obtaining WWPNs through the command-line interface.
1. Log in to the NAS Gateway 500 by using the root user ID from a serial terminal.
2. Run the following command to list the WWPNs of all fibre-channel adapters:
lscfg -vpl "fcs*" | grep Network
Figure 35 on page 70 shows an example of the output that you would receive.
© Copyright IBM Corp. 2004, 2005 69
Page 100
3. You can optionally use the following command to put all vital product data of the fibre-channel adapters installed on NAS Gateway 500 into a text file.
lscfg -vpl "fcs*" > foo.txt
You can keep this file for future use. Figure 36 shows an example of the text output that you would receive. You can find the WWPN, location information, microcode level, part number, and other information about your fibre-channel adapters in this file.
(/)-->lscfg -vpl "fcs*" | grep Network
Network Address.............10000000C93487CA
Network Address.............10000000C934863F
Network Address.............10000000C93487B8
Network Address.............10000000C934864F
Figure 35. Example output from the lscfg -vpl "fcs*" | grep Network command
fcs2 U0.1-P2-I6/Q1 FC Adapter
Part Number.................00P4295
EC Level....................A
Serial Number...............1E323088E2
Manufacturer................001E
Feature Code/Marketing ID...5704
FRU Number.................. 00P4297
Device Specific.(ZM)........3
Network Address.............10000000C93487CA
ROS Level and ID............02E01035
Device Specific.(Z0)........2003806D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF601032
Device Specific.(Z5)........02E01035
Device Specific.(Z6)........06631035
Device Specific.(Z7)........07631035
Device Specific.(Z8)........20000000C93487CA
Device Specific.(Z9)........HS1.00X5
Device Specific.(ZA)........H1D1.00X5
Device Specific.(ZB)........H2D1.00X5
Device Specific.(YL)........U0.1-P2-I6/Q1
fcs3 U0.1-P2-I5/Q1 FC Adapter
Part Number.................00P4295
EC Level....................A
Serial Number...............1E3230890F
Manufacturer................001E
Feature Code/Marketing ID...5704
FRU Number.................. 00P4297
Device Specific.(ZM)........3
Network Address.............10000000C934863F
ROS Level and ID............02E01035
Device Specific.(Z0)........2003806D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Figure 36. Example output saved to a text file