IBM DS8800, DS8700 Introduction And Planning Manual

IBM System Storage DS8800 and DS8700
Version 6 Release 3
Introduction and Planning Guide

GC27-2297-09
Note:
Before using this information and the product it supports, read the information in the Safety and environmental notices and Notices sections.
© Copyright IBM Corporation 2004, 2012.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Figures ..............vii
Tables ...............ix
Safety and Environmental notices . . . xi
Safety notices ..............xi
Environmental notices...........xii
About this guide ..........xiii
Who should use this guide .........xiii
Conventions used in this guide .......xiii
DS8000 library and related publications.....xiii
How to order IBM publications .......xvi
Send us your feedback ..........xvi
Summary of changes ........xix
Chapter 1. Introduction to IBM System
Storage DS8000 series ........1
DS8700 overview .............1
DS8700 (Models 941 and 94E)........4
DS8800 overview .............5
DS8800 (Models 951 and 95E)........7
Machine types overview ..........14
Features overview ............14
Limitations ..............19
DS Storage Manager limitations ......20
DS8000 physical footprint .........20
Chapter 2. Hardware and features . . . 23
Storage complexes ............23
IBM System Storage Hardware Management
Console................23
RAID implementation ...........24
RAID 5 overview ...........24
RAID 6 overview ...........25
RAID 10 overview ...........25
DS8000 Interfaces ............25
System Storage DS Storage Manager .....25
Tivoli Storage Productivity Center ......26
DS command-line interface ........27
DS Open Application Programming Interface . . 28
Tivoli Storage Productivity Center for Replication 28
DS8000 hardware specifics .........29
Storage unit structure ..........29
Disk drives .............30
Host attachment overview ........30
Processor memory ...........34
Subsystem device driver for open-systems ....34
Balancing the I/O load ..........34
Storage consolidation ...........35
Count key data .............36
Fixed block ..............36
T10 DIF support............36
Logical volumes .............37
Allocation, deletion, and modification of volumes 37
LUN calculation .............38
Extended address volumes for CKD ......39
Quick initialization ............40
Chapter 3. Data management features 41
FlashCopy SE feature ...........41
Dynamic volume expansion .........42
Count key data and fixed block volume deletion
prevention...............42
IBM System Storage Easy Tier ........42
Easy Tier: automatic mode ........44
Easy Tier: manual mode .........48
Monitoring volume data .........51
Managing migration processes .......51
IBM System Storage DS8000 Storage Tier Advisor
Tool................52
Easy Tier considerations and limitations ....53
Performance for System z .........54
Copy Services .............55
Thin provisioned and Global Mirror volume
considerations ............63
Disaster recovery using Copy Services ....64
Resource groups for Copy Services scope limiting 65
Comparison of licensed functions .......67
Logical configuration overview ........67
I/O Priority Manager ...........69
Encryption ..............70
Encryption concepts ..........70
Tivoli Key Lifecycle Manager .......71
IBM Security Key Lifecycle Manager for z/OS. . 74
DS8000 disk encryption .........74
Encryption deadlock ..........78
Best practices for encrypting storage
environments.............79
Guidelines and requirements for key server
management .............85
Exporting and importing keys between key
server instances ............85
DS8000 encryption considerations ......86
Virtual private network .........87
Chapter 4. Planning the physical
configuration ............89
Overview of physical configurations ......89
Configuration controls..........90
Determining physical configuration features . . 90
Management console features ........91
Internal and external management consoles . . 91
Management console external power cord . . . 92
Configuration rules for management consoles . . 93
Storage features .............93
Disk drives and disk enclosures ......93
Standby CoD disk sets..........97
Disk enclosure fillers ..........99
Device adapters ............99
Disk drive cables ...........100
Configuration rules for storage features . . . 101
Additional storage feature information ....102
I/O adapter features ...........107
I/O enclosures and cables ........107
Fibre Channel (SCSI-FCP and FICON) host
adapters and cables ..........107
Configuration rules for I/O adapter features 109
Processor memory features .........113
Feature codes for processor memory.....113
Configuration rules for processor memory . . . 114
Power features .............114
Powercords.............114
Input voltage ............115
Battery assemblies ...........116
Configuration rules for power features ....116
Other configuration features ........117
Extended power line disturbance ......117
Remote zSeries power control feature ....117
Earthquake Resistance Kit ........117
BSMI certificate (Taiwan) ........118
Shipping weight reduction ........118
Chapter 5. Planning use of licensed functions . . . 121
239x function authorization models (242x machine types) . . . 121
Licensed function indicators . . . 121
License scope . . . 122
Ordering licensed functions . . . 124
Ordering rules for licensed functions . . . 124
Operating environment license (239x Model LFA, OEL license, 242x machine type) . . . 127
Feature codes for the operating environment license . . . 127
Parallel access volumes (239x Model LFA, PAV license; 242x machine type) . . . 128
Feature codes for parallel access volume . . . 128
IBM HyperPAV (242x Model PAV and 239x Model LFA, PAV license) . . . 129
Feature code for IBM HyperPAV . . . 129
IBM System Storage Easy Tier . . . 129
Point-in-time copy function (239x Model LFA, PTC license) and FlashCopy SE Model SE function (239x Model LFA, SE license) . . . 130
Feature codes for point-in-time copy . . . 130
Feature codes for FlashCopy SE . . . 131
Remote mirror and copy functions (242x Model RMC and 239x Model LFA) . . . 132
Feature codes for remote mirror and copy . . . 132
Feature codes for I/O Priority Manager . . . 133
z/OS licensed features . . . 134
Remote mirror for z/OS (242x Model RMZ and 239x Model LFA, RMZ license) . . . 134
Feature codes for z/OS Metro/Global Mirror Incremental Resync (RMZ Resync) . . . 135
z/OS Distributed Data Backup . . . 135
Thin provisioning LIC key feature . . . 136
Chapter 6. Meeting DS8000 series delivery and installation requirements . . . 139
Delivery requirements . . . 139
Receiving delivery . . . 139
Installation site requirements . . . 142
Planning for floor and space requirements . . . 142
Planning for power requirements . . . 160
Planning for environmental requirements . . . 165
Providing a fire-suppression system . . . 171
Considering safety issues . . . 172
Planning for external management console installation . . . 172
Planning for network and communications requirements . . . 173
Chapter 7. Planning your DS8000 storage complex setup . . . 177
Company information . . . 177
Management console network settings . . . 177
Remote support settings . . . 178
Notification settings . . . 178
Power control settings . . . 179
Control switch settings . . . 179
Chapter 8. Planning data migration 181
Chapter 9. Managing and activating
licenses..............183
Planning your licensed functions .......183
Activating licensed functions ........184
Obtaining activation codes ........184
Importing activation keys ........185
Adding activation keys .........186
Scenarios for managing licensing .......187
Adding storage to your machine ......187
Managing a licensed feature .......187
Appendix A. Accessibility features for
the DS8000 ............189
Appendix B. IBM-provided DS8000
equipment and documents .....191
Installation components ..........191
Customer components ..........192
Service components ...........192
Appendix C. Company information
work sheet ............193
Appendix D. Management console
network settings work sheet .....197
Appendix E. Remote support work
sheets ..............203
Outbound (call home and dump/trace offload)
work sheet ..............203
Inbound (remote services) work sheet .....208
Appendix F. Notification work sheets 211
SNMP trap notification work sheet ......211
Email notification work sheet ........212
Appendix G. Power control work
sheet ...............215
Appendix H. Control switch settings
work sheet ............217
Notices ..............221
Trademarks ..............222
Electronic emission notices .........223
Federal Communications Commission statement 223
Industry Canada compliance statement....223
European Union Electromagnetic Compatibility
Directive ..............223
Japanese Voluntary Control Council for
Interference (VCCI) class A statement ....225
Japanese Electronics and Information Technology Industries Association (JEITA)
statement..............225
Korea Communications Commission (KCC) Electromagnetic Interference (EMI) Statement . . . 225
Russia Electromagnetic Interference (EMI) Class A Statement . . . 226
Taiwan Class A compliance statement ....226
Taiwan contact information.........226
Index ...............227

Figures

1. A Model 941 (2-way processor with the front
cover off) and its main components .....2
2. A 94E expansion model (with the back cover
off) and its main components .......3
3. Configuration for 941 (4-way) with two 94E
expansion models ...........5
4. Base model (front and back views) of a Model
951 (4-way) .............8
5. Expansion model (front and back views) of a
Model 95E .............9
6. Expansion model configured with drives 10
7. Front view of the storage expansion enclosure 11
8. Back view of the storage expansion enclosure 12
9. Front view of the LFF storage expansion
enclosure .............12
10. DS8000 physical footprint. Dimensions are in
centimeters (inches). .........21
11. Adapter plug order for the DS8700 (4-port) and the DS8800 (4-port and 8-port) in a HA
configuration. ............31
12. Plug order for two and four DS8700 and
DS8000 I/O enclosures.........32
13. Three-tier migration types and their processes 48
14. Remote Pair FlashCopy ........59
15. Implementation of multiple-client volume
administration ...........66
16. Logical configuration sequence ......69
17. Maximum tilt for a packed unit is 10° 140
18. DS8000 with top exit feature installed (cable
routing and top exit locations) ......144
19. Cable cutouts for a DS8000 unit. .....146
20. Cable cutouts for a DS8800 .......147
21. Measurements for DS8000 placement with top
exit bracket feature present .......148
22. Service clearance requirements ......153
23. Earthquake Resistance Kit, as installed on a
raised floor ............155
24. Locations for the cable cutouts and rubber bushing holes in the raised floor and the eyebolt installation on the concrete floor. The pattern repeats for up to five models.
Dimensions are in millimeters (inches). . . . 156
25. Eyebolt required dimensions. Dimensions are
in millimeters (inches). ........157
26. Earthquake Resistance Kit, as installed on a nonraised floor. The detail shows two of the most common fasteners that you could use. . 158
27. Locations for fastener installation (nonraised floor). The pattern repeats for up to five models. Dimensions are in millimeters
(inches). .............159
28. DS8000 layouts and tile setups for cooling 170
29. DS8800 layouts and tile setup for cooling 171

Tables

1. DS8000 library ...........xiv
2. Other IBM publications ........xiv
3. IBM documentation and related websites xv
4. Available hardware and function authorization
machine types ...........14
5. Capacity and models of disk volumes for
System i .............39
6. Drive combinations to use with three-tiers 44
7. DS CLI and DS Storage Manager settings for
monitoring.............51
8. Volume states required for migration with
Easy Tier .............51
9. Comparison of licensed functions .....67
10. Performance groups and policies .....70
11. Storage unit configuration values .....89
12. Management console feature codes.....92
13. Management console external power cord
feature codes ............93
14. Disk drive set feature placement - full drive
sets ...............94
15. Disk drive set feature placement - half drive
sets ...............94
16. Disk drive set feature codes .......95
17. Feature codes for IBM Full Disk Encryption
disk drive sets ...........96
18. Disk enclosure feature code .......96
19. Feature codes for Standby CoD disk drive sets
(16 disk drives per set) .........97
20. Feature codes for IBM Full Disk Encryption Standby CoD disk drive sets (16 disk drives
per set) ..............98
21. Device adapter feature codes ......100
22. Disk drive cable feature codes ......100
23. DS8000 RAID capacities for RAID 5 arrays 103
24. DS8000 RAID capacities for RAID 6 arrays 104
25. DS8000 RAID capacities for RAID 10 arrays 105
26. PCIe cable feature codes ........107
27. Fibre Channel host adapters feature codes 108
28. Fibre Channel cable feature codes.....109
29. Model 941 required I/O enclosures and device adapters (based on disk enclosures). . 110
30. Model 951 required I/O enclosures and device adapters (based on disk enclosures) . . 111
31. Model 941 minimum and maximum host
adapter features...........112
32. Model 951 minimum and maximum host
adapter features...........112
33. Processor memory feature codes .....113
34. Power cord feature codes .......115
35. Additional feature codes for top exit power
cords..............115
36. Required quantity of battery assemblies (feature 1050) for expansion models. The I/O enclosure feature is 1301 and the extended
PLD feature is 1055. .........116
37. Licensed function indicators for Model 941
and Model 951 ...........122
38. License scope for each DS8000 licensed
function .............123
39. Total physical capacity of each type of disk
drive feature ............124
40. Operating environment license feature codes 128
41. Parallel access volume (PAV) feature codes 129
42. Point-in-time copy (PTC) feature codes 131
43. FlashCopy SE feature codes.......131
44. Remote mirror and copy (RMC) feature codes 133
45. I/O Priority Manager feature codes ....134
46. Remote mirror for System z (RMZ) feature
codes ..............135
47. z/OS Metro/Global Mirror Incremental Resync (RMZ Resync) for System z feature
codes ..............135
48. License z/OS Distributed Data Backup
function indicator ..........136
49. License function indicators for the Thin
Provisioning feature .........137
50. Packaged dimensions and weight for DS8000
models (all countries) .........141
51. Feature codes for overhead cable option (top
exit bracket) ............145
52. Floor load ratings and required weight
distribution areas ..........149
53. DS8000 dimensions and weights .....151
54. DS8000 input voltages and frequencies 162
55. DS8000 power cords .........162
56. Power consumption and environmental information for the DS8000 series - Models
941 and 94E ............163
57. Power consumption and environmental information for the DS8000 series - Models
951 and 95E ............164
58. Acoustic declaration for the DS8000 series,
including the DS8800 .........165
59. Machine fan location .........165
60. Operating extremes with the power on 166
61. Optimum operating points with the power on 166
62. Optimum operating ranges with the power
on...............166
63. Temperatures and humidity with the power
off...............166
64. Temperatures and humidity while in storage 167
65. Vibration levels for the DS8700 and DS8800 167
66. Comparison of data migration options 181
67. Company information work sheet.....193
68. Management console network settings work
sheet ..............197
69. Default TCP/IP address range determination 200
70. Address ranges of the storage facility private network (LIC bundles 75.xx.xx.xx and above) . 201
71. IPv6 configuration options .......201
72. Outbound (call home) work sheet.....204
73. Types of FTP proxy servers .......208
74. Inbound (remote services) work sheet 209
75. SNMP trap notification work sheet ....212
76. Email notification work sheet ......213
77. Power control work sheet .......215
78. SIM presentation to operator console by
severity .............218
79. SIM severity definitions ........218
80. Control switch settings work sheet ....219

Safety and Environmental notices

This section contains information about safety notices that are used in this guide and environmental notices for this product.

Safety notices

Observe the safety notices when using this product. These safety notices contain danger and caution notices. These notices are sometimes accompanied by symbols that represent the severity of the safety condition.
Most danger or caution notices contain a reference number (Dxxx or Cxxx). Use the reference number to check the translation in the IBM System Storage DS8000 Safety Notices, P/N 98Y1543.
The sections that follow define each type of safety notice and give examples.
Danger notice
A danger notice calls attention to a situation that is potentially lethal or extremely hazardous to people. A lightning bolt symbol always accompanies a danger notice to represent a dangerous electrical condition. A sample danger notice follows:
DANGER: An electrical outlet that is not correctly wired could place hazardous voltage on metal parts of the system or the devices that attach to the system. It is the responsibility of the customer to ensure that the outlet is correctly wired and grounded to prevent an electrical shock. (D004)
Caution notice
A caution notice calls attention to a situation that is potentially hazardous to people because of some existing condition, or to a potentially dangerous situation that might develop because of some unsafe practice. A caution notice can be accompanied by one of several symbols:
v A generally hazardous condition not represented by other safety symbols.
v This product contains a Class II laser. Do not stare into the beam. (C029) Laser symbols are always accompanied by the classification of the laser as defined by the U.S. Department of Health and Human Services (for example, Class I, Class II, and so forth).
v A hazardous condition due to mechanical movement in or around the product.
v This part or unit is heavy but has a weight smaller than 18 kg (39.7 lb). Use care when lifting, removing, or installing this part or unit. (C008)
Sample caution notices follow:
Caution
The battery is a lithium ion battery. To avoid possible explosion, do not burn. Exchange only with the IBM-approved part. Recycle or discard the battery as instructed by local regulations. In the United States, IBM® has a process for the collection of this battery. For information, call 1-800-426-4333. Have the IBM part number for the battery unit available when you call. (C007)
Caution
The system contains circuit cards, assemblies, or both that contain lead solder. To avoid the release of lead (Pb) into the environment, do not burn. Discard the circuit card as instructed by local regulations. (C014)
Caution
When removing the Modular Refrigeration Unit (MRU), immediately remove any oil residue from the MRU support shelf, floor, and any other area to prevent injuries because of slips or falls. Do not use refrigerant lines or connectors to lift, move, or remove the MRU. Use handholds as instructed by service procedures. (C016)
Caution
Do not connect an IBM control unit directly to a public optical network. The customer must use an additional connectivity device between an IBM control unit optical adapter (that is, fibre, ESCON®, FICON®) and an external public network. Use a device such as a patch panel, a router, or a switch. You do not need an additional connectivity device for optical fibre connectivity that does not pass through a public network.

Environmental notices

The environmental notices that apply to this product are provided in the Environmental Notices and User Guide, Z125-5823-xx manual. A copy of this manual is located on the publications CD.

About this guide

The IBM System Storage® DS8800 and IBM System Storage DS8700 Introduction and Planning Guide provides information about the IBM System Storage DS8800 and DS8700 storage units.
This guide provides you with the following information:
v What you need to consider as you plan to use the DS8800 and DS8700 storage units.
v How you can customize your DS8800 and DS8700 storage units.

Who should use this guide

The IBM System Storage DS8800 Introduction and Planning Guide is for storage administrators, system programmers, and performance and capacity analysts.

Conventions used in this guide

The following typefaces are used to show emphasis:
boldface
Text in boldface represents menu items and lowercase or mixed-case command names.
italics Text in italics is used to emphasize a word. In command syntax, it is used
for variables for which you supply actual values.
monospace
Text in monospace identifies the data or commands that you type, samples of command output, or examples of program code or messages from the system.

DS8000 library and related publications

Product manuals, other IBM publications, and websites contain information that relates to DS8000®.
DS8000 Information Center
The IBM System Storage DS8000 Information Center contains all of the information that is required to install, configure, and manage the DS8000. The information center is updated between DS8000 product releases to provide the most current documentation. The information center is available at the following website: publib.boulder.ibm.com/infocenter/ds8000ic/index.jsp
DS8000 library
Table 1 on page xiv lists and describes the publications that make up the DS8000 library. Unless otherwise noted, these publications are available in Adobe portable document format (PDF). Go to the IBM Publications Center at www.ibm.com/shop/publications/order to obtain a publication.
Table 1. DS8000 library

Title: IBM System Storage DS: Command-Line Interface User's Guide
Description: This guide describes the commands that you can use from the command-line interface (CLI) for managing your DS8000 configuration and Copy Services relationships. The CLI provides a set of commands that you can use to write customized scripts for a host system.
Order number: GC53-1127

Title: IBM System Storage DS8000: Host Systems Attachment Guide
Description: This guide provides information about attaching hosts to the DS8000 storage unit. The DS8000 provides a variety of host attachments so that you can consolidate storage capacity and workloads for open-systems hosts and System z® or S/390® hosts.
Order number: GC27-2298

Title: IBM System Storage DS8000: Introduction and Planning Guide
Description: This guide introduces the DS8000 product and lists the features you can order. It also provides guidelines for planning the installation and configuration of the storage unit.
Order number: GC27-2297

Title: IBM System Storage Multipath Subsystem Device Driver User's Guide
Description: This publication describes how to use the IBM Subsystem Device Driver (SDD) on open-systems hosts to enhance performance and availability on the DS8000. SDD creates single devices that consolidate redundant paths for logical unit numbers. SDD permits applications to run without interruption when path errors occur. It balances the workload across paths, and it transparently integrates with applications.
Order number: GC27-2122

Title: IBM System Storage DS Application Programming Interface Reference
Description: This publication provides reference information for the IBM System Storage DS application programming interface (API) and provides instructions for installing the Common Information Model Agent, which implements the API.
Order number: GC35-0516
Other IBM publications
Other IBM publications contain additional information that is related to the DS8000 product library. Table 2 is divided into categories to help you find publications that are related to specific topics.
Table 2. Other IBM publications

System Storage Productivity Center

Title: IBM System Storage Productivity Center Introduction and Planning Guide
Description: This publication introduces the IBM System Storage Productivity Center hardware and software.
Order number: SC23-8824

Title: Read This First: Installing the IBM System Storage Productivity Center
Description: This publication provides quick instructions for installing the IBM System Storage Productivity Center hardware.
Order number: GI11-8938

Title: IBM System Storage Productivity Center Software Installation and User's Guide
Description: This publication describes how to install and use the IBM System Storage Productivity Center software.
Order number: SC23-8823

Title: IBM System Storage Productivity Center User's Guide
Description: This publication describes how to use the IBM System Storage Productivity Center to manage the DS8000, IBM System Storage SAN Volume Controller clusters, and other components of your data storage infrastructure from a single interface.
Order number: SC27-2336

IBM Tivoli® Key Lifecycle Manager

Title: IBM Tivoli Key Lifecycle Manager Installation and Configuration Guide
Description: This publication describes how to install and configure the Tivoli encryption key manager. The key server can be used to manage the encryption keys assigned to the IBM Full Disk Encryption disk drives in the DS8000.
Order number: SC23-9977

IBM System Management Pack for Microsoft System Center Operations Manager

Title: IBM System Management Pack for Microsoft System Center Operations Manager User Guide
Description: This publication describes how to install, configure, and use the IBM Storage Management Pack for Microsoft System Center Operations Manager (SCOM).
Order number: GC27-3909
IBM documentation and related websites
The following websites provide information about the DS8000 or related products or technologies:
Table 3. IBM documentation and related websites

Website: IBM System Storage DS8000 series
Link: www.ibm.com/servers/storage/disk/ds8000

Website: Support for DS8000, IBM System Storage, and IBM TotalStorage® products
Link: www.ibm.com/storage/support/

Website: Concurrent Copy for IBM System z and S/390 host systems
Link: www.storage.ibm.com/software/sms/sdm

Website: DS8000 command-line interface (DS CLI)
Link: publib.boulder.ibm.com/infocenter/ds8000ic/index.jsp (The information center has a complete command reference for the DS CLI.)

Website: Information about code bundles for DS8700 and DS8800
Link: www.ibm.com/support/docview.wss?uid=ssg1S1003593 and www.ibm.com/support/docview.wss?uid=ssg1S1003740 (See Section 3 for cross-reference links to SDD.)

Website: IBM FlashCopy® for System z and S/390 host systems
Link: www.storage.ibm.com/software/sms/sdm

Website: Host system models, operating systems, adapters, and switches that the DS8000 series supports
Link: www.ibm.com/servers/storage/disk/ds8000 (click Interoperability matrix) and www.ibm.com/systems/support/storage/config/ssic (click New search)

Website: IBM Disk Storage Feature Activation (DSFA)
Link: www.ibm.com/storage/dsfa

Website: IBM version of the Java SE Runtime Environment (JRE) that is often required for IBM products
Link: www.ibm.com/developerworks/java/jdk

Website: Information about IBM Storage Easy Tier
Link: www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101844 and www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101675

Website: Remote Mirror and Copy (formerly Peer-to-Peer Remote Copy [PPRC]) for System z and S/390 host systems
Link: www.storage.ibm.com/software/sms/sdm

Website: SAN Fibre Channel switches
Link: www.ibm.com/systems/storage/san

Website: Subsystem Device Driver (SDD)
Link: www.ibm.com/support/docview.wss?uid=ssg1S7000303

Website: IBM Publications Center
Link: www.ibm.com/shop/publications/order

Website: IBM Redbooks® Publications
Link: www.redbooks.ibm.com/
Related accessibility information
To view a PDF file, you need Adobe Acrobat Reader, which can be downloaded for free from the Adobe website at: http://www.adobe.com/support/downloads/main.html

How to order IBM publications

The IBM Publications Center is a worldwide central repository for IBM product publications and marketing material.
The IBM Publications Center offers customized search functions to help you find the publications that you need. Some publications are available for you to view or download at no charge. You can also order publications. You can access the IBM Publications Center at:
http://www.ibm.com/e-business/linkweb/publications/servlet/pbi.wss.

Send us your feedback

Your feedback is important in helping to provide the most accurate and high-quality information. If you have comments or suggestions for improving this publication, you can send us comments by e-mail to starpubs@us.ibm.com or use the Readers' Comments form at the back of this publication. Be sure to include the following information in your correspondence:
v Exact publication title
v Form number (for example, GA32-0689-00), part number, or EC level (located on the back cover)
v Page numbers to which you are referring
Note: For suggestions on operating enhancements or improvements, please contact
your IBM Sales team.

Summary of changes

This document contains terminology, maintenance, and editorial changes for version GC27-2297-09 of the IBM System Storage DS8800 and DS8700 Introduction and Planning Guide. Technical changes or additions to the text and illustrations are indicated by a vertical line to the left of the change.
New information
The following section contains new information for Version 6, Release 3, Modification 1:
T10 DIF support
The American National Standards Institute (ANSI) T10 Data Integrity Field (DIF) standard is supported on System z for SCSI end-to-end data protection on fixed block (FB) LUN volumes. This support applies to the IBM System Storage DS8800 unit (models 951 and 95E). System z support applies to FCP channels only. For more information, see "T10 DIF support" on page 36.
Global Mirror pause on a consistency group boundary
A new Global Mirror command pauses Global Mirror after a consistency group has formed. By internally suspending data transfers between Global Copy primary and Global Copy secondary devices, the DS8000 enables software such as GDPS® to perform simpler and faster test copy scenarios. For more information, see your IBM representative.
PPRC dynamic bitmaps for the DS8700 and DS8800
For large volumes or SATA drives, HyperSwap operation times are too long. This happens because of the amount of metadata that is read during the HyperSwap operation to initialize the PPRC bitmap. Support is now available to pre-initialize the PPRC bitmap so that the time required for a PPRC failover (HyperSwap) operation is not dependent on the size or speed of the drives. For more information, see your IBM representative.
Changed information
The following section contains changed information for Version 6, Release 3, Modification 1:
Maximum number of FlashCopy relationships on a volume
The maximum number of FlashCopy relationships allowed on a volume is
65534. For more information, see “Copy Services” on page 55.

Chapter 1. Introduction to IBM System Storage DS8000 series

IBM System Storage DS8000 series is a high-performance, high-capacity series of disk storage that supports continuous operations.
The DS8000 series includes the DS8800 (Models 951 and 95E) and the DS8700 (Models 941 and 94E). The DS8700 uses IBM POWER6® server technology, which represents a level of high-performance and high-capacity for disk systems. The DS8700 also provides improved performance through upgrades to processor and I/O enclosure interconnection technology.
The latest and most advanced disk enterprise storage system in the DS8000 series is the IBM System Storage DS8800. It represents the latest in the series of high-performance and high-capacity disk storage systems. The DS8800 supports IBM POWER6+ processor technology to help support higher performance.
The DS8000 series, including the DS8800 and DS8700, supports functions such as point-in-time copy functions with IBM FlashCopy and FlashCopy Space Efficient, and Remote Mirror and Copy functions with Metro Mirror, Global Copy, Global Mirror, Metro/Global Mirror, IBM z/OS® Global Mirror, and z/OS Metro/Global Mirror. Easy Tier functions are supported on DS8800 and DS8700 storage units.
All DS8000 series models consist of a storage unit and one or two management consoles, two being the recommended configuration. The graphical user interface (GUI) or the command-line interface (CLI) provide the ability to logically partition storage and use the built-in Copy Services functions. For high-availability, the hardware components are redundant.
The Tivoli Key Lifecycle Manager (TKLM) software performs key management tasks for IBM encryption-enabled hardware, such as the DS8000 series by providing, protecting, storing, and maintaining encryption keys that are used to encrypt information being written to, and decrypt information being read from, encryption-enabled disks. TKLM operates on a variety of operating systems.
To learn additional information about the DS8000 series, you can view the e-Learning modules that are available from the IBM System Storage DS8000 Storage Manager Welcome page or the IBM System Storage DS8000 Information Center. The e-Learning modules provide animated presentations that describe the installation, configuration, management, and servicing of the DS8000 series.

DS8700 overview

The DS8700 series offers various choices of base and expansion models, so you can configure storage units that meet your performance and configuration needs.
The DS8700 (Model 941) provides the option of a dual two-way processor complex (feature code 4301) or a dual four-way processor complex (feature code 4302). A dual four-way processor complex provides support for up to four expansion models (94E). (Dual LPAR support is not available for the DS8700.)
Figure 1 on page 2 provides a high-level view of the components of Model 941 (2-way processor).
Figure 1. A Model 941 (2-way processor with the front cover off) and its main components
The following notes provide additional information about the labeled components in Figure 1.
1. All base models contain a fan sense card to monitor fans in each frame.
2. The rack power area of the base models provides redundant power supplies
(two primary power supplies, PPSs), power control cards, and backup battery assemblies to help protect data in the event of a loss of external power. Model 941 (2-way processor) contains two batteries (for extended power line disturbance (EPLD) or non-EPLD). Model 941 (4-way processor) contains three batteries (for EPLD or non-EPLD) to support the 4-way processors.
3. All base models can have up to eight disk enclosures, which contain the disk drives. In a maximum configuration, each base model can hold up to 128 disk drives.
4. All base models contain one management console comprised of a keyboard and display or laptop.
5. All base models contain two processor enclosures. Model 941 (2-way) processor enclosures have 2-way processors. Processor enclosures on Model 941 (4-way) have 4-way processors.
6. All base models contain I/O enclosures and adapters. Model 941 can be equipped with either two- or four-way I/O enclosures. The I/O enclosures hold the adapters and provide connectivity between the adapters and the processors. Both device adapters and host adapters are installed in the I/O enclosure.
The EPLD is an optional feature that temporarily maintains power for the entire subsystem for up to 60 seconds to minimize the impact of a power line disturbance. This feature is highly recommended for areas where the power source is not reliable. This feature requires additional hardware:
v For each PPS in each model, a booster module is added. When the BBUs supply
power to the primary power bus, battery power is fed into the booster module, which in turn keeps disk enclosure power present.
v Batteries will be added to expansion models that do not already have them. The
base model and the first expansion model will already have BBUs but subsequent expansion models will not include BBUs until the EPLD feature is installed.
With the addition of this hardware, the DS8000 will be able to run for approximately 60 seconds on battery power before the processor enclosures begin to copy NVS data to internal disk and then shut down. This allows for up to a 60-second interruption to line power with no outage to the DS8000.
Figure 2 provides a high-level view of the components of an expansion model (Model 94E).
Figure 2. A 94E expansion model (with the back cover off) and its main components
The following notes provide additional information about the labeled components in Figure 2:
1. The rack power and cooling area of each expansion model provide the power and cooling components needed to run the frame.
2. All expansion models can have up to 16 disk enclosures, which contain the disk drives. In a maximum configuration, each expansion model can hold 256 disk drives.
3. Expansion models can contain I/O enclosures and adapters if they are the first expansion models that are attached to a Model 941 (4-way). If the expansion model contains I/O enclosures, the enclosures provide connectivity between the adapters and the processors. The adapters contained in the I/O enclosures can be either device or host adapters.
4. Expansion models contain fan sense cards to monitor fans in each frame.
5. Each expansion model contains two primary power supplies (PPSs) to convert
the ac input into dc power. The power area contains two battery backup units (BBUs) in the first expansion model. The second through fourth expansion models have no BBUs, unless the EPLD feature is installed, in which case each of those models has two BBUs.

DS8700 (Models 941 and 94E)

IBM System Storage DS8700 models (Models 941 and 94E) offer higher performance and capacity than previous models.
All DS8700 models offer the following features:
v Dual two-way or four-way processor complex
v Up to 128 GB of processor memory (cache) on a two-way processor and up to 384 GB of processor memory (cache) on a four-way processor complex
v Up to 16 host adapters or 64 FCPs on a Model 941 (2-way) and up to 32 host adapters or 128 FCPs on a Model 941 (4-way)
With an optional expansion unit, 94E, the DS8700 scales as follows (a worked capacity calculation appears after this list):
v With one Model 94E expansion unit, Model 941 (4-way) supports up to 384 disk drives, for a maximum capacity of up to 768 TB, and up to 32 Fibre Channel/FICON adapters.
v With two Model 94E expansion units, Model 941 (4-way) supports up to 640 disk drives, for a maximum capacity of up to 1,280 TB, and up to 32 Fibre Channel/FICON adapters.
v With three Model 94E expansion units, Model 941 (4-way) supports up to 896 disk drives, for a maximum capacity of up to 1,792 TB, and up to 32 Fibre Channel/FICON adapters.
v With four Model 94E expansion units, Model 941 (4-way) supports up to 1,024 disk drives, for a maximum capacity of up to 2,048 TB, and up to 32 Fibre Channel/FICON adapters.
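The maximum capacities in the preceding list follow directly from the drive counts. The short Python sketch below reproduces them; it assumes (as the quoted figures imply) that every drive position holds the largest drive offered for the DS8700, 2 TB, and it computes raw capacity rather than usable capacity after RAID formatting and sparing.

# Raw-capacity sketch for the DS8700 configurations listed above (illustrative only).
# Assumption: the published maximums are based on 2 TB drives, the largest drive
# capacity implied by the figures in the list (for example, 768 TB / 384 drives = 2 TB).
DRIVE_TB = 2  # assumed TB per drive

# (number of 94E expansion units, maximum drive count) from the list above
configurations = [(1, 384), (2, 640), (3, 896), (4, 1024)]

for expansions, drives in configurations:
    raw_tb = drives * DRIVE_TB
    print(f"941 (4-way) + {expansions} x 94E: {drives} drives x {DRIVE_TB} TB = {raw_tb} TB raw")

Running the sketch reproduces the 768 TB, 1,280 TB, 1,792 TB, and 2,048 TB figures quoted above.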
Figure 3 on page 5 shows the configuration of a Model 941 with two 94E expansion models.
Figure 3. Configuration for 941 (4-way) with two 94E expansion models. (The diagram shows a base model with 128 drives and two expansion models with 256 drives each, including the laptop management console, Ethernet switches, fan sense cards, RPC card, primary power supplies, batteries, fans/plenum, disk drive sets, 4-way processors, and I/O enclosures with adapters.)

DS8800 overview

The DS8800 is the newest addition to the IBM System Storage DS series. The DS8800 adds Model 951 (base frame) and 95E (expansion unit) to the 242x machine type family.
Features and functions of the DS8800
The following describes the features and functions of the DS8800.
High-density storage enclosure
The DS8800 previously introduced the storage enclosure that supports 24 small form factor (SFF) 2.5" SAS drives in a 2U (height) form factor. The DS8800 also supports a new high-density and lower-cost-per-capacity large form factor (LFF) storage enclosure. This enclosure accepts 3.5" SAS drives, offering 12 drive slots. The LFF enclosure has a different appearance from the front than does the SFF enclosure, with its 12 drives slotting horizontally rather than vertically. For more information on storage enclosures, see "DS8800 storage enclosure overview" on page 11.
High-density frame design
The DS8800 base model (951) supports up to 240 disks. Up to three additional expansion frames (Model 95E) can be added. The first expansion model supports up to 336 disks, the second expansion model supports up to 480 disks, and the third expansion model supports an additional 480 disks. The DS8800 can support a total of 1,536 drives (four frames) versus 1,024 drives (five frames in a DS8700) in a significantly smaller footprint, allowing higher density and preserving valuable raised floor space in datacenter environments. Coupled with this improved cooling implementation, the reduced system footprint, and small form factor SAS-2 drives, a fully configured DS8800 consumes up to 36% less power than previous generations of DS8000. For more information, see "DS8800 (Models 951 and 95E)" on page 7.
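As a quick illustration, the per-frame drive counts quoted above add up to the 1,536-drive maximum; the following Python sketch simply totals the figures from the preceding paragraph.

# Illustrative check of the DS8800 per-frame drive counts quoted above.
frame_drives = [
    ("base model 951", 240),
    ("first expansion model 95E", 336),
    ("second expansion model 95E", 480),
    ("third expansion model 95E", 480),
]

total = 0
for frame, drives in frame_drives:
    total += drives
    print(f"{frame}: {drives} drives (running total: {total})")

print(f"Fully configured DS8800: {total} drives in four frames")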
IBM POWER6+ technology
The DS8800 supports IBM POWER6+ processor technology to help support high performance. It can be equipped with a 2-way processor complex or a 4-way processor complex for the highest performance requirements. Processor complex technology has been upgraded from POWER6 to POWER6+.
8 Gbps host and 8 Gbps device adapters
The DS8800 model offers host adapters that have been upgraded to include high-function PCIe-attached four and eight-port Fibre Channel and IBM FICON host adapters. The DS8800 uses direct point-to-point high-speed PCIe connections to the I/O enclosures to communicate with the device and host adapters.
The DS8800 offers I/O enclosures that include up to 16 device adapters. The device adapters have four 8 Gbps Fibre Channel arbitrated loop (FC-AL) ports. The device adapters attach the storage enclosures to the DS8800.
Disk drive support
The DS8800 supports the following choice of disk drives:
v Serial attach SCSI (SAS) drives:
– 146 GB, 15K RPM
– 300 GB, 15K RPM
– 450 GB, 10K RPM
– 600 GB, 10K RPM
– 900 GB, 10K RPM
– 3 TB, 7.2K RPM
v SAS drives with Full Disk Encryption (FDE) and encryption Standby CoD:
– 146 GB, 15K RPM
– 300 GB, 15K RPM
– 450 GB, 10K RPM
– 600 GB, 10K RPM
– 900 GB, 10K RPM
– 3 TB, 7.2K RPM
v Solid state drives (SSDs):
– 300 GB (nonencrypted)
– 400 GB (FDE and nonencrypted)
Business class cabling
The DS8800 business class cabling feature offers a more streamlined, cost-competitive configuration than the standard configuration. The business class cabling feature reconfigures the Model 951 by reducing the number of installed device adapters and I/O enclosures, while increasing the number of storage enclosures attached to the remaining device adapters.

DS8800 (Models 951 and 95E)

The DS8800 offers higher performance and capacity than previous models in the DS8000 series. DS8800 includes Model 951 (base model) and Model 95E (expansion model).
Model 951 (base model)
Figure 4 on page 8 provides a high-level view of the front and back of the Model 951 base, which includes the new high-density storage enclosures 1 that support 24 drives in a 2U (height) form factor. The Model 951 is offered with a 2-way and a 4-way processor complex.
The back of the model contains the power supplies to the storage enclosures and the FCIC cards 1 (FCIC cards on top, power supplies on the bottom).
The POWER6+ servers 2 contain the processor and memory that drive all functions within the DS8800.
The base model also provides I/O enclosures 3, power supplies and cooling, and space for up to 15 disk drive sets (16 drives per disk drive set). In a maximum configuration, the base model can hold 240 disk drives (15 disk drive sets x 16 drives = 240 disk drives).
The I/O enclosures provide connectivity between the adapters and the processors. The adapters contained in the I/O enclosures can be either device adapters (DAs) or host adapters (HAs). The communication path used for adapter-to-processor complex communication in the DS8800 consists of four, 8 lane (x8) PCI-e Generation 2 connections.
The power area contains two battery backup units (BBUs) 4. This is true whether it is a 2-way or 4-way system or whether you purchased the optional Extended Power Line Disturbance (EPLD) feature (feature 1055) or not. (The BBUs help protect data in the event of a loss of external power.)
The primary power supplies (PPSs) 5 are on the side of the model. They provide a redundant 208 V dc power distribution to the rack to convert the ac input into dc power. The processor nodes, I/O drawers, and storage enclosures have dual power supplies that are connected to the rack power distribution units 6. The power and cooling are integrated into the storage enclosure.
Figure 4. Base model (front and back views) of a Model 951 (4-way)
The EPLD is an optional feature that temporarily maintains power for the entire subsystem for up to 50 seconds to minimize the impact of a power line disturbance. This feature is highly recommended for areas where the power source is not reliable. This feature adds two separate pieces of hardware to the DS8000:
v For each PPS in each model, a booster module is added. When the BBUs supply
power to the primary power bus, battery power is fed into the booster module, which in turn keeps disk enclosure power present.
v Batteries will be added to expansion models that do not already have them. The
base model and the first expansion model will already have BBUs but subsequent expansion models will not include BBUs until the EPLD feature is installed.
With the addition of this hardware, the DS8000 will be able to run for approximately 50 seconds on battery power before the processor enclosures begin to copy NVS data to internal disk and then shut down. This allows for up to a 50-second interruption to line power with no outage to the DS8000.
Model 95E (expansion model)
Figure 5 on page 9 shows the expansion model configuration of a Model 95E and the increased number of storage enclosures 1 that can be added.
The back side of the expansion model is the model power area. The expansion models do not contain rack power control cards; these cards are only present in the base model.
Each expansion model contains two primary power supplies (PPSs) 4. They provide a redundant 208 V dc power distribution to the rack to convert the ac input into dc power.
The power area contains a maximum of two BBUs 3. The expansion model also provides power distribution units 5.
Note: The power area can contain zero or two BBUs, depending on your
configuration. The first expansion model requires two BBUs (with or without EPLD). The second expansion model requires two BBUs (with EPLD) and no BBUs without EPLD.
In a DS8800 4-way system, the first expansion model has four I/O enclosures 2 (with adapters). A second expansion model has no I/O enclosures. (You cannot add an expansion model to a DS8800 2-way system.)
Figure 5. Expansion model (front and back views) of a Model 95E
Figure 6 on page 10 shows an expansion model configured only with drives:
v Storage enclosures (front) and power supplies to the storage enclosures and FCIC cards (back) 1
v Battery backup units (BBUs) 2
v Primary power supplies (PPSs) 3
v Power distribution units 4
Figure 6. Expansion model configured with drives
DS8800 model 951 and 95E comparison
With a dual 4-way processor complex, the DS8800 Model 951 supports up to 240 disk drives for a maximum capacity of up to 216 TB. With the introduction of the 900 GB 10K drive and the 3 TB 7.2K drive, the maximum capacities have changed for the base model (951) and expansion frames. The base model now supports up to 120 3.5" disk drives for a maximum capacity of up to 360 TB. It also supports up to 384 GB of processor memory with up to eight Fibre Channel/FICON adapters. The DS8800 95E is an optional expansion model. Up to three expansion models can be added. The 95E scales with 3.5" drives as follows:
v With one DS8800 Model 95E expansion unit, the DS8800 Model 951 (4-way) supports up to 288 3.5" disk drives, for a maximum capacity of up to 864 TB, and up to 16 Fibre Channel/FICON adapters.
v With two DS8800 Model 95E expansion units, the DS8800 Model 951 (4-way) supports up to 528 3.5" disk drives, for a maximum capacity of up to 1.58 PB, and up to 16 Fibre Channel/FICON adapters.
v With three DS8800 Model 95E expansion units, the DS8800 Model 951 (4-way) supports up to 768 3.5" disk drives, for a maximum capacity of up to 2.3 PB, and up to 16 Fibre Channel/FICON adapters.
The 95E scales with 2.5" drives as follows (a worked capacity check appears after this list):
v With one DS8800 Model 95E expansion unit, the DS8800 Model 951 (4-way) supports up to 576 2.5" disk drives, for a maximum capacity of up to 518 TB, and up to 16 Fibre Channel/FICON adapters.
v With two DS8800 Model 95E expansion units, the DS8800 Model 951 (4-way) supports up to 1,056 2.5" disk drives, for a maximum capacity of up to 950 TB, and up to 16 Fibre Channel/FICON adapters.
v With three DS8800 Model 95E expansion units, the DS8800 Model 951 (4-way) supports up to 1,536 2.5" disk drives, for a maximum capacity of up to 1.4 PB, and up to 16 Fibre Channel/FICON adapters.
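The raw capacities quoted in the two lists above can be reproduced with a short Python sketch. It assumes that the 3.5-inch figures are based on the 3 TB nearline drive and the 2.5-inch figures on the 900 GB drive, the largest drives of each form factor listed earlier in this chapter; the results are raw capacity, not usable capacity after RAID formatting and sparing.

# Raw-capacity check for the DS8800 Model 951 + 95E scaling figures above.
# Assumptions: 3.5" figures use 3 TB drives; 2.5" figures use 900 GB (0.9 TB) drives.
def raw_capacity_tb(drive_count, drive_tb):
    """Return raw capacity in TB for a given drive count and per-drive capacity."""
    return drive_count * drive_tb

lff_configs = [(288, 3.0), (528, 3.0), (768, 3.0)]    # 3.5" drive counts, TB per drive
sff_configs = [(576, 0.9), (1056, 0.9), (1536, 0.9)]  # 2.5" drive counts, TB per drive

for drives, size in lff_configs + sff_configs:
    tb = raw_capacity_tb(drives, size)
    print(f"{drives} drives x {size} TB = {tb:.0f} TB raw (~{tb / 1000:.2f} PB)")

The output matches the quoted maximums; for example, 1,056 x 0.9 TB is approximately 950 TB, and 1,536 x 0.9 TB is approximately 1.4 PB.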
For more information about comparison values for Model 951 and 95E, see “Overview of physical configurations” on page 89.
DS8800 storage enclosure overview
The DS8800 includes two types of high-density storage enclosure, the 2.5" small form factor (SFF) enclosure and the new 3.5" large form factor (LFF) enclosure.
The previously introduced DS8800 high-density storage enclosure is a small form factor (SFF) drive enclosure. The 3.5" enclosure is a large form factor (LFF) drive enclosure. Both enclosures have the following features:
v Fibre Channel (FC) cable connection
v A high performance 8 Gbps optical FC-AL fabric attachment from the device adapter to the storage expansion enclosure
v An enclosure control card providing an FC to SAS bridge, matching industry standards
v Device adapter (DA) attachment, which supports dual-trunked Fibre Channel, allowing for higher bandwidth and an extra layer of redundancy
Figure 7 and Figure 8 show the front and back views of the storage enclosure. This enclosure supports 24 SFF, 2.5" SAS drives. The storage enclosure is 2U (EIA units) or 3.5" in height. The front of the enclosure contains slots for 24 drives, and also contains enclosure-level status indicators. The back of the enclosure contains:
v Two power supplies, N+1 for redundancy, each with four cooling elements at
the drawer-level.
v Two interface controller (IC) cards, N+1 for redundancy.
All power and signal cables exit from the back of the enclosure. An exception to this would be the use of overhead cable management with feature code 1400 (top exit bracket for Fibre cable). The top exit bracket for Fibre cable is an optional configuration.
Figure 7. Front view of the storage expansion enclosure
Figure 8. Back view of the storage expansion enclosure
The DS8800 now supports a new high-density and lower-cost large form factor (LFF) storage enclosure. This enclosure accepts 3.5" drives, offering 12 drive slots. The previously introduced SFF enclosure offers 24, 2.5" drive slots. The LFF enclosure has a different appearance from the front than does the SFF enclosure, with its 12 drives slotting horizontally rather than vertically.
Figure 9. Front view of the LFF storage expansion enclosure
The following notes provide additional information about the labeled components in Figure 9:
1. Status indicators for the enclosure
2. 12 LFF drives, 3.5"
Performance features
Features of the expansion storage enclosure include:
v Support for up to four enclosures per loop
v Redundant, integrated power and cooling
v 6 Gbps SAS data transfer rate to the disk drives
v Support for optical 8 Gbps FC-AL
v FC-AL to SAS protocol conversion
Power supply and cooling
Power supply features and requirements include:
v The power and cooling system is composed of two redundant power supply units, accepting 208 V dc voltage.
v Each power supply contains fans to supply cooling for the entire enclosure.
Frame configuration notes
There are two types of frame configurations supported, which are commonly designated as A and B frames. An A frame is the base configuration (Model 951) and contains not just power and storage but also the I/O enclosures, Ethernet switch, Hardware Management Console (HMC), and I/O bays. If more storage is needed than the A frame (base Model 951) can provide, the next step is to add a B frame, or expansion (Model 95E). The 95E contains more storage and more I/O bays, increasing the number of device adapter cards you can select. Up to three expansion models can be added to the configuration.
Note: The design of this enclosure assumes that all drives used within it are homogeneous; that is, they have the same interface speed, the same type (all native serial-attached SCSI (SAS), all Nearline SAS (NL), or all solid-state drives (SSDs)), and the same capacity. However, drives with differing speeds can be combined if the capacity is the same. An exception is that intermixing encrypting and non-encrypting drives is not supported.
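The intermix rules in the preceding note can be restated as a small check. The Python sketch below is illustrative only; the data structure and function are hypothetical and are not an IBM tool or API. It encodes the rules as stated: drives in one enclosure must have the same capacity, differing speeds are tolerated when the capacity matches, and encrypting and non-encrypting drives must never be intermixed.

# Illustrative restatement of the enclosure drive-intermix rules in the note above.
# The Drive class and intermix_allowed helper are hypothetical, not an IBM API.
from dataclasses import dataclass

@dataclass
class Drive:
    capacity_gb: int    # for example, 600
    speed_krpm: float   # for example, 10 or 15 (may differ if capacity matches)
    encrypted: bool     # Full Disk Encryption drive or not

def intermix_allowed(drives):
    """Return True if the listed drives may share one storage enclosure
    under the rules stated in the note above (hypothetical helper)."""
    if not drives:
        return True
    same_capacity = len({d.capacity_gb for d in drives}) == 1
    same_encryption = len({d.encrypted for d in drives}) == 1
    return same_capacity and same_encryption

# Example with illustrative values: same capacity with different speeds is acceptable;
# mixing encrypting and non-encrypting drives is not.
print(intermix_allowed([Drive(600, 10, False), Drive(600, 15, False)]))  # True
print(intermix_allowed([Drive(600, 10, False), Drive(600, 10, True)]))   # False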
DS8800 (Business Class Cabling feature)
The IBM System Storage DS8800 business class cabling feature offers a streamlined, lower cost configuration than the standard configuration. The DS8800 business class cabling feature reconfigures the Model 951, reducing the number of installed device adapters and I/O enclosures while increasing the number of storage enclosures attached to the remaining device adapters. The business class option allows a system to be configured with more drives per device adapter, reducing configuration cost and increasing adapter usage.
The DS8800 Model 951 with business class cabling has the following features:
v Cost-optimized cabling
v Dual two-way processor complex
v Up to 128 GB of processor memory on a two-way configuration
Note: A 16 GB processor memory option is available with feature code 4211 for
the two-way configuration, business class cabling feature.
v Up to 4 host adapters or up to 32 host ports, FCP (Fibre Channel protocol) or
FICON
The DS8800 Model 951 Business Class Cabling feature supports up to 240 disk drives. For more information, including maximum storage capacities, see "Overview of physical configurations" on page 89. The cabling of the expansion frame remains the same for both the standard and business class.
Note: The DS8800 high-density business class disk drive cable option is an
optional feature (feature code 1250). Up to 240 drives are supported on a two-way, with single-phase power optional.
DS8000 model conversion limitations
Model conversions are not supported on the DS8800. There are several limitations for the DS8700, noted below.
If a third and fourth expansion unit are attached to a DS8700, the following considerations need to be made:
v If the primary in a Global Mirror relationship has a third and fourth expansion frame, then FlashCopy pairs are limited to 1,000
v Global Mirror and Metro/Global Mirror configurations are not supported in System i environments

Machine types overview

The DS8700 and DS8800 include several machine types. Order a hardware machine type for the storage unit hardware and a corresponding function authorization machine type for the licensed functions that are planned for use.
Table 4 shows the available hardware machine types and their corresponding function authorization machine types.
Table 4. Available hardware and function authorization machine types

Hardware machine type                        Corresponding function authorization machine type
Machine type 2421 (1-year warranty period)   Machine type 2396 (1-year warranty period)
Machine type 2422 (2-year warranty period)   Machine type 2397 (2-year warranty period)
Machine type 2423 (3-year warranty period)   Machine type 2398 (3-year warranty period)
Machine type 2424 (4-year warranty period)   Machine type 2399 (4-year warranty period)

Available hardware models: 941, 94E, 951, and 95E
Available function authorization models: LFA
Because the 242x hardware machine types are built upon the 2107 machine type and microcode, some interfaces might display 2107. This display is normal, and is no cause for alarm. The 242x machine type that you purchased is the valid machine type.

Features overview

The DS8700 and the DS8800 are designed to provide you with high performance, connectivity, and reliability so that your workload can be easily consolidated into a single storage subsystem.
The following list provides an overview of some of the features that are associated with the DS8700 and the DS8800:
Note: Additional specifications are provided at www.ibm.com/systems/storage/disk/ds8000/specifications.html.

Storage pool striping (rotate extents)
Storage pool striping is supported on the DS8000 series, providing improved performance. The storage pool striping function stripes new volumes across all ranks of an extent pool. The striped volume layout reduces workload skew in the system without requiring manual tuning by a storage administrator. This approach can increase performance with minimal operator effort. With storage pool striping support, the system
automatically performs close to highest efficiency, which requires little or no administration. The effectiveness of performance management tools is also enhanced, because imbalances tend to occur as isolated problems. When performance administration is required, it is applied more precisely.
You can configure and manage storage pool striping using the DS Storage Manager, DS CLI, and DS Open API. The default of the extent allocation method (EAM) option that is applied to a logical volume is now rotate extents. The rotate extents option (storage pool striping) is designed to provide the best performance by striping volume extents across the ranks in an extent pool. Existing volumes can be reconfigured nondisruptively by using manual volume migration and volume rebalance.
The storage pool striping function is provided with the DS8000 series at no additional charge.
POWER6 processor technology
DS8700
Compared to the IBM POWER5+ processor in previous models, the DS8700 supports IBM POWER6 processor technology that can enable over a 50% performance improvement in I/O operations per second in transaction processing workload environments. Additionally, sequential workloads can receive as much as 150% bandwidth improvement.
DS8800
The DS8800 supports IBM POWER6+ processor technology to help support high performance. It can be equipped with a 2-way processor complex or a 4-way processor complex for the highest performance requirements.
Solid-state drives (SSDs)
The DS8000 series can accommodate solid-state drives (SSDs) alongside traditional spinning disk drives to support multitier environments. SSDs are the best choice for I/O-intensive workloads. They are installed in disk enclosures and have the same form factor as the traditional disks.
SSDs are available in the following capacities:
DS8800 (300 GB)
Half-drive set and disk drive set
DS8800 (400 GB)
Half-drive set and disk drive set
DS8700 (600 GB)
Half drive set and disk drive set
Notes: You can order SSDs in install groups of 8 drives (half disk drive sets) or 16 drives (disk drive sets). Only one half disk drive set feature is allowed per model. Half disk drive sets and disk drive sets cannot be intermixed within a model. If you require a second set of 8 SSDs, then a conversion from a half disk drive set to a full disk drive set is required.
Industry standard disk drives
The DS8000 series models offer a selection of disk drives. The DS8800 and DS8700 offer the following selection:
DS8800
Along with SSD drives, the DS8800 supports Serial Attached SCSI (SAS) drives, available in the following capacities:
v 300 GB 15K drive set
v 900 GB 10K drive set
v 3 TB 7.2K drive set
DS8700
Fibre Channel, SSD, and Serial Advanced Technology Attachment (SATA) drives. SATA drives are both the largest and slowest of the drives available for the DS8000. SATA drives are best used for applications that favor capacity optimization over performance. (2 TB SATA drive support is available for the DS8700.)
Notes:
1. RAID 5 implementations are not compatible with the use of SATA disk drives. RAID 6 and RAID 10 implementations are compatible with the use of SATA disk drives.
2. SATA and nearline SAS drives are not recommended for Space-Efficient FlashCopy repository data because of performance impacts.
Sign-on support using Lightweight Directory Access Protocol (LDAP)
The DS8000 provides support for both unified sign-on functions (available through the DS Storage Manager), and the ability to specify an existing Lightweight Directory Access Protocol (LDAP) server. The LDAP server can have existing users and user groups that can be used for authentication on the DS8000.
Setting up unified sign-on support for the DS8000 is achieved using the Tivoli Storage Productivity Center. See the Tivoli Storage Productivity Center Information Center for more information.
Note: Other supported user directory servers include IBM Directory Server and Microsoft Active Directory.
Easy Tier
Easy Tier is designed to determine the appropriate tier of storage based on data access requirements and then automatically and nondisruptively move data, at the subvolume or sub-LUN level, to the appropriate tier on the DS8000. Easy Tier is an optional feature on the DS8700 and the DS8800 that offers enhanced capabilities through features such as auto-rebalancing, hot spot management, rank depopulation, support for extent space-efficient (ESE) volumes, auto performance rebalance in both homogeneous and hybrid pools, and manual volume migration.
Multitenancy support (resource groups)
Resource groups functions provide additional policy-based limitations to DS8000 users, which in conjunction with the inherent volume addressing limitations support secure partitioning of copy services resources between user-defined partitions. The process of specifying the appropriate limitations is performed by an administrator using resource groups functions. DS Storage Manager (GUI) and DS CLI support is also available for resource groups functions.
It is feasible that multitenancy can be supported in certain environments without the use of resource groups provided the following constraints are met:
v Either copy services must be disabled on all DS8000 units that share the same SAN (local and remote sites), or the landlord must configure the operating system environment on all hosts (or host LPARs) attached to a SAN which has one or more DS8000 units, so that no tenant can issue copy services commands.
v The z/OS Distributed Data Backup feature is disabled on all DS8000 units in the environment (local and remote sites).
v Thin provisioned volumes (ESE or TSE) are not used on any DS8000 unit in the environment (local and remote sites).
v On zSeries systems there can be no more than one tenant running in a given LPAR, and the volume access must be controlled so that a CKD base volume or alias volume is only accessible by a single tenant’s LPAR or LPARs.
I/O Priority Manager
The I/O Priority Manager can help you effectively manage quality of service levels for each application running on your system. This feature aligns distinct service levels to separate workloads in the system to help maintain the efficient performance of each DS8000 volume. The I/O Priority Manager detects when a higher-priority application is hindered by a lower-priority application that is competing for the same system resources. This might occur when multiple applications request data from the same disk drives. When I/O Priority Manager encounters this situation, it delays lower-priority I/O data to assist the more critical I/O data in meeting their performance targets.
Use this feature when you are looking to consolidate more workloads on your system and need to ensure that your system resources are aligned to match the priority of your applications. This feature is useful in multitenancy environments.
Note: To enable monitoring, use DS CLI commands to set I/O Priority Manager to "Monitor" or to "MonitorSNMP," or use the DS Storage Manager to set I/O Priority Manager to "Monitor" on the Advanced tab of the Storage Image Properties page. The I/O Priority Manager feature can be set to "Managed" or "ManagedSNMP," but the I/O priority is not managed unless the I/O Priority Manager LIC key is activated.

Peripheral Component Interconnect Express® (PCIe) I/O enclosures
The DS8700 and DS8800 processor complexes use a PCIe infrastructure to access I/O enclosures. PCIe is a standard-based replacement to the general-purpose PCI expansion bus. PCIe is a full duplex serial I/O interconnect. Transfers are bi-directional, which means data can flow to and from a device simultaneously. The PCIe infrastructure uses a non-blocking switch so that more than one device can transfer data.
In addition, to improve I/O Operations Per Second (IOPS) and sequential read/write throughput, the I/O enclosures in Model 951 are directly connected to the internal servers through point-to-point PCIe cables. I/O enclosures no longer share common "loops," they connect directly to each server through separate cables and link cards, thus enabling a performance improvement over previous models.
Four-port 8 Gbps device adapters
The DS8800 includes high-function, PCIe-attached, 4-port, 8 Gbps FC-AL device adapters. The device adapters attach the storage enclosures to the DS8800.
Four- and eight-port Fibre-Channel/FICON adapters
DS8700
The DS8700 supports high-function PCIe-attached four-port, 4 and 8 Gbps Fibre Channel/FICON host adapters.
The DS8700 model with a 2-way configuration offers up to 16 host adapters (64 Fibre Channel Protocol/FICON ports) and the DS8700 model with a 4-way configuration offers up to 32 host adapters (128 FCP/FICON ports).
DS8800
The DS8800 supports four- and eight-port Fibre Channel/FICON host adapters. These 8 Gbps host adapters are offered in longwave and shortwave. These adapters are designed to offer up to 100% improvement in single-port MBps throughput performance. This improved performance helps contribute to cost savings by reducing the number of required host ports.
The DS8800 supports up to 16 host adapters maximum (up to 128 Fibre Channel Protocol/FICON ports) on a 4-way configuration.
Note: The DS8700 and DS8800 support connections from adapters and switches that are 8 Gbps, but auto-negotiate them to 4 Gbps or 2 Gbps, as needed.
High performance FICON
A high performance FICON feature is available that allows FICON extension (FCX) protocols to be used on Fibre Channel I/O ports that are enabled for the FICON upper layer protocol. The use of FCX protocols provides a significant reduction in channel utilization. This reduction can allow more I/O input on a single channel and can also reduce the number of FICON channels that are required to support a given workload.
IBM Standby Capacity on Demand
Using the IBM Standby Capacity on Demand (Standby CoD) offering, you can install inactive disk drives that can be easily activated as business needs require. To activate them, you logically configure the disk drives for use, a nondisruptive activity that does not require intervention from IBM. Upon activation of any portion of the Standby CoD disk drive set, you must place an order with IBM to initiate billing for the activated set. At that time, you can also order replacement Standby CoD disk drive sets.
DS8700
The DS8700 offers up to four Standby CoD disk drive sets (64 disk drives) that can be factory- or field-installed into your system.
DS8800
The DS8800 offers up to six Standby CoD disk drive sets (96 disk drives) that can be factory- or field-installed into your system.
Note: Solid-state disk drives are unavailable as Standby Capacity on Demand disk drives.

Online Information Center
The online Information Center is an information database that provides you with the opportunity to quickly familiarize yourself with the major aspects of the DS8000 series and to easily recognize the topics for which you might require more information. It provides information regarding user assistance for tasks, concepts, reference, user scenarios, tutorials, and other types of user information. Because the information is all in one place rather than across multiple publications, you can access the information that you need more efficiently and effectively.
For the latest version of the online Information Center, go to http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp

Limitations

The following list describes known limitations for DS8000 storage units:
v For the Dynamic Volume Expansion function, volumes cannot be in Copy Services relationships (point-in-time copy, FlashCopy SE, Metro Mirror, Global Mirror, Metro/Global Mirror, and z/OS Global Mirror) during expansion.
v The size limit for single volumes in a Copy Services relationship is 2 TB. This limit does not apply to extents (when part of multiple volumes).
v The amount of physical capacity within a 242x system that can be logically configured for use is enforced by the 242x Licensed Machine Code to maintain compliance with the extent of IBM authorization established for licensed functions activated on the machine.
v The deactivation of an activated licensed function, or a lateral change or reduction in the license scope, is a nondisruptive activity that occurs at the next machine IML. For example:
– A lateral change is defined as changing the license scope from FB to CKD or from CKD to FB.
– A reduction is defined as changing the license scope from ALL to FB or from ALL to CKD.
v The following activities are disruptive:
– Addition of the Earthquake Resistance Kit feature 1906. This feature is not supported on the DS8700 but is supported on the DS8800.
– Removal of an expansion model from the base model. Data is not preserved during this activity.
v Some DS8000 series functions are unavailable or are not supported in all environments. Go to the System Storage Interoperation Center (SSIC) website at www.ibm.com/systems/support/storage/config/ssic for the most current information on supported hosts, operating systems, adapters, and switches.
v Plant configured systems with Encryption Drive Set support (feature number 1751) can support a field installation of encrypted drives. Existing DS8000 systems or systems lacking the encryption drive set support feature cannot support encryption drive sets.
v In a DS8700, 8 drives are the minimum number of SSDs (RAID-5 only) that are supported. A single DDM failure in a minimum configuration will trigger the call home feature.
v SSD drive sets are not supported in RAID-6 or RAID-10 configurations.
v Nearline SAS drives are not supported on RAID-5 and RAID-10 configurations.
v In a tiered extent pool (with Enterprise and SSD drives), Extent Space Efficient (ESE) volumes cannot allocate extents to SSD ranks.
v Conversions between warranty machine types are not supported.
v The 16 GB processor memory feature (4211) does not support:
– SSD drive sets
– Encryption
– Copy Services functions
– DS8800 expansion frames
v Thin provisioning functions are not supported on System z/OS volumes.

DS Storage Manager limitations

The following section includes information about the DS Storage Manager (GUI).
The DS Storage Manager (GUI) can be used on different versions of Internet browsers. Supported browsers include:
v Mozilla Firefox 3.0 - 3.4 or 3.6.
v Microsoft Internet Explorer 7 or 8.

Note: Some panels might not display properly in Internet Explorer 7 when using DS Storage Manager Version 6 Releases 2, 3, and 3.1. Problems have been reported for the following panels:
Volumes > FB Volumes (Fixed block volumes — Main page)
Access > Remote Authentication (Remote authentication — Main page)
Access > Users (Users — Main page)
To avoid problems with these panels, use Internet Explorer 8 with the DS Storage Manager.

You must select appropriate browser security settings to open the DS Storage Manager with a browser. Additionally, if you access the DS Storage Manager through the Tivoli Storage Productivity Center® using Internet Explorer 7, you must configure Internet Explorer for that process. For instructions on how to perform these actions, visit the IBM System Storage DS8000 Information Center and select Installing > DS Storage Manager postinstallation instructions > Internet browser support.

DS8000 physical footprint

The physical footprint dimensions, caster locations, and cable openings for a DS8800 unit help you plan your installation site.

Figure 10 on page 21 shows the overall physical footprint of a DS8800 unit.
Figure 10. DS8000 physical footprint. Dimensions are in centimeters (inches).
The following dimensions are labeled on Figure 10:
1. Front cover width
2. Front service clearance
3. Back cover widths
4. Back service clearance
5. Clearance to allow front cover to open
6. Distance between casters
7. Depth of frame without covers
8. Depth of frame with covers
9. Minimum dimension between casters and outside edges of frames
10. Distance from the edge to the front of the open cover

Chapter 2. Hardware and features

This chapter helps you understand the hardware components and features available in the DS8000 series.
This chapter contains information about the hardware and the hardware features in your DS8000. It includes hardware topics such as the IBM System Storage Hardware Management Console, the storage complexes, available interfaces, device drivers, and storage disks. It also contains information on the features supported by the DS8000 hardware. Use the information in this chapter to assist you in planning, ordering, and in the management of your DS8000 hardware and its hardware features.

Storage complexes

A storage complex is a set of storage units that are managed by management console units.
You can associate one or two management console units with a storage complex. Each storage complex must use at least one of the internal management console units in one of the storage units. You can add a second management console for redundancy. The second storage management console can be either one of the internal management console units in a storage unit or an external management console.

IBM System Storage Hardware Management Console

The management console is the focal point for hardware installation, configuration, and maintenance activities.
The management console is a dedicated notebook that is physically located (installed) inside your DS8000 storage unit, and can automatically monitor the state of your system, notifying you and IBM when service is required. The DS8000 Storage Manager is accessible from IBM System Storage Productivity Center (SSPC) through the IBM Tivoli Storage Productivity Center GUI. SSPC uses Tivoli Storage Productivity Center Basic Edition, which is the software that drives SSPC and provides the capability to manage storage devices and host resources from a single control point.
In addition to using Tivoli Storage Productivity Center, the GUI can also be accessed from any location that has network access using a web browser. Supported web browsers include:
v Mozilla Firefox 3.0 - 3.4 or 3.6
v Microsoft Internet Explorer 7 or 8.

Note: Some panels might not display properly in Internet Explorer 7 when using DS Storage Manager Version 6 Releases 2, 3, and 3.1. Problems have been reported for the following panels:
Volumes > FB Volumes (Fixed block volumes — Main page)
Access > Remote Authentication (Remote authentication — Main page)
Access > Users (Users — Main page)
To avoid problems with these panels, use Internet Explorer 8 with the DS Storage Manager.

The first management console in a storage complex is always internal to the 242x machine type, Model 941 and Model 951. To provide continuous availability of your access to the management console functions, use a second management console, especially for storage environments using encryption. For more information, see “Best practices for encrypting storage environments” on page 79.

This second management console can be provided in two ways:
v External (outside the 242x machine type, Model 941 and Model 951). This console is installed in the customer-provided rack. It uses the same hardware as the internal management console.

Note: The external HMC must be within 50 feet of the base model.

v Internal. The internal management console from each of two separate storage facilities can be connected in a "cross-coupled" manner. Plan for this configuration to be accomplished during the initial installation of the two storage facilities to avoid additional power cycling. (Combining two previously installed storage facilities into the cross-coupled configuration at a later date requires a power cycle of the second storage facility.) Ensure that you maintain the same machine code level for all storage facilities in the cross-coupled configuration.

RAID implementation

RAID implementation improves data storage reliability and performance.
Redundant array of independent disks (RAID) is a method of configuring multiple disk drives in a storage subsystem for high availability and high performance. The collection of two or more disk drives presents the image of a single disk drive to the system. If a single device failure occurs, data can be read or regenerated from the other disk drives in the array.
RAID implementation provides fault-tolerant data storage by storing the data in different places on multiple disk drive modules (DDMs). By placing data on multiple disks, I/O operations can overlap in a balanced way to improve the basic reliability and performance of the attached storage devices.
Physical capacity can be configured as RAID 5, RAID 6 (only on the DS8000 series), RAID 10, or a combination of RAID 5 and RAID 10. RAID 5 can offer excellent performance for most applications, while RAID 10 can offer better performance for selected applications, in particular, high random, write content applications in the open systems environment. RAID 6 increases data protection by adding an extra layer of parity over the RAID 5 implementation.
You can reconfigure RAID 5 disk groups as RAID 10 disk groups or vice versa.

RAID 5 overview

RAID 5 is a method of spreading volume data across multiple disk drives. The DS8000 series supports RAID 5 arrays.
RAID 5 increases performance by supporting concurrent accesses to the multiple DDMs within each logical volume. Data protection is provided by parity, which is stored throughout the drives in the array. If a drive fails, the data on that drive can be restored using all the other drives in the array along with the parity bits that were created when the data was stored.

RAID 6 overview

RAID 6 is a method of increasing the data protection of arrays with volume data spread across multiple disk drives. The DS8000 series supports RAID 6 arrays.
RAID 6 increases data protection by adding an extra layer of parity over the RAID 5 implementation. By adding this protection, RAID 6 can restore data from an array with up to two failed drives. The calculation and storage of extra parity slightly reduces the capacity and performance compared to a RAID 5 array. RAID 6 is suitable for storage using archive class DDMs.

RAID 10 overview

RAID 10 provides high availability by combining features of RAID 0 and RAID 1. The DS8000 series supports RAID 10 arrays.
RAID 0 increases performance by striping volume data across multiple disk drives. RAID 1 provides disk mirroring, which duplicates data between two disk drives. By combining the features of RAID 0 and RAID 1, RAID 10 provides a second optimization for fault tolerance.
RAID 10 implementation provides data mirroring from one DDM to another DDM. RAID 10 stripes data across half of the disk drives in the RAID 10 configuration. The other half of the array mirrors the first set of disk drives. Access to data is preserved if one disk in each mirrored pair remains available. In some cases, RAID 10 offers faster data reads and writes than RAID 5 because it is not required to manage parity. However, with half of the DDMs in the group used for data and the other half used to mirror that data, RAID 10 disk groups have less capacity than RAID 5 disk groups.
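As a rough illustration of these capacity trade-offs, the following sketch compares the usable capacity of a hypothetical array of identical drives under each RAID level. The drive count and size are examples only, and the calculation ignores spares and formatting overhead, so it does not reflect actual DS8000 array formats.

# Rough illustration only: usable capacity of an n-drive array of identical
# drives, ignoring spares and formatting overhead (actual DS8000 array
# formats differ). Not an IBM capacity-planning tool.
def usable_drives(n: int, raid_level: str) -> float:
    if raid_level == "RAID 5":
        return n - 1        # one drive's worth of capacity holds parity
    if raid_level == "RAID 6":
        return n - 2        # two drives' worth of capacity holds parity
    if raid_level == "RAID 10":
        return n / 2        # half of the drives mirror the other half
    raise ValueError(f"unknown RAID level: {raid_level}")

drives, drive_tb = 8, 0.9   # for example, eight 900 GB drives
for level in ("RAID 5", "RAID 6", "RAID 10"):
    print(f"{level}: ~{usable_drives(drives, level) * drive_tb:.1f} TB usable")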

DS8000 Interfaces

This section describes the following interfaces:
v IBM System Storage DS Storage Manager
v IBM System Storage DS Command-Line Interface (CLI)
v IBM System Storage DS Open application programming interface
v IBM Tivoli Storage Productivity Center
v IBM Tivoli Storage Productivity Center for Replication
Note: For DS8000, you can have a maximum of 256 interfaces of any type connected at one time.

System Storage DS® Storage Manager

The DS Storage Manager is an interface that is used to perform logical configurations and Copy Services management functions.
The DS Storage Manager can be accessed through the Tivoli Storage Productivity Center Element Manager from any network-connected workstation with a
supported browser. The DS Storage Manager is installed as a GUI for the Windows and Linux operating systems. It can be accessed from any location that has network access using a Web browser.
Note: Supported browsers include:
v Mozilla Firefox 3.0 - 3.4 or 3.6
v Microsoft Internet Explorer 7 or 8
For more information, see the IBM DS8000 Information Center (http://publib.boulder.ibm.com/infocenter/dsichelp/ds8000ic/index.jsp) and search on the topic Internet browser support, which is located in the Installing section.
Note: Some panels might not display properly in Internet Explorer 7 when using DS Storage Manager Version 6 Releases 2, 3, and 3.1. Problems have been reported for the following panels:
Volumes > FB Volumes (Fixed block volumes—Main page)
Access > Remote Authentication (Remote authentication—Main page)
Access > Users (Users—Main page)
To avoid problems with these panels, use Internet Explorer 8 with the DS Storage Manager.
Access the GUI from a browser using the IP_address:P_port on your DS8000 HMC.

Tivoli Storage Productivity Center

The Tivoli Storage Productivity Center is an integrated software solution that can help you improve and centralize the management of your storage environment through the integration of products. With the Tivoli Storage Productivity Center (TPC), it is possible to manage and fully configure multiple DS8000 storage systems from a single point of control.
Note: System Storage Productivity Center (SSPC) uses Tivoli Storage Productivity Center Basic Edition to manage storage devices.
The DS Storage Manager is installed as a GUI for the Windows and Linux operating systems. In addition to using TPC, it can also be accessed from any location that has network access using a web browser. Supported web browsers include:
v Mozilla Firefox 3.0 - 3.4 or 3.6
v Microsoft Internet Explorer 7 or 8.
Note: Some panels might not display properly in Internet Explorer 7 when using DS Storage Manager Version 6 Releases 2, 3, and 3.1. Problems have been reported for the following panels:
Volumes > FB Volumes (Fixed block volumes — Main page)
Access > Remote Authentication (Remote authentication — Main page)
Access > Users (Users — Main page)
To avoid problems with these panels, use Internet Explorer 8 with the DS Storage Manager.
Tivoli Storage Productivity Center simplifies storage management by providing the following benefits:
v Centralizing the management of heterogeneous storage network resources with IBM storage management software
v Providing greater synergy between storage management software and IBM storage devices
v Reducing the number of servers that are required to manage your software infrastructure
v Migrating from basic device management to storage management applications that provide higher-level functions
With the help of agents, Tivoli Storage Productivity Center discovers the devices to which it is configured. It then can start an element manager that is specific to each discovered device, and gather events and data for reports about storage management.

DS command-line interface

The IBM System Storage DS CLI can be used to create, delete, modify, and view Copy Services functions and the logical configuration of a storage unit. These tasks can be performed either interactively, in batch processes (operating system shell scripts), or in DS CLI script files. A DS CLI script file is a text file that contains one or more DS CLI commands and can be issued as a single command. DS CLI can be used to manage logical configuration, Copy Services configuration, and other functions for a storage unit, including managing security settings, querying point-in-time performance information or status of physical resources, and exporting audit logs.
The DS CLI provides a full-function set of commands to manage logical configurations and Copy Services configurations. The DS CLI can be installed on and is supported in many different environments, including:
v AIX® 5.1, 5.2, 5.3, 6.1
v HP-UX 11.0, 11i, v1, v2, v3

Note: The DS CLI supports HP-UX 11iv3 only when the Legacy mode is enabled.

v HP Tru64 UNIX version 5.1, 5.1A
v Linux RedHat 3.0 Advanced Server (AS) and Enterprise Server (ES)
v Red Hat Enterprise Linux (RHEL) 4 and RHEL 5
v SuSE 8, SuSE 9, SuSE Linux Enterprise Server (SLES) 8, SLES 9, and SLES 10
v VMware ESX v3.0.1 Console
v Novell NetWare 6.5
v IBM System i® i5/OS® 5.3
v OpenVMS 7.3-1 (or newer)
v Sun Solaris 7, 8, and 9
v Microsoft Windows 2000, Windows Datacenter, Windows 2003, Windows Vista, Windows Server 2008, Windows XP, and Windows 7

DS Open Application Programming Interface

The DS Open Application Programming Interface (API) is a nonproprietary storage management client application that supports routine LUN management activities, such as LUN creation, mapping and masking, and the creation or deletion of RAID 5, RAID 6, and RAID 10 volume spaces.
The DS Open API supports these activities through the use of the Storage Management Initiative Specification (SMI-S), as defined by the Storage Networking Industry Association (SNIA).
The DS Open API helps integrate configuration management support into storage resource management (SRM) applications, which help you to use existing SRM applications and infrastructures. The DS Open API can also be used to automate configuration management through customer-written applications. Either way, the DS Open API presents another option for managing storage units by complementing the use of the IBM System Storage DS Storage Manager Web-based interface and the DS command-line interface.
Note: The DS Open API supports the IBM System Storage DS8000 series and is an
embedded component.
You can implement the DS Open API without using a separate middleware application, like the IBM System Storage Common Information Model (CIM) agent, which provides a CIM-compliant interface. The DS Open API uses the CIM technology to manage proprietary devices as open system devices through storage management applications. The DS Open API is used by storage management applications to communicate with a storage unit.

Tivoli Storage Productivity Center for Replication

Tivoli Storage Productivity Center for Replication facilitates the use and management of Copy Services functions such as the remote mirror and copy functions (Metro Mirror and Global Mirror) and the point-in-time function (FlashCopy).
Tivoli Storage Productivity Center for Replication provides a graphical interface that you can use for configuring and managing Copy Services functions across the DS8000, DS6000, and Enterprise Storage Server® (ESS) storage units. These data-copy services maintain consistent copies of data on source volumes that are managed by Replication Manager.
Tivoli Storage Productivity Center for Replication for FlashCopy, Metro Mirror, and Global Mirror support provides automation of administration and configuration of these services, operational control (starting, suspending, resuming), Copy Services tasks, and monitoring and managing of copy sessions.
Tivoli Storage Productivity Center for Replication is an option of the Tivoli Storage Productivity Center software program. If you are licensed for Copy Services functions, you can use Tivoli Storage Productivity Center for Replication to manage your data copy environment.
Notes:
1. Tivoli Storage Productivity Center for Replication operations can now be performed using the DS8000 hardware management console (HMC).
2. The use of Internet Protocol Version 4 (IPv4) and Internet Protocol Version 6 (IPv6) are both supported through the HMC ports.
For more information, visit the IBM Publications website using the following web address:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp

DS8000 hardware specifics

The DS8000 models offer a high degree of availability and performance through the use of redundant components that can be replaced while the system is operating. You can use the DS8000 models with a mix of different operating systems and clustered and nonclustered variants of the same operating systems.
Contributing to the high degree of availability and reliability are the structure of the DS8000 storage unit, the host systems it supports, and its processor memory and processor speeds.

Storage unit structure

The design of the storage unit, which contains the base model and the expansion models, contributes to the high degree of availability that is associated with the DS8000 series. The primary components that support high availability within the storage unit are the storage server, the processor complex, and the rack power control card.
Storage unit
The storage unit contains a storage server and one or more storage (disk) enclosures that are packaged in one or more racks with associated power supplies, batteries, and cooling.
Storage server
The storage server consists of two processor complexes, two or more I/O enclosures, and a pair of rack power control cards.
Processor complex
A processor complex controls and manages the storage unit to perform the function of the storage server. The two processor complexes form a redundant pair such that if either processor complex fails, the remaining processor complex performs all storage server functions.
Rack power control card
A redundant pair of rack power control (RPC) cards coordinate the power management within the storage unit. The RPC cards are attached to the service processors in each processor complex, the primary power supplies in each rack, and indirectly to the fan/sense cards and storage enclosures in each rack.
All DS8000 models include the IBM System Storage Multi-path Subsystem Device Driver (SDD). The SDD provides load balancing and enhanced data availability capability in configurations with more than one I/O path between the host server and the DS8000 series storage unit. Load balancing can reduce or eliminate I/O bottlenecks that occur when many I/O operations are directed to common devices using the same I/O path. The SDD can eliminate the single point of failure by automatically rerouting I/O operations when a path failure occurs.

Disk drives

The DS8700 and the DS8800 provide you with the following choice of disk drives.
DS8700
The following drives are available in a 3.5" form factor.
Fibre Channel:
v 300 GB, 15K RPM
v 450 GB, 15K RPM
v 600 GB, 15K RPM
Serial Advanced Technology Attachment (SATA):
v 2 TB, 7.2K RPM
Fibre Channel drive types with Full Disk Encryption:
v 300 GB, 15K RPM
v 450 GB, 10K RPM
Fibre Channel Solid State Drives (SSDs):
v 600 GB

DS8800
The following drives are available in a 2.5" form factor.
Serial-attached SCSI (SAS):
v 146 GB, 15K RPM
v 300 GB, 15K RPM
v 450 GB, 10K RPM
v 600 GB, 10K RPM
v 900 GB, 10K RPM
SAS drive types with Full Disk Encryption (FDE):
v 146 GB, 15K RPM
v 300 GB, 15K RPM
v 450 GB, 10K RPM
v 600 GB, 10K RPM
v 900 GB, 10K RPM
2.5" SSD:
v 300 GB (nonencryption)
v 400 GB (FDE and nonencryption)
The following is available in a 3.5" form factor.
SAS drive set:
v 3 TB 7.2K RPM (includes nonencryption, FDE and FDE Standby CoD)

Host attachment overview

The DS8000 series provides various host attachments so that you can consolidate storage capacity and workloads for open-systems hosts and System z.
The DS8000 series provides extensive connectivity using Fibre Channel adapters across a broad range of server environments.
Host adapter intermix support
Both 4-port and 8-port host adapters (HAs) are available in the DS8800. The DS8700 supports only the 4-port HA, but like the DS8800, it can use the same 8 Gbps, 4-port HA for improved performance.
DS8700
The DS8700 supports a 4-port, 4 Gbps or 8 Gbps HA plugged into either the C1 or C4 slots of the I/O enclosures; slots C2 and C5 support only 4 Gbps HAs.
If all of the C1 and C4 slots are occupied by a previously installed 4 Gbps HA, you can remove the existing HA from the C1 or C4 slot to install the 4-port 8 Gbps HA. The removed 4 Gbps HA might be reinstalled to an available C2 or C5 slot. The DS8700 can support up to 128 Fibre Channel ports, but only 64 of those can be 8 Gbps.
DS8800
The DS8800 supports both 4-port and 8-port, 8 Gbps HAs. It does not support 4 Gbps HAs. The position and the plug order in the I/O enclosure of the storage unit is the same as the DS8700. No HAs are supported in slots C2 and C5.
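A small sketch can make these slot rules easier to check during planning. The dictionary below simply encodes the statements in this section (DS8700: C1 and C4 accept 4 Gbps or 8 Gbps HAs, C2 and C5 accept 4 Gbps only; DS8800: 8 Gbps HAs only, with no HAs in C2 and C5); it is an illustrative planning aid, not an IBM configuration tool.

# Encodes the host adapter slot rules stated above; illustrative only.
SLOT_RULES = {
    "DS8700": {"C1": {4, 8}, "C2": {4}, "C4": {4, 8}, "C5": {4}},
    "DS8800": {"C1": {8}, "C4": {8}},          # no HAs in slots C2 and C5
}

def ha_allowed(model: str, slot: str, speed_gbps: int) -> bool:
    """Return True if a host adapter of the given speed may be placed in the slot."""
    return speed_gbps in SLOT_RULES.get(model, {}).get(slot, set())

print(ha_allowed("DS8700", "C2", 8))   # False: C2 and C5 take 4 Gbps HAs only
print(ha_allowed("DS8800", "C1", 8))   # True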
Figure 11 shows the adapter plug order for the DS8700 (4-port) and DS8800 (4-port and 8-port) HA configuration.
Figure 11. Adapter plug order for the DS8700 (4-port) and the DS8800 (4-port and 8-port) in a HA configuration
Figure 12 on page 32 also shows the adapter plug order for the DS8700 and DS8800 using a 4-port or 8-port HA configuration.
Figure 12. Plug order for two and four DS8700 and DS8800 I/O enclosures
Open-systems host attachment with Fibre Channel adapters
You can attach a DS8000 series to an open-systems host with Fibre Channel adapters.
Fibre Channel is a 2 Gbps, 4 Gbps or 8 Gbps, full-duplex, serial communications technology to interconnect I/O devices and host systems that are separated by tens of kilometers.
The IBM System Storage DS8000 series supports SAN connections of up to 2 Gbps with 2 Gbps host adapters, up to 4 Gbps with 4 Gbps host adapters, and up to 8 Gbps with 8 Gbps host adapters. The DS8000 series negotiates automatically, determining whether it is best to run at a 1 Gbps, 2 Gbps, 4 Gbps, or 8 Gbps link speed. The IBM System Storage DS8000 series detects and operates at the greatest available link speed that is shared by both sides of the system.
Fibre Channel technology transfers information between the sources and the users of the information. This information can include commands, controls, files, graphics, video, and sound. Fibre Channel connections are established between Fibre Channel ports that reside in I/O devices, host systems, and the network that interconnects them. The network consists of elements like switches, bridges, and repeaters that are used to interconnect the Fibre Channel ports.
Fibre Channel overview for the DS8000 series
Each storage unit Fibre Channel adapter has four or eight ports, and each port has a unique worldwide port name (WWPN). You can configure a port to operate with the SCSI-FCP upper-layer protocol using the DS Storage Manager or the DS CLI. You can add Fibre Channel shortwave and longwave adapters to a DS8000 model.
For details on the host systems that support Fibre Channel adapters, go to the System Storage Interoperation Center (SSIC) website at www.ibm.com/systems/ support/storage/config/ssic.
Fibre Channel adapters for SCSI-FCP support the following configurations:
v A maximum of 128 host ports on the DS8700
v A maximum of 16, 4-port 8 Gbps host adapters on the DS8700
v A maximum of 16 host adapters on the DS8700, Model 941 (2-way) and a maximum of 32 host adapters on the DS8700, Model 941 (4-way), which equates to a maximum of 128 Fibre Channel ports
v A maximum of 4 host adapters on the DS8800, Model 951 (2-way) and a maximum of 16 host adapters on the DS8800, Model 95E (4-way), which equates to a maximum of 128 Fibre Channel ports
v A maximum of 506 logins per Fibre Channel port, which includes host ports and PPRC target and initiator ports
v Access to 63700 LUNs per target (one target per host adapter), depending on host type
v Either arbitrated loop, switched-fabric, or point-to-point topologies
FICON-attached System z hosts overview
This section describes how you can attach the DS8000 storage unit to FICON-attached System z hosts.
Each storage unit Fibre Channel adapter has four ports. Each port has a unique worldwide port name (WWPN). You can configure the port to operate with the FICON upper-layer protocol. For FICON, the Fibre Channel port supports connections to a maximum of 509 FICON hosts. On FICON, the Fibre Channel adapter can operate with fabric or point-to-point topologies.
With Fibre Channel adapters that are configured for FICON, the storage unit provides the following configurations:
v Either fabric or point-to-point topologies
v A maximum of 16 host adapters on DS8700 Model 941 (2-way) and a maximum of 32 host adapters on Model 941 (4-way), which equates to a maximum of 128 host adapter ports
v A maximum of 16, 4-port 8 Gbps host adapters on the DS8700
v A maximum of eight host adapters on the DS8800 Model 951 (4-way), eight ports each, which equates to 64 host adapter ports. With the first expansion model, 95E, another eight host adapters are available, which equates to an additional 64 ports (a maximum of 128 host adapter ports).
v A maximum of 509 logins per Fibre Channel port
v A maximum of 8192 logins per storage unit
v A maximum of 1280 logical paths on each Fibre Channel port
v Access to all 255 control-unit images (8000 CKD devices) over each FICON port
v A maximum of 512 logical paths per control unit image.
Note: FICON host channels limit the number of devices per channel to 16 384. To
fully access 65 280 devices on a storage unit, it is necessary to connect a minimum of four FICON host channels to the storage unit. You can access the devices through a switch to a single storage unit FICON port. With this method, you can expose 64 control-unit images (16 384 devices) to each host channel.
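The arithmetic behind the note above can be made explicit. The figures below come directly from this section (256 devices per control-unit image, 255 images per storage unit, and 16 384 devices addressable per FICON host channel); the script is only a worked example of that calculation, not an IBM tool.

import math

# Worked example of the FICON addressing limits described in the note above.
DEVICES_PER_CU_IMAGE = 256
CU_IMAGES_PER_STORAGE_UNIT = 255
DEVICES_PER_FICON_CHANNEL = 16_384       # 64 control-unit images per channel

total_devices = DEVICES_PER_CU_IMAGE * CU_IMAGES_PER_STORAGE_UNIT    # 65,280
min_channels = math.ceil(total_devices / DEVICES_PER_FICON_CHANNEL)  # 4

print(f"Devices on a fully configured storage unit: {total_devices}")
print(f"Minimum FICON host channels for full access: {min_channels}")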
The storage unit supports the following operating systems for System z and S/390 hosts:
v Linux
v Transaction Processing Facility (TPF)
v Virtual Storage Extended/Enterprise Storage Architecture (VSE/ESA)
v z/OS
v z/VM
v z/VSE
For the most current information on supported hosts, operating systems, adapters and switches, go to the System Storage Interoperation Center (SSIC) website at www.ibm.com/systems/support/storage/config/ssic.

Processor memory

The DS8000 offers the following processor memory options.
DS8700
The DS8700 Model 941 two-way configuration offers up to 128 GB of processor memory.
The DS8700 Model 941 four-way configuration offers up to 384 GB of processor memory.
DS8800
The DS8800 Model 951 two-way configuration offers up to 128 GB of processor memory.
The DS8800 Model 951 four-way configuration offers up to 384 GB of processor memory.
The business-class feature, Model 951 two-way configuration, offers 16 GB of processor memory with feature code 4211.
The nonvolatile storage (NVS) scales with the selected processor memory size, which can also help optimize performance. The NVS is typically 1/32 of the installed memory.
Note: The minimum NVS is 1 GB.
For more information, see “Overview of physical configurations” on page 89.
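A minimal sketch of the NVS sizing rule above (1/32 of installed processor memory, with a 1 GB floor), using memory sizes from this section as examples. The helper function is illustrative, not an IBM-provided sizing tool.

# Minimal sketch of the NVS sizing rule described above; illustrative only.
def nvs_size_gb(processor_memory_gb: float) -> float:
    """NVS is typically 1/32 of processor memory, with a 1 GB minimum."""
    return max(1.0, processor_memory_gb / 32)

for memory_gb in (16, 128, 384):
    print(f"{memory_gb:>3} GB processor memory -> ~{nvs_size_gb(memory_gb):.1f} GB NVS")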

Subsystem device driver for open-systems

The IBM System Storage Multipath Subsystem Device Driver (SDD) supports open-systems hosts.
The Subsystem Device Driver (SDD) is enclosed in the host server with the native disk device driver for the storage unit. It uses redundant connections between the host server and disk storage in the DS8000 series to provide enhanced performance and data availability.

Balancing the I/O load

You can maximize the performance of an application by spreading the I/O load across clusters, arrays, and device adapters in the storage unit.
During an attempt to balance the load within the storage unit, placement of application data is the determining factor. The following resources are the most important to balance, roughly in order of importance:
v Activity to the RAID disk groups. Use as many RAID disk groups as possible for the critical applications. Most performance bottlenecks occur because a few disks are overloaded. Spreading an application across multiple RAID disk groups ensures that as many disk drives as possible are available. This is extremely important for open-system environments where cache-hit ratios are usually low.
v Activity to the clusters. When selecting RAID disk groups for a critical application, spread them across separate clusters. Because each cluster has separate memory buses and cache memory, this maximizes the use of those resources.
v Activity to the device adapters. When selecting RAID disk groups within a cluster for a critical application, spread them across separate device adapters.
v Activity to the SCSI or Fibre Channel ports. Use the IBM System Storage Multipath Subsystem Device Driver (SDD) or similar software for other platforms to balance I/O activity across SCSI or Fibre Channel ports.

Note: For information about SDD, see IBM System Storage Multipath Subsystem Device Driver User's Guide. This document also describes the product engineering tool, the ESSUTIL tool, which is supported in the pcmpath commands and the datapath commands.

Storage consolidation

When you use a storage unit, you can consolidate data and workloads from different types of independent hosts into a single shared resource.
You might mix production and test servers in an open systems environment or mix open systems, System z and S/390 hosts. In this type of environment, servers rarely, if ever, contend for the same resource.
Although sharing resources in the storage unit has advantages for storage administration and resource sharing, there are additional implications for workload planning. The benefit of sharing is that a larger resource pool (for example, disk drives or cache) is available for critical applications. However, you must ensure that uncontrolled or unpredictable applications do not interfere with critical work. This requires the same kind of workload planning that you use when you mix various types of work on a server.
If your workload is critical, consider isolating it from other workloads. To isolate the workloads, place the data as follows:
v On separate RAID disk groups. Data for open systems, System z or S/390 hosts are automatically placed on separate arrays, which reduces the contention for disk use.
v On separate device adapters.
v In separate storage unit clusters, which isolates use of memory buses, microprocessors, and cache resources. Before you make this decision, verify that the isolation of your data to a single cluster provides adequate data access performance for your application.

Count key data

In count-key-data (CKD) disk data architecture, the data field stores the user data.

Because data records can be variable in length, in CKD they all have an associated count field that indicates the user data record size. The key field enables a hardware search on a key. The commands used in the CKD architecture for managing the data and the storage devices are called channel command words.


Fixed block


In fixed block (FB) architecture, the data (the logical volumes) are mapped over fixed-size blocks or sectors.

With an FB architecture, the location of any block can be calculated to retrieve that block. This architecture uses tracks and cylinders. A physical disk contains multiple blocks per track, and a cylinder is the group of tracks that exists under the disk heads at one point in time without performing a seek operation.

T10 DIF support

American National Standards Institute (ANSI) T10 Data Integrity Field (DIF) standard is supported on System z for SCSI end-to-end data protection on fixed block (FB) LUN volumes. This support applies to the IBM Storage DS8800 unit (models 951 and 95E). System z support applies to FCP channels only.
System z provides added end-to-end data protection between the operating system and the DS8800 unit. This support adds protection information consisting of CRC (Cyclic Redundancy Checking), LBA (Logical Block Address), and host application tags to each sector of FB data on a logical volume.
Data protection using the T10 Data Integrity Field (DIF) on FB volumes includes the following features:
v Ability to convert logical volume formats between standard and protected formats, supported through PPRC between standard and protected volumes
v Backward compatibility of T10-protected volumes on the DS8800 with non T10 DIF-capable hosts
v Allows end-to-end checking at the application level of data stored on FB disks
v Additional metadata stored by the storage facility image (SFI) allows host adapter-level end-to-end checking data to be stored on FB disks independently of whether the host uses the DIF format.
Notes:
v This feature requires changes in the I/O stack to take advantage of all the capabilities the protection offers.
v T10 DIF volumes can be used by any type of Open host with the exception of iSeries, but active protection is supported only for Linux on System z. The protection can only be active if the host server is Linux on System z-enabled.
v T10 DIF volumes can accept SCSI I/O of either T10 DIF or standard type, but if the FB volume type is standard, then only standard SCSI I/O is accepted.
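For readers unfamiliar with the on-disk format, the sketch below assembles a protected sector using the standard T10 DIF layout implied by the description above: an 8-byte field per 512-byte block containing a 16-bit CRC guard tag, a 16-bit application tag, and a 32-bit reference tag derived from the LBA. This is a generic illustration of the T10 standard, not the DS8800 microcode, and the function names are hypothetical.

import struct

def crc16_t10dif(data: bytes) -> int:
    """Bitwise CRC-16/T10-DIF (polynomial 0x8BB7), used for the guard tag."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def protect_block(data: bytes, lba: int, app_tag: int = 0) -> bytes:
    """Append the 8-byte DIF tuple: guard CRC, application tag, reference tag."""
    assert len(data) == 512, "T10 DIF protects 512-byte logical blocks"
    guard = crc16_t10dif(data)
    ref_tag = lba & 0xFFFFFFFF               # Type 1 DIF: low 32 bits of the LBA
    return data + struct.pack(">HHI", guard, app_tag, ref_tag)

sector = protect_block(bytes(512), lba=1234)
print(len(sector))   # 520: 512 data bytes + 8 bytes of protection information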

Logical volumes

Allocation, deletion, and modification of volumes

A logical volume is the storage medium that is associated with a logical disk. It typically resides on one or more hard disk drives.
For the storage unit, the logical volumes are defined at logical configuration time. For count-key-data (CKD) servers, the logical volume size is defined by the device emulation mode and model. For fixed block (FB) hosts, you can define each FB volume (LUN) with a minimum size of a single block (512 bytes) to a maximum size of 2^32 blocks or 16 TB.
A logical device that has nonremovable media has one and only one associated logical volume. A logical volume is composed of one or more extents. Each extent is associated with a contiguous range of addressable data units on the logical volume.
All extents of the ranks assigned to an extent pool are independently available for allocation to logical volumes. The extents for a LUN or volume are logically ordered, but they do not have to come from one rank and the extents do not have to be contiguous on a rank. This construction method of using fixed extents to form a logical volume in the DS8000 allows flexibility in the management of the logical volumes. You can delete volumes, resize volumes, and reuse the extents of those volumes to create other volumes of different sizes. One logical volume can be deleted without affecting the other logical volumes defined on the same extent pool.
Because the extents are cleaned after you delete a volume, it can take some time until these extents are available for reallocation. The reformatting of the extents is a background process.
There are two extent allocation methods used by the DS8000: rotate volumes and storage pool striping (rotate extents).
Storage pool striping: extent rotation
The default storage allocation method is storage pool striping. The extents of a volume can be striped across several ranks. The DS8000 keeps a sequence of ranks. The first rank in the list is randomly picked at each power on of the storage subsystem. The DS8000 tracks the rank in which the last allocation started. The allocation of a first extent for the next volume starts from the next rank in that sequence. The next extent for that volume is taken from the next rank in sequence, and so on. The system rotates the extents across the ranks.
If you migrate an existing non-striped volume to the same extent pool with a rotate extents allocation method, then the volume is "reorganized." If you add more ranks to an existing extent pool, then reorganizing existing striped volumes spreads them across both existing and new ranks.
You can configure and manage storage pool striping using the DS Storage Manager, DS CLI, and DS Open API. The default of the extent allocation method (EAM) option that is allocated to a logical volume is now rotate extents. The rotate extents option is designed to provide the best performance by striping volume extents across the ranks in an extent pool.
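The rotation described above can be pictured with a short sketch. The rank list, starting index, and extent count below are illustrative stand-ins for the pool's rank sequence and allocation state; this is not the DS8000 microcode algorithm, only the striping behavior it describes.

# Illustrative sketch of the rotate extents (storage pool striping) behavior
# described above; names and data structures are hypothetical.
def rotate_extents(ranks: list[str], start_index: int, extents_needed: int):
    """Place each extent of a new volume on the next rank in the sequence."""
    placement = [ranks[(start_index + i) % len(ranks)] for i in range(extents_needed)]
    next_start = (start_index + 1) % len(ranks)   # the next volume starts one rank later
    return placement, next_start

ranks = ["R1", "R2", "R3", "R4"]
placement, next_start = rotate_extents(ranks, start_index=1, extents_needed=6)
print(placement)   # ['R2', 'R3', 'R4', 'R1', 'R2', 'R3'] -- extents striped across all ranks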


Managed EAM
Once a volume is managed by Easy Tier, the EAM of the volume is changed to managed EAM, which can result in placement of the extents differing from the rotate volume and rotate extent rules. The EAM only changes when a volume is manually migrated to a non-managed pool.
Rotate volumes allocation method
Extents can be allocated sequentially. In this case, all extents are taken from the same rank until there are enough extents for the requested volume size or the rank is full, in which case the allocation continues with the next rank in the extent pool.
If more than one volume is created in one operation, the allocation for each volume starts in another rank. When allocating several volumes, rotate through the ranks. You might want to consider this allocation method when you prefer to manage performance manually. The workload of one volume is going to one rank. This method makes the identification of performance bottlenecks easier; however, by putting all the volumes data onto just one rank, you might introduce a bottleneck, depending on your actual workload.
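A companion sketch of the rotate volumes method: extents are taken from one rank until the volume is complete or the rank is full, and only then does allocation continue on the next rank. The free-extent map is an illustrative stand-in, not a DS8000 data structure.

# Illustrative sketch of the rotate volumes allocation method described above.
def rotate_volumes(free_extents: dict[str, int], start_rank: str, extents_needed: int):
    """Allocate a volume's extents from one rank at a time, in rank order."""
    assert sum(free_extents.values()) >= extents_needed, "not enough free extents"
    ranks = list(free_extents)
    i = ranks.index(start_rank)
    placement = []
    while extents_needed > 0:
        rank = ranks[i % len(ranks)]
        take = min(extents_needed, free_extents[rank])
        placement += [rank] * take
        free_extents[rank] -= take
        extents_needed -= take
        i += 1                       # rank full (or volume done): move to the next rank
    return placement

print(rotate_volumes({"R1": 4, "R2": 8, "R3": 8}, "R1", 6))
# ['R1', 'R1', 'R1', 'R1', 'R2', 'R2'] -- one rank is filled before the next is used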
The DS8000 series uses a volume capacity algorithm (calculation) to provide a logical unit number (LUN).
In the DS8000 family, physical storage capacities such as DDMs are expressed in powers of 10. Logical or effective storage capacities (logical volumes, ranks, extent pools) and processor memory capacities are expressed in powers of 2. Both of these conventions are used for logical volume effective storage capacities.
On open volumes with 512 byte blocks (including T10-protected volumes), you can specify an exact block count to create a LUN. You can specify a DS8000 standard LUN size (which is expressed as an exact number of binary GBs (2^30)) or you can specify an ESS volume size (which is expressed in decimal GBs (10^9) accurate to 0.1 GB). The unit of storage allocation for open volumes is one fixed block extent. The extent size for open volumes is exactly 1 GB (2^30). Any logical volume that is not an exact multiple of 1 GB does not use all the capacity in the last extent that is allocated to the logical volume. Supported block counts are from 1 to 4 294 967 296 blocks (2 binary TB) in increments of one block. Supported DS8000 sizes are from 1 to 2048 GB (2 binary TB) in increments of 1 GB. The supported ESS LUN sizes are limited to the exact sizes that are specified from 0.1 to 982.2 GB (decimal) in increments of 0.1 GB and are rounded up to the next larger 32 K byte boundary. The ESS LUN sizes do not result in DS8000 standard LUN sizes. Therefore, they can waste capacity. However, the unused capacity is less than one full extent. ESS LUN sizes are typically used on DS8000 when volumes must be copied between DS8000 and ESS.
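The size rules above can be illustrated with a short Python sketch. This is not IBM code; the block size, extent size, and 32 KB rounding boundary are taken from the figures quoted in this section, and the example sizes are arbitrary.

import math

BLOCK = 512                    # bytes per fixed block
EXTENT = 2 ** 30               # open-systems extent size: 1 binary GB
BOUNDARY = 32 * 1024           # ESS sizes round up to a 32 KB boundary

def ds8000_standard_lun(binary_gb):
    """DS8000 standard LUN: an exact number of binary GB."""
    byte_count = binary_gb * (2 ** 30)
    return byte_count // BLOCK

def ess_lun(decimal_gb_tenths):
    """ESS-style LUN: decimal GB in 0.1 GB steps, rounded up to 32 KB."""
    byte_count = decimal_gb_tenths * (10 ** 9) // 10
    byte_count = math.ceil(byte_count / BOUNDARY) * BOUNDARY
    return byte_count // BLOCK

def extents_used(blocks):
    """1 GB extents allocated and unused capacity in the last extent."""
    byte_count = blocks * BLOCK
    extents = math.ceil(byte_count / EXTENT)
    unused = extents * EXTENT - byte_count
    return extents, unused

std_blocks = ds8000_standard_lun(10)        # a 10 binary-GB standard LUN
ess_blocks = ess_lun(9822)                  # the 982.2 decimal-GB maximum ESS size
for name, blocks in (("standard 10 GB", std_blocks), ("ESS 982.2 GB", ess_blocks)):
    ext, unused = extents_used(blocks)
    print(f"{name}: {blocks} blocks, {ext} extents, {unused} bytes unused in last extent")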
On open volumes with 520 byte blocks, you can select one of the supported LUN sizes that are used on System i® processors to create a LUN. The operating system uses 8 of the bytes in each block. This leaves 512 bytes per block for your data. The selected sizes are specified in decimal GB (10^9) or are specified to the exact block count that is shown in Table 5 on page 39. System i LUNs are defined as protected or unprotected. If the open volume is defined as unprotected, the AS/400® operating system performs software mirroring on the LUN with another non-protected internal or external LUN. If the open volume is defined as protected, the AS/400 operating system does not perform software mirroring on the LUN. The selection of protected or unprotected does not affect the RAID protection that is used by DS8000 on the open volume. In either case, the volume remains protected by RAID.
On CKD volumes, you can specify an exact cylinder count or a DS8000 standard volume size to create a volume. The DS8000 standard volume size is expressed as an exact number of Mod 1 equivalents (which is 1113 cylinders). The unit of storage allocation for CKD volumes is one CKD extent. The extent size for a CKD volume is exactly a Mod 1 equivalent (which is 1113 cylinders). Any logical volume that is not an exact multiple of 1113 cylinders (1 extent) does not use all the capacity in the last extent that is allocated to the logical volume. For CKD volumes that are created with 3380 track formats, the number of cylinders (or extents) is limited to either 2226 (1 extent) or 3339 (2 extents). For CKD volumes that are created with 3390 track formats, you can specify the number of cylinders in the range of 1 - 65520 (x'0001' - x'FFF0') in increments of one cylinder, or as an integral multiple of 1113 cylinders between 65,667 - 262,668 (x'10083' - x'4020C') cylinders (59 - 236 Mod 1 equivalents). Alternatively, for 3390 track formats, you can specify Mod 1 equivalents in the range of 1 - 236.
Note: On IBM i, the supported logical volume sizes for load source units (LSUs)
are 17.54 GB, 35.16 GB, 70.56 GB, and 141.1 GB. Logical volume sizes of 8.59 and 282.2 GB are not supported.
Table 5 lists the capacity and models of disk volumes for System i.
Table 5. Capacity and models of disk volumes for System i

Model Number   Model Number   Capacity    Expected Number of LBAs     OS Version Support
(Unprotected)  (Protected)
A81            A01            8.59 GB     16 777 216 (0x01000000)     Version 5 Release 2 and Version 5 Release 3
A82            A02            17.55 GB    34 275 328 (0x020B0000)     Version 5 Release 2 and Version 5 Release 3
A85            A05            35.17 GB    68 681 728 (0x04180000)     Version 5 Release 2 and Version 5 Release 3
A84            A04            70.56 GB    137 822 208 (0x08370000)    Version 5 Release 2 and Version 5 Release 3
A86            A06            141.12 GB   275 644 416 (0x106E0000)    Version 5 Release 3 and later
A87            A07            282.25 GB   551 288 832 (0x20DC0000)    Version 5 Release 3 and later


Extended address volumes for CKD

Count key data (CKD) volumes now support the additional capacity of 1 TB. The 1 TB capacity is an increase in volume size from the previous 223 GB.
This increased volume capacity is referred to as extended address volumes (EAV) and is supported by the 3390 Model A. Use a maximum size volume of up to 1,182,006 cylinders for IBM z/OS. This support is available to you for z/OS version 1.12 and later.
You can now create a 1 TB IBM System z CKD volume on the DS8700 and DS8800.
A System z CKD volume is composed of one or more extents from a CKD extent pool. CKD extents are 1113 cylinders in size. When you define a System z CKD volume, you must specify the number of cylinders that you want for the volume. The DS8000 and z/OS have limits for the CKD EAV sizes. You can define CKD volumes with up to 262,668 cylinders (about 223 GB) on the DS8700 and up to 1,182,006 cylinders (about 1 TB) on the DS8800.
If the number of cylinders that you specify is not an exact multiple of 1113 cylinders, then some space in the last allocated extent is wasted. For example, if you define 1114 or 3340 cylinders, 1112 cylinders are wasted. For maximum storage efficiency, consider allocating volumes that are exact multiples of 1113 cylinders. In fact, multiples of 3339 cylinders should be considered for future compatibility. If you want to use the maximum number of cylinders for a volume (that is 1,182,006 cylinders), you are not wasting cylinders, because it is an exact multiple of 1113 (1,182,006 divided by 1113 is exactly 1062). This size is also an even multiple (354) of 3339, a model 3 size.
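The cylinder arithmetic above can be checked with a short Python sketch. It is illustrative only; it simply applies the 1113-cylinder extent size quoted in this section.

import math

MOD1_CYLINDERS = 1113          # one CKD extent = one 3390 Mod 1 equivalent
MOD3_CYLINDERS = 3339          # 3390 Model 3 size, three extents

def ckd_allocation(cylinders):
    """Return (extents allocated, cylinders wasted in the last extent)."""
    extents = math.ceil(cylinders / MOD1_CYLINDERS)
    wasted = extents * MOD1_CYLINDERS - cylinders
    return extents, wasted

for cyl in (1114, 3340, 3339, 1_182_006):
    extents, wasted = ckd_allocation(cyl)
    print(f"{cyl} cylinders -> {extents} extents, {wasted} cylinders wasted")

# 1114 and 3340 cylinders each waste 1112 cylinders, as noted above;
# 1,182,006 cylinders is exactly 1062 extents (and 354 x 3339), with no waste.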

Quick initialization

The quick initialization function initializes the data logical tracks or blocks within a specified extent range on a logical volume with the appropriate initialization pattern for the host.
Normal read and write access to the logical volume is allowed during the initialization process. Therefore, the extent metadata must be allocated and initialized before the quick initialization function is started. Depending on the operation, the quick initialization can be started for the entire logical volume or for an extent range on the logical volume.
The quick initialization function is started for the following operations:
v Standard logical volume creation
v Standard logical volume expansion
v Standard logical volume reinitialization
v Extent space-efficient (ESE) logical volume expansion
v ESE logical volume reinitialization
v ESE logical volume extent conversion
v Track space-efficient (TSE) or compressed TSE logical volume expansion
v TSE or compressed TSE logical volume reinitialization

Chapter 3. Data management features

The DS8000 storage unit is designed with many management features that allow you to securely process and access your data according to your business needs, 24 hours a day and 7 days a week.
This chapter contains information about the data management features in your DS8000. Use the information in this chapter to assist you in planning, ordering licenses, and managing your DS8000 data management features.

FlashCopy SE feature

The FlashCopy SE feature allocates storage space on an as-needed basis by using space on a target volume only when it actually copies tracks from the source volume to the target volume.
Without track space-efficient (TSE) volumes, the FlashCopy function requires that all the space on a target volume be allocated and available even if no data is copied there. With space-efficient volumes, FlashCopy uses only the number of tracks that are required to write the data that is changed during the lifetime of the FlashCopy relationship, so the allocation of space is on an as-needed basis. Because it does not require a target volume that is the exact size of the source volume, the FlashCopy SE feature increases the potential for a more effective use of system storage capacity.
FlashCopy SE is intended for temporary copies. Unless the source data has little write activity, copy duration should not last longer than 24 hours. The best use of FlashCopy SE is when less than 20% of the source volume is updated over the life of the relationship. Also, if performance on the source or target volumes is important, standard FlashCopy is strongly recommended.
You can define the space-efficiency attribute for the target volumes during the volume creation process. A space-efficient volume can be created from any extent pool that has space-efficient storage already created in it. Both the source and target volumes of any FlashCopy SE relationship must be on the same server cluster.
If the space-efficient source and target volumes have been created and are available, they can be selected when you create the FlashCopy relationship.
Important: Space-efficient volumes are currently supported as FlashCopy target
volumes only.
After a space-efficient volume is specified as a FlashCopy target, the FlashCopy relationship becomes space-efficient. FlashCopy works the same way with a space-efficient volume as it does with a fully provisioned volume. All existing copy functions work with a space-efficient volume except for the Background Copy function (not permitted with a space-efficient target) and the Dataset Level FlashCopy function. A miscalculation of the amount of copied data can cause the space-efficient repository to run out of space, and the FlashCopy relationship fails (that is, reads or writes to the target are prevented). You can withdraw the FlashCopy relationship to release the space.
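The space-allocation behavior described in this section can be sketched in a few lines of Python. This is only a toy model under assumed track and repository sizes, not the DS8000 implementation: it shows that a space-efficient target consumes repository space only for tracks that change during the relationship, and that the relationship fails when the repository runs out of space.

# Toy model of a space-efficient FlashCopy target: track data is stored in a
# shared repository only when a source track must be preserved.

class SpaceEfficientTarget:
    def __init__(self, source, repository_capacity):
        self.source = source                  # list of source track contents
        self.repo = {}                        # track number -> preserved data
        self.capacity = repository_capacity   # max tracks in the repository

    def _preserve(self, track, data):
        if track not in self.repo:
            if len(self.repo) >= self.capacity:
                raise RuntimeError("repository full: FlashCopy relationship fails")
            self.repo[track] = data

    def source_write(self, track, new_data):
        """Before the source track changes, keep the point-in-time copy."""
        self._preserve(track, self.source[track])
        self.source[track] = new_data

    def read_target(self, track):
        """Unchanged tracks are read from the source; changed ones from the repo."""
        return self.repo.get(track, self.source[track])

source = ["t0", "t1", "t2", "t3"]
target = SpaceEfficientTarget(source, repository_capacity=2)
target.source_write(1, "t1-new")
print(target.read_target(1))      # "t1"  -- point-in-time data preserved
print(target.read_target(2))      # "t2"  -- still read from the source
print(len(target.repo))           # 1     -- only one track consumes space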

Dynamic volume expansion

The DS8000 series supports dynamic volume expansion.
Dynamic volume expansion increases the capacity of open systems and System z volumes, while the volume remains connected to a host system. This capability simplifies data growth by providing volume expansion without taking volumes offline.
Since some operating systems do not support a change in volume size, a host action is required to detect the change after the volume capacity is increased.
The following maximum volume sizes are supported:
v Open Systems FB volumes - 16 TB
v System z CKD volume types 3390 model 9 and custom - 65,520 cylinders
v System z CKD volume type 3390 model 3 - 3,339 cylinders
v System z CKD volume type 3390 model A - up to 1,182,006 cylinders

Count key data and fixed block volume deletion prevention

The DS CLI and DS Storage Manager have been enhanced to prevent the accidental deletion of count key data (CKD) and fixed block (FB) volumes that are in use or online, unless a force option is specified.
When the force option is not specified, the DS8000 checks whether the volumes are online or in use before they are deleted. (For CKD volumes, a volume is online if it is participating in a Copy Services relationship or if it is online to a System z host. For FB volumes, a volume is online if it is participating in a Copy Services relationship or is part of a volume group.) If you specify the force option when you delete a volume, online checking is suppressed and the DS8000 deletes the volume regardless of whether it is online or in use.
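As a rough illustration of this checking logic (not actual DS CLI syntax or microcode behavior; the volume attributes below are hypothetical), the decision can be summarized in a few lines of Python.

# Hedged sketch of the deletion safeguard: without the force option, a volume
# that is online or in use is not deleted; with force, checking is suppressed.

def can_delete(volume, force=False):
    if force:
        return True                              # online checking suppressed
    if volume.get("in_copy_services_relationship"):
        return False
    if volume["type"] == "CKD" and volume.get("online_to_host"):
        return False
    if volume["type"] == "FB" and volume.get("in_volume_group"):
        return False
    return True

ckd_vol = {"type": "CKD", "online_to_host": True}
fb_vol = {"type": "FB", "in_volume_group": False}
print(can_delete(ckd_vol))               # False - online to a System z host
print(can_delete(ckd_vol, force=True))   # True  - force suppresses the check
print(can_delete(fb_vol))                # True  - not online or in use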

IBM System Storage Easy Tier

Easy Tier is an optional, no charge feature on the DS8800 and the DS8700. It offers enhanced capabilities such as manual volume capacity rebalance, automatic performance rebalancing in both homogeneous and hybrid pools, hot spot management, rank depopulation, manual volume migration, and thin provisioning support (ESE volumes only). Easy Tier determines the appropriate tier of storage based on data access requirements and then automatically and nondisruptively moves data, at the subvolume or sub-LUN level, to the appropriate tier on the DS8000.
Use Easy Tier to dynamically move your data to the appropriate drive tier in your system with its automatic performance monitoring algorithms. You can use this feature to increase the efficiency of your SSDs and the efficiency of all the tiers in your DS8000 system.
You can use the features of Easy Tier between three tiers of storage within your system on the DS8700 and DS8800 storage units.
With the latest Easy Tier you can distribute your entire workload among the ranks in your storage pool using three tiers, more efficiently distributing bandwidth across tiers in addition to IOPS.
You can also use Easy Tier in automatic mode to assist in the management of your ESE thin provisioning on fixed block (FB) volumes.
Easy Tier feature enhancements help you to effectively manage your system health, storage performance, and storage capacity automatically. Easy Tier uses system configuration and workload analysis with warm demotion to achieve effective overall system health. Simultaneously, data promotion and auto-rebalancing address performance while cold demotion works to address capacity. This maximizes performance while minimizing cost. See “Easy Tier: automatic mode” on page 44 for more information.
An additional enhancement provides the capability for you to use Easy Tier in manual mode for thin provisioning. Rank depopulation is supported on ranks with ESE volumes allocated (extent space-efficient) or auxiliary volumes. See “Easy Tier: manual mode” on page 48 for more information.
Note: Use Easy Tier in manual mode to depopulate ranks containing TSE auxiliary
volumes.
Use the capabilities of Easy Tier to support:
v Three tiers - Using three tiers and enhanced algorithms improves system performance and cost effectiveness.
v Cold demotion - Cold data (or extents) stored on a higher-performance tier is demoted to a more appropriate tier. Easy Tier is available with two-tier HDD pools as well as with three-tier pools. Sequential bandwidth is moved to the lower tier to increase the efficient use of your tiers. For more information on cold demote functions, see "Easy Tier: automatic mode" on page 44.
v Warm demotion - Active data that has larger bandwidth is demoted from either tier one (SSD) or tier two (Enterprise) to SAS Enterprise or Nearline SAS on the DS8800 (FC or SATA on a DS8700) to help protect drive performance. Warm demotion is triggered whenever the higher tier is over its bandwidth capacity. Selected warm extents are demoted to allow the higher tier to operate at its optimal load. Warm demotes do not follow a predetermined schedule. For more information on warm demote functions, see "Easy Tier: automatic mode" on page 44.
v Manual volume or pool rebalance - Volume rebalancing relocates the smallest number of extents of a volume and restripes those extents on all available ranks of the extent pool.
v Auto-rebalancing - Automatically balances the workload of the same storage tier within both homogeneous and hybrid pools, based on usage, to improve system performance and resource use. Use the enhanced auto-rebalancing functions of Easy Tier to manage a combination of homogeneous and hybrid pools, including relocating hot spots on ranks. With homogeneous pools, systems with only one tier can use Easy Tier technology to optimize their RAID array utilization.
v Rank depopulation - Allows ranks that have extents (data) allocated to them to be unassigned from an extent pool by using extent migration to move extents from the specified ranks to other ranks within the pool.
v Thin provisioning - Support for the use of thin provisioning is available on ESE (FB) and standard volumes. The use of TSE volumes (FB and CKD) is not supported.
Easy Tier provides a performance monitoring capability, regardless of whether the Easy Tier license feature is activated. Easy Tier uses the monitoring process to
determine what data to move and when to move it when using automatic mode. You can enable monitoring independently (with or without the Easy Tier license feature activated) for information about the behavior and benefits that can be expected if automatic mode were enabled. For more information, see “Monitoring volume data” on page 51.
Data from the monitoring process is included in a summary report that you can download to your Windows system. Use the IBM System Storage DS8000 Storage Tier Advisor Tool application to view the data when you point your browser to that file. For more information, see “IBM System Storage DS8000 Storage Tier Advisor Tool” on page 52.
Prerequisites
The following conditions must be met to enable Easy Tier:
v The Easy Tier license feature is enabled (required for both manual and automatic mode, except when monitoring is set to All Volumes).
v For automatic mode to be active, the following conditions must be met:
– Easy Tier automatic mode monitoring is set to either All or Auto mode.
– For Easy Tier to manage pools, the Auto Mode Volumes must be set to either Tiered Pools or All Pools.
Table 6 contains drive combinations you can use with your three-tier configuration, and with the migration of your ESE volumes, for DS8700 and DS8800.
Table 6. Drive combinations to use with three tiers

Models    Drive combinations: three tiers
DS8700    SSD, FC, and SATA
DS8800    SSD, SAS Enterprise, and Nearline SAS

Notes:
1. Easy Tier features include extendable support in automatic mode for FC (enterprise class) and SATA storage tiers.
2. SATA drives are not available for DS8800 in the SAS-2, 2.5-inch form factor.
3. For version 6, release 2 and later, the top tier in a three-tier configuration can only be SSD.

Easy Tier: automatic mode

Use of the automatic mode of Easy Tier requires the Easy Tier license feature.
In Easy Tier, both IOPS and bandwidth algorithms determine when to migrate your data. This process can help you improve performance.
Use automatic mode to have Easy Tier relocate your extents to their most appropriate storage tier in a hybrid pool, based on usage. Because workloads typically concentrate I/O operations (data access) on only a subset of the extents within a volume or LUN, automatic mode identifies the subset of your frequently accessed extents and relocates them to the higher-performance storage tier.
Subvolume or sub-LUN data movement is an important option to consider in volume movement because not all data at the volume or LUN level becomes hot data. For any given workload, there is a distribution of data considered either hot or cold. This can result in significant overhead associated with moving entire
volumes between tiers. For example, if a volume is 1 TB, you do not want to move the entire 1 TB volume when the generated heat map indicates that only 10 GB is considered hot. This capability makes use of your higher performance tiers while reducing the number of drives that you need to optimize performance.
Using automatic mode, you can use high performance storage tiers at a much smaller cost. This means that you invest a small portion of storage in the high-performance storage tier. You can use automatic mode for relocation and tuning without the need for your intervention, helping to generate cost savings while optimizing your storage performance.
Three-tier automatic mode is supported with the following Easy Tier functions:
v Support for ESE volumes with the thin provisioning of your FB volumes
v Support for a matrix of device (DDM) and adapter types
v Enhanced monitoring of both bandwidth and IOPS limitations
v Enhanced data demotion between tiers
v Automatic mode hot spot rebalancing, which applies to the following auto performance rebalance situations:
– Redistribution within a tier after a new rank is added into a managed pool
– Redistribution within a tier after a rank is removed from a managed pool
– Redistribution when the workload is imbalanced on the ranks within a tier of a managed pool
To help manage and improve performance, Easy Tier is designed to identify hot data at the subvolume or sub-LUN (extent) level, based on ongoing performance monitoring, and then automatically relocate that data to an appropriate storage device in an extent pool that is managed by Easy Tier. Easy Tier uses an algorithm to assign heat values to each extent in a storage device. These heat values determine on what tier the data would best reside, and migration takes place automatically. Data movement is dynamic and transparent to the host server and to applications using the data.
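The following Python sketch illustrates the general idea of heat-based extent placement. It is not the Easy Tier algorithm; the heat values, threshold, tier names, and SSD capacity are invented for illustration.

# Toy heat-based placement: the hottest extents, up to the SSD capacity, belong
# on SSD; cooled extents on SSD are demoted. Real Easy Tier also weighs
# bandwidth, device types, and rank utilization over a 24-hour monitoring window.

def plan_migrations(extent_heat, current_tier, ssd_capacity, hot_threshold):
    """Return a list of (extent, from_tier, to_tier) recommendations."""
    moves = []
    hot = [e for e in sorted(extent_heat, key=extent_heat.get, reverse=True)
           if extent_heat[e] >= hot_threshold]
    want_on_ssd = set(hot[:ssd_capacity])
    for extent, tier in current_tier.items():
        if extent in want_on_ssd and tier != "SSD":
            moves.append((extent, tier, "SSD"))        # promote hot extent
        elif extent not in want_on_ssd and tier == "SSD":
            moves.append((extent, "SSD", "ENT"))       # demote cooled extent
    return moves

heat = {"e1": 950, "e2": 20, "e3": 700, "e4": 5}
tiers = {"e1": "ENT", "e2": "SSD", "e3": "NL", "e4": "ENT"}
print(plan_migrations(heat, tiers, ssd_capacity=2, hot_threshold=500))
# [('e1', 'ENT', 'SSD'), ('e2', 'SSD', 'ENT'), ('e3', 'NL', 'SSD')]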
By default, automatic mode is enabled (through the DS CLI and DS Storage Manager) when the Easy Tier license feature is activated. You can temporarily disable automatic mode.
Easy Tier provides extended and improved capabilities to support the automatic functions of auto-rebalance, warm demotion, and cold demotion. This now includes support for extent pools with three tiers (SSD, SAS Enterprise, or Near-line SAS for DS8800 and SSD, FC, or SATA for DS8700).
With the latest enhancements to Easy Tier you can use automatic mode to help you manage the thin provisioning of your ESE FB volumes.
Functions and features of Easy Tier: automatic mode
This section describes the functions and features of Easy Tier in automatic mode.
Auto-rebalance
Rebalance is a function of Easy Tier automatic mode to balance the extents in the same tier based on usage. Auto-rebalance is now enhanced to support single managed pools as well as hybrid pools. You can use the Storage Facility Image (SFI) control to enable or disable the auto-rebalance function on all pools of an SFI.
When you enable auto-rebalance, every standard and ESE volume is placed under Easy Tier management for auto-rebalancing procedures. Using auto-rebalance gives you the advantage of these automatic functions:
v Easy Tier operates within a tier, inside a managed storage pool.
v Easy Tier automatically detects performance skew and rebalances extents within the same tier.
In any tier, placing highly active (hot) data on the same physical rank can cause the hot rank or the associated device adapter (DA) to become a performance bottleneck. Likewise, over time skews can appear within a single tier that cannot be addressed by migrating data to a faster tier alone, and require some degree of workload rebalancing within the same tier. Auto-rebalance addresses these issues within a tier in both hybrid and homogenous pools. It also helps the system respond in a more timely and appropriate manner to overloading, skews, and any under-utilization that can occur from the addition or deletion of hardware, migration of extents between tiers, changes in the underlying volume configurations, and variations in the workload. Auto-rebalance adjusts the system to continuously provide optimal performance by balancing the load on the ranks and on DA pairs.
The latest version of Easy Tier provides support for auto-rebalancing within homogeneous pools. If you set the Easy Tier Automatic Mode Migration control to Manage All Extent Pools, extent pools with a single tier can perform intra-tier rank rebalancing. If Easy Tier is turned off, then no volumes are managed. If Easy Tier is on, it manages all the volumes it supports, standard or ESE. TSE volumes are not supported with auto-rebalancing.
Notes:
v Standard and ESE volumes are supported.
v In the previous version of Easy Tier, merging pools was restricted to allow auxiliary volumes only in a single pool, and they were not supported on SSD ranks of hybrid pools. However, in this enhanced version of Easy Tier, these restrictions apply only to repository auxiliary volumes.
v If Easy Tier's Automatic Mode Migration control is set to Manage All Extent Pools, then single-tier extent pools are also managed to perform intra-tier rank rebalancing.
Warm demotion
The warm demotion operation demotes warm (or mostly sequentially accessed) extents from SSD to HDD, or from SAS Enterprise to Nearline SAS drives (DS8800), to protect the drive performance on the system. Warm demotion was first available in version 6, release 1, the second release of Easy Tier. In both this release and the previous release, the ranks being demoted to are selected randomly. This function is triggered when bandwidth thresholds are exceeded. This means that extents are warm-demoted from one rank to another rank among tiers when extents have high bandwidth but low IOPS.
It is helpful to understand that warm demotion is different from auto-rebalancing. While both warm demotion and auto-rebalancing can be event-based, rebalancing movement takes place within the same tier while warm demotion takes place among more than one tier. Auto-rebalance can initiate when the rank configuration changes. It also performs periodic checking for workload that is not balanced across ranks. Warm demotion initiates when an overloaded rank is detected.
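A hedged sketch of that distinction follows: warm demotion reacts to a rank exceeding a bandwidth threshold and moves extents to a lower tier, while auto-rebalance reacts to utilization skew within a tier and moves extents between ranks of that tier. The thresholds and rank statistics below are invented for illustration and are not Easy Tier internals.

# Illustrative triggers only; the real thresholds and statistics are internal
# to Easy Tier and are evaluated over its monitoring window.

def choose_action(rank_stats, bandwidth_limit, skew_limit):
    """rank_stats: {rank: {"tier": str, "bandwidth": float, "utilization": float}}"""
    actions = []
    for rank, s in rank_stats.items():
        if s["bandwidth"] > bandwidth_limit:
            actions.append((rank, "warm demote extents to a lower tier"))
    # Compare utilization within each tier to detect skew.
    tiers = {}
    for rank, s in rank_stats.items():
        tiers.setdefault(s["tier"], []).append((rank, s["utilization"]))
    for tier, ranks in tiers.items():
        utils = [u for _, u in ranks]
        if max(utils) - min(utils) > skew_limit:
            busiest = max(ranks, key=lambda r: r[1])[0]
            actions.append((busiest, f"auto-rebalance extents within the {tier} tier"))
    return actions

stats = {
    "R1": {"tier": "SSD", "bandwidth": 920.0, "utilization": 0.85},
    "R2": {"tier": "ENT", "bandwidth": 310.0, "utilization": 0.90},
    "R3": {"tier": "ENT", "bandwidth": 120.0, "utilization": 0.30},
}
for rank, action in choose_action(stats, bandwidth_limit=800.0, skew_limit=0.4):
    print(rank, "->", action)
# R1 -> warm demote extents to a lower tier
# R2 -> auto-rebalance extents within the ENT tier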
Cold demotion
Cold demotion recognizes and demotes cold or semi-cold extents to an appropriate lower-cost tier. Cold extents are demoted in a storage pool to a lower tier as long as that storage pool is not idle.
Cold demotion occurs when Easy Tier detects any of the following scenarios:
v Extents in a storage pool become inactive over time, while other data remains active. This is the most typical use for cold demotion, where inactive data is demoted to the SATA tier. This action frees up extents on the enterprise tier before the extents on the SATA tier become hot, helping the system be more responsive to new, hot data.
v All the extents in a storage pool become inactive simultaneously due to either a planned or unplanned outage. Disabling cold demotion assists the user in scheduling extended outages or experiencing outages without affecting the extent placement.
v All extents in a storage pool are active. In addition to cold demotion using the capacity in the lowest tier, an extent is selected for demotion that has close to zero activity, but with high sequential bandwidth and low random IOPS. Bandwidth available on the lowest tier is also used.
v All extents in a storage pool become inactive due to a planned nonuse event, such as an application reaching its end of life. In this situation, cold demotion is disabled and the user may select one of three options:
– Allocate new volumes in the storage pool and plan on those volumes becoming active. Over time, Easy Tier replaces the inactive extents on the enterprise tier with active extents on the SATA tier.
– Depopulate all of the enterprise HDD ranks. When all enterprise HDD ranks are depopulated, all extents in the pool are on the SATA HDD ranks. Store the extents on the SATA HDD ranks until they need to be deleted or archived to tape. Once the enterprise HDD ranks are depopulated, move them to a storage pool.
– Leave the extents in their current locations and reactivate them at a later time.
Figure 13 illustrates all of the migration types supported by the latest Easy Tier enhancements in a three-tier configuration. The auto-performance rebalance might also include additional swap operations.
Figure 13. Three-tier migration types and their processes. The figure shows promote, swap, warm demote, cold demote, expanded cold demote, and auto-rebalance operations across the highest performance tier (SSD ranks), the higher performance tier (Enterprise HDD ranks), and the lower performance tier (Nearline HDD ranks).

Easy Tier: manual mode

Easy Tier in manual mode provides the capability to migrate volumes and merge extent pools, under the same DS8700 and DS8800 system, concurrently with I/O operations.
In Easy Tier manual mode, you can dynamically relocate a logical volume between extent pools or within an extent pool to change the extent allocation method of the volume or to redistribute the volume across new ranks that have been added. This capability is referred to as dynamic volume relocation. You can also merge two existing pools into one without affecting the data on the logical volumes associated with the extent pools.
Enhanced functions of Easy Tier manual mode offer additional capabilities. You can use manual mode to relocate your extents, or to relocate an entire volume from one pool to another pool. Later, you might also need to change your storage media or configurations. Upgrading to a new disk drive technology, rearranging the storage space, or changing storage distribution within a given workload are typical operations that you can perform with volume relocations. Use manual mode to achieve these operations with minimal performance impact and to increase the options you have in managing your storage.
Functions and features of Easy Tier: manual mode
This section describes the functions and features of Easy Tier in manual mode.
Volume migration
Volume migration for restriping can be achieved by:
v Restriping - Relocating a subset of extents within the volume for volume migrations within the same pool.
v Rebalancing - Redistributing the volume across available ranks. This feature focuses on providing pure striping, without requiring preallocation of all the extents. This means that you can use rebalancing when only a few extents are available.
You can select which logical volumes to migrate, based on performance considerations or storage management concerns. For example, you can:
v Migrate volumes from one extent pool to another. You might want to migrate volumes to a different extent pool that has more suitable performance characteristics, such as different disk drives or RAID ranks. For example, a volume that was configured to stripe data across a single RAID can be changed to stripe data across multiple arrays for better performance. Also, as different RAID configurations become available, you might want to move a logical volume to a different extent pool with different characteristics, which changes the characteristics of your storage. You might also want to redistribute the available disk capacity between extent pools.
Notes:
– When you initiate a volume migration, ensure that all ranks are in the configuration state of Normal in the target extent pool.
– Volume migration is supported for standard and ESE volumes. There is no direct support to migrate auxiliary volumes. However, you can migrate extents of auxiliary volumes as part of ESE migration or rank depopulation.
– Ensure that you understand your data usage characteristics before you initiate a volume migration.
– The overhead that is associated with volume migration is comparable to a FlashCopy operation running as a background copy.
v Change the extent allocation method that is assigned to a volume. You can relocate a volume within the same extent pool but with a different extent allocation method. For example, you might want to change the extent allocation method to help spread I/O activity more evenly across ranks. If you configured logical volumes in an extent pool with fewer ranks than now exist in the extent pool, you can use Easy Tier to manually redistribute the volumes across new ranks that have been added.
Note: If you specify a different extent allocation method for a volume, the new extent allocation method takes effect immediately.
Manual volume rebalance using volume migration
Volume and pool rebalancing are designed to redistribute the extents of volumes within a non managed pool. This means skew is less likely to occur on the ranks.
Notes:
v Manual rebalancing is not allowed in hybrid or managed pools.
v Manual rebalancing is allowed in homogeneous pools.
v You cannot mix fixed block (FB) and count key data (CKD) drives.
Volume rebalance can be achieved by initiating a manual volume migration. Use volume migration to achieve manual rebalance when a rank is added to a pool, or when a large volume with rotate volumes EAM is deleted. Manual rebalance is often referred to as capacity rebalance because it balances the distribution of extents without factoring in extent usage. When a volume migration is targeted to the same pool and the target EAM is rotate extent, the volume migration acts internally as a volume rebalance.
Use volume rebalance to relocate the smallest number of extents of a volume and restripe the extents of that volume on all available ranks of the pool where it is located. The behavior of volume migration, which differs from volume rebalance, continues to operate as it did in the previous version of Easy Tier.
Notes: Use the latest enhancements to Easy Tier to:
v Migrate ESE logical volumes
v Perform pool rebalancing by submitting a volume migration for every standard and ESE volume in a pool
v Merge extent pools with virtual rank auxiliary volumes in both the source and destination extent pool
Extent pools
You can manually combine two existing extent pools with homogeneous or hybrid disks into a single extent pool with SSD drives to use auto mode. However, when merged pools are managed by Easy Tier, extents from SSD, SAS Enterprise, and Nearline SAS are managed as a three-tier storage hierarchy.
An extent pool with any mix of SSD, SAS Enterprise, and Nearline SAS drive classes is managed by Easy Tier in automatic mode, for situations in which:
v SSDs are located in the top tier for both DS8800 and DS8700.
v There are three tiers composed of SSD, SAS Enterprise, or Nearline SAS in a DS8800. For a DS8700, the three tiers are SSD, FC, and SATA.
Rank depopulation
Easy Tier provides an enhanced method of rank depopulation, which can be used to replace old drive technology, reconfigure pools and tear down hybrid pools. This method increases efficiency and performance when replacing or relocating whole ranks. Use the latest enhancements to Easy Tier to perform rank depopulation on any ranks in the various volume types (ESE logical, virtual rank auxiliary, TSE repository auxiliary, SE repository auxiliary, and non SE repository auxiliary).
Use rank depopulation to concurrently stop using one or more ranks in a pool. You can use rank depopulation to perform any of the following functions:
v Swap out old drive technology
v Reconfigure pools
v Tear down hybrid pools
v Change RAID types
Note: Rank depopulation is supported on ranks that have extent space efficient
(ESE) extents.

Monitoring volume data

The IBM Storage Tier Advisor Tool collects and reports volume data. It provides performance monitoring data even if the license feature is not activated.
The monitoring capability of the DS8000 enables it to monitor the usage of storage at the volume extent level. Monitoring statistics are gathered and analyzed every 24 hours. In an Easy Tier managed extent pool, the analysis is used to form an extent relocation plan for the extent pool, which provides a recommendation, based on your current plan, for relocating extents on a volume to the most appropriate storage device. The results of this data are summarized in a report that you can download. For more information, see “IBM System Storage DS8000 Storage Tier Advisor Tool” on page 52.
Table 7 describes monitor settings and mirrors the monitor settings in the DS CLI and DS Storage Manager.
Table 7. DS CLI and DS Storage Manager settings for monitoring
                     Easy Tier license feature
Monitor Setting      Not installed                Installed
All Volumes          All volumes are monitored.   All volumes are monitored.
Auto Mode Volumes    No volumes are monitored.    Volumes in extent pools managed by Easy Tier are monitored.
No Volumes           No volumes are monitored.    No volumes are monitored.
The default monitoring setting for Easy Tier Auto Mode is On. Volumes in managed extent pools are monitored when the Easy Tier license feature is activated. Volumes are not monitored if the Easy Tier license feature is not activated.
You can determine whether volumes are monitored and also disable the monitoring process temporarily, using either the DS CLI or DS Storage Manager.

Managing migration processes

You can initiate volume migrations and pause, resume, or cancel a migration process that is in progress.
Volumes that are eligible for migration are dependent on the state and access of the volumes. Table 8 shows the states required to allow migration with Easy Tier.
Table 8. Volume states required for migration with Easy Tier

Volume state                 Is migration allowed with Easy Tier?
Access state
  Online                     Yes
  Fenced                     No
Data state
  Normal                     Yes
  Pinned                     No
  Read only                  Yes
  Inaccessible               No
  Indeterminate data loss    No
  Extent fault               No
Initiating volume migration
With Easy Tier, you can migrate volumes from one extent pool to another. The time to complete the migration process might vary, depending on what I/O operations are occurring on your storage unit.
If an error is detected during the migration process, the storage facility image (SFI) retries the extent migration after a short time. If an extent cannot be successfully migrated, the migration is stopped, and the configuration state of the logical volume is set to migration error.
Pausing and resuming migration
You can pause volumes that were in the process of being migrated. You can also resume the migration process on the volumes that were paused.
Canceling migration
You can cancel the migration of logical volumes that were in the process of being migrated. The volume migration process pre-allocates all extents for the logical volume when you initiate a volume migration. All pre-allocated extents on the logical volume that have not migrated are released when you cancel a volume migration. The state of the logical volumes changes to migration-canceled and the target extent pool that you specify on a subsequent volume migration is limited to either the source extent pool or target extent pool of the original volume migration.
Note: If you initiate a volume migration but the migration was queued and not in
progress, then the cancel process returns the volume to normal state and not migration-canceled.

IBM System Storage DS8000 Storage Tier Advisor Tool

The DS8000 offers a reporting tool called IBM System Storage DS8000 Storage Tier Advisor Tool.
The Storage Tier Advisor Tool is a Windows application that provides a graphical representation of performance data that is collected by Easy Tier over a 24-hour operational cycle. It is the application that allows you to view the data when you point your browser to the file. The Storage Tier Advisor Tool supports the enhancements provided with Easy Tier, including support for SSD, SAS Enterprise, and Nearline SAS for DS8800 and SSD, FC, and SATA for DS8700 and the auto performance rebalance feature. Download the Storage Tier Advisor Tool for the DS8700 at the DS8700 Storage Tier Advisory Tool program download page. Download the Storage Tier Advisor Tool for the DS8800 at the Customer Download Files Storage Tier Advisor Tool FTP page.
To extract the summary performance data generated by the Storage Tier Advisor Tool, you can use either the DS CLI or DS Storage Manager. When you extract summary data, two files are provided, one for each server in the storage facility image (SFI server). The download operation initiates a long running task to collect performance data from both selected storage facility images. This information can be provided to IBM if performance analysis or problem determination is required.
You can view information to analyze workload statistics and evaluate which logical volumes might be candidates for Easy Tier management. If you have not installed and enabled the Easy Tier feature, you can use the performance statistics gathered by the monitoring process to help you determine whether to use Easy Tier to enable potential performance improvements in your storage environment.

Easy Tier considerations and limitations

This section discusses Easy Tier data relocation considerations and limitations.
Migration considerations
The following information might be helpful in using Easy Tier with the DS8000:
v You cannot initiate a volume migration on a volume that is in the process of being migrated. The migration that is in progress must complete first.
v You cannot initiate, pause, resume, or cancel migration on selected volumes that are aliases or virtual volumes.
v You cannot migrate volumes from one extent pool to another or change the extent allocation method unless the Easy Tier feature is installed on the DS8000.
v There are likely to be a limited number of SSD arrays due to their cost. If there are volumes that require static extent allocations on SSD arrays, one or more homogeneous extent pools must be configured with one or more SSD ranks. If there are volumes that require Easy Tier automatic mode, one or more heterogeneous extent pools must be configured with one or more SSD ranks. There is no way to share SSD ranks between storage pools. Therefore, a hybrid pool and a non-hybrid pool cannot share space on an SSD rank.
v Volume migration is supported for standard, auxiliary, and ESE volumes.
v If you specify a different extent allocation method for a volume, the new extent allocation method takes effect immediately.
v A volume that is being migrated cannot be expanded and a volume that is being expanded cannot be migrated.
v When a volume is migrated out of an extent pool that is managed with Easy Tier, or when Easy Tier is no longer installed, the DS8700 disables Easy Tier and no longer automatically relocates high activity I/O data on that volume between storage devices.
Limitations
The following limitations apply to the use of Easy Tier for DS8700 and DS8800:
v TSE logical volumes do not support extent migration. This means these entities do not support Easy Tier manual mode or Easy Tier automatic mode.
v You cannot merge two extent pools:
– If both extent pools contain TSE volumes.
– If there are TSE volumes on the SSD ranks.
– If you have selected an extent pool that contains volumes that are being migrated.
v It might be helpful to know that some basic characteristics of Easy Tier might limit the applicability for your generalized workloads. The granularity of the extent that can be relocated within the hierarchy is large (1 GB). Additionally, the time period over which the monitoring is analyzed is continuous, and long (24 hours). Therefore, some workloads may have hot spots, but when considered over the range of the relocation size, they will not appear, on average, to be hot. Also, some workloads may have hot spots for short periods of time, but when considered over the duration of Easy Tier's analysis window, the hot spots will not appear, on average, to be hot.

Performance for System z

The DS8000 series supports the following IBM performance enhancements for System z environments.
v Parallel access volumes (PAVs)
v Multiple allegiance
v z/OS Distributed Data Backup
v z/HPF extended distance capability
Parallel access volumes
A PAV capability represents a significant performance improvement by the storage unit over traditional I/O processing. With PAVs, your system can access a single volume from a single host with multiple concurrent requests.
You must configure both your storage unit and operating system to use PAVs. You can use the logical configuration definition to define PAV-bases, PAV-aliases, and their relationship in the storage unit hardware. This unit address relationship creates a single logical volume, allowing concurrent I/O operations.
Static PAV associates the PAV-base address and its PAV aliases in a predefined and fixed method. That is, the PAV-aliases of a PAV-base address remain unchanged. Dynamic PAV, on the other hand, dynamically associates the PAV-base address and its PAV aliases. The device number types (PAV-alias or PAV-base) must match the unit address types as defined in the storage unit hardware.
You can further enhance PAV by adding the IBM HyperPAV feature. IBM HyperPAV associates the volumes with either an alias address or a specified base logical volume number. When a host system requests IBM HyperPAV processing and the processing is enabled, aliases on the logical subsystem are placed in an IBM HyperPAV alias access state on all logical paths with a given path group ID. IBM HyperPAV is only supported on FICON channel paths.
PAV can improve the performance of large volumes. You get better performance with one base and two aliases on a 3390 Model 9 than from three 3390 Model 3 volumes with no PAV support. With one base, it also reduces storage management costs that are associated with maintaining large numbers of volumes. The alias provides an alternate path to the base device. For example, a 3380 or a 3390 with one alias has only one device to write to, but can use two paths.
The storage unit supports concurrent or parallel data transfer operations to or from the same volume from the same system or system image for System z or S/390 hosts. PAV software support enables multiple users and jobs to simultaneously access a logical volume. Read and write operations can be accessed simultaneously
to different domains. (The domain of an I/O operation is the specified extents to which the I/O operation applies.)
Multiple allegiance
With multiple allegiance, the storage unit can execute concurrent, multiple requests from multiple hosts.
Traditionally, IBM storage subsystems allow only one channel program to be active to a disk volume at a time. This means that, once the subsystem accepts an I/O request for a particular unit address, this unit address appears "busy" to subsequent I/O requests. This single allegiance capability ensures that additional requesting channel programs cannot alter data that is already being accessed.
By contrast, the storage unit is capable of multiple allegiance (or the concurrent execution of multiple requests from multiple hosts). That is, the storage unit can queue and concurrently execute multiple requests for the same unit address, if no extent conflict occurs. A conflict refers to either the inclusion of a Reserve request by a channel program or a Write request to an extent that is in use.
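A minimal sketch of this queueing rule, under assumed extent ranges (this is not the actual channel-program handling): a new request for the same unit address can run concurrently unless its extent range overlaps a reserve or a write that is already in progress.

# Toy extent-conflict check for multiple allegiance. An I/O is a tuple of
# (start_extent, end_extent, is_write, is_reserve); ranges are inclusive.

def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def can_run_concurrently(active_ios, new_io):
    """Queue the new request only if an extent conflict exists."""
    n_start, n_end, n_write, n_reserve = new_io
    for a_start, a_end, a_write, a_reserve in active_ios:
        if not overlaps((a_start, a_end), (n_start, n_end)):
            continue
        if a_reserve or n_reserve:
            return False                  # a reserve always conflicts
        if a_write or n_write:
            return False                  # overlapping write conflicts
    return True                           # read/read overlap is allowed

active = [(0, 99, False, False)]          # a read over extents 0-99 is in flight
print(can_run_concurrently(active, (50, 60, False, False)))   # True: read/read
print(can_run_concurrently(active, (50, 60, True, False)))    # False: write overlap
print(can_run_concurrently(active, (200, 300, True, False)))  # True: no overlap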
z/OS Distributed Data Backup
z/OS Distributed Data Backup (zDDB) is an optional licensed feature that allows hosts, attached through a FICON or ESCON interface, to access data on fixed block (FB) volumes through a device address on FICON or ESCON interfaces.

If the zDDB LIC feature key is installed and enabled and a volume group type specifies either FICON or ESCON interfaces, this volume group has implicit access to all FB logical volumes that are configured in addition to all CKD volumes specified in the volume group. In addition, this optional feature enables data backup of open systems from distributed server platforms through a System z host. The feature helps you manage multiple data protection environments and consolidate those into one environment managed by System z. For more information, see "z/OS Distributed Data Backup" on page 135.
z/HPF extended distance
z/HPF extended distance reduces the impact associated with supported commands on current adapter hardware, improving FICON throughput on the DS8000 I/O ports. The DS8000 also supports the new zHPF I/O commands for multitrack I/O operations.

Copy Services

Copy Services functions can help you implement storage solutions to keep your business running 24 hours a day, 7 days a week. Copy Services include a set of disaster recovery, data migration, and data duplication functions.
The DS8000 series supports Copy Service functions that contribute to the protection of your data. These functions are also supported on the IBM TotalStorage Enterprise Storage Server.
Notes:
v If you are creating paths between an older release of the DS8000 (Release 5.1 or earlier), which supports only 4-port host adaptors, and a newer release of the DS8000 (Release 6.0 or later), which supports 8-port host adaptors, the paths should connect only to the lower four ports on the newer storage unit.
v The maximum number of FlashCopy relationships allowed on a volume is 65534. If that number is exceeded, the FlashCopy operation fails.
v The size limit for volumes or extents in a Copy Service relationship is 2 TB.
v Thin provisioning functions in open-system environments are supported for the following Copy Services functions:
– FlashCopy relationships
– Global Mirror relationships, provided that the Global Copy A and B volumes are Extent Space Efficient (ESE) volumes. The FlashCopy target volume (Volume C) in the Global Mirror relationship can be an ESE volume, Target Space Efficient (TSE) volume, or standard volume.
v PPRC supports any intermix of T10-protected or standard volumes. FlashCopy does not support intermix.

The following Copy Services functions are available as optional features:
v Point-in-time copy, which includes IBM System Storage FlashCopy and Space-Efficient FlashCopy. The FlashCopy function enables you to make point-in-time, full volume copies of data, so that the copies are immediately available for read or write access. In System z environments, you can also use the FlashCopy function to perform data set level copies of your data.
v Remote mirror and copy, which includes the following functions:
– IBM System Storage Metro Mirror (previously known as Synchronous PPRC). Metro Mirror provides real-time mirroring of logical volumes between two DS8000 storage units that can be located up to 300 km from each other. It is a synchronous copy solution where write operations are completed on both copies (local and remote site) before they are considered to be done.
– IBM System Storage Global Copy (previously known as PPRC Extended Distance). Global Copy is a nonsynchronous long-distance copy function where incremental updates are sent from the local to the remote site on a periodic basis.
– IBM System Storage Global Mirror (previously known as Asynchronous PPRC). Global Mirror is a long-distance remote copy function across two sites using asynchronous technology. Global Mirror processing is designed to provide support for virtually unlimited distance between the local and remote sites, with the distance typically limited only by the capabilities of the network and the channel extension technology.
– IBM System Storage Metro/Global Mirror (a combination of Metro Mirror and Global Mirror). Metro/Global Mirror is a three-site remote copy solution, which uses synchronous replication to mirror data between a local site and an intermediate site, and asynchronous replication to mirror data from an intermediate site to a remote site.
v Remote mirror and copy for System z environments, which includes z/OS Global Mirror.

Note: When FlashCopy is used on FB (open) volumes, the source and the target volumes must have the same protection type of either T10 DIF or standard.
The point-in-time and remote mirror and copy features are supported across various IBM server environments such as IBM i, System p®, and System z, as well as servers from Sun and Hewlett-Packard.
You can manage these functions through a command-line interface called the DS CLI and a Web-based interface called the DS Storage Manager. The DS Storage Manager allows you to set up and manage the following types of data-copy functions from any point where network access is available:
Point-in-time copy (FlashCopy)
The FlashCopy function enables you to make point-in-time, full volume copies of data, with the copies immediately available for read or write access. In System z environments, you can also use the FlashCopy function to perform data set level copies of your data. You can use the copy with standard backup tools that are available in your environment to create backup copies on tape.
FlashCopy is an optional function. To use it, you must purchase one of the point-in-time 242x indicator feature and 239x function authorization features.
The FlashCopy function creates a copy of a source volume on the target volume. This copy is called a point-in-time copy. When you initiate a FlashCopy operation, a FlashCopy relationship is created between a source volume and target volume. A FlashCopy relationship is a mapping of the FlashCopy source volume and a FlashCopy target volume. This mapping allows a point-in-time copy of that source volume to be copied to the associated target volume. The FlashCopy relationship exists between this volume pair from the time that you initiate a FlashCopy operation until the storage unit copies all data from the source volume to the target volume or you delete the FlashCopy relationship, if it is a persistent FlashCopy.
One of the main benefits of the FlashCopy function is that the point-in-time copy is immediately available for creating a backup of production data. The target volume is available for read and write processing so it can be used for testing or backup purposes. Data is physically copied from the source volume to the target volume using a background process. (A FlashCopy operation without a background copy is also possible, which allows only data that is modified on the source to be copied to the target volume.) The amount of time that it takes to complete the background copy depends on the following criteria:
v The amount of data being copied
v The number of background copy processes that are occurring
v The other activities that are occurring on the storage units
The FlashCopy function supports the following copy options:
Consistency groups
Creates a consistent point-in-time copy of multiple volumes, with negligible host impact. You can enable FlashCopy consistency groups from the DS CLI.
Change recording
Activates the change recording function on the volume pair that is participating in a FlashCopy relationship. This enables a subsequent refresh to the target volume.
Establish FlashCopy on existing Metro Mirror source
Allows you to establish a FlashCopy relationship where the target volume is also the source of an existing remote mirror and copy source volume. This enables you to create full or incremental point-in-time copies at a local site and then use remote mirroring commands to copy the data to the remote site.
Fast reverse
Reverses the FlashCopy relationship without waiting for the finish of the background copy of the previous FlashCopy. This option applies to the Global Mirror mode.
Inhibit writes to target
Ensures that write operations are inhibited on the target volume until a refresh FlashCopy operation is complete.
Multiple Relationship FlashCopy
Allows a source volume to have multiple (up to 12) target volumes at the same time.
Persistent FlashCopy
Allows the FlashCopy relationship to remain even after the FlashCopy operation completes. You must explicitly delete the relationship.
Refresh target volume
Provides the ability to refresh a FlashCopy relationship, without recopying all tracks from the source volume to the target volume.
Resynchronizing FlashCopy volume pairs
Provides the ability to update an initial point-in-time copy of a source volume without having to recopy your entire volume.
Reverse restore
Reverses the FlashCopy relationship and copies data from the target volume to the source volume.
Remote Pair FlashCopy
Figure 14 on page 59 illustrates how Remote Pair FlashCopy works. If Remote Pair FlashCopy is used to copy data from Local A to Local B, an equivalent operation is also performed from Remote A to Remote B. FlashCopy can be performed as described for a Full Volume FlashCopy, Incremental FlashCopy, and Dataset Level FlashCopy.
The Remote Pair FlashCopy function prevents the Metro Mirror relationship from changing states and the resulting momentary period where Remote A is out of synchronization with Remote B. This feature provides a solution for data replication, data migration, remote copy, and disaster recovery tasks.
Without Remote Pair FlashCopy, when you established a FlashCopy relationship from Local A to Local B, using a Metro Mirror primary volume as the target of that FlashCopy relationship, the corresponding Metro Mirror volume pair went from "full duplex" state to "duplex pending" state while the FlashCopy data was being transferred to Local B. The time that it took to complete the copy of the FlashCopy data, until all Metro Mirror volumes were synchronous again, depended on the amount of data being transferred. During this time, Local B would have been inconsistent if a disaster had occurred.
Note: Previously, if you created a FlashCopy relationship with the
"Preserve Mirror, Required" option, using a Metro Mirror primary volume as the target of that FlashCopy relationship, and if the status of the Metro Mirror volume pair was not in a "full duplex" state, the FlashCopy relationship failed. That restriction is now removed. The Remote Pair FlashCopy relationship will complete successfully with the "Preserve Mirror, Required" option, even if the status of the Metro Mirror volume pair is either in a suspended or duplex pending state.
Figure 14. Remote Pair FlashCopy. The figure shows a local storage server and a remote storage server: FlashCopy is established from Local A to Local B and, in parallel, from Remote A to Remote B, while the Metro Mirror pairs between the local and remote volumes remain in the full duplex state.
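As an illustration of how several of the options described above are combined, the following DS CLI sketch establishes an incremental, persistent FlashCopy relationship and later refreshes the target. The storage image ID and volume pair are hypothetical, and the exact parameters should be verified against the IBM System Storage DS8000 Command-Line Interface User's Guide for your code level.

   dscli> mkflash -dev IBM.2107-75FA120 -record -persist 0100:0200
   dscli> resyncflash -dev IBM.2107-75FA120 -record -persist 0100:0200
   dscli> rmflash -dev IBM.2107-75FA120 0100:0200

In this sketch, -record and -persist correspond to the change recording and persistent options, resyncflash refreshes the target volume by copying only the changed tracks, and rmflash explicitly removes the persistent relationship.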
Remote mirror and copy
The remote mirror and copy feature is a flexible data mirroring technology that allows replication between a source volume and a target volume on one or two disk storage units. You can also issue remote mirror and copy operations to a group of source volumes on one logical subsystem (LSS) and a group of target volumes on another LSS. (An LSS is a logical grouping of up to 256 logical volumes, all of which must have the same disk format, either count key data or fixed block.)
Remote mirror and copy is an optional feature that provides data backup and disaster recovery. To use it, you must purchase at least one remote mirror and copy 242x indicator feature and the corresponding 239x function authorization feature.
The remote mirror and copy feature provides synchronous (Metro Mirror) and asynchronous (Global Copy) data mirroring. The main difference is that the Global Copy feature can operate at very long distances, even continental distances, with minimal impact on applications. Distance is limited only by the capabilities of the network and channel extender technology. The maximum supported distance for Metro Mirror is 300 km.
With Metro Mirror, application write performance depends on the available bandwidth. Global Copy allows you to make better use of your available bandwidth capacity, therefore allowing more of your data to be protected.
The enhancement to Global Copy is Global Mirror, which uses Global Copy and the benefits of FlashCopy to form consistency groups. (A consistency group is a set of volumes that contain consistent and current data to provide a true data backup at a remote site.) Global Mirror uses a master storage unit (along with optional subordinate storage units) to internally, without external automation software, manage data consistency across volumes using consistency groups.
Consistency groups can also be created using the freeze and run functions of Metro Mirror. The freeze and run functions, when used with external automation software, provide data consistency for multiple Metro Mirror volume pairs.
The following sections describe the remote mirror and copy functions.
Synchronous mirroring (Metro Mirror)
Provides real-time mirroring of logical volumes (a source and a target) between two storage units that can be located up to 300 km from each other. With Metro Mirror copying, the source and target volumes can be on the same storage unit or on separate storage units. You can locate the storage unit at another site, some distance away.
Metro Mirror is a synchronous copy feature where write operations are completed on both copies (local and remote site) before they are considered to be complete. Synchronous mirroring means that a storage server constantly updates a secondary copy of a volume to match changes made to a source volume.
The advantage of synchronous mirroring is that there is minimal host impact for performing the copy. The disadvantage is that since the copy operation is synchronous, there can be an impact to application performance because the application I/O operation is not acknowledged as complete until the write to the target volume is also complete. The longer the distance between primary and secondary storage units, the greater this impact to application I/O, and therefore, application performance.
Asynchronous mirroring (Global Copy)
Copies data nonsynchronously and over longer distances than is possible with the Metro Mirror feature. When operating in Global Copy mode, the source volume sends a periodic, incremental copy of updated tracks to the target volume instead of a constant stream of updates. This causes less impact to application writes for source volumes and less demand for bandwidth resources, while allowing a more flexible use of the available bandwidth.
The updates are tracked and periodically copied to the target volumes. As a consequence, there is no guarantee that data is transferred in the same sequence that was applied to the source volume. To get a consistent copy of your data at your remote site, you must periodically switch from Global Copy to Metro Mirror mode, then either stop the application I/O or freeze the data on the source volumes using a manual process with freeze and run commands. The freeze and run functions can be used with external automation software such as Geographically Dispersed Parallel Sysplex (GDPS), which is available for System z environments, to ensure data consistency across multiple Metro Mirror volume pairs in a specified logical subsystem.
Common options for Metro Mirror and Global Copy include the following (an illustrative DS CLI sketch follows the list):
Suspend and resume
If you schedule a planned outage to perform maintenance at your remote site, you can suspend Metro Mirror or Global Copy processing on specific volume pairs for the duration of the outage. During this time, data is no longer copied to the target volumes. Because the primary storage unit keeps track of all changed data on the source volume, you can resume operations at a later time to synchronize the data between the volumes.
Copy out-of-synchronous data
You can specify that only data that was updated on the source volume while the volume pair was suspended be copied to its associated target volume.
Copy an entire volume or not copy the volume
You can copy an entire source volume to its associated target volume to guarantee that the source and target volume contain the same data. When you establish volume pairs and elect not to copy a volume, a relationship is established between the volumes but no data is sent from the source volume to the target volume. In this case, it is assumed that the volumes already contain exactly the same data and are consistent, so copying the entire volume is not necessary. Only new updates are copied from the source to target volumes.
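The following DS CLI sketch shows how these operations might look for a single Metro Mirror volume pair; the device and volume IDs are hypothetical, and the parameters should be confirmed in the IBM System Storage DS8000 Command-Line Interface User's Guide.

   dscli> mkpprc -dev IBM.2107-75FA120 -remotedev IBM.2107-75FA150 -type mmir 0100:0100
   dscli> pausepprc -dev IBM.2107-75FA120 -remotedev IBM.2107-75FA150 0100:0100
   dscli> resumepprc -dev IBM.2107-75FA120 -remotedev IBM.2107-75FA150 0100:0100

Specifying -type gcp instead of -type mmir establishes the pair in Global Copy mode; the resume operation copies only the data that changed while the pair was suspended.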
Global Mirror
Provides a long-distance remote copy across two sites using asynchronous technology. Global Mirror processing is most often associated with disaster recovery or disaster recovery testing. However, it can also be used for everyday processing and data migration.
The Global Mirror function mirrors data between volume pairs of two storage units over greater distances without affecting overall performance. It also provides application-consistent data at a recovery (or remote) site in case of a disaster at the local site. By creating a set of remote volumes every few seconds, the data at the remote site is maintained to be a point-in-time consistent copy of the data at the local site.
Global Mirror operations periodically invoke point-in-time FlashCopy operations at the recovery site, at regular intervals, without disrupting the I/O to the source volume, thus giving a continuous, near up-to-date data backup. By grouping many volumes into a session, which is managed by the master storage unit, you can copy multiple volumes to the recovery site simultaneously while maintaining point-in-time consistency across those volumes. (A session contains a group of source volumes that are mirrored asynchronously to provide a consistent copy of data at the remote site. Sessions are associated with Global Mirror relationships and are defined with an identifier [session ID] that is unique across the enterprise. The ID identifies the group of volumes in a session that are related and that can participate in the Global Mirror consistency group.)
Global Mirror has been enhanced to support up to 32 Global Mirror sessions per storage facility image. Previously, only one session was supported per storage facility image.
Multiple Global Mirror sessions allow you to fail over only the data that is assigned to one host or application instead of forcing you to fail over all data if one host or application fails. This provides increased flexibility to control the scope of a failover operation and to assign different options and attributes to each session.
The DS CLI and DS Storage Manager have been enhanced to display information about the sessions, including the copy state of the sessions.
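A Global Mirror session is typically built from existing Global Copy pairs and FlashCopy relationships at the remote site. The following DS CLI sketch shows the general shape of defining a session and starting Global Mirror; the LSS, session ID, and volume range are hypothetical, and the exact parameters should be verified against the IBM System Storage DS8000 Command-Line Interface User's Guide.

   dscli> mksession -dev IBM.2107-75FA120 -lss 10 -volume 1000-100F 01
   dscli> mkgmir -dev IBM.2107-75FA120 -lss 10 -session 01
   dscli> showgmir -dev IBM.2107-75FA120 10
   dscli> lssession -dev IBM.2107-75FA120 10

The showgmir and lssession commands display the state of the Global Mirror master and the copy state of the volumes in the session.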
Metro/Global Mirror
Provides a three-site, long distance disaster recovery replication that combines Metro Mirror with Global Mirror replication for both System z and open systems data. Metro/Global Mirror uses synchronous replication to mirror data between a local site and an intermediate site, and asynchronous replication to mirror data from an intermediate site to a remote site.
In a three-site Metro/Global Mirror configuration, if an outage occurs, a backup site is maintained regardless of which one of the sites is lost. For example, if an outage occurs at the local site, Global Mirror continues to mirror updates between the intermediate and remote sites, maintaining the recovery capability at the remote site. If an outage occurs at the intermediate site, data at the local storage unit is not affected. If an outage occurs at the remote site, data at the local and intermediate sites is not affected. Applications continue to run normally in all of these cases.
With the incremental resynchronization function enabled on a Metro/Global Mirror configuration, should the intermediate site be lost, the local and remote sites can be connected, and only a subset of changed data is copied between the volumes at the two sites. This reduces the amount of data that needs to be copied from the local site to the remote site and the time it takes to do the copy.
z/OS Global Mirror
In the event of workload peaks, which might temporarily overload the bandwidth of the Global Mirror configuration, the enhanced z/OS Global Mirror function initiates a Global Mirror suspension that preserves primary site application performance. If you are installing new high-performance z/OS Global Mirror primary storage subsystems, this function provides improved capacity and application performance during heavy write activity. This enhancement can also allow Global Mirror to be configured to tolerate longer periods of communication loss with the primary storage subsystems. This enables the Global Mirror to stay active despite transient channel path recovery events. In addition, this enhancement can provide fail-safe protection against application system impact related to unexpected data mover system events.
The z/OS Global Mirror function is an optional function. To use it, you must purchase the remote mirror for z/OS 242x indicator feature and 239x function authorization feature.
z/OS Metro/Global Mirror Incremental Resync
z/OS Metro/Global Mirror Incremental Resync is an enhancement for z/OS Metro/Global Mirror. z/OS Metro/Global Mirror Incremental Resync can eliminate the need for a full copy after a HyperSwap situation in 3-site z/OS Metro/Global Mirror configurations. The DS8000 series supports z/OS Metro/Global Mirror, which is a 3-site mirroring solution that uses IBM System Storage Metro Mirror and z/OS Global Mirror (XRC). The z/OS Metro/Global Mirror Incremental Resync capability is intended to enhance this solution by enabling resynchronization of data between sites using only the changed data from the Metro Mirror target to the z/OS Global Mirror target after a GDPS HyperSwap.
z/OS Global Mirror Multiple Reader (enhanced readers)
z/OS Global Mirror Multiple Reader provides multiple Storage Device Manager readers, allowing improved throughput for remote mirroring configurations in System z environments. z/OS Global Mirror Multiple Reader helps maintain constant data consistency between mirrored sites and promotes efficient recovery. This function is supported on the DS8000 series running in a System z environment with z/OS version 1.7 or later at no additional charge.
Interoperability with existing and previous generations of the DS8000 series
All of the remote mirroring solutions documented in the sections above use Fibre Channel as the communications link between the primary and secondary storage units. The Fibre Channel ports used for remote mirror and copy can be configured as either a dedicated remote mirror link or as a shared port between remote mirroring and Fibre Channel Protocol (FCP) data traffic.
The remote mirror and copy solutions are optional capabilities of the DS8800 Model 951 and are compatible with previous generations of DS8000. They are available as follows:
v Metro Mirror indicator feature numbers 75xx and 0744 and corresponding DS8000 series function authorization (2396-LFA MM feature numbers 75xx)
v Global Mirror indicator feature numbers 75xx and 0746 and corresponding DS8000 series function authorization (2396-LFA GM feature numbers 75xx)
The DS8000 series systems can also participate in Global Copy solutions with the IBM TotalStorage ESS Model 750, IBM TotalStorage ESS Model 800, and IBM System Storage DS6000 series systems for data migration. For more information on data migration and migration services, contact IBM or a Business Partner representative.
Global Copy is a non-synchronous long distance copy option for data migration and backup, and is available under Metro Mirror and Global Mirror licenses or Remote Mirror and Copy license on older DS8000, ESS, or DS6000 systems.

Thin provisioned and Global Mirror volume considerations

This section discusses thin provisioned and Global Mirror volume considerations.
The time it takes to recover from a consistent Global Mirror copy, after a planned or unplanned disaster recovery swap, depends on the configured size of volumes that need to be recovered using the FlashCopy Fast Reverse Restore (FRR) option. The larger the total logical size of volumes, the longer the total recovery time.
For thin provisioned volumes, the time to recover from a Global Mirror copy is related to the total logical size of the volumes, regardless of the actual provisioned space. For example, if the space over-provisioned is a ratio of 2 to 1, the recovery time is equivalent to the recovery for twice the configuration of fully provisioned volumes. The larger the logical configuration (regardless of how much storage is actually provisioned), the proportionally longer the recovery commands take to complete. This might cause the commands to timeout, causing failures in the recovery process.
To avoid this situation, the recovery commands you issue from the DS CLI should be limited to 200 TB of configured volume size recovered in a single command. In addition, issue the recovery commands serially. In a Tivoli Productivity Center for Replication (TPC-R) configuration, the timeout value of the recovery commands might need to be increased. In a Geographically Dispersed Parallel Sysplex (GDPS) environment (managing open systems devices), the overall time allowed for the recovery process might also need to be increased. Contact TPC-R or GDPS support personnel to increase timeout values in your environments.
Thin provisioned volumes in open-system environments are supported for Copy Services functions such as FlashCopy, Metro Mirror, Global Mirror, and Metro Global Mirror. In a Metro Mirror relationship, the A and B volumes must be extent space efficient (ESE) volumes. In a Global Mirror relationship, the Global Copy A and B volumes must also be ESE volumes. The FlashCopy target volume (Volume C) can be an ESE volume, target space efficient (TSE) volume, or standard volume.
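For example, extent space efficient volumes for use as Global Copy A and B volumes might be created with the -sam (storage allocation method) parameter of the DS CLI; the pool, capacity, and volume IDs shown are hypothetical, and the parameter names should be checked against the IBM System Storage DS8000 Command-Line Interface User's Guide.

   dscli> mkfbvol -dev IBM.2107-75FA120 -extpool P1 -cap 100 -sam ese 1000-1003

The same command with -sam tse would create track space efficient volumes, which in this configuration are suitable only as FlashCopy targets.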

Disaster recovery using Copy Services

One of the main reasons for using Copy Services functions is to prepare for a possible disaster by backing up, copying, and mirroring your data both at the local (production) and remote sites.
Having a disaster recovery plan can ensure that critical data is recoverable at the time of a disaster. Because most disasters are unplanned, your disaster recovery plan must provide a way that allows you to recover your applications quickly and, more importantly, to access your data. Data that is consistent to the same point in time across all storage units is vital before you can recover your data at a backup (normally your remote) site.
Most users use a combination of remote mirror and copy and point-in-time copy (FlashCopy) features to form a comprehensive enterprise solution for disaster recovery. In the event of a planned outage or an unplanned disaster, you can use failover and failback modes as part of your recovery solution. Failover and failback modes help to reduce the time that is required to synchronize remote mirror and copy volumes after you switch between the local (or production) site and the intermediate or remote sites during planned and unplanned outages. Although a failover transmits no data, it changes the status of a device: the secondary volume becomes a suspended primary volume. The failback command transmits data and can go in either direction, depending on the device to which the failback command is issued.
Recovery procedures that include failover and failback modes use remote mirror and copy functions, such as Metro Mirror, Global Copy, Global Mirror, Metro/Global Mirror, and FlashCopy.
Note: See the IBM System Storage DS8000 Command-Line Interface User's Guide for
specific disaster recovery tasks.
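As a sketch of how failover and failback are typically issued from the DS CLI at the recovery site (the device and volume IDs are hypothetical; see the Command-Line Interface User's Guide for the complete procedures):

   dscli> failoverpprc -dev IBM.2107-75FA150 -remotedev IBM.2107-75FA120 -type mmir 0100:0100
   dscli> failbackpprc -dev IBM.2107-75FA150 -remotedev IBM.2107-75FA120 -type mmir 0100:0100

The failoverpprc command is issued against the former secondary volumes to make them usable as suspended primaries; after the original site is repaired, failbackpprc resynchronizes the pairs by transmitting only the changed data.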
Data consistency can be achieved using the following methods:
Manually using external software (without Global Mirror)
If you use Metro Mirror, Global Copy, and FlashCopy functions to create a consistent and restartable copy at your recovery site, you must do a manual and periodic suspend operation at your local site. This means using freeze and run commands together with external automation software and then using the FlashCopy function to make a consistent copy of your target volume for backup or recovery purposes; a DS CLI sketch of this sequence follows the list. (Automation software is not provided with the storage unit; it must be supplied by the user.)
Note: Freezing of the data is done at the same point-in-time across all
links and all storage units.
Automatically (with Global Mirror and FlashCopy)
If you use a two-site Global Mirror or a three-site Metro/Global Mirror configuration, the process to create a consistent and restartable copy at your intermediate or remote site is done using an automated process, with minimal or no interruption to your applications. Global Mirror operations automate the process of continually forming consistency groups. It combines Global Copy and FlashCopy operations to provide consistent data at the remote site. A master storage unit (along with subordinate storage units) internally manages data consistency using consistency groups within a Global Mirror configuration. Consistency groups can be created many times per hour to increase the currency of data that is captured in the consistency groups at the remote site.
Note: A consistency group is a collection of volumes (grouped in a
session) across multiple storage units that are managed together in a session during the creation of consistent copies of data. The formation of these consistency groups is coordinated by the master storage unit, which sends commands over remote mirror and copy links to its subordinate storage units.
In a two-site Global Mirror configuration, if you have a disaster at your local site and have to start production at your remote site, you can use the consistent point-in-time data from the consistency group at your remote site to recover when the local site is operational.
In a three-site Metro/Global Mirror configuration, if you have a disaster at your local site and you must start production at either your intermediate or remote site, you can use the consistent point-in-time data from the consistency group at your remote site to recover when the local site is operational.
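For the manual method, the freeze and run sequence driven by automation software might look like the following DS CLI sketch; the LSS pair, device IDs, and FlashCopy volumes are hypothetical, and the exact commands and parameters should be confirmed in the IBM System Storage DS8000 Command-Line Interface User's Guide.

   dscli> freezepprc -dev IBM.2107-75FA120 -remotedev IBM.2107-75FA150 10:10
   dscli> unfreezepprc -dev IBM.2107-75FA120 -remotedev IBM.2107-75FA150 10:10
   dscli> mkflash -dev IBM.2107-75FA150 0100:0200
   dscli> resumepprc -dev IBM.2107-75FA120 -remotedev IBM.2107-75FA150 -type mmir 0100:0100

The freezepprc command suspends the Metro Mirror pairs in the specified LSS pair at a common point in time, unfreezepprc (the run function) releases application writes, the FlashCopy at the remote unit preserves the consistent target volumes, and resumepprc restarts the mirroring.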

Resource groups for Copy Services scope limiting

Resource groups are used to define a collection of resources and associate a set of policies relative to how the resources are configured and managed. You can define a network user account so that it has authority to manage a specific set of resource groups.
Copy Services scope limiting overview
Copy services scope limiting is the ability to specify policy-based limitations on Copy Services requests. With the combination of policy-based limitations and other inherent volume-addressing limitations, you can control which volumes can be in a Copy Services relationship, which network users or host LPARs can issue Copy Services requests on which resources, and other Copy Services operations.
Use these capabilities to separate and protect volumes in a Copy Services relationship from each other. This can assist you with multi-tenancy support by assigning specific resources to specific tenants, limiting Copy Services relationships so that they exist only between resources within each tenant's scope of resources, and limiting a tenant's Copy Services operators to an "operator only" role.
When managing a single-tenant installation, the partitioning capability of resource groups can be used to isolate various subsets of an environment as if they were separate tenants. For example, to separate mainframes from distributed system servers, Windows from UNIX, or accounting departments from telemarketing.
Using resource groups to limit Copy Service operations
Figure 15 illustrates one possible implementation of an exemplary environment that uses resource groups to limit Copy Services operations. Figure 15 shows two tenants (Client A and Client B) that are concurrently operating on shared hosts and storage systems.
Each tenant has its own assigned LPARs on these hosts and its own assigned volumes on the storage systems. For example, a user cannot copy a Client A volume to a Client B volume.
Resource groups are configured to ensure that one tenant cannot cause any Copy Services relationships to be initiated between its volumes and the volumes of another tenant. These controls must be set by an administrator as part of the configuration of the user accounts or access settings for the storage unit.
Figure 15. Implementation of multiple-client volume administration. The figure shows hosts with LPARs, switches, and storage systems at Site 1 and Site 2; Client A and Client B each have their own LPARs and their own volumes on the shared hosts and storage systems.
Resource groups functions provide additional policy-based limitations to DS8000 users, which in conjunction with the inherent volume addressing limitations support secure partitioning of Copy Services resources between user-defined partitions. The process of specifying the appropriate limitations is performed by an administrator using resource groups functions.
Note: User and administrator roles for resource groups are the same user and
administrator roles used for accessing the DS8000. For example, those roles include storage administrator, Copy Services operator, and physical operator.
The process of planning and designing the use of resource groups for Copy Services scope limiting can be complex. For more information on the rules and policies that must be considered in implementing resource groups, visit the IBM System Storage DS8000 Information Center, and select Overview > Resource Groups to display topics which provide more detail. For specific DS CLI commands used to implement resource groups, see the IBM System Storage DS8000 Command-Line Interface User's Guide.
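As an illustration only, defining a resource group and assigning volumes to it might look like the following sketch; the resource group ID, label, and volume range are hypothetical, and the actual command set and parameters must be taken from the IBM System Storage DS8000 Command-Line Interface User's Guide.

   dscli> mkresgrp -dev IBM.2107-75FA120 -label ClientA RG1
   dscli> chfbvol -dev IBM.2107-75FA120 -resgrp RG1 1000-10FF

User accounts for Client A are then given a matching resource scope so that their Copy Services requests are accepted only against resources in that resource group.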

Comparison of licensed functions

A key decision that you must make in planning for a disaster is deciding which licensed functions to use to best suit your environment.
Table 9 provides a brief summary of the characteristics of the Copy Services features that are available for the storage unit.
Table 9. Comparison of licensed functions

Metro/Global Mirror
   Description: Three-site, long-distance disaster recovery replication.
   Advantages: A backup site is maintained regardless of which one of the sites is lost.
   Considerations: The recovery point objective (RPO) might grow if bandwidth capability is exceeded.

Metro Mirror
   Description: Synchronous data copy at a distance.
   Advantages: No data loss; rapid recovery time for distances up to 300 km.
   Considerations: Slight performance impact.

Global Copy
   Description: Continuous copy without data consistency.
   Advantages: Nearly unlimited distance, suitable for data migration, limited only by the capabilities of the network and channel extenders.
   Considerations: The copy is normally fuzzy but can be made consistent through synchronization.

Global Mirror
   Description: Asynchronous copy.
   Advantages: Nearly unlimited distance, scalable, and low RPO. (The RPO is the amount of data, measured in time, that might be lost in a disaster.)
   Considerations: The RPO might grow when link bandwidth capability is exceeded.

z/OS Global Mirror
   Description: Asynchronous copy controlled by System z host software.
   Advantages: Nearly unlimited distance, highly scalable, and very low RPO.
   Considerations: Additional host server hardware and software is required. The RPO might grow if bandwidth capability is exceeded, or host performance might be impacted.

Logical configuration overview

Before you configure your DS8000, it is important to understand IBM terminology for storage concepts and the storage hierarchy.
In the storage hierarchy, you begin with a physical disk. Logical groupings of eight disks form an array site. Logical groupings of one array site form an array. After you define your array storage type as CKD or fixed block, you can create a rank. A rank is divided into a number of fixed-size extents. If you work with an open-systems host, an extent is 1 GB. If you work in an IBM System z environment, an extent is the size of an IBM 3390 Mod 1 disk drive.
After you create ranks, your physical storage can be considered virtualized. Virtualization dissociates your physical storage configuration from your logical configuration, so that volume sizes are no longer constrained by the physical size of your arrays.
The available space on each rank is divided into extents. The extents are the building blocks of the logical volumes. An extent is striped across all disks of an array.
Extents of the same storage type are grouped together to form an extent pool. Multiple extent pools can create storage classes that provide greater flexibility in storage allocation through a combination of RAID types, DDM size, DDM speed, and DDM technology. This allows a differentiation of logical volumes by assigning them to the appropriate extent pool for the needed characteristics. Different extent sizes for the same device type (for example, count-key-data or fixed block) can be supported on the same storage unit, but these different extent types must be in different extent pools.
A logical volume is composed of one or more extents. A volume group specifies a set of logical volumes. By identifying different volume groups for different uses or functions (for example, SCSI target, FICON/ESCON control unit, remote mirror and copy secondary volumes, FlashCopy targets, and Copy Services), access to the set of logical volumes that are identified by the volume group can be controlled. Volume groups map hosts to volumes. Figure 16 on page 69 shows a graphic representation of the logical configuration sequence.
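As a sketch of this sequence using the DS CLI (the array site, pool, volume, and host identifiers are hypothetical; see the IBM System Storage DS8000 Command-Line Interface User's Guide for the exact syntax and required parameters):

   dscli> mkextpool -dev IBM.2107-75FA120 -rankgrp 0 -stgtype fb open_pool_0
   dscli> mkarray -dev IBM.2107-75FA120 -raidtype 5 -arsite S1
   dscli> mkrank -dev IBM.2107-75FA120 -array A0 -stgtype fb -extpool P0
   dscli> mkfbvol -dev IBM.2107-75FA120 -extpool P0 -cap 100 1000-1003
   dscli> mkvolgrp -dev IBM.2107-75FA120 -type scsimask -volume 1000-1003 host1_vg
   dscli> mkhostconnect -dev IBM.2107-75FA120 -wwname 10000000C9535D2F -volgrp V10 host1_hba0

Each step builds on the previous one: the array is created from an array site, the rank is assigned to an extent pool, volumes are carved from the pool's extents, and the volume group maps those volumes to the host connection.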
When volumes are created, you must initialize logical tracks from the host before the host is allowed read and write access to the logical tracks on the volumes. With the Quick Initialization feature for CKD TSE volumes and for open-systems FB ESE or TSE volumes, an internal volume initialization process allows quicker access to logical volumes that are used as host volumes and source volumes in Copy Services relationships, such as FlashCopy or Remote Mirror and Copy relationships. This process dynamically initializes logical volumes when they are created or expanded, allowing them to be configured and placed online more quickly.
You can now specify LUN ID numbers through the graphical user interface (GUI) for volumes in a map-type volume group. Do this when you create a new volume group, add volumes to an existing volume group, or add a volume group to a new or existing host. Previously, gaps or holes in LUN ID numbers could result in a "map error" status. The Status field is eliminated from the Volume Groups main page in the GUI and the Volume Groups accessed table on the Manage Host Connections page. You can also assign host connection nicknames and host port nicknames. Host connection nicknames can be up to 28 characters, which is expanded from the previous maximum of 12. Host port nicknames can be 32 characters, which is expanded from the previous maximum of 16.
Figure 16. Logical configuration sequence. The figure shows the virtualization sequence from disk to array site, array, rank, and extents (a CKD extent is the size of an IBM 3390 Mod 1 in System z environments; an FB extent is 1 GB for open-systems hosts), and then from extent pool to logical volume and volume group, where volume groups map hosts to volumes.
The storage management software can be used in real-time mode. When you are connected to storage devices over your network, you can use the Real-time Manager to manage your hardware or configure your storage.

I/O Priority Manager

The performance group attribute associates the logical volume with a performance group object. Each performance group has an associated performance policy which determines how the I/O Priority Manager processes I/O operations for the logical volume.
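For example, a volume might be assigned to a performance group when it is created, or moved to a different performance group later, with DS CLI parameters along these lines (the pool, volume IDs, and performance group names are hypothetical; verify the parameters against the Command-Line Interface User's Guide):

   dscli> mkfbvol -dev IBM.2107-75FA120 -extpool P0 -cap 100 -perfgrp pg1 1100-1103
   dscli> chfbvol -dev IBM.2107-75FA120 -perfgrp pg6 1100
   dscli> lsperfgrprpt -dev IBM.2107-75FA120 pg1

The lsperfgrprpt command is intended to query the statistics that the I/O Priority Manager maintains for a performance group, as described in the next paragraph.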
The I/O Priority Manager maintains statistics for the set of logical volumes in each performance group, and these statistics can be queried. If management is performed for the performance policy, the I/O Priority Manager controls the I/O operations of all managed performance groups to achieve the goals of the associated performance policies. The performance group defaults to 0 if it is not specified. Table 10 lists the predefined performance groups and their associated performance policies.

Table 10. Performance groups and policies

Performance group    Performance policy    Performance policy description
0                    0                     No management
1-5                  1                     Fixed block high priority
6-10                 2                     Fixed block medium priority
11-15                3                     Fixed block low priority

Note: Performance group settings can be managed using the DS CLI or the DS Storage Manager.

Encryption

The DS8000 series supports data encryption through the use of the IBM Full Disk Encryption feature and IBM Tivoli Key Lifecycle Manager.
Encryption technology has a number of considerations that are critical to understand to maintain the security and accessibility of encrypted data. This section contains the key information that you have to know to manage IBM encrypted storage and to comply with IBM requirements for using IBM encrypted storage.
Failure to follow these requirements can result in a permanent encryption deadlock, which can result in the permanent loss of all key-server-managed encrypted data at all of your installations.

Encryption concepts

Encryption is the process of transforming data into an unintelligible form in such a way that the original data either cannot be obtained or can be obtained only by using a decryption process.
Data that is encrypted is referred to as ciphertext. Data that is not encrypted is referred to as plaintext. Data that is encrypted into ciphertext is considered secure from anyone who does not have the decryption key.
The following encryption algorithms exist:
Symmetric encryption algorithm
A common key is used to both encrypt and decrypt data. Therefore, the encryption key can be calculated from the decryption key and the decryption key can be calculated from the encryption key.
Asymmetric encryption algorithm
Two keys are used to encrypt and decrypt data: a public key that is known to everyone and a private key that is known only to the receiver or sender of the message. The public and private keys are related in such a way that only the public key can be used to encrypt messages and only the corresponding private key can be used to decrypt them.
The following characteristics of encryption create special considerations:
Security exposure
Occurs when an unauthorized person has access to the plaintext encryption key and the ciphertext.
Data loss
Occurs if all copies of the decryption key are lost. If you lose the decryption key, you cannot decrypt the associated ciphertext. The data that is contained in the ciphertext is considered cryptographically erased. If the only copies of data are cryptographically erased ciphertext, access to that data is permanently lost.
To preserve the security of encryption keys, many implementation techniques can be used to ensure the following conditions:
v No one individual has access to all the information that is necessary to determine an encryption key.
– If only the symmetric encryption algorithm is used, manage encryption keys so that the data key that is used to encrypt and decrypt data is itself encrypted, or wrapped, with a wrapping key that is used to encrypt and decrypt data keys. To decrypt the ciphertext in this case, the wrapping key is first used to decrypt the ciphertext data key and obtain the plaintext data key, which is then used to decrypt the ciphertext and obtain the plaintext. If one unit stores the wrapping keys and a second unit stores the encrypted data key, then neither unit alone has sufficient information to determine the plaintext data key. Similarly, if a person obtains access to the information that is stored on either unit, but not both units, there is not sufficient information to determine the plaintext data key. The unit that stores the wrapping keys is referred to as a key server, and the unit that stores or has access to the encrypted data keys is referred to as a storage device. A key server is a product that works with the encrypting storage device to resolve most of the security and usability issues that are associated with the key management of encrypted storage. However, even with a key server, there is at least one encryption key that must be maintained manually, for example, the overall key that manages access to all other encryption keys.
v More than one individual has access to any single piece of information that is required to determine an encryption key. For redundancy, you can take the following actions:
– Use multiple independent key servers that have multiple independent communication paths to the encrypting storage devices.
– Maintain backups of the data on each key server. If you maintain backups, the failure of any one key server or any one network does not prevent storage devices from obtaining access to the data keys that are required to provide access to data.
– Keep multiple copies of the encrypted data key.

Tivoli Key Lifecycle Manager

The DS8000 supports data encryption with the use of Tivoli Key Lifecycle Manager and the IBM Full Disk Encryption feature.
The IBM Tivoli Key Lifecycle Manager implements a key server application and integrates with certain IBM storage products. It is software developed by IBM for managing keys securely for encrypting hardware devices such as disk and tape.
The Tivoli Key Lifecycle Manager server is available as a DS8000 hardware feature code 1760. This feature provides the Tivoli Key Lifecycle Manager server that is required for use with the Tivoli Key Lifecycle Manager software. For more information, see “IBM Tivoli Key Lifecycle Manager server” on page 73.
The Tivoli Key Lifecycle Manager can be installed on a set of servers to implement a set of redundant key servers. Encryption capable storage devices that require key services from the key server are configured to communicate with one or more key servers and the key servers are configured to define the devices to which they are allowed to communicate.
The Tivoli Key Lifecycle Manager supports two key serving methods. The method that is used by the DS8000 is referred to as the wrapped key method. In the wrapped key method, the configuration processes on the Tivoli Key Lifecycle Manager and storage device define one or more key labels. A key label is a user-specified text string that is associated with the asymmetric key pair that Tivoli Key Lifecycle Manager generates when the key label is configured. In the wrapped key method, there are basically two functions that an encryption capable storage device can initiate to a Tivoli Key Lifecycle Manager key server:
Request a new data key
The storage device requests a new data key for one or two specified key labels. The Tivoli Key Lifecycle Manager key server provides one or two properly generated data keys to the storage device in two forms:
Externally Encrypted Data Key
Tivoli Key Lifecycle Manager maintains a public and private key pair for each key label. Tivoli Key Lifecycle Manager keeps the private key a secret. The data key is wrapped with the key label public key and is stored in a structure that is referred to as the externally encrypted data key (EEDK). This structure also contains sufficient information to determine the key label associated with the EEDK. One EEDK is sent for each key label.
Session Encrypted Data Key
The storage device generates a public and private key pair for communicating with the Tivoli Key Lifecycle Manager and provides the public key to the Tivoli Key Lifecycle Manager. The storage device keeps the private key a secret. The data key is wrapped with the public key of the storage device and is stored in a structure called the session encrypted data key (SEDK).
Each EEDK is persistently stored by the storage device for future use. The SEDK is decrypted by the storage device using the private key of the storage device to obtain the data key. The data key is then used to symmetrically encrypt and decrypt either data or the other subordinate data keys that are required to encrypt, decrypt, or gain access to the data.
Unwrap an existing data key
The storage device requests that Tivoli Key Lifecycle Manager unwrap an existing wrapped data key by sending the request to the Tivoli Key Lifecycle Manager instance with all of the EEDKs and the public key of the storage device. The Tivoli Key Lifecycle Manager key server receives each EEDK, unwraps the data key with the private key for the key label to obtain the data key, wraps the data key with the storage device public key to create an SEDK, and returns an SEDK to the storage device.
The storage device does not maintain a persistent copy of the data key. Therefore, the storage device must access the Tivoli Key Lifecycle Manager to encrypt or decrypt data. Different key life cycles are appropriate for different types of storage devices. For example, the EEDKs for a removable media device might be stored on the media when it is initially written and the data key removed from immediate access when the media is dismounted such that each time the media is remounted, the storage device must communicate with the Tivoli Key Lifecycle Manager to obtain the data key. The EEDKs for a nonremovable storage device might be stored as persistent metadata within the storage device. The data key can become inaccessible when the storage device is powered off. Each time the storage device is powered on, it must communicate with the Tivoli Key Lifecycle Manager to obtain the data key. When the wrapped key model is used, access to data that is encrypted with a data key requires access to both the EEDKs and the Tivoli Key Lifecycle Manager with the private key that is required to decrypt the EEDKs to obtain the data key.
Note: On zSeries platforms, the length of the key labels is limited to 32 characters when the Tivoli Key Lifecycle Manager is configured to use a RACF-based key method (either JCERACFKS or JCECCARACFKS). You must limit key labels to 32 characters on those key servers and on storage devices that must interoperate or share keys with zSeries key servers that use RACF-based key methods.
IBM Tivoli Key Lifecycle Manager server
The IBM Tivoli Key Lifecycle Manager (TKLM) server is available with feature code 1760. A TKLM license is required for use with the TKLM software. The software is purchased separately from the TKLM isolated server hardware.
The TKLM server runs on the Linux operating system (SUSE Linux Enterprise Server 10 Service Pack 3). You must register for Linux support with Novell; go to support.novell.com/contact/getsupport.html. Contact Novell directly for all Linux-related problems.
The TKLM server consists of software and hardware:
Hardware
The TKLM server hardware is a specially configured xSeries server that is incorporated into the DS8000 as hardware feature code 1760. For hardware-related problems, contact IBM hardware support for assistance. Be prepared to provide the correct DS8000 machine type and serial number of which feature code 1760 is a part.
Software
The TKLM server includes licensed TKLM software, which you order separately. For TKLM-related problems, contact IBM software support. Be prepared to provide the software product identification (PID) when you call for assistance.
TKLM installation
The TKLM server is installed and configured by the IBM Lab Services group. After the TKLM server is configured, each installation receives the key settings and parameters and a copy of the configuration, along with recovery instructions. Should a server be replaced, you must reload the TKLM code and restore the TKLM configuration.
Before installing the TKLM server, you must configure the host name of the server on which TKLM is being installed. Ensure that you note and keep the host name of the server, because the host name cannot be changed after configuration. (The DB2 database that is used by the TKLM server does not function without correct host names.) An incorrect or missing host name can potentially result in a temporary loss of access to the storage device that is being managed by the TKLM server.
If you are installing the TKLM server from the DVD that is included with your product, you can store the contents of the DVD in a temporary directory by using the following steps:
1. Right-click the DVD content diagram that displays after you insert the DVD into the media drive.
2. Select the Extract to option and navigate to the temporary directory where you want to store the DVD contents.
Note: For information on installation procedures (including post-installation steps) for the TKLM, see the IBM Tivoli Key Lifecycle Manager Installation and Configuration Guide, which is available at http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp.
After the installation of the application is complete, ensure that you set the TKLM application to auto start in case of a power outage at your facility. By doing so, you can significantly reduce the time it takes to recover from data loss caused by a power outage.

IBM Security Key Lifecycle Manager for z/OS

IBM Security Key Lifecycle Manager for z/OS generates encryption keys and manages their transfer to and from devices in a System z environment.
IBM Security Key Lifecycle Manager for z/OS is supported on the DS8000. Some of the benefits include, but are not limited to:
v Helps reduce the cost of lost data
v Enhances data security while dramatically reducing the number of encryption keys to be managed
v Centralizes and automates the encryption key management process
v Integrates with IBM self-encrypting storage devices to provide creation and protection of keys to storage devices
For more information, see http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/index.jsp?topic=/com.ibm.tivoli.isklm.doc_11/ic-homepage.html.

DS8000 disk encryption

The DS8000 supports data encryption with the IBM Full Disk Encryption drives.
The IBM Full Disk Encryption feature is available on the DS8700 and DS8800. Recovery key and dual key server platform support is available on the DS8700 and DS8800.
The DS8800 allows the installation of the following encrypted SAS drives with key management services supported by Tivoli Key Lifecycle Manager (TKLM) software:
v 450 GB, 10,000 RPM
v 600 GB, 10,000 RPM
v 900 GB, 10,000 RPM
v 3 TB, 7,200 RPM
The IBM Full Disk Encryption disk drive sets are optional to the DS8000 series.
Encryption drive set support must be ordered using feature number 1751.
For the DS8700, enterprise-class disks are available in 300 GB or 450 GB capacities and with 15K RPM speed. These drives contain encryption hardware and can perform symmetric encryption and decryption of data at full disk speed with no impact on performance.
To use data encryption, a DS8000 must be ordered from the factory with all IBM Full Disk Encryption drives. At this time, DS8000 does not support intermix of FDE and non-FDE drives so additional drives added to a DS8000 must be consistent with the drives that are already installed. DS8000 systems with IBM Full Disk Encryption drives are referred to as being encryption-capable. Each storage facility image (SFI) on an encryption-capable DS8000 can be configured to either enable or disable encryption for all data that is stored on your disks. To enable encryption, the DS8000 must be configured to communicate with two or more Tivoli Key Lifecycle Manager key servers. The physical connection between the DS8000 HMC and the key server is through a TCP/IP network.
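As an illustration only, defining the two key servers to the DS8000 from the DS CLI might take a shape like the following; the IP addresses are hypothetical, port 3801 is the typical Tivoli Key Lifecycle Manager key-serving port, and the exact commands and parameter names for defining key servers and creating the encryption group must be taken from the IBM System Storage DS8000 Command-Line Interface User's Guide.

   dscli> mkkeymgr -dev IBM.2107-75FA120 -addr 9.11.22.33 -port 3801 1
   dscli> mkkeymgr -dev IBM.2107-75FA120 -addr 9.11.22.34 -port 3801 2
   dscli> lskeymgr -dev IBM.2107-75FA120

After both key servers are defined and reachable, the encryption group is created and encryption can be enabled before any ranks are configured.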
Each IBM Full Disk Encryption drive has an encryption key for the region of the disk that contains data. When the data region is locked, the encryption key for the region is wrapped with an access credential and stored on the disk media. Read and write access to the data on a locked region is blocked following a power loss until the initiator that is accessing the drive authenticates with the currently active access credential. When the data region is unlocked, the encryption key for the region is wrapped with the unique data key that is assigned to this particular disk and stored on the disk media. This data key is accessible to the device and to any initiator that is attached. The data key is visible on any external device labeling. Read and write access to the data on an unlocked region does not require an access credential or any interface protocols that are not used on a non-IBM Full Disk Encryption drive. IBM Full Disk Encryption drives still encrypt and decrypt data with an encryption key. However, the encryption and decryption is done transparently to the initiator.
For DS8000, the IBM Full Disk Encryption drive that is a member of an encryption-enabled rank is locked. An IBM Full Disk Encryption drive that is not assigned, a spare, or a member of an encryption-disabled rank is unlocked. Locking occurs when an IBM Full Disk Encryption drive is added to an encryption-enabled rank. Unlocking occurs when an encryption-enabled rank is deleted or when an encryption-enabled rank member becomes a spare. Unlocking implies a cryptographic erasure of an IBM Full Disk Encryption drive. IBM Full Disk Encryption drives are also cryptographically erased when an encryption-disabled rank is deleted. You can cryptographically erase data for a set of logical volumes in an encryption-capable extent pool by deleting all of the ranks that are associated with the extent pool.
IBM Full Disk Encryption drives are not cryptographically erased when the disk fails. In this case, there is no guarantee that the device-adapter intentionally fences the failing drive from the device interface as soon as possible to prevent it from causing any other problems on the interface.
A unique access credential for each locked drive in the SFI is derived from one data key that it obtains from the Tivoli Key Lifecycle Manager key server. The DS8000 stores multiple independent copies of the EEDK persistently and it must be able to communicate with a Tivoli Key Lifecycle Manager key server after a power on to allow access to the disks that have encryption enabled.
In the current implementation of an encryption-capable DS8000, data is persistently stored in one of the following places:
On your disks
Data on your disks (for example, DDM installed through DDM Install Group features) that are members of an encryption-enabled rank is managed through a data key obtained from the Tivoli Key Lifecycle Manager key server. The data is encrypted with an encryption key that is managed through an externally encrypted key. The data on disks that are members of a rank that is not encryption-enabled is encrypted with an encryption key that is encrypted with a derived key and stored on the disk. Therefore, this data is obfuscated.
NVS dump data on system disks
If you start a force power off sequence, write data in flight in the NVS memory is encrypted with an encryption key and stored on the system disk in the DS8000. The data is limited to 8 GB. The encryption key is encrypted with a derived key and stored on the system disk; hence, the NVS data is obfuscated. The data on the system disk is cryptographically erased after power is restored and after the data is restored to the NVS memory during the initial microcode load.
Atomic-parity update (APU) dump data in device flash memories
If a force power off sequence is initiated, atomic-parity write data in flight within the device adapter memory for RAID 6 arrays is encrypted with an encryption key. The data is stored in flash memory on the device adapter card in the DS8000 system, and is limited to 32 MB per device adapter or 512 MB per storage facility.
For version 6, release 1 and later, the encryption key to unlock the APU data in compact flash is a randomly generated AES-256 key, which is stored externally to each individual device adapter, and encrypted at the FRU level.
Note: The power off requests that are issued through the DS8000 Storage Manager,
the command-line interface or through the IBM System z power control interfaces do not start a force power off sequence. Activation of the Force Power Off service switch or loss of AC power does start a force power off sequence.
Recovery key configuration operations
A storage administrator must start the process to configure a recovery key for the DS8000 SFI before an encryption group is created. Each configured encryption group has an associated recovery key. You can use the recovery key to access data from an encryption group that is in a configured-inaccessible state when access to the encryption group data key through any key server is not possible.
The security administrator receives a 256-bit key that is generated by the SFI during the configuration process and must securely maintain it for future use if an encryption deadlock occurs. The SFI does not maintain a copy of the recovery key. The storage administrator must then approve the recovery key configuration request for it to become active. During the configuration process, the following steps take place (an illustrative DS CLI sketch follows the steps):
1. The security administrator initiates the configure recovery key function.
2. The SFI generates a recovery key and generates a secure hash of the recovery
key producing the recovery key signature.
3. The SFI generates a random key pair (the private key is referred to as the primary recovery key and the public key is referred to as the secondary recovery key).
4. The SFI stores the encrypted primary recovery key, secondary recovery key, and recovery key signature for future use. The encrypted primary recovery key and secondary recovery key are stored in multiple places for reliability.
5. The SFI provides the recovery key to the security administrator.
6. The SFI sets the primary recovery key and recovery key to zero, puts the
recovery key in the verify-pending state, and completes the configure recovery key function successfully.
7. The security administrator initiates the verify recovery key function and inputs the recovery key.
8. The storage administrator initiates the authorize recovery key function.
9. The storage facility image puts the recovery key in the configured state and
completes the authorize recovery key function successfully.
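A sketch of this exchange from the DS CLI, assuming the managereckey command and -action values as typically documented (the storage image ID and key group are hypothetical, and the exact actions and parameters must be verified in the IBM System Storage DS8000 Command-Line Interface User's Guide):

   dscli> managereckey -dev IBM.2107-75FA120 -action enable -keygrp 1
   dscli> managereckey -dev IBM.2107-75FA120 -action verify -keygrp 1 -key <recovery_key>
   dscli> managereckey -dev IBM.2107-75FA120 -action authorize -keygrp 1

The first two commands are issued by the security administrator to request and then verify the recovery key; the final command is issued by the storage administrator to authorize it, which places the recovery key in the configured state.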
Within a secure key environment, you might choose to disable the recovery key rather than to configure one. While disabling the recovery key increases the security of the encrypted data in the DS8000, it also increases the risk of encryption deadlock, described under “Encryption deadlock” on page 78.
If you choose to disable the recovery key, you are highly encouraged to strictly follow the guidelines included in “Encryption deadlock prevention” on page 80. Failure to do so might result in permanent loss of all your encrypted data managed by key servers, if an encryption deadlock occurs.
The state of the recovery key must be "Unconfigured" before the recovery key can be disabled. The following steps describe the process of disabling the recovery key:
1. The security administrator requests that the recovery key be disabled. This action changes the recovery key state from "Unconfigured" to "Disable Authorize Pending."
2. The storage administrator authorizes the recovery key disablement. This action changes the recovery key state from "Disable Authorize Pending" to "Disabled."
Each encryption group configured has its own recovery key that might be configured or disabled. The current DS8000 implementation supports a single encryption group and a single recovery key.
It is possible to re-enable the recovery key of an encryption group after the encryption group is in the unconfigured state; this implies that the encrypted volumes, ranks, and extent pools must first be deleted. The following steps describe the process of enabling the recovery key:
1. The security administrator requests that the recovery key be enabled. This action changes the recovery key state from "Disabled" to "Enable Authorize Pending."
2. The storage administrator authorizes the recovery key enablement. This action changes the recovery key state from "Enable Authorize Pending" to "Unconfigured."
3. Normal recovery key configuration steps are followed to configure the recovery key prior to encryption group creation.

Encryption deadlock

An encryption deadlock occurs when all key servers that are within an account cannot become operational because some part of the data in each key server is stored on an encrypting device that is dependent on one of these key servers to access the data.
The key server provides an operating environment for the key server application to run in, to access its keystore on persistent storage, and to interface with client storage devices that require key server services. The keystore data is accessed by the key server application by using your specified password. The keystore data is encrypted independently of where it is stored. However, any online data that is required to initiate the key server cannot be stored on storage that has a dependency on the key server to enable access. If this constraint is not met, the key server cannot perform an initial program load (IPL) and therefore cannot become operational. This data includes the boot image for the operating system that runs on the key server as well as any data that is required by that operating system and its associated software stack to run the key server application, to allow it to access its keystore and to allow the key server to communicate with its storage device clients. Similarly, any backups of the key server environment and data must not be stored on storage that has a dependency on a key server to restore or access the backup data.
While an encryption deadlock exists, you cannot access any encrypted data that is managed by the key servers. If all backups of the keystore are also stored on encrypting storage that is dependent on a key server, and you do not have the recovery keys that would unlock the storage devices, the encryption deadlock can become a permanent encryption deadlock such that all encrypted data that is managed by the key servers is permanently lost.
Note: To avoid encryption deadlock situations, ensure that you follow the
guidelines outlined in “Encryption deadlock prevention” on page 80.
With encryption-capable disks, the probability of an encryption deadlock increases significantly because of the following factors:
v There are a number of layers of virtualization in the I/O stack hierarchy that
make it difficult for you to determine where all the files that are necessary to make the key server and its associated keystore available are stored. The key server can access its data through a database that runs on a file system on a logical volume manager which communicates with a storage subsystem that provisions logical volumes with capacity that is obtained from other subordinate storage arrays. The data that is required by the key server might end up provisioned over various storage devices, each of which might be independently encryption-capable or encryption-enabled.
v Various layers within this I/O stack hierarchy can provide transparent data
relocation either autonomically or because of user-initiated operations.
v As the availability of encryption-capable devices becomes more pervasive, more
data is migrated from non-encrypted storage to encrypted storage. Even if the key servers are initially configured correctly, it is possible that a storage