IBM DS8880 Introduction And Planning Manual

IBM DS8880
Version 8 Release 5
Introduction and Planning Guide
IBM
GC27-8525-16
Note
Before using this information and the product it supports, read the information in “Safety and environmental notices” on page 213 and “Notices” on page 211.
This edition replaces GC27-8525-15.
© Copyright IBM Corporation 2004, 2018.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

About this book ........... v
Who should use this book .......... v
Conventions and terminology ........ v
Publications and related information ...... v
IBM Publications Center .......... ix
Sending comments ............ x
Summary of changes......... xi
Chapter 1. Overview ......... 1
Machine types overview .......... 4
Hardware ............... 5
System types ............. 6
Storage enclosures ........... 21
Management console .......... 22
Ethernet switches ........... 22
Processor nodes ............ 22
I/O enclosures ............ 22
Power ............... 23
Functional overview ........... 23
Logical configuration ........... 27
Logical configuration with DS8000 Storage
Management GUI ........... 27
Logical configuration with DS CLI...... 29
RAID implementation .......... 31
Logical subsystems ........... 33
Allocation methods........... 33
Management interfaces .......... 35
DS8000 Storage Management GUI ...... 35
DS command-line interface ........ 35
DS Open Application Programming Interface .. 36
RESTful API ............. 36
IBM Spectrum Control.......... 37
IBM Copy Services Manager........ 37
DS8000 Storage Management GUI supported web
browsers ............... 38
Chapter 2. Hardware features ..... 41
Storage complexes ............ 46
Management console ........... 46
Hardware specifics ............ 46
Storage system structure ......... 46
Disk drives and flash drives ........ 47
Drive maintenance policy......... 47
Host attachment overview ........ 48
I/O load balancing ............ 50
Storage consolidation ........... 50
Count key data ............. 51
Fixed block .............. 51
T10 DIF support ............ 51
Logical volumes ............. 52
Allocation, deletion, and modification of volumes 52
LUN calculation ............. 53
Extended address volumes for CKD ...... 54
Quick initialization ............ 55
Chapter 3. Data management features 57
Transparent cloud tiering .......... 57
Dynamic volume expansion ......... 59
Count key data and fixed block volume deletion
prevention............... 59
Thin provisioning ............ 59
Extent Space Efficient (ESE) capacity controls for
thin provisioning ........... 60
IBM Easy Tier ............. 61
VMware vStorage API for Array Integration support 64
Performance for IBM Z .......... 66
Copy Services ............. 67
Disaster recovery through Copy Services ... 76
Resource groups for Copy Services scope limiting 77
Comparison of Copy Services features ..... 79
I/O Priority Manager ........... 80
Securing data.............. 81
Chapter 4. Planning the physical
configuration ............ 83
Configuration controls........... 83
Determining physical configuration features ... 83
Management console features ........ 84
Primary and secondary management consoles .. 84
Configuration rules for management consoles .. 85
Storage features ............. 85
Storage enclosures and drives ....... 85
Storage-enclosure fillers ......... 88
Device adapters and flash RAID adapters ... 89
Drive cables ............. 89
Configuration rules for storage features .... 91
Physical and effective capacity ....... 93
I/O adapter features ........... 98
I/O enclosures ............ 99
Fibre Channel (SCSI-FCP and FICON) host
adapters and cables........... 99
zHyperLink adapters and cables ...... 101
Feature codes for Transparent cloud tiering
adapters .............. 102
Feature codes for flash RAID adapters .... 102
Configuration rules for I/O adapter features 103
Processor complex features ......... 108
Feature codes for processor licenses ..... 109
Processor memory features ......... 109
Feature codes for system memory ..... 109
Power features ............. 110
Power cords............. 110
Input voltage ............ 112
Direct-current uninterruptible-power supply .. 112
Configuration rules for power features .... 113
Other configuration features ........ 113
Extended power line disturbance ...... 113
BSMI certificate (Taiwan) ........ 113
Shipping weight reduction ........ 114
Chapter 5. Planning use of licensed functions ......... 115
Licensed function indicators ........ 115
License scope ............. 115
Ordering licensed functions......... 116
Rules for ordering licensed functions ..... 117
Base Function license ........... 118
Database Protection .......... 119
Encryption Authorization ........ 119
IBM Easy Tier ............ 120
I/O Priority Manager ......... 120
Operating environment license ...... 120
Thin provisioning ........... 120
z-synergy Services license ......... 121
High Performance FICON for z Systems ... 121
IBM HyperPAV............ 122
Parallel Access Volumes ......... 122
Transparent cloud tiering ........ 122
z/OS Distributed Data Backup ...... 122
zHyperLink ............. 123
Copy Services license........... 123
Remote mirror and copy functions ..... 124
FlashCopy function (point-in-time copy) ... 124
Safeguarded Copy ........... 124
z/OS Global Mirror .......... 125
z/OS Metro/Global Mirror Incremental Resync .. 125
Copy Services Manager on the Hardware Management Console license ........ 125
Chapter 6. Delivery and installation requirements ......... 127
Delivery requirements .......... 127
Acclimation ............. 127
Shipment weights and dimensions ..... 128
Receiving delivery........... 129
Installation site requirements ........ 130
Planning for floor and space requirements... 131
Planning for power requirements...... 155
Planning for environmental requirements ... 163
Planning for safety .......... 169
Planning for network and communications requirements ......... 169
Chapter 7. Planning your storage complex setup ......... 173
Company information .......... 173
Management console network settings ..... 173
Remote support settings .......... 174
Notification settings ........... 175
Power control settings .......... 175
Control switch settings .......... 175
Chapter 8. Planning data migration ...... 179
Selecting a data migration method ...... 180
Chapter 9. Planning for security ...... 183
Planning for data encryption ........ 183
Planning for encryption-key servers ..... 183
Planning for key lifecycle managers ..... 184
Planning for full-disk encryption activation .. 185
Planning for user accounts and passwords ... 185
Managing secure user accounts ...... 185
Managing secure service accounts ..... 186
Planning for NIST SP 800-131A security conformance ......... 186
Chapter 10. License activation and management ......... 189
Planning your licensed functions ....... 189
Activation of licensed functions ....... 190
Activating licensed functions ....... 190
Scenarios for managing licensing ....... 191
Adding storage to your machine ...... 191
Managing a licensed feature ....... 192
Appendix A. Accessibility features ...... 193
Appendix B. Warranty information ...... 195
Appendix C. IBM equipment and documents ......... 197
Installation components .......... 197
Customer components .......... 198
Service components ........... 198
Appendix D. DS8870 to DS8880 model conversion ......... 199
DS8870 to DS8880 model conversion summary .. 199
Checking your preparations ........ 201
Removing data, configuration, and encryption .. 202
Completing post-conversion tasks ...... 202
Appendix E. New features for models 980, 981, 982, 98B, 98E, and 98F ... 205
Support for High Performance Flash Enclosures Gen2 ......... 205
zHyperLink .............. 206
Transparent cloud tiering adapters ...... 207
Appendix F. Customization worksheets ......... 209
Notices .............. 211
Trademarks .............. 212
Homologation statement ......... 213
Safety and environmental notices....... 213
Safety notices and labels......... 213
Environmental notices ......... 222
Electromagnetic compatibility notices .... 222
Index ............... 227

About this book

This book describes how to plan for a new installation of DS8880. It includes information about planning requirements and considerations, customization guidance, and configuration worksheets.

Who should use this book

This book is intended for personnel who are involved in planning. Such personnel include IT facilities managers and individuals responsible for power, cooling, wiring, network, and general site environmental planning and setup.

Conventions and terminology

Different typefaces are used in this guide to show emphasis, and various notices are used to highlight key information.
The following typefaces are used to show emphasis:
Bold
   Text in bold represents menu items.
bold monospace
   Text in bold monospace represents command names.
Italics
   Text in italics is used to emphasize a word. In command syntax, it is used for variables for which you supply actual values, such as a default directory or the name of a system.
Monospace
   Text in monospace identifies the data or commands that you type, samples of command output, examples of program code or messages from the system, or names of command flags, parameters, arguments, and name-value pairs.
These notices are used to highlight key information:
Note
   These notices provide important tips, guidance, or advice.
Important
   These notices provide information or advice that might help you avoid inconvenient or difficult situations.
Attention
   These notices indicate possible damage to programs, devices, or data. An attention notice is placed before the instruction or situation in which damage can occur.

Publications and related information

Product guides, other IBM® publications, and websites contain information that relates to the IBM DS8000® series.
To view a PDF file, you need Adobe Reader. You can download it at no charge from the Adobe website (get.adobe.com/reader/).
Online documentation
The IBM DS8000 series online product documentation (http://www.ibm.com/support/knowledgecenter/ST5GLJ_8.1.0/com.ibm.storage.ssic.help.doc/f2c_securitybp.html) contains all of the information that is required to install, configure, and manage DS8000 storage systems. The online documentation is updated between product releases to provide the most current documentation.
Publications
You can order or download individual publications (including previous versions) that have an order number from the IBM Publications Center website (www.ibm.com/shop/publications/order/). Publications without an order number are available on the documentation CD or can be downloaded here.
Table 1. DS8000 series product publications

Title: DS8882F Introduction and Planning Guide
Description: This publication provides an overview of the DS8882F, the latest storage system in the DS8000 series. The DS8882F provides the new model 983. This publication provides an overview of the product and technical concepts for DS8882F.
Order numbers: V8.5.0 GC27-9259-00

Title: DS8880 Introduction and Planning Guide
Description: This publication provides an overview of the product and technical concepts for DS8880. It also describes the ordering features and how to plan for an installation and initial configuration of the storage system.
Order numbers: V8.5.0 GC27-8525-16, V8.4.0 GC27-8525-15, V8.3.3 GC27-8525-14, V8.3.1 GC27-8525-13, V8.3.0 GC27-8525-12, V8.2.3 GC27-8525-11, V8.2.1 GC27-8525-09, V8.2.0 GC27-8525-07, V8.1.1 GC27-8525-06, V8.1.0 GC27-8525-05, V8.0.1 GC27-8525-04, GC27-8525-03, V8.0.0 GC27-8525-02

Title: DS8870 Introduction and Planning Guide
Description: This publication provides an overview of the product and technical concepts for DS8870. It also describes the ordering features and how to plan for an installation and initial configuration of the storage system.
Order numbers: V7.5.0 GC27-4209-11, V7.4.0 GC27-4209-10, V7.3.0 GC27-4209-09, V7.2.0 GC27-4209-08, V7.1.0 GC27-4209-05, V7.0.0 GC27-4209-02

Title: DS8800 and DS8700 Introduction and Planning Guide
Description: This publication provides an overview of the product and technical concepts for DS8800 and DS8700. It also describes ordering features and how to plan for an installation and initial configuration of the storage system.
Order numbers: V6.3.0 GC27-2297-09, V6.2.0 GC27-2297-07

Title: Command-Line Interface User's Guide
Description: This publication describes how to use the DS8000 command-line interface (DS CLI) to manage DS8000 configuration and Copy Services relationships, and write customized scripts for a host system. It also includes a complete list of CLI commands with descriptions and example usage.
Order numbers: V8.5.0 SC27-8526-09, V8.3.3 SC27-8526-08, V8.3.1 SC27-8526-07, V8.3.0 SC27-8526-06, V8.2.3 SC27-8526-05, V8.2.2 SC27-8526-04, V8.2.0 SC27-8526-03, V8.1.1 SC27-8526-02, V8.1.0 SC27-8526-01, V8.0.0 SC27-8526-00, V7.5.0 GC27-4212-06, V7.4.0 GC27-4212-04, V7.3.0 GC27-4212-03, V7.2.0 GC27-4212-02, V7.1.0 GC27-4212-01, V7.0.0 GC27-4212-00, V6.3.0 GC53-1127-07

Title: Host Systems Attachment Guide
Description: This publication provides information about attaching hosts to the storage system. You can use various host attachments to consolidate storage capacity and workloads for open systems and IBM Z hosts.
Order numbers: V8.0.0 SC27-8527-00, V7.5.0 GC27-4210-04, V7.4.0 GC27-4210-03, V7.2.0 GC27-4210-02, V7.1.0 GC27-4210-01, V7.0.0 GC27-4210-00, V6.3.0 GC27-2298-02

Title: IBM Storage System Multipath Subsystem Device Driver User's Guide
Description: This publication provides information regarding the installation and use of the Subsystem Device Driver (SDD), Subsystem Device Driver Path Control Module (SDDPCM), and Subsystem Device Driver Device Specific Module (SDDDSM) on open systems hosts.
Order numbers: Download

Title: Application Programming Interface Reference
Description: This publication provides reference information for the DS8000 Open application programming interface (DS Open API) and instructions for installing the Common Information Model Agent, which implements the API.
Order numbers: V7.3.0 GC27-4211-03, V7.2.0 GC27-4211-02, V7.1.0 GC27-4211-01, V7.0.0 GC35-0516-10, V6.3.0 GC35-0516-10

Title: RESTful API Guide
Description: This publication provides an overview of the Representational State Transfer (RESTful) API, which provides a platform independent means by which to initiate create, read, update, and delete operations in the DS8000 and supporting storage devices.
Order numbers: V1.3 SC27-9235-00, V1.2 SC27-8502-02, V1.1 SC27-8502-01, V1.0 SC27-8502-00
Table 2. DS8000 series warranty, notices, and licensing publications

Title: Warranty Information for DS8000 series
Location: http://www.ibm.com/support/docview.wss?uid=ssg1S7005239

Title: IBM Safety Notices
Location: Search for G229-9054 on the IBM Publications Center website

Title: IBM Systems Environmental Notices
Location: http://ibm.co/1fBgWFI

Title: International Agreement for Acquisition of Software Maintenance (Not all software will offer Software Maintenance under this agreement.)
Location: http://ibm.co/1fBmKPz

Title: License Agreement for Machine Code
Location: http://ibm.co/1mNiW1U

Title: Other Internal Licensed Code
Location: http://ibm.co/1kvABXE

Title: International Program License Agreement and International License Agreement for Non-Warranted Programs
Location: http://www-03.ibm.com/software/sla/sladb.nsf/pdf/ipla/$file/ipla_en.pdf and http://www-304.ibm.com/jct03001c/software/sla/sladb.nsf/pdf/ilan/$file/ilan_en.pdf

See the Agreements and License Information CD that was included with the DS8000 series for the following documents:
v License Information
v Notices and Information
v Supplemental Notices and Information
Related websites
View the websites in the following table to get more information about DS8000 series.
Table 3. DS8000 series related websites

Title: IBM website (ibm.com®)
Description: Find more information about IBM products and services.

Title: IBM Support Portal website (www.ibm.com/storage/support)
Description: Find support-related information such as downloads, documentation, troubleshooting, and service requests and PMRs.

Title: IBM Directory of Worldwide Contacts website (www.ibm.com/planetwide)
Description: Find contact information for general inquiries, technical support, and hardware and software support by country.

Title: IBM DS8000 series website (www.ibm.com/servers/storage/disk/ds8000)
Description: Find product overviews, details, resources, and reviews for the DS8000 series.

Title: IBM Redbooks website (www.redbooks.ibm.com/)
Description: Find technical information developed and published by IBM International Technical Support Organization (ITSO).

Title: IBM System Storage® Interoperation Center (SSIC) website (www.ibm.com/systems/support/storage/config/ssic)
Description: Find information about host system models, operating systems, adapters, and switches that are supported by the DS8000 series.

Title: IBM Storage SAN (www.ibm.com/systems/storage/san)
Description: Find information about IBM SAN products and solutions, including SAN Fibre Channel switches.

Title: IBM Data storage feature activation (DSFA) website (www.ibm.com/storage/dsfa)
Description: Download licensed machine code (LMC) feature keys that you ordered for your DS8000 storage systems.

Title: IBM Fix Central (www-933.ibm.com/support/fixcentral)
Description: Download utilities such as the IBM Easy Tier® Heat Map Transfer utility and Storage Tier Advisor tool.

Title: IBM Java™ SE (JRE) (www.ibm.com/developerworks/java/jdk)
Description: Download IBM versions of the Java SE Runtime Environment (JRE), which is often required for IBM products.

Title: IBM Security Key Lifecycle Manager online product documentation (www.ibm.com/support/knowledgecenter/SSWPVP/)
Description: This online documentation provides information about IBM Security Key Lifecycle Manager, which you can use to manage encryption keys and certificates.

Title: IBM Spectrum Control™ online product documentation in IBM Knowledge Center (www.ibm.com/support/knowledgecenter)
Description: This online documentation provides information about IBM Spectrum Control, which you can use to centralize, automate, and simplify the management of complex and heterogeneous storage environments including DS8000 storage systems and other components of your data storage infrastructure.

Title: DS8700 Code Bundle Information website (www.ibm.com/support/docview.wss?uid=ssg1S1003593)
Description: Find information about code bundles for DS8700. See section 3 for web links to SDD information. The version of the currently active installed code bundle displays with the DS CLI ver command when you specify the -l parameter.

Title: DS8800 Code Bundle Information website (www.ibm.com/support/docview.wss?uid=ssg1S1003740)
Description: Find information about code bundles for DS8800. See section 3 for web links to SDD information. The version of the currently active installed code bundle displays with the DS CLI ver command when you specify the -l parameter.

Title: DS8870 Code Bundle Information website (www.ibm.com/support/docview.wss?uid=ssg1S1004204)
Description: Find information about code bundles for DS8870. See section 3 for web links to SDD information. The version of the currently active installed code bundle displays with the DS CLI ver command when you specify the -l parameter.

Title: DS8880 Code Bundle Information website (www.ibm.com/support/docview.wss?uid=ssg1S1005392)
Description: Find information about code bundles for DS8880. The version of the currently active installed code bundle displays with the DS CLI ver command when you specify the -l parameter.
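For example, you can display the currently active code bundle level from the DS CLI by running the ver command with the -l parameter. The following invocation is a minimal illustration only; the prompt and the exact fields in the output depend on the installed DS CLI and code level:

   dscli> ver -l

With -l, the output includes the version of the currently active installed code bundle in addition to the DS CLI version.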

IBM Publications Center

The IBM Publications Center is a worldwide central repository for IBM product publications and marketing material.
Procedure
The IBM Publications Center website (ibm.com/shop/publications/order) offers customized search functions to help you find the publications that you need. You can view or download publications at no charge.

Sending comments

Your feedback is important in helping to provide the most accurate and highest quality information.
Procedure
To submit any comments about this publication or any other IBM storage product documentation:
Send your comments by email to ibmkc@us.ibm.com. Be sure to include the following information:
v Exact publication title and version
v Publication form number (for example, GA32-1234-00)
v Page, table, or illustration numbers that you are commenting on
v A detailed description of any information that should be changed

Summary of changes

DS8000 Version 8, Release 5 introduces the following new features.
Version 8.5
This table provides the current technical changes and enhancements to the IBM DS8000 as of September 8, 2018. Changed and new information is indicated by a vertical bar (|) to the left of the change.
Note: The new system type DS8882F model 983 is covered in a separate publication. For information on the DS8882F model 983, refer to the DS8882F Introduction and Planning Guide (GC27-9259-00).
Function: Safeguarded Copy
Description: See “Safeguarded Copy” on page 124 for more information.

Function: Client-side encryption for transparent cloud tiering
Description: See “Transparent cloud tiering” on page 57 for more information.

Function: Releasing space on CKD volumes that use thin provisioning
Description: See “Thin provisioning” on page 59 for more information.

Function: Easy Tier enhancements
Description: Easy Tier infrastructure is enhanced to improve speed and efficiency. See “IBM Easy Tier” on page 61 for more information about Easy Tier.

Chapter 1. Overview

The IBM DS8880 is a high-performance, high-capacity storage system that supports continuous operation, data security, and data resiliency. For high availability, the hardware components are redundant.
DS8880 system types with frames add a base frame and optional expansion frames to the 283x machine type family and the 533x all-flash machine type family.
Note: The modular rack-mountable system DS8882F model 983 is covered in a separate publication. For information on the DS8882F model 983, refer to the DS8882F Introduction and Planning Guide (GC27-9259-00).
v The base frame contains the processor nodes, I/O enclosures, Ethernet switches,
and the Hardware Management Console (HMC), in addition to power and storage enclosures. The HMC is a small form factor computer and uses a keyboard and monitor that are stored in the base frame. An optional secondary HMC is also available in the base frame. A secondary HMC can provide high-availability, particularly for important processes such as encryption, Copy Services, and the HMC storage management functions.
v Depending on the system configuration, you can add up to four expansion
frames to the storage system. Only the first expansion frame contains I/O enclosures, which provide more host adapters, device adapters, and High Performance Flash Enclosure Gen2 flash RAID adapters.
The DS8880 features five system types with frames: DS8884, DS8884F, DS8886, DS8886F, and DS8888F. The DS8884 (machine type 283x models 984 and 84E) is an entry-level, high-performance storage system. The DS8884F (machine type 533x model 984) is an entry-level, high-performance storage system featuring all High Performance Flash Enclosures Gen2. The DS8886 is a high-density, high-performance storage system with either single-phase power (machine type 283x models 985 and 85E) or three-phase power (machine type 283x models 986 and 86E). The DS8886F is a high-density, high-performance storage system with either single-phase power (machine type 533x models 985 and 85E) or three-phase power (machine type 533x models 986 and 86E) featuring all High Performance Flash Enclosures Gen2. The DS8888F (machine type 533x models 988 and 88E) is a high-performance, high-efficiency storage system featuring all High Performance Flash Enclosures Gen2.
Note: Previously available DS8880 models (980, 98B, 981, 98E, 982, 98F) are still supported, but they are not covered in this version of the documentation. For information on models that are not documented here, refer to the previous version of the documentation (GC27-8525-06).
v The DS8884 (machine type 283x models 984 and 84E) storage system includes
6-core (12-core with zHyperLink support) processors and is scalable with up to 96 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives, up to 768 standard drives, up to 256 GB system memory, and up to 64 host adapter ports. The DS8884 includes a base frame (model 984), up to two expansion frames (model 84E), and a 40U capacity in each frame.
v The DS8884F (machine type 533x model 984) storage system includes 6-core
(12-core with zHyperLink support) processors and is scalable with up to 192
Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives, up to 256 GB system memory, and up to 64 host adapter ports. The DS8884F includes a base frame (model 984) and a 40U capacity.
v The DS8886 (machine type 283x models 985 and 85E) has a single-phase power
storage system and is scalable with up to 24-core processors, up to 192 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives, up to 1,536 standard drives, up to 2048 GB system memory, and up to 128 host adapter ports. The DS8886 includes a base frame (model 985), up to four expansion frames (model 85E), and an expandable 40-46U capacity in each frame.
v The DS8886F (machine type 533x models 985 and 85E) has a single-phase power
storage system and is scalable with up to 24-core processors, up to 384 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives, up to 2048 GB system memory, and up to 128 host adapter ports. The DS8886F includes a base frame (model 985) and one expansion frame (model 85E).
v The DS8886 (machine type 283x models 986 and 86E) has a three-phase power
storage system and is scalable with up to 24-core processors, up to 192 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives, up to 1,440 standard drives, up to 2048 GB system memory, and up to 128 host adapter ports. The DS8886 includes a base frame (model 986), up to four expansion frames (model 86E), and an expandable 40-46U capacity in each frame.
v The DS8886F (machine type 533x models 986 and 86E) has a three-phase power
storage system and is scalable with up to 24-core processors, up to 384 Flash Tier 0, Flash Tier1, or Flash Tier 2 drives, up to 2048 GB system memory, and up to 128 host adapter ports. The DS8886F includes a base frame (model 986) and one expansion frame (model 86E).
v The DS8888F (machine type 533x models 988 and 88E) has a three-phase power
storage system and is scalable with up to 48-core processors, up to 768 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives, up to 2048 GB system memory, and up to 128 host adapter ports. The DS8888F includes a base frame (model 988) and up to two expansion frame (model 88E).
The DS8880 features standard 19-inch wide frames and 19-inch wide frames with 6U extensions (DS8886 and DS8888F only).
Energy-efficient DC-UPS modules provide 8 kW of single-phase power (models 984 and 985) and 8 kW of three-phase power (models 986 and 988).
DS8880 integrates High Performance Flash Enclosures Gen2 and flash drives for all models documented here to provide a higher level of performance. Previously available models (980, 98B, 981, 98E, 982, 98F) integrated High-Performance Flash Enclosures. For information on models 980, 98B, 981, 98E, 982, and 98F, refer to previous version documentation (GC27-8525-06).
Licensed functions are available in four groups:
Base Function
The Base Function license is required for each DS8880 storage system. The licensed functions include Database Protection, Encryption Authorization, Easy Tier, I/O Priority Manager, the Operating Environment License, and Thin Provisioning.
z-synergy Services
The z-synergy Services include z/OS® functions that are supported on the storage system. The licensed functions include zHyperLink, transparent cloud tiering, High Performance FICON® for z Systems®, HyperPAV, PAV, and z/OS Distributed Data Backup.
Copy Services
Copy Services features help you implement storage solutions to keep your business running 24 hours a day, 7 days a week by providing data duplication, data migration, and disaster recovery functions. The licensed functions include Global Mirror, Metro Mirror, Metro/Global Mirror, Point-in-Time Copy/FlashCopy®, z/OS Global Mirror, Safeguarded Copy, and z/OS Metro/Global Mirror Incremental Resync (RMZ).
Copy Services Manager on Hardware Management Console
The Copy Services Manager on Hardware Management Console (CSM on HMC) license enables IBM Copy Services Manager to run on the Hardware Management Console, which eliminates the need to maintain a separate server for Copy Services functions.
DS8880 also includes features such as:
v POWER8® processors
v Power®-usage reporting
v National Institute of Standards and Technology (NIST) SP 800-131A enablement
Other functions that are supported in both the DS8000 Storage Management GUI and the DS command-line interface (DS CLI) include:
v Easy Tier
v Data encryption
v Thin provisioning
You can use the DS8000 Storage Management GUI and the DS command-line interface (DS CLI) to manage and logically configure the storage system.
Functions that are supported in only the DS command-line interface (DS CLI) include:
v Point-in-time copy functions with IBM FlashCopy (see the example after this list)
v Remote Mirror and Copy functions, including:
– Metro Mirror
– Global Copy
– Global Mirror
– Metro/Global Mirror
– z/OS Global Mirror
– z/OS Metro/Global Mirror
– Multiple Target PPRC
v I/O Priority Manager
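For illustration only, the following DS CLI sketch shows how a point-in-time copy and a Metro Mirror pair might be established. The mkflash and mkpprc commands are DS CLI commands; the storage image IDs and volume IDs shown are placeholders, and mkpprc assumes that remote mirror and copy paths between the logical subsystems are already defined. Exact parameters vary by configuration and code level:

   dscli> mkflash -dev IBM.2107-75XXXX1 0100:0200
   dscli> mkpprc -dev IBM.2107-75XXXX1 -remotedev IBM.2107-75XXXX2 -type mmir 0100:0100

The first command creates a point-in-time (FlashCopy) relationship from source volume 0100 to target volume 0200 on the local storage image; the second establishes a synchronous Metro Mirror pair between a local volume and a remote volume.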
DS8880 meets hazardous substances (RoHS) requirements by conforming to the following EC directives:
v Directive 2011/65/EU of the European Parliament and of the Council of 8 June
2011 on the restriction of the use of certain hazardous substances in electrical and electronic equipment. It has been demonstrated that the requirements specified in Article 4 are met.
v EN 50581:2012 technical documentation for the assessment of electrical and
electronic products regarding the restriction of hazardous substances.
The IBM Security Key Lifecycle Manager stores data keys that are used to secure the key hierarchy that is associated with the data encryption functions of various devices, including the DS8000 series. It can be used to provide, protect, and maintain encryption keys that are used to encrypt information that is written to and decrypt information that is read from encryption-enabled disks. IBM Security Key Lifecycle Manager operates on various operating systems.

Machine types overview

There are several machine type options available for the DS8000 series. Order a hardware machine type for the storage system and a corresponding function authorization machine type for the licensed functions that are planned for use.
The modular rack-mountable system DS8882F model 983 is not documented in this publication. For information on the DS8882F model 983, refer to the DS8882F Introduction and Planning Guide (GC27-9259-00).
The following tables list the available hardware machine types and their corresponding function authorization machine types.
Table 4. Available hardware and function-authorization machine types that support both standard drive enclosures and High Performance Flash Enclosures Gen2

Hardware
   Hardware machine type: 2831 (1-year warranty period), 2832 (2-year warranty period), 2833 (3-year warranty period), or 2834 (4-year warranty period)
   Available hardware models that support both standard drive enclosures and High Performance Flash Enclosures Gen2: 984 and 84E, 985 and 85E, 986 and 86E
Licensed functions
   Corresponding function authorization machine type: 2836 (1-year warranty period), 2837 (2-year warranty period), 2838 (3-year warranty period), or 2839 (4-year warranty period)
   Available function authorization models: LF8
Table 5. Available hardware and function-authorization machine types that support all-flash system types

Hardware
   Hardware machine type: 5331 (1-year warranty period), 5332 (2-year warranty period), 5333 (3-year warranty period), or 5334 (4-year warranty period)
   Available hardware models that support High Performance Flash Enclosures Gen2: 984; 985 and 85E; 986 and 86E; 988 and 88E
Licensed functions
   Corresponding function authorization machine type: 9046 (1-year warranty period), 9047 (2-year warranty period), 9048 (3-year warranty period), or 9049 (4-year warranty period)
   Available function authorization models: LF8
Note: Previously available DS8880 models (980, 98B, 981, 98E, 982, 98F) are still supported, but they are not covered in this version of the documentation. For information on models that are not documented here, refer to the previous version of the documentation (GC27-8525-06).

Hardware

The machine types for the DS8000 series specify the service warranty period. The warranty is used for service entitlement checking when notifications for service are called home. All DS8000 series models report 2107 as the machine type to attached host systems.
The architecture of the IBM DS8000 series is based on three major elements that provide function specialization and three tiers of processing power.
Figure 1 illustrates the following elements:
v Host adapters manage external I/O interfaces that use Fibre Channel protocols for host-system attachment and for replicating data between storage systems.
v Flash RAID adapters and device adapters manage the internal storage devices.
They also manage the SAS paths to drives, RAID protection, and drive sparing.
v A pair of high-performance redundant active-active Power servers is functionally positioned between the adapters and is a key feature of the architecture. The internal Power servers support the bulk of the processing to be done in the storage system. Each Power server has multiple processor cores. The cores are managed as a symmetric multiprocessing (SMP) pool of shared processing power to process the work that is done on the Power server. Each Power server runs an AIX® kernel that manages the processors, manages processor memory as a data cache, and more. For more information, see IBM DS8000 Architecture and Implementation on the IBM Redbooks website (www.redbooks.ibm.com/).
Figure 1. DS8000 series architecture
The DS8000 series architecture has the following major benefits.
v Server foundation
– Promotes high availability and high performance by using field-proven Power
servers
– Reduces custom components and design complexity
– Positions the storage system to reap the benefits of server technology advances
v Operating environment
– Promotes high availability and provides a high-quality base for the storage
system software through a field-proven AIX operating-system kernel
– Provides an operating environment that is optimized for Power servers, including performance and reliability, availability, and serviceability
– Provides shared processor (SMP) efficiency
– Reduces custom code and design complexity
– Uses Power firmware and software support for networking and service functions

System types

Starting with version 8.2.1, DS8880 supports five system types with frames: DS8884 (machine type 283x models 984 and 84E), DS8884F (machine type 533x model 984), DS8886 (machine type 283x single-phase power models 985 and 85E, or three-phase power models 986 and 86E), DS8886F (machine type 533x single-phase power models 985 and 85E, or three-phase power models 986 and 86E), and DS8888F (machine type 533x models 988 and 88E).
Note: The modular rack-mountable system DS8882F model 983 is covered in a separate publication. For information on the DS8882F model 983, refer to the DS8882F Introduction and Planning Guide (GC27-9259-00).
For more specifications, see the IBM DS8000 series specifications web site (www.ibm.com/systems/storage/disk/ds8000/specifications.html).
DS8884 (machine type 283x models 984 and 84E)
The DS8884 is an entry-level, high-performance, high-capacity storage system that includes standard disk enclosures and High Performance Flash Enclosures Gen2.
DS8884 storage systems feature 6-core (12-core with zHyperLink support) processors and are scalable and support up to 96 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives, and up to 768 standard drives. They are optimized and configured for cost, by minimizing the number of device adapters and maximizing the number of storage enclosures that are attached to each storage system. The frame is 19 inches wide and 40U high.
Model 984 supports the following storage enclosures:
v Up to 4 standard drive enclosure pairs and up to 1 High Performance Flash
Enclosure Gen2 pair in a base frame (model 984).
v Up to 5 standard drive enclosure pairs and up to 1 High Performance Flash
Enclosure Gen2 pair in a first expansion frame (model 84E).
v Up to 7 standard drive enclosure pairs in a second expansion frame (model 84E).
The DS8884 uses 8 or 16 Gbps Fibre Channel host adapters that run Fibre Channel Protocol (FCP), FICON, or Fibre Channel Arbitrated Loop (FC-AL) (for 8 Gbps adapters only) protocol. The High Performance FICON (HPF) feature is also supported.
The DS8884 supports single-phase power.
For more specifications, see the IBM DS8000 series specifications web site (www.ibm.com/systems/storage/disk/ds8000/specifications.html).
The following tables list the hardware components and maximum capacities that are supported for the DS8884, depending on the amount of memory that is available.
Table 6. Components for the DS8884 (models 984 and 84E)

Processors | System memory | Processor memory | I/O enclosure pairs | Host adapters (8 or 4 port) | Device adapter pairs | Standard drive enclosure pairs (notes 1, 2) | High Performance Flash Enclosure Gen2 pairs (notes 1, 2, 3) | Expansion frames
6-core | 64 GB | 32 GB | 1 | 2 - 8 | 0 - 1 | 0 - 4 | 0 - 1 | 0
6-core | 128 GB | 64 GB | 2 | 2 - 16 | 0 - 4 | 0 - 16 | 0 - 2 | 0 - 2
6-core | 256 GB | 128 GB | 2 | 2 - 16 | 0 - 4 | 0 - 16 | 0 - 2 | 0 - 2
12-core (note 4) | 256 GB | 128 GB | 2 | 2 - 16 | 0 - 4 | 0 - 16 | 0 - 2 | 0 - 2
Notes:
1. Standard drive and High Performance Flash Enclosures Gen2 are installed in pairs.
2. This configuration of the DS8880 must be populated with either one standard drive enclosure pair (feature code 1241) or one High Performance Flash Enclosure Gen2 pair (feature code 1600).
3. Each High Performance Flash Enclosure Gen2 pair (feature code 1600) includes a pair of flash RAID adapters.
4. 12-core processors are supported only with zHyperLink.

Table 7. Maximum capacity for the DS8884 (models 984 and 84E)

Processors | System memory | Maximum 2.5-in. standard disk drives | Maximum storage capacity for 2.5-in. standard disk drives | Maximum 3.5-in. standard disk drives | Maximum storage capacity for 3.5-in. standard disk drives | Maximum 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives | Maximum storage capacity for 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives | Maximum total drives (note 1)
6-core | 64 GB | 192 | 346 TB | 96 | 576 TB | 48 | 365 TB | 240
6-core | 128 GB | 768 | 1.38 PB | 384 | 2.3 PB | 96 | 730 TB | 864
6-core | 256 GB | 768 | 1.38 PB | 384 | 2.3 PB | 96 | 730 TB | 864
12-core (note 2) | 256 GB | 768 | 1.38 PB | 384 | 2.3 PB | 96 | 730 TB | 864
Notes:
1. Combined total of 2.5-in. standard disk drives and 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives.
2. 12-core processors are supported only with zHyperLink.
Base frame (model 984) overview:
The DS8884 (model 984) includes a base frame.
The base frame includes the following components:
v Standard storage enclosures
v High Performance Flash Enclosures Gen2
v Hardware Management Console (HMC), including a keyboard and monitor
v Second HMC (optional)
v Ethernet switches
v Processor nodes (available with POWER8 processors)
v I/O enclosures
v Direct-current uninterruptible power supplies (DC-UPS)
v Rack power control (RPC) cards
Expansion frame (model 84E) overview:
The DS8884 (model 84E) supports up to two expansion frames that can be added to a base frame. A minimum of 128 GB system memory is required if expansion frames are added.
The first expansion frame supports up to 240 standard 2.5-inch disk drives. The second expansion frame supports up to 336 2.5-inch standard disk drives. When all three frames are installed, the DS8884 (models 984 and 84E) can support a total of 768 2.5-inch standard disk drives in a compact footprint, creating a high-density storage system and preserving valuable floor space in data center environments.
Only the first expansion frame includes I/O enclosures. You can add up to one High Performance Flash Enclosure Gen2 pair to the first expansion frame. The second expansion frame does not include I/O enclosures or High Performance Flash Enclosures Gen2.
The main power area is at the rear of the expansion frame. The power system in each frame is a pair of direct-current uninterruptible power supplies (DC-UPSs) with internal batteries.
DS8884 (models 984 and 84E) expansion frame location options:
In addition to the standard expansion frame location, the DS8884 offers a remote expansion frame option.
With the standard DS8884 expansion frame location, the first expansion frame is located next to the base frame, and the second expansion frame is located next to the first expansion frame.
Figure 2. DS8884 standard expansion frame locations
With the DS8884 remote expansion frame option, the first expansion frame is located next to the base frame, and the second expansion frame can be located up to 20 meters away from the first expansion frame. This option requires the extended drive cable group C (feature code 1266).
Figure 3. DS8884 remote expansion frame option
DS8884F (machine type 533x model 984)
The DS8884F is an entry-level, high-performance, high-capacity storage system that includes only High Performance Flash Enclosures Gen2.
DS8884F storage systems feature 6-core (12-core with zHyperLink support) processors and are scalable and support up to 192 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives. The frame is 19 inches wide and 40U high.
DS8884F supports up to four High Performance Flash Enclosure Gen2 pairs in a base frame (model 984).
The DS8884F uses 8 or 16 Gbps Fibre Channel host adapters that run Fibre Channel Protocol (FCP), FICON, or Fibre Channel Arbitrated Loop (FC-AL) (for 8 Gbps adapters only) protocol. The High Performance FICON (HPF) feature is also supported.
The DS8884F supports single-phase power.
For more specifications, see the IBM DS8000 series specifications web site (www.ibm.com/systems/storage/disk/ds8000/specifications.html).
The following tables list the hardware components and maximum capacities that are supported for the DS8884F, depending on the amount of memory that is available.
Table 8. Components for the DS8884F (model 984)

Processors | System memory | Processor memory | I/O enclosure pairs | Host adapters (8 or 4 port) | High Performance Flash Enclosure Gen2 pairs (notes 1, 2, 3) | Expansion frames
6-core | 64 GB | 32 GB | 1 | 2 - 8 | 1 | N/A
6-core | 128 GB | 64 GB | 1 - 2 | 2 - 16 | 1 - 4 | N/A
6-core | 256 GB | 128 GB | 1 - 2 | 2 - 16 | 1 - 4 | N/A
12-core (note 4) | 256 GB | 128 GB | 1 - 2 | 2 - 16 | 1 - 4 | N/A
Notes:
1. High Performance Flash Enclosures Gen2 are installed in pairs.
2. This configuration of the DS8880 must be populated with at least one High Performance Flash Enclosure Gen2 pair (feature code 1600).
3. Each High Performance Flash Enclosure Gen2 pair (feature code 1600) includes a pair of flash RAID adapters.
4. 12-core processors are supported only with zHyperLink.

Table 9. Maximum capacity for the DS8884F (model 984)

Processors | System memory | Maximum 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives | Maximum storage capacity for 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives | Maximum total drives
6-core | 64 GB | 96 | 730 TB | 96
6-core | 128 GB | 192 | 1459 TB | 192
6-core | 256 GB | 192 | 1459 TB | 192
12-core | 256 GB | 192 | 1459 TB | 192
DS8884F base frame (model 984) overview:
The DS8884F (model 984) includes a base frame.
The base frame includes the following components:
v High Performance Flash Enclosures Gen2
v Hardware Management Console (HMC), including a keyboard and monitor
v Second HMC (optional)
v Ethernet switches
v Processor nodes (available with POWER8 processors)
v I/O enclosure (optional second I/O enclosure with 128 GB or more system
memory)
v Direct-current uninterruptible power supplies (DC-UPS)
v Rack power control (RPC) cards
DS8886 (machine type 283x models 985 and 85E or 986 and 86E)
The DS8886 is a high-density, high-performance, high-capacity storage system that includes standard disk enclosures and High Performance Flash Enclosures Gen2.
The DS8886 models 985 and 85E support single-phase power. The DS8886 models 986 and 86E support three-phase power.
DS8886 (machine type 283x models 985 and 85E):
The DS8886 (machine type 283x models 985 and 85E) is a high-density, high-performance, high-capacity storage system that includes standard disk enclosures and High Performance Flash Enclosures Gen2, and supports single-phase power.
DS8886 (machine type 283x models 985 and 85E) storage systems are scalable with up to 24-core processors, up to 192 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives, and up to 1,536 standard drives. They are optimized and configured for performance and throughput, by maximizing the number of device adapters and paths to the storage enclosures. The frame is 19 inches wide and expandable from 40U - 46U. They support the following storage enclosures:
v Up to 3 standard drive enclosure pairs and up to 2 High Performance Flash
Enclosure Gen2 pairs in a base frame (model 985).
v Up to 5 standard drive enclosure pairs and up to 2 High Performance Flash
Enclosure Gen2 pairs in a first expansion frame (model 85E).
v Up to 9 standard drive enclosure pairs in a second expansion frame.
v Up to 9 standard drive enclosure pairs in a third expansion frame.
v Up to 6 standard drive enclosure pairs in a fourth expansion frame.
The DS8886 uses 8 or 16 Gbps Fibre Channel host adapters that run Fibre Channel Protocol (FCP), FICON, or Fibre Channel Arbitrated Loop (FC-AL) (for 8 Gbps adapters only) protocol. The High Performance FICON (HPF) feature is also supported.
For more specifications, see the IBM DS8000 series specifications web site (www.ibm.com/systems/storage/disk/ds8000/specifications.html).
The following tables list the hardware components and maximum capacities that are supported for the DS8886 (models 985 and 85E), depending on the amount of memory that is available.
Table 10. Components for the DS8886 (machine type 283x models 985 and 85E)

Processors | System memory | Processor memory | I/O enclosure pairs | Host adapters (8 or 4 port) | Device adapter pairs | Standard drive enclosure pairs (notes 1, 2) | High Performance Flash Enclosure Gen2 pairs (notes 1, 2, 3) | Expansion frames
8-core | 128 GB or 256 GB | 64 GB or 128 GB | 2 | 2 - 16 | 0 - 3 | 0 - 3 | 0 - 2 | 0
16-core | 256 GB or 512 GB | 128 GB or 256 GB | 4 | 2 - 32 | 0 - 8 | 0 - 32 | 0 - 4 | 0 - 4
24-core | 1024 GB or 2048 GB | 512 GB or 1024 GB | 4 | 2 - 32 | 0 - 8 | 0 - 32 | 0 - 4 | 0 - 4
Notes:
1. Standard drive and High Performance Flash Enclosures Gen2 are installed in pairs.
2. This configuration of the DS8880 must be populated with either one standard drive enclosure pair (feature code 1241) or one High Performance Flash Enclosure Gen2 pair (feature code 1600).
3. Each High Performance Flash Enclosure Gen2 pair (feature code 1600) includes a pair of flash RAID adapters.

Table 11. Maximum capacity for the DS8886 (machine type 283x models 985 and 85E)

Processors | System memory | Maximum 2.5-in. standard disk drives | Maximum storage capacity for 2.5-in. standard disk drives | Maximum 3.5-in. standard disk drives | Maximum storage capacity for 3.5-in. standard disk drives | Maximum 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives | Maximum storage capacity for 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives | Maximum total drives (note 1)
8-core | 128 GB or 256 GB | 144 | 259 TB | 72 | 432 TB | 96 | 730 TB | 240
16-core | 256 GB or 512 GB | 1536 | 2.76 PB | 768 | 4.61 PB | 192 | 1459 TB | 1728
24-core | 1024 GB or 2048 GB | 1536 | 2.76 PB | 768 | 4.61 PB | 192 | 1459 TB | 1728
Note:
1. Combined total of 2.5-in. disk drives and 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives.
Base frame (model 985) overview:
The DS8886 (model 985) includes a base frame.
The base frame includes the following components:
v Standard storage enclosures
v High Performance Flash Enclosures Gen2
v Hardware Management Console (HMC), including a keyboard and monitor
v Second HMC (optional)
v Ethernet switches
v Processor nodes (available with POWER8 processors)
v I/O enclosures
v Direct-current uninterruptible power supplies (DC-UPS)
v Rack power control (RPC) cards
Expansion frame (model 85E) overview:
The DS8886 supports up to four expansion frames (model 85E) that can be added to a base frame (model 985). A minimum of 256 GB system memory and a 16-core processor are required if expansion frames are added.
The first expansion frame supports up to 240 2.5-inch standard disk drives. The second and third expansion frames support up to 432 2.5-inch standard disk drives. A fourth expansion frame supports an extra 288 2.5-inch standard disk drives. When all four frames are added, the DS8886 can support a total of 1,536
2.5-inch disk drives in a compact footprint, creating a high-density storage system and preserving valuable floor space in data center environments.
Only the first expansion frame includes I/O enclosures. You can add up to two High Performance Flash Enclosure Gen2 pairs to the first expansion frame. The second, third, and fourth expansion frames do not include I/O enclosures or High Performance Flash Enclosures Gen2.
The main power area is at the rear of the expansion frame. The power system in each frame is a pair of direct-current uninterruptible power supplies (DC-UPSs) with internal batteries.
DS8886 (machine type 283x models 986 and 86E):
The DS8886 (machine type 283x models 986 and 86E) is a high-density, high-performance, high-capacity storage system that includes standard disk enclosures and High Performance Flash Enclosures Gen2, and supports three-phase power.
DS8886 (machine type 283x models 986 and 86E) storage systems are scalable with up to 24-core processors, up to 192 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives, and up to 1,440 standard drives. They are optimized and configured for performance and throughput, by maximizing the number of device adapters and paths to the storage enclosures. The frame is 19 inches wide and expandable from 40U - 46U. They support the following storage enclosures:
v Up to 2 standard drive enclosure pairs and up to 2 High Performance Flash
Enclosure Gen2 pairs in a base frame (model 986).
v Up to 4 standard drive enclosure pairs and up to 2 High Performance Flash
Enclosure Gen2 pairs in a first expansion frame (model 86E).
v Up to 9 standard drive enclosure pairs in a second expansion frame.
v Up to 9 standard drive enclosure pairs in a third expansion frame.
v Up to 9 standard drive enclosure pairs in a fourth expansion frame.
The DS8886 uses 8 or 16 Gbps Fibre Channel host adapters that run Fibre Channel Protocol (FCP), FICON, or Fibre Channel Arbitrated Loop (FC-AL) (for 8 Gbps adapters only) protocol. The High Performance FICON (HPF) feature is also supported.
For more specifications, see the IBM DS8000 series specifications web site (www.ibm.com/systems/storage/disk/ds8000/specifications.html).
The following tables list the hardware components and maximum capacities that are supported for the DS8886 (models 986 and 86E), depending on the amount of memory that is available.
Table 12. Components for the DS8886 (machine type 283x models 986 and 86E)

Processors | System memory | Processor memory | I/O enclosure pairs | Host adapters (8 or 4 port) | Device adapter pairs | Standard drive enclosure pairs (notes 1, 2) | High Performance Flash Enclosure Gen2 pairs (notes 1, 2, 3) | Expansion frames
8-core | 128 GB or 256 GB | 64 GB or 128 GB | 2 | 2 - 16 | 0 - 2 | 0 - 2 | 0 - 2 | 0
16-core | 256 GB or 512 GB | 128 GB or 256 GB | 4 | 2 - 32 | 0 - 8 | 0 - 30 | 0 - 4 | 0 - 4
24-core | 1024 GB or 2048 GB | 512 GB or 1024 GB | 4 | 2 - 32 | 0 - 8 | 0 - 30 | 0 - 4 | 0 - 4
Notes:
1. Standard drive and High Performance Flash Enclosures Gen2 are installed in pairs.
2. This configuration of the DS8880 must be populated with either one standard drive enclosure pair (feature code 1241) or one High Performance Flash Enclosure Gen2 pair (feature code 1600).
3. Each High Performance Flash Enclosure Gen2 pair (feature code 1600) includes a pair of flash RAID adapters.

Table 13. Maximum capacity for the DS8886 (machine type 283x models 986 and 86E)

Processors | System memory | Maximum 2.5-in. standard disk drives | Maximum storage capacity for 2.5-in. standard disk drives | Maximum 3.5-in. standard disk drives | Maximum storage capacity for 3.5-in. standard disk drives | Maximum 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives | Maximum storage capacity for 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives | Maximum total drives (note 1)
8-core | 128 GB or 256 GB | 96 | 173 TB | 48 | 288 TB | 96 | 730 TB | 192
16-core | 256 GB or 512 GB | 1440 | 2.59 PB | 720 | 4.32 PB | 192 | 1459 TB | 1632
24-core | 1024 GB or 2048 GB | 1440 | 2.59 PB | 720 | 4.32 PB | 192 | 1459 TB | 1632
Note:
1. Combined total of 2.5-in. disk drives and 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives.
Base frame (model 986) overview:
The DS8886 (model 986) includes a base frame.
The base frame includes the following components:
v Standard storage enclosures
v High Performance Flash Enclosures Gen2
v Hardware Management Console (HMC), including a keyboard and monitor
v Second HMC (optional)
v Ethernet switches
v Processor nodes (available with POWER8 processors)
v I/O enclosures
v Direct-current uninterruptible power supplies (DC-UPS)
v Rack power control (RPC) cards
Expansion frame (model 86E) overview:
The DS8886 supports up to four expansion frames (model 86E) that can be added to a base frame (model 986). A minimum of 256 GB system memory and a 16-core processor are required if expansion frames are added.
The first expansion frame supports up to 192 2.5-inch standard disk drives. The second, third, and fourth expansion frames support up to 384 2.5-inch standard disk drives. When all four frames are used, the DS8886 can support a total of 1,440
2.5-inch standard disk drives in a compact footprint, creating a high-density storage system and preserving valuable floor space in data center environments.
Only the first expansion frame includes I/O enclosures. You can add up to two High Performance Flash Enclosure Gen2 pairs to the first expansion frame. The second, third, and fourth expansion frames do not include I/O enclosures or High Performance Flash Enclosures Gen2.
The main power area is at the rear of the expansion frame. The power system in each frame is a pair of direct-current uninterruptible power supplies (DC-UPSs) with internal batteries.
DS8886 expansion frame location options:
In addition to the standard expansion frame location, the DS8886 offers four remote expansion frame options that allow expansion frames to be located up to 20 meters apart.
With the standard DS8886 expansion frame location, the first expansion frame is located next to the base frame, the second expansion frame is located next to the first expansion frame, and each consecutive expansion frame is located next to the previous one.
Figure 4. DS8886 standard expansion frame locations
The DS8886 offers a remote expansion frame option with one remote expansion frame. This option requires the extended drive cable group E (feature code 1254).
Figure 5. DS8886 with one remote expansion frame
The DS8886 offers a remote expansion frame option with two remote expansion frames. This option requires the extended drive cable group D (feature code 1253).
Figure 6. DS8886 with two remote expansion frames
The DS8886 offers a remote expansion frame option with three remote expansion frames. This option requires the extended drive cable group C (feature code 1252).
Figure 7. DS8886 with three remote expansion frames
The DS8886 offers a remote expansion frame option with three separate remote expansion frames. This option requires the extended drive cable groups C, D, and E (feature codes 1252, 1253, and 1254).
Figure 8. DS8886 with three separate remote expansion frames
DS8886F (machine type 533x models 985 and 85E or 986 and 86E)
The DS8886F is a high-density, high-performance, high-capacity storage system that includes only High Performance Flash Enclosures Gen2.
The DS8886F models 985 and 85E support single-phase power. The DS8886F models 986 and 86E support three-phase power.
DS8886F (machine type 533x models 985 and 85E):
The DS8886F (machine type 533x models 985 and 85E) is a high-density, high-performance, high-capacity storage system that includes High Performance Flash Enclosures Gen2, and supports single-phase power.
DS8886F (machine type 533x models 985 and 85E) storage systems are scalable with up to 24-core processors, and up to 384 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives. They are optimized and configured for performance and throughput by maximizing the number of paths to the storage enclosures. The frame is 19 inches wide and 46U high. They support the following storage enclosures:
v Up to 4 High Performance Flash Enclosure Gen2 pairs in a base frame (model 985).
v Up to 4 High Performance Flash Enclosure Gen2 pairs in an expansion frame (model 85E).
The DS8886F uses 8 or 16 Gbps Fibre Channel host adapters that run Fibre Channel Protocol (FCP), FICON, or Fibre Channel Arbitrated Loop (FC-AL) (for 8 Gbps adapters only) protocol. The High Performance FICON (HPF) feature is also supported.
For more specifications, see the IBM DS8000 series specifications web site (www.ibm.com/systems/storage/disk/ds8000/specifications.html).
The following tables list the hardware components and maximum capacities that are supported for the DS8886F (models 985 and 85E), depending on the amount of memory that is available.
Table 14. Components for the DS8886F (machine type 533x models 985 and 85E)
8-core processors, 128 GB system memory (64 GB processor memory) or 256 GB system memory (128 GB processor memory): 2 I/O enclosure pairs, 2 - 16 host adapters (8 or 4 port), 1 - 4 High Performance Flash Enclosure Gen2 pairs (see notes 1, 2, 3), 0 expansion frames.
16-core processors, 256 GB system memory (128 GB processor memory) or 512 GB system memory (256 GB processor memory): 4 I/O enclosure pairs, 2 - 32 host adapters (8 or 4 port), 1 - 8 High Performance Flash Enclosure Gen2 pairs, 0 - 1 expansion frames.
24-core processors, 1024 GB system memory (512 GB processor memory) or 2048 GB system memory (1024 GB processor memory): 4 I/O enclosure pairs, 2 - 32 host adapters (8 or 4 port), 1 - 8 High Performance Flash Enclosure Gen2 pairs, 0 - 1 expansion frames.
1. High Performance Flash Enclosures Gen2 are installed in pairs.
2. This configuration of the DS8880 must be populated with at least one High Performance Flash Enclosure Gen2 pair (feature code 1600).
3. Each High Performance Flash Enclosure Gen2 pair (feature code 1600) includes a pair of flash RAID adapters.
Table 15. Maximum capacity for the DS8886F (machine type 533x models 985 and 85E)
8-core processors, 128 or 256 GB system memory: maximum of 192 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives (1459 TB), 192 maximum total drives.
16-core processors, 256 or 512 GB system memory: maximum of 384 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives (2918 TB), 384 maximum total drives.
24-core processors, 1024 or 2048 GB system memory: maximum of 384 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives (2918 TB), 384 maximum total drives.
DS8886F base frame (model 985) overview:
The DS8886F (model 985) includes a base frame.
The base frame includes the following components:
v High Performance Flash Enclosures Gen2
v Hardware Management Console (HMC), including a keyboard and monitor
v Second HMC (optional)
v Ethernet switches
v Processor nodes (available with POWER8 processors)
v I/O enclosures
v Direct-current uninterruptible power supplies (DC-UPS)
v Rack power control (RPC) cards
DS8886F expansion frame (model 85E) overview:
The DS8886F supports one expansion frame (model 85E) that can be added to a base frame (model 985). A minimum of 256 GB system memory and a 16-core processor are required to add the expansion frame.
The expansion frame includes I/O enclosures. You can add up to four High Performance Flash Enclosure Gen2 pairs to the expansion frame.
The main power area is at the rear of the expansion frame. The power system is a pair of direct-current uninterruptible power supplies (DC-UPSs) with internal batteries.
DS8886F (machine type 533x models 986 and 86E):
The DS8886F (machine type 533x models 986 and 86E) is a high-density, high-performance, high-capacity storage system that includes High Performance Flash Enclosures Gen2, and supports three-phase power.
DS8886F (machine type 533x models 986 and 86E) storage systems are scalable with up to 24-core processors, and up to 384 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives. They are optimized and configured for performance and throughput by maximizing the number of paths to the storage enclosures. The frame is 19 inches wide and 46U high. They support the following storage enclosures:
v Up to 4 High Performance Flash Enclosure Gen2 pairs in a base frame (model 986).
v Up to 4 High Performance Flash Enclosure Gen2 pairs in an expansion frame (model 86E).
The DS8886F uses 8 or 16 Gbps Fibre Channel host adapters that run Fibre Channel Protocol (FCP), FICON, or Fibre Channel Arbitrated Loop (FC-AL) (for 8 Gbps adapters only) protocol. The High Performance FICON (HPF) feature is also supported.
For more specifications, see the IBM DS8000 series specifications web site (www.ibm.com/systems/storage/disk/ds8000/specifications.html).
The following tables list the hardware components and maximum capacities that are supported for the DS8886F (models 986 and 86E), depending on the amount of memory that is available.
Table 16. Components for the DS8886F (machine type 533x models 986 and 86E)
8-core processors, 128 GB system memory (64 GB processor memory) or 256 GB system memory (128 GB processor memory): 2 I/O enclosure pairs, 2 - 16 host adapters (8 or 4 port), 1 - 4 High Performance Flash Enclosure Gen2 pairs (see notes 1, 2, 3), 0 expansion frames.
16-core processors, 256 GB system memory (128 GB processor memory) or 512 GB system memory (256 GB processor memory): 4 I/O enclosure pairs, 2 - 32 host adapters (8 or 4 port), 1 - 8 High Performance Flash Enclosure Gen2 pairs, 0 - 1 expansion frames.
24-core processors, 1024 GB system memory (512 GB processor memory) or 2048 GB system memory (1024 GB processor memory): 4 I/O enclosure pairs, 2 - 32 host adapters (8 or 4 port), 1 - 8 High Performance Flash Enclosure Gen2 pairs, 0 - 1 expansion frames.
1. High Performance Flash Enclosures Gen2 are installed in pairs.
2. This configuration of the DS8880 must be populated with at least one High Performance Flash Enclosure Gen2 pair (feature code 1600).
3. Each High Performance Flash Enclosure Gen2 pair (feature code 1600) includes a pair of flash RAID adapters.
Table 17. Maximum capacity for the DS8886F (machine type 533x models 986 and 86E)
8-core processors, 128 or 256 GB system memory: maximum of 192 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives (1459 TB), 192 maximum total drives.
16-core processors, 256 or 512 GB system memory: maximum of 384 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives (2918 TB), 384 maximum total drives.
24-core processors, 1024 or 2048 GB system memory: maximum of 384 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives (2918 TB), 384 maximum total drives.
DS8886F base frame (model 986) overview:
The DS8886F (model 986) includes a base frame.
The base frame includes the following components:
v High Performance Flash Enclosures Gen2
v Hardware Management Console (HMC), including a keyboard and monitor
v Second HMC (optional)
v Ethernet switches
v Processor nodes (available with POWER8 processors)
v I/O enclosures
v Direct-current uninterruptible power supplies (DC-UPS)
v Rack power control (RPC) cards
DS8886F expansion frame (model 86E) overview:
The DS8886F supports one expansion frame (model 86E) that can be added to a base frame (model 986). A minimum of 256 GB system memory and a 16-core processor are required for the expansion frame to be added.
The expansion frame includes I/O enclosures. You can add up to four High Performance Flash Enclosure Gen2 pairs to the expansion frame.
The main power area is at the rear of the expansion frame. The power system is a pair of direct-current uninterruptible power supplies (DC-UPSs) with internal batteries.
DS8888F (machine type 533x models 988 and 88E)
The DS8888F (machine type 533x models 988 and 88E) is a high-performance, high-efficiency, high-capacity storage system that includes only High Performance Flash Enclosures Gen2.
DS8888F storage systems (machine type 533x models 988 and 88E) are scalable with up to 48-core processors, 16 High Performance Flash Enclosure Gen2 pairs, and up to 768 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives. They are optimized and configured for cost. The frame is 19 inches wide with a 46U capacity. They support the following storage enclosures:
v Up to 4 High Performance Flash Enclosure Gen2 pairs in the base frame (model 988).
v Up to 6 High Performance Flash Enclosure Gen2 pairs in the first expansion frame (model 88E).
v Up to 6 High Performance Flash Enclosure Gen2 pairs in the second expansion frame.
The DS8888F uses 8 or 16 Gbps Fibre Channel host adapters that run Fibre Channel Protocol (FCP), FICON, or (for 8 Gbps adapters only) Fibre Channel Arbitrated Loop (FC-AL) protocol. The High Performance FICON (HPF) feature is also supported.
The DS8888F supports three-phase power only.
For more specifications, see the IBM DS8000 series specifications web site (www.ibm.com/systems/storage/disk/ds8000/specifications.html).
The following tables list the hardware components and maximum capacities that are supported for the DS8888F (models 988 and 88E), depending on the amount of memory that is available.
Table 18. Components for the DS8888F (machine type 533x models 988 and 88E)
24-core processors, 1024 GB system memory (512 GB processor memory): 4 I/O enclosures, 2 - 16 host adapters (8 or 4 port), 1 - 4 High Performance Flash Enclosure Gen2 pairs (see notes), 0 expansion frames.
48-core processors, 2048 GB system memory (1024 GB processor memory): 8 I/O enclosures, 2 - 32 host adapters (8 or 4 port), 1 - 16 High Performance Flash Enclosure Gen2 pairs, 0 - 2 expansion frames.
1. High Performance Flash Enclosures Gen2 are installed in pairs.
2. A maximum of 8 of feature code 1600 is supported in the system. For the remaining 8 High Performance Flash Enclosure Gen2 pairs, feature code 1602 should be ordered with feature code 1604.
Table 19. Maximum capacity for the DS8888F (machine type 533x models 988 and 88E)
24-core processors, 1024 GB system memory: maximum of 192 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives (1459 TB), 192 maximum total drives.
48-core processors, 2048 GB system memory: maximum of 768 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives (5837 TB), 768 maximum total drives.
DS8888F base frame (model 988) overview:
The DS8888F includes a base frame (model 988).
The base frame includes the following components:
v High Performance Flash Enclosures Gen2
v Hardware Management Console (HMC), including a keyboard and monitor
v Second HMC (optional)
v Ethernet switches
v Processor nodes (available with POWER8+ processors)
v I/O enclosures
v Direct-current uninterruptible power supplies (DC-UPS)
v Rack power control (RPC) cards
DS8888F expansion frame (model 88E) overview:
The DS8888F supports up to two expansion frames (model 88E) that can be added to a base frame. A minimum of 2048 GB system memory and a 48-core processor are required if an expansion frame is used.
The first expansion frame includes I/O enclosures. You can add up to six High Performance Flash Enclosure Gen2 pairs to each expansion frame.
The main power area is at the rear of the expansion frame. The power system in each frame is a pair of direct-current uninterruptible power supplies (DC-UPSs) with internal batteries.

Storage enclosures

DS8880 version 8.2 and later integrates one of two types of storage enclosures: High Performance Flash Enclosures Gen2 and standard drive enclosures.
High Performance Flash Enclosures Gen2 pair
The High Performance Flash Enclosure Gen2 is a 2U storage enclosure that is installed in pairs.
The High Performance Flash Enclosure Gen2 pair provides two 2U storage enclosures with associated RAID controllers and cabling. This combination of components forms a high-performance, fully-redundant flash storage array.
The High Performance Flash Enclosure Gen2 pair contains the following hardware components:
v Two 2U 24-slot SAS flash drive enclosures. Each of the two enclosures contains the following components:
– Two power supplies with integrated cooling fans
– Two SAS Expander Modules with two SAS ports each
– One midplane or backplane for plugging components that provides maintenance of flash drives, Expander Modules, and power supplies
v Two High Performance Flash Enclosure Gen2 flash RAID adapters configured for redundant access to the 2U flash enclosures. Each RAID adapter supports concurrent maintenance and includes the following components:
– High Performance ASIC RAID engine
– Four SAS ports and cables connected to the four SAS Expander Modules, providing fully-redundant access from each RAID adapter to both 2U enclosures
– Integrated cooling
– x8 PCIe Gen2 cable port for direct connection to the I/O enclosure
Standard drive enclosures
The standard drive enclosure is a 2U storage enclosure that is installed in pairs.
Each standard drive enclosure contains the following hardware components:
v Up to 12 large form factor (LFF), 3.5-inch SAS drives
v Up to 24 small form factor (SFF), 2.5-inch SAS drives
Note: Drives can be disk drives or flash drives (also known as solid-state drives or SSDs). You cannot intermix drives of different types in the same enclosure.
v Two power supplies with integrated cooling fans
v Two Fibre Channel interconnect cards that connect four Fibre Channel 8 Gbps interfaces to a pair of device adapters or another standard drive enclosure
v One backplane for plugging components
The 2.5-inch disk drives and flash drives (SSDs) are available in sets of 16 drives. The 3.5-inch SAS disk drives are available in half-drive sets of eight drives.

Management console

The management console is also referred to as the Hardware Management Console (or HMC). It supports storage system hardware and firmware installation and maintenance activities. The HMC includes a keyboard and monitor that are stored on the left side of the base frame.
The HMC connects to the customer network and provides access to functions that can be used to manage the storage system. Management functions include logical configuration, problem notification, call home for service, remote service, and Copy Services management. You can perform management functions from the DS8000 Storage Management GUI, DS command-line interface (DS CLI), or other storage management software that supports the storage system.
Each base frame includes one HMC and space for a second HMC, which is available as a separately orderable feature to provide redundancy.

Ethernet switches

The Ethernet switches provide internal communication between the management consoles and the processor complexes. Two redundant Ethernet switches are provided.

Processor nodes

The processor nodes drive all functions in the storage system. Each node consists of a Power server that contains POWER8 processors and memory.

I/O enclosures

The I/O enclosure provides connectivity between the adapters and the processor complex.
The I/O enclosure uses PCIe interfaces to connect I/O adapters in the I/O enclosure to both processor nodes. A PCIe device is an I/O adapter or a processor node.
To improve I/O operations per second (IOPS) and sequential read/write throughput, the I/O enclosure is connected to each processor node with a point-to-point connection.
The I/O enclosure contains the following adapters:
Flash interface connectors
Interface connector that provides PCIe cable connection from the I/O enclosure to the High Performance Flash Enclosure Gen2.
Device adapters
PCIe-attached adapter with four 8 Gbps Fibre Channel arbitrated loop (FC-AL) ports. These adapters connect the processor nodes to standard drive enclosures and provide RAID controllers for RAID support.
Host adapters
An I/O enclosure can support up to 16 host ports. For example, if one 8-port adapter is installed, you can add either two more 4-port adapters or one more 8-port adapter. If only 8-port adapters are used, a maximum of two host adapters can be installed in each I/O enclosure. If only 4-port adapters are used, a maximum of four host adapters can be installed in each I/O enclosure.
For PCIe-attached adapters with four or eight 8 Gbps Fibre Channel ports, each port can be independently configured to use SCSI/FCP, SCSI/FC-AL, or FICON/zHPF protocols. For PCIe-attached adapters with four 16 Gbps Fibre Channel ports, each port can be independently configured to use SCSI/FCP or FICON/zHPF protocols. Both longwave and shortwave adapter versions that support different maximum cable lengths are available. The host-adapter ports can be directly connected to attached host systems or storage systems, or connected to a storage area network. SCSI/FCP ports are used for connections between storage systems. SCSI/FCP ports that are attached to a SAN can be used for both host and storage system connections.
The High Performance FICON Extension (zHPF) protocol can be used by FICON host channels that have zHPF support. The use of zHPF protocols provides a significant reduction in channel usage. This reduction improves I/O input on a single channel and reduces the number of FICON channels that are required to support the workload.

Power

The power system in each frame is a pair of direct-current uninterruptible power supplies (DC-UPSs) with internal batteries. The DC-UPSs distribute rectified AC power and provide power switching for redundancy. A single DC-UPS has sufficient capacity to power and provide battery backup to the entire frame if one DC-UPS is out of service. DS8880 uses three-phase and single-phase power.
There are two AC-power cords, each feeding one DC-UPS. If AC power is not present at the input line, the output is switched to rectified AC power from the partner DC-UPS. If neither AC-power input is active, the DC-UPS switches to 180 V DC battery power. Storage systems that have the extended power line disturbance (ePLD) option are protected from a power-line disturbance for up to 40 seconds. Storage systems without the ePLD option are protected for 4 seconds.
An integrated pair of rack-power control (RPC) cards manages the efficiency of power distribution within the storage system. The RPC cards are attached to each processor node. The RPC card is also attached to the primary power system in each frame.

Functional overview

The following list provides an overview of some of the features that are associated with DS8880.
Note: Some storage system functions are not available or are not supported in all environments. See the IBM System Storage Interoperation Center (SSIC) website (www.ibm.com/systems/support/storage/config/ssic) for the most current information on supported hosts, operating systems, adapters, and switches.
Nondisruptive and disruptive activities
DS8880 supports hardware redundancy. It is designed to support nondisruptive changes: hardware upgrades, repair, and licensed function upgrades. In addition, logical configuration changes can be made nondisruptively. For example:
v The flexibility and modularity mean that expansion frames can be added and physical storage capacity can be increased within a frame without disrupting your applications.
v An increase in license scope is nondisruptive and takes effect
immediately. A decrease in license scope is also nondisruptive but does not take effect until the next IML.
v Easy Tier helps keep performance optimized by periodically
redistributing data to help eliminate drive hot spots that can degrade performance. This function helps balance I/O activity across the drives in an existing drive tier. It can also automatically redistribute some data to new empty drives added to a tier to help improve performance by taking advantage of the new resources. Easy Tier does this I/O activity rebalancing automatically without disrupting access to your data.
The following activities are disruptive:
v The installation of an earthquake resistance kit on a raised or nonraised floor.
v The removal of an expansion frame from the base frame.
Energy reporting
You can use DS8880 to display the following energy measurements through the DS CLI:
v Average inlet temperature in Celsius
v Total data transfer rate in MB/s
v Timestamp of the last update for values
The derived values are averaged over a 5-minute period. For more information about energy-related commands, see the commands reference.
You can also query power usage and data usage with the showsu command. For more information, see the showsu description in the Command-Line Interface User's Guide.
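For example, the following DS CLI query is a minimal sketch; the storage unit identifier shown is hypothetical, and the exact parameters that showsu accepts in your release are documented in the Command-Line Interface User's Guide:
dscli> showsu IBM.2107-75ABC21
The returned values include the power usage and data usage measurements, averaged over the 5-minute reporting period.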
National Institute of Standards and Technology (NIST) SP 800-131A security enablement
NIST SP 800-131A requires the use of cryptographic algorithms that have security strengths of 112 bits to provide data security and data integrity for secure data that is created in the cryptoperiod starting in 2014. The DS8880 is enabled for NIST SP 800-131A. Conformance with NIST SP 800-131A depends on the use of appropriate prerequisite management software versions and appropriate configuration of the DS8880 and other network-related entities.
Storage pool striping (rotate capacity)
Storage pool striping is supported on the DS8000 series, providing improved performance. The storage pool striping function stripes new volumes across all arrays in a pool. The striped volume layout reduces workload skew in the system without requiring manual tuning by a storage administrator. This approach can increase performance with minimal operator effort. With storage pool striping support, the system automatically performs close to highest efficiency, which requires little or no administration. The effectiveness of performance management tools is also enhanced because imbalances tend to occur as isolated problems. When performance administration is required, it is applied more precisely.
You can configure and manage storage pool striping by using the DS8000 Storage Management GUI, DS CLI, and DS Open API. The rotate volumes allocation method is an alternative allocation method that tends to place a volume's extents on a single managed array, and is not recommended. The rotate capacity option (storage pool striping) is designed to provide the best performance by
striping volumes across arrays in the pool. Existing volumes can be reconfigured nondisruptively by using manual volume migration and volume rebalance.
The storage pool striping function is provided with the DS8000 series at no additional charge.
Performance statistics
You can use usage statistics to monitor your I/O activity. For example, you can monitor how busy the I/O ports are and use that data to help manage your SAN. For more information, see documentation about performance monitoring in the DS8000 Storage Management GUI.
Sign-on support that uses Lightweight Directory Access Protocol (LDAP)
The DS8000 system provides support for both unified sign-on functions (available through the DS8000 Storage Management GUI), and the ability to specify an existing Lightweight Directory Access Protocol (LDAP) server. The LDAP server can have existing users and user groups that can be used for authentication on the DS8000 system.
Setting up unified sign-on support for the DS8000 system is achieved by using IBM Copy Services Manager or IBM Spectrum Control.
Note: Other supported user directory servers include IBM Directory Server and Microsoft Active Directory.
Easy Tier
Easy Tier is designed to determine the appropriate tier of storage based on data access requirements and then automatically and nondisruptively move data, at the subvolume or sub-LUN level, to the appropriate tier on the DS8000 system. Easy Tier is an optional feature that offers enhanced capabilities through features such as auto-rebalancing, hot spot management, rank depopulation, and manual volume migration.
Easy Tier enables the DS8880 system to automatically balance I/O access to drives to avoid hot spots on arrays. Easy Tier can place data in the storage tier that best suits the access frequency of the data. Highly accessed data can be moved nondisruptively to a higher tier, and likewise cooler data is moved to a lower tier (for example, to Nearline drives).
Easy Tier also can benefit homogeneous drive pools because it can move data away from over-utilized arrays to under-utilized arrays to eliminate hot spots and peaks in drive response times.
z-synergy
The DS8880 storage system can work in cooperation with IBM Z hosts to provide the following performance enhancement functions.
v Extended Address Volumes
v High Performance FICON for IBM Z
v I/O Priority Manager with z/OS Workload Manager
v Parallel Access Volumes and HyperPAV (also referred to as aliases)
v Quick initialization for IBM Z
v Transparent cloud tiering
v zHyperLink technology that speeds up transaction processing and improves active log throughput
Copy Services
The DS8880 storage system supports a wide variety of Copy Service
functions, including Remote Mirror, Remote Copy, and Point-in-Time functions. Key Copy Services functions include the following:
v FlashCopy
v Remote Pair FlashCopy (Preserve Mirror)
v Safeguarded Copy
v Remote Mirror and Copy:
– Metro Mirror
– Global Copy
– Global Mirror
– Metro/Global Mirror
– Multi-Target PPRC
– z/OS Global Mirror
– z/OS Metro/Global Mirror
Multitenancy support (resource groups)
Resource groups provide additional policy-based limitations. Resource groups, together with the inherent volume addressing limitations, support secure partitioning of Copy Services resources between user-defined partitions. The process of specifying the appropriate limitations is performed by an administrator using resource groups functions. DS CLI support is available for resource groups functions.
Multitenancy can be supported in certain environments without the use of resource groups, if the following constraints are met:
v Either Copy Services functions are disabled on all DS8000 systems that
share the same SAN (local and remote sites) or the landlord configures the operating system environment on all hosts (or host LPARs) attached to a SAN, which has one or more DS8000 systems, so that no tenant can issue Copy Services commands.
v The z/OS Distributed Data Backup feature is disabled on all DS8000
systems in the environment (local and remote sites).
v Thin provisioned volumes (ESE or TSE) are not used on any DS8000
systems in the environment (local and remote sites).
v On zSeries systems there is only one tenant running in an LPAR, and the
volume access is controlled so that a CKD base volume or alias volume is only accessible by a single tenant’s LPAR or LPARs.
I/O Priority Manager
The I/O Priority Manager function can help you effectively manage quality of service levels for each application running on your system. This function aligns distinct service levels to separate workloads in the system to help maintain the efficient performance of each DS8000 volume. The I/O Priority Manager detects when a higher-priority application is hindered by a lower-priority application that is competing for the same system resources. This detection might occur when multiple applications request data from the same drives. When I/O Priority Manager encounters this situation, it delays lower-priority I/O data to assist the more critical I/O data in meeting its performance targets.
Use this function to consolidate more workloads on your system and to ensure that your system resources are aligned to match the priority of your applications.
The default setting for this feature is disabled.
Note: If the I/O Priority Manager LIC key is activated, you can enable I/O Priority Manager on the Advanced tab of the System settings page in the DS8000 Storage Management GUI. If I/O Priority Manager is enabled on your storage system, you cannot use a zHyperLink connection.
Restriction of hazardous substances (RoHS)
The DS8880 system meets RoHS requirements. It conforms to the following EC directives:
v Directive 2011/65/EU of the European Parliament and of the Council of 8 June 2011 on the restriction of the use of certain hazardous substances in electrical and electronic equipment. It has been demonstrated that the requirements specified in Article 4 have been met.
v EN 50581:2012 technical documentation for the assessment of electrical and electronic products with respect to the restriction of hazardous substances.

Logical configuration

You can use either the DS8000 Storage Management GUI or the DS CLI to configure storage. Although the end result of storage configuration is similar, each interface has specific terminology, concepts and procedures.
Note: LSS is synonymous with logical control unit (LCU) and subsystem identification (SSID).

Logical configuration with DS8000 Storage Management GUI

Before you configure your storage system, it is important to understand the storage concepts and sequence of system configuration.
Figure 9 illustrates the concepts of configuration.
Figure 9. Logical configuration sequence
The following concepts are used in storage configuration.
Arrays
An array, also referred to as a managed array, is a group of storage devices that provides capacity for a pool. An array generally consists of 8 drives that are managed as a Redundant Array of Independent Disks (RAID).
Pools A storage pool is a collection of storage that identifies a set of storage
resources. These resources provide the capacity and management requirements for arrays and volumes that have the same storage type, either fixed block (FB) or count key data (CKD).
Volumes
A volume is a fixed amount of storage on a storage device.
LSS The logical subsystem (LSS) enables one or more host I/O interfaces to access a set of devices.
Hosts A host is the computer system that interacts with the storage system. Hosts
defined on the storage system are configured with a user-designated host type that enables the storage system to recognize and interact with the host. Only hosts that are mapped to volumes can access those volumes.
Logical configuration of the storage system begins with managed arrays. When you create storage pools, you assign the arrays to pools and then create volumes in the pools. FB volumes are connected through host ports to an open systems host. CKD volumes require that logical subsystems (LSSs) be created as well so that they can be accessed by an IBM Z host.
Pools must be created in pairs to balance the storage workload. Each pool in the pool pair is controlled by a processor node (either Node 0 or Node 1). Balancing the workload helps to prevent one node from doing most of the work and results in more efficient I/O processing, which can improve overall system performance. Both pools in the pair must be formatted for the same storage type, either FB or CKD storage. You can create multiple pool pairs to isolate workloads.
When you create a pair of pools, you can choose to automatically assign all available arrays to the pools, or assign them manually afterward. If the arrays are assigned automatically, the system balances them across both pools so that the workload is distributed evenly across both nodes. Automatic assignment also ensures that spares and device adapter (DA) pairs are distributed equally between the pools.
If you are connecting to an IBM Z host, you must create a logical subsystem (LSS) before you can create CKD volumes.
You can create a set of volumes that share characteristics, such as capacity and storage type, in a pool pair. The system automatically balances the volumes between both pools. If the pools are managed by Easy Tier, the capacity in the volumes is automatically distributed among the arrays. If the pools are not managed by Easy Tier, you can choose to use the rotate capacity allocation method, which stripes capacity across the arrays.
If the volumes are connecting to an IBM Z host, the next steps of the configuration process are completed on the host.
If the volumes are connecting to an open systems host, map the volumes to the host, add host ports to the host, and then map the ports to the I/O ports on the storage system.
FB volumes can only accept I/O from the host ports of hosts that are mapped to the volumes. Host ports are zoned to communicate only with certain I/O ports on the storage system. Zoning is configured either within the storage system by using I/O port masking, or on the switch. Zoning ensures that the workload is spread properly over I/O ports and that certain workloads are isolated from one another, so that they do not interfere with each other.
The workload enters the storage system through I/O ports, which are on the host adapters. The workload is then fed into the processor nodes, where it can be cached for faster read/write access. If the workload is not cached, it is stored on the arrays in the storage enclosures.

Logical configuration with DS CLI

Before you configure your storage system with the DS CLI, it is important to understand IBM terminology for storage concepts and the storage hierarchy.
In the storage hierarchy, you begin with a physical disk. Logical groupings of eight disks form an array site. Logical groupings of one array site form an array. After you define your array storage type as CKD or fixed block, you can create a rank. A rank is divided into a number of fixed-size extents. If you work with an open-systems host, a large extent is 1 GiB, and a small extent is 16 MiB. If you work in an IBM Z environment, a large extent is the size of an IBM 3390 Mod 1 disk drive (1113 cylinders), and a small extent is 21 cylinders.
After you create ranks, your physical storage can be considered virtualized. Virtualization dissociates your physical storage configuration from your logical configuration, so that volume sizes are no longer constrained by the physical size of your arrays.
The available space on each rank is divided into extents. The extents are the building blocks of the logical volumes. An extent is striped across all disks of an array.
Extents of the same storage type are grouped to form an extent pool. Multiple extent pools can create storage classes that provide greater flexibility in storage allocation through a combination of RAID types, DDM size, DDM speed, and DDM technology. This configuration allows a differentiation of logical volumes by assigning them to the appropriate extent pool for the needed characteristics. Different extent sizes for the same device type (for example, count-key-data or fixed block) can be supported on the same storage unit. The different extent types must be in different extent pools.
A logical volume is composed of one or more extents. A volume group specifies a set of logical volumes. Identify different volume groups for different uses or functions (for example, SCSI target, remote mirror and copy secondary volumes, FlashCopy targets, and Copy Services). Access to the set of logical volumes that are identified by the volume group can be controlled. Volume groups map hosts to volumes. Figure 10 on page 31 shows a graphic representation of the logical configuration sequence.
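The following DS CLI sequence is a minimal sketch of this hierarchy for fixed block storage. The array site, array, rank, pool, volume, volume group, and host identifiers are hypothetical, and the options that each command accepts vary by release, so verify the syntax against the Command-Line Interface User's Guide before use:
dscli> mkarray -raidtype 6 -arsite S1
dscli> mkrank -array A0 -stgtype fb
dscli> mkextpool -rankgrp 0 -stgtype fb fb_pool_0
dscli> chrank -extpool P0 R0
dscli> mkfbvol -extpool P0 -cap 100 -name host_vol_1 0000
dscli> mkvolgrp -type scsimap256 -volume 0000 open_host_vg
dscli> mkhostconnect -wwname 10000000C9A1B2C3 -profile "IBM pSeries - AIX" -volgrp V0 aix_host_1
Each step builds on the identifiers that the previous command creates (array A0, rank R0, pool P0, volume group V0), which mirrors the disk-to-volume-group sequence shown in Figure 10.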
When volumes are created, you must initialize logical tracks from the host before the host is allowed read and write access to the logical tracks on the volumes. The Quick Initialization feature for open systems on FB ESE volumes allows quicker access to logical volumes. The volumes include host volumes and source volumes that can be used in Copy Services relationships, such as FlashCopy or Remote Mirror
and Copy relationships. This process dynamically initializes logical volumes when they are created or expanded, allowing them to be configured and placed online more quickly.
You can specify LUN ID numbers through the graphical user interface (GUI) for volumes in a map-type volume group. You can create a new volume group, add volumes to an existing volume group, or add a volume group to a new or existing host. Previously, gaps or holes in LUN ID numbers might result in a "map error" status. The Status field is eliminated from the volume groups main page in the GUI and the volume groups accessed table on the Manage Host Connections page. You can also assign host connection nicknames and host port nicknames. Host connection nicknames can be up to 28 characters, which is expanded from the previous maximum of 12. Host port nicknames can be 32 characters, which are expanded from the previous maximum of 16.
Figure 10. Logical configuration sequence: disk, array site, array, rank, extents, extent pool, logical volume, volume group; volume groups map hosts to volumes.

RAID implementation

RAID implementation improves data storage reliability and performance.
Redundant array of independent disks (RAID) is a method of configuring multiple drives in a storage subsystem for high availability and high performance. The collection of two or more drives presents the image of a single drive to the system. If a single device failure occurs, data can be read or regenerated from the other drives in the array.
RAID implementation provides fault-tolerant data storage by storing the data in different places on multiple drives. By placing data on multiple drives, I/O operations can overlap in a balanced way to improve the basic reliability and performance of the attached storage devices.
Physical capacity for the storage system can be configured as RAID 5, RAID 6, or RAID 10. RAID 5 can offer excellent performance for some applications, while RAID 10 can offer better performance for selected applications, in particular, high random, write content applications in the open systems environment. RAID 6 increases data protection by adding an extra layer of parity over the RAID 5 implementation.
RAID 6 is the recommended and default RAID type for all drives over 1 TB. RAID 6 and RAID 10 are the only supported RAID types for 3.8 TB Flash Tier 1 drives. RAID 6 is the only supported RAID type for 7.6 TB Flash Tier 2 drives.
RAID 5 overview
RAID 5 is a method of spreading volume data across multiple drives.
RAID 5 increases performance by supporting concurrent accesses to the multiple drives within each logical volume. Data protection is provided by parity, which is stored throughout the drives in the array. If a drive fails, the data on that drive can be restored using all the other drives in the array along with the parity bits that were created when the data was stored.
RAID 5 is not supported for drives larger than 1 TB and requires a request for price quote (RPQ). For information, contact your sales representative.
Note: RAID 6 is the recommended and default RAID type for all drives over 1 TB. RAID 6 and RAID 10 are the only supported RAID types for 3.8 TB Flash Tier 1 drives. RAID 6 is the only supported RAID type for 7.6 TB Flash Tier 2 drives.
RAID 6 overview
RAID 6 is a method of increasing the data protection of arrays with volume data spread across multiple disk drives.
RAID 6 increases data protection by adding an extra layer of parity over the RAID 5 implementation. By adding this protection, RAID 6 can restore data from an array with up to two failed drives. The calculation and storage of extra parity slightly reduces the capacity and performance compared to a RAID 5 array.
The default RAID type for all drives over 1 TB is RAID 6. RAID 6 and RAID 10 are the only supported RAID types for 3.8 TB Flash Tier 1 drives. RAID 6 is the only supported RAID type for 7.6 TB Flash Tier 2 drives.
RAID 10 overview
RAID 10 provides high availability by combining features of RAID 0 and RAID 1.
RAID 0 increases performance by striping volume data across multiple disk drives. RAID 1 provides disk mirroring, which duplicates data between two disk drives. By combining the features of RAID 0 and RAID 1, RAID 10 provides a second optimization for fault tolerance.
RAID 10 implementation provides data mirroring from one disk drive to another disk drive. RAID 10 stripes data across half of the disk drives in the RAID 10 configuration. The other half of the array mirrors the first set of disk drives. Access
to data is preserved if one disk in each mirrored pair remains available. In some cases, RAID 10 offers faster data reads and writes than RAID 5 because it is not required to manage parity. However, with half of the disk drives in the group used for data and the other half used to mirror that data, RAID 10 arrays have less capacity than RAID 5 arrays.
Note: RAID 6 is the recommended and default RAID type for all drives over 1 TB.
RAID 6 and RAID 10 are the only supported RAID types for 3.8 TB Flash Tier 1 drives. RAID 6 is the only supported RAID type for 7.6 TB Flash Tier 2 drives.

Logical subsystems

To facilitate configuration of a storage system, volumes are partitioned into groups of volumes. Each group is referred to as a logical subsystem (LSS).
As part of the storage configuration process, you can configure the maximum number of LSSs that you plan to use. The storage system can contain up to 255 LSSs and each LSS can be connected to 16 other LSSs using a logical path. An LSS is a group of up to 256 volumes that have the same storage type, either count key data (CKD) for IBM Z hosts or fixed block (FB) for open systems hosts.
An LSS is uniquely identified within the storage system by an identifier that consists of two hex characters (0-9 or uppercase A-F) with which the volumes are associated. A fully qualified LSS is designated using the storage system identifier and the LSS identifier, such as IBM.2107-921-12FA123/1E. The LSS identifiers are important for Copy Services operations. For example, for FlashCopy operations, you specify the LSS identifier when choosing source and target volumes because the volumes can span LSSs in a storage system.
The storage system has a 64K volume address space that is partitioned into 255 LSSs, where each LSS contains 256 logical volume numbers. The 255 LSS units are assigned to one of 16 address groups, where each address group contains 16 LSSs, or 4K volume addresses.
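For example, volume number 1E07 identifies volume 07 in LSS 1E, and LSS 1E belongs to address group 1, which contains LSSs 10 - 1F.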
Storage system functions, including some that are associated with FB volumes, might have dependencies on LSS partitions. For example:
v The LSS partitions and their associated volume numbers must identify volumes
that are specified for storage system Copy Services operations.
v To establish Remote Mirror and Copy pairs, a logical path must be established
between the associated LSS pair.
v FlashCopy pairs must reside within the same storage system.
If you increase storage system capacity, you can increase the number of LSSs that you have defined. This modification to increase the maximum is a nonconcurrent action. If you might need capacity increases in the future, leave the number of LSSs set to the maximum of 255.
Note: If you reduce the CKD LSS limit to zero for IBM Z hosts, the storage system does not process Remote Mirror and Copy functions. The FB LSS limit must be no lower than eight to support Remote Mirror and Copy functions for open-systems hosts.

Allocation methods

Allocation methods (also referred to as extent allocation methods) determine the means by which volume capacity is allocated within a pool.
All extents of the ranks that are assigned to an extent pool are independently available for allocation to logical volumes. The extents for a LUN or volume are logically ordered, but they do not have to come from one rank and the extents do not have to be contiguous on a rank. This construction method of using fixed extents to form a logical volume in the storage system allows flexibility in the management of the logical volumes. You can delete volumes, resize volumes, and reuse the extents of those volumes to create other volumes of different sizes. One logical volume can be deleted without affecting the other logical volumes that are defined on the same extent pool.
Because the extents are cleaned after you delete a volume, it can take some time until these extents are available for reallocation. The reformatting of the extents is a background process.
There are three allocation methods that are used by the storage system: rotate capacity (also referred to as storage pool striping), rotate volumes, and managed.
Rotate capacity allocation method
The default allocation method is rotate capacity, which is also referred to as storage pool striping. The rotate capacity allocation method is designed to provide the best performance by striping volume extents across arrays in a pool. The storage system keeps a sequence of arrays. The first array in the list is randomly picked at each power-on of the storage subsystem. The storage system tracks the array in which the last allocation started. The allocation of a first extent for the next volume starts from the next array in that sequence. The next extent for that volume is taken from the next rank in sequence, and so on. The system rotates the extents across the arrays.
If you migrate a volume with a different allocation method to a pool that has the rotate capacity allocation method, then the volume is reallocated. If you add arrays to a pool, the rotate capacity allocation method reallocates the volumes by spreading them across both existing and new arrays.
You can configure and manage this allocation method by using the DS8000 Storage Management GUI, DS CLI, and DS Open API.
Rotate volumes allocation method
Volume extents can be allocated sequentially. In this case, all extents are taken from the same array until there are enough extents for the requested volume size or the array is full, in which case the allocation continues with the next array in the pool.
If more than one volume is created in one operation, the allocation for each volume starts in another array. You might want to consider this allocation method when you prefer to manage performance manually. The workload of one volume is allocated to one array. This method makes the identification of performance bottlenecks easier; however, by putting all the volume data onto just one array, you might introduce a bottleneck, depending on your actual workload.
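As an illustration, the DS CLI volume-creation commands accept an extent allocation method parameter. The following sketch contrasts the two methods for a hypothetical pool P1 and hypothetical volume IDs; the -eam parameter and its rotateexts and rotatevols values are assumptions here, so confirm them against the Command-Line Interface User's Guide for your release:
dscli> mkfbvol -extpool P1 -cap 200 -eam rotateexts -name striped_vol 0100
dscli> mkfbvol -extpool P1 -cap 200 -eam rotatevols -name sequential_vol 0101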
Managed allocation method
When a volume is managed by Easy Tier, the allocation method of the volume is referred to as managed. Easy Tier allocates the capacity in ways that might differ from both the rotate capacity and rotate volume allocation methods.

Management interfaces

You can use various IBM storage management interfaces to manage your storage system.
These interfaces include the DS8000 Storage Management GUI, DS Command-Line Interface (DS CLI), DS Open Application Programming Interface, DS8000 RESTful API, IBM Storage Mobile Dashboard, IBM Spectrum Control, and IBM Copy Services Manager.

DS8000 Storage Management GUI

Use the DS8000 Storage Management GUI to configure and manage storage and monitor performance and Copy Services functions.
DS8000 Storage Management GUI is a web-based GUI that is installed on the Hardware Management Console (HMC). You can access the DS8000 Storage Management GUI from any network-attached system by using a supported web browser. For a list of supported browsers, see “DS8000 Storage Management GUI supported web browsers” on page 38.
You can access the DS8000 Storage Management GUI from a browser by using the following web address, where HMC_IP is the IP address or host name of the HMC.
https://HMC_IP
If the DS8000 Storage Management GUI does not display as anticipated, clear the cache for your browser, and try to log in again.
Notes:
v If the storage system is configured for NIST SP 800-131A security conformance, a version of Java that is NIST SP 800-131A compliant must be installed on all systems that run the DS8000 Storage Management GUI. For more information about security requirements, see information about configuring your environment for NIST SP 800-131A compliance in the IBM DS8000 series online product documentation (http://www.ibm.com/support/knowledgecenter/ST5GLJ_8.1.0/com.ibm.storage.ssic.help.doc/f2c_securitybp.html).
v User names and passwords are encrypted for HTTPS protocol. You cannot access the DS8000 Storage Management GUI over the non-secure HTTP protocol (port 8451).

DS command-line interface

The IBM DS command-line interface (DS CLI) can be used to create, delete, modify, and view Copy Services functions and the logical configuration of a storage system. These tasks can be performed either interactively, in batch processes (operating system shell scripts), or in DS CLI script files. A DS CLI script file is a text file that contains one or more DS CLI commands and can be issued as a single command. DS CLI can be used to manage logical configuration, Copy Services configuration, and other functions for a storage system, including managing security settings, querying point-in-time performance information or status of physical resources, and exporting audit logs.
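As an illustration, the following invocations show the interactive, single-command, and script modes of the DS CLI client; the HMC address, user credentials, and script file name are placeholders:
dscli -hmc1 HMC_IP -user admin -passwd mypassword
dscli -hmc1 HMC_IP -user admin -passwd mypassword lssi
dscli -hmc1 HMC_IP -user admin -passwd mypassword -script volume_setup.cli
The first form starts an interactive session, the second runs a single command (lssi lists the storage image) and exits, and the third runs every command in the named script file as a single invocation.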
Note: Java™ 1.8 must be installed on systems that run the DS CLI.
The DS CLI provides a full-function set of commands to manage logical configurations and Copy Services configurations. The DS CLI is available in the
DS8000 Storage Management GUI. The DS CLI client can also be installed on and is supported in many different environments, including the following platforms:
v AIX 6.1, 7.1, 7.2
v Red Hat Enterprise Linux (RHEL) 6 and 7
v SUSE Linux Enterprise Server (SLES) 11 and 12
v VMware ESX 5.5, 6 Console
v IBM i 7.1, 7.2
v Oracle Solaris 10 and 11
v Microsoft Windows Server 2008, 2012 and Windows 7, 8, 8.1, 10
Note: If the storage system is configured for NIST SP 800-131A security conformance, a version of Java that is NIST SP 800-131A compliant must be installed on all systems that run the DS CLI client. For more information about security requirements, see documentation about configuring your environment for NIST SP 800-131A compliance in IBM Knowledge Center (https://www.ibm.com/support/knowledgecenter/ST5GLJ_8.5.0/com.ibm.storage.ssic.help.doc/f2c_securitybp_nist.html).

DS Open Application Programming Interface

The DS Open Application Programming Interface (API) is a nonproprietary storage management client application that supports routine LUN management activities. Activities that are supported include: LUN creation, mapping and masking, and the creation or deletion of RAID 5, RAID 6, and RAID 10 volume spaces.
The DS Open API helps integrate configuration management support into storage resource management (SRM) applications, which help you to use existing SRM applications and infrastructures. The DS Open API can also be used to automate configuration management through customer-written applications. Either way, the DS Open API presents another option for managing storage units by complementing the use of the IBM Storage Management GUI web-based interface and the DS command-line interface.
Note: The DS Open API supports the storage system and is an embedded component.
You can implement the DS Open API without using a separate middleware application. For example, you can implement it with the IBM Common Information Model (CIM) agent, which provides a CIM-compliant interface. The DS Open API uses the CIM technology to manage proprietary devices as open system devices through storage management applications. The DS Open API is used by storage management applications to communicate with a storage unit.

RESTful API

The RESTful API is an application on the DS8000 HMC for initiating simple storage operations through the Web.
The RESTful (Representational State Transfer) API is a platform independent means by which to initiate create, read, update, and delete operations in the storage system and supporting storage devices. These operations are initiated with the HTTP commands: POST, GET, PUT, and DELETE.
The RESTful API is intended for use in the development, testing, and debugging of DS8000 client management infrastructures. You can use the RESTful API with a
CURL command or through standard Web browsers. For instance, you can use the storage system with the RESTClient add-on.
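For example, the following curl sketch retrieves a system resource; the port number and URI path are assumptions for illustration only (most requests also require an authentication token header), so use the URIs that are documented in the DS8000 RESTful API reference:
curl -k https://HMC_IP:8452/api/v1/systems
The -k option skips certificate validation, which is acceptable only for testing; production scripts should trust the HMC certificate instead.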

IBM Spectrum Control

IBM Spectrum Control is an integrated software solution that can help you improve and centralize the management of your storage environment through the integration of products. With IBM Spectrum Control, it is possible to manage multiple DS8000 systems from a single point of control.
Note: IBM Spectrum Control is not required for the operation of a storage system. However, it is recommended. IBM Spectrum Control can be ordered and installed as a software product on various servers and operating systems. When you install IBM Spectrum Control, ensure that the selected version supports the current system functions. Optionally, you can order a server on which IBM Spectrum Control is preinstalled.
IBM Spectrum Control simplifies storage management by providing the following benefits:
v Centralizing the management of heterogeneous storage network resources with IBM storage management software
v Providing greater synergy between storage management software and IBM storage devices
v Reducing the number of servers that are required to manage your software infrastructure
v Migrating from basic device management to storage management applications that provide higher-level functions
For more information, see IBM Spectrum Control online product documentation in IBM Knowledge Center (www.ibm.com/support/knowledgecenter).

IBM Copy Services Manager

IBM Copy Services Manager controls Copy Services in storage environments. Copy Services are features that are used by storage systems, such as DS8000, to configure, manage, and monitor data-copy functions.
IBM Copy Services Manager provides both a graphical interface and command line that you can use for configuring and managing Copy Services functions across storage units. Copy Services include the point-in-time function – IBM FlashCopy and Safeguarded Copy, and the remote mirror and copy functions – Metro Mirror, Global Mirror, and Metro Global Mirror. Copy Services Manager can automate the administration and configuration of these services; and monitor and manage copy sessions.
You can use Copy Services Manager to complete the following data replication tasks and help reduce the downtime of critical applications:
v Plan for replication when you are provisioning storage
v Keep data on multiple related volumes consistent across storage systems for a planned or unplanned outage
v Monitor and track replication operations
v Automate the mapping of source volumes to target volumes
Starting with DS8000 Version 8.1, Copy Services Manager also comes preinstalled on the Hardware Management Console (HMC). Therefore, you can enable the
Copy Services Manager software that is already on the hardware system. Doing so results in less setup time; and eliminates the need to maintain a separate server for Copy Services functions.
You can also use Copy Services Manager to connect to an LDAP repository for remote authentication. For more information, see the DS8000 online product documentation at http://www.ibm.com/support/knowledgecenter/ST5GLJ/ds8000_kcwelcome.html and search for topics that are related to remote authentication.
For more information, see the Copy Services Manager online product documentation at http://www.ibm.com/support/knowledgecenter/SSESK4/csm_kcwelcome.html. The "What's new" topic provides details on the features added for each version of Copy Services Manager that can be used by DS8000, including HyperSwap for multi-target sessions, and incremental FlashCopy support.

DS8000 Storage Management GUI supported web browsers

To access the DS8000 Storage Management GUI, you must ensure that your web browser is supported and has the appropriate settings enabled.
The DS8000 Storage Management GUI supports the following web browsers:
Table 20. Supported web browsers
DS8000 series version 8.5 supports the following browsers:
v Mozilla Firefox 38
v Mozilla Firefox Extended Support Release (ESR) 38
v Microsoft Internet Explorer 11
v Google Chrome 43
IBM supports higher versions of the browsers as long as the vendors do not remove or disable functionality that the product relies upon. For browser levels higher than the versions that are certified with the product, customer support accepts usage-related and defect-related service requests. As with operating system and virtualization environments, if the support center cannot re-create the issue in our lab, we might ask the client to re-create the problem on a certified browser version to determine whether a product defect exists. Defects are not accepted for cosmetic differences between browsers or browser versions that do not affect the functional behavior of the product. If a problem is identified in the product, defects are accepted. If a problem is identified with the browser, IBM might investigate potential solutions or workarounds that the client can implement until a permanent solution becomes available.
Enabling TLS 1.2 support
If the security requirements for your storage system require conformance with NIST SP 800-131A, enable transport layer security (TLS) 1.2 on web browsers that use SSL/TLS to access the DS8000 Storage Management GUI. See your web browser documentation for instructions on enabling TLS 1.2. For Internet Explorer, complete the following steps to enable TLS 1.2.
1. On the Tools menu, click Internet Options.
2. On the Advanced tab, under Settings, select Use TLS 1.2.
Note: Firefox, Release 24 and later, supports TLS 1.2. However, you must configure Firefox to enable TLS 1.2 support.
For more information about security requirements, see .
Selecting browser security settings
You must select the appropriate web browser security settings to access the DS8000 Storage Management GUI. In Internet Explorer, use the following steps.
1. On the Tools menu, click Internet Options.
2. On the Security tab, select Internet and click Custom level.
3. Scroll to Miscellaneous, and select Allow META REFRESH.
4. Scroll to Scripting, and select Active scripting.
Configuring Internet Explorer to access the DS8000 Storage Management GUI
If DS8000 Storage Management GUI is accessed through IBM Spectrum Control with Internet Explorer, complete the following steps to properly configure the web browser.
1. Disable the Pop-up Blocker.
Note: If a message indicates that content is blocked because it is not signed by a valid security certificate, click the Information Bar at the top and select Show blocked content.
2. Add the IP address of the DS8000 Hardware Management Console (HMC) to
the Internet Explorer list of trusted sites.
For more information, see your browser documentation.

Chapter 2. Hardware features

Use this information to assist you with planning, ordering, and managing your storage system.
The following table lists feature codes that are used to order hardware features for your system.
Table 21. Feature codes for hardware features
Feature code  Feature: Description
0101  Single-phase input power indicator, 200 - 220 V, 30 A
0102  Single-phase input power indicator, 220 - 240 V, 30 A
0170  Top expansion: For models 985 and 85E, 986 and 86E, 988 and 88E, increases frame from 40U to 46U
0200  Shipping weight reduction: Maximum shipping weight of any storage system base model or expansion model does not exceed 909 kg (2000 lb) each. Packaging adds 120 kg (265 lb).
0400  BSMI certification documents: Required when the storage system model is shipped to Taiwan.
0403  Non-encryption certification key: Required when the storage system model is shipped to China or Russia.
1050  Battery service modules: Single-phase DC-UPS
1052  Battery service modules: Three-phase DC-UPS
1055  Extended power line disturbance: An optional feature that is used to protect the storage system from a power-line disturbance for up to 40 seconds.
1062  Single-phase power cord, 200 - 240 V, 60 A, 3-pin connector: HBL360C6W, Pin and Sleeve Connector, IEC 60309, 2P3W; HBL360R6W, AC Receptacle, IEC 60309, 2P3W
1063  Single-phase power cord, 200 - 240 V, 63 A, no connector: Inline Connector: not applicable; Receptacle: not applicable
1086  Three-phase high voltage (five-wire 3P+N+G), 380-415V (nominal), 30 A, IEC 60309 5-pin customer connector: HBL530C6V02, Pin and Sleeve Connector, IEC 60309, 4P5W; HBL530R6V02, AC Receptacle, IEC 60309, 4P5W
1087  Three-phase low voltage (four-wire 3P+G), 200-240V, 30 A, IEC 60309 4-pin customer connector: HBL430C9W, Pin and Sleeve Connector, IEC 60309, 3P4W; HBL430R9W, AC Receptacle, IEC 60309, 3P4W
1088  Three-phase high voltage (five-wire 3P+N+G), 380-415V, 40 A, no customer connector provided: Inline Connector: not applicable; Receptacle: not applicable
1089  Three-phase low voltage (four-wire 3P+G), 200-240V, 60 A, IEC 60309 4-pin customer connector: HBL460C9W, Pin and Sleeve Connector, IEC 60309, 3P4W; HBL460R9W, AC Receptacle, IEC 60309, 3P4W
1101  5 ft. ladder: For models 984 and 84E, 985 and 85E, 986 and 86E, 988 and 88E
1102  3 ft. platform ladder: For models 984 and 84E, 985 and 85E, 986 and 86E, 988 and 88E
1103  Rolling step stool: For models 984, 985, 986, and 988
1141  Primary management console: A primary management console is required to be installed in the model 984, 985, 986, and 988
1151  Secondary management console: Redundant management console for high availability; for model 984, 985, 986, and 988, this feature is optional
1241  Drive enclosure pair total: Admin feature totaling all disk enclosure pairs installed in the model
1242  Standard drive enclosure pair: For 2.5-inch disk drives
1244  Standard drive enclosure pair: For 3.5-inch disk drives
1245  Standard drive enclosure pair: For 400 GB flash drives
1246  Drive cable group A: Connects the disk drives to the device adapters within the same base model 985 or 986
1247  Drive cable group B: Connects the disk drives to the device adapters in the first expansion model 85E or 86E
1248  Drive cable group C: Connects the disk drives from a second expansion model 85E or 86E to the base model 985 or 986 and first expansion model 85E or 86E
1249  Drive cable group D: Connects the disk drives from a third expansion model 85E or 86E to a second expansion model 85E or 86E
1251  Drive cable group E: Connects the disk drives from a fourth expansion model 85E or 86E to a third expansion model 85E or 86E
1252  Extended drive cable group C: For model 85E or 86E, 20 meter cable to connect disk drives from a third expansion model 85E or 86E to a second expansion model 85E or 86E
1253  Extended drive cable group D: For model 85E or 86E, 20 meter cable to connect disk drives from a fourth expansion model 85E or 86E to a third expansion model 85E or 86E
1254  Extended drive cable group E: For model 85E or 86E, 20 meter cable to connect disk drives from a fifth expansion model 85E or 86E to a fourth expansion model 85E or 86E
1256  Standard drive enclosure pair: For 800 GB flash drives
1257  Standard drive enclosure pair: For 1.6 TB flash drives
1261  Drive cable group A: Connects the disk drives to the device adapters within the same base model 984
1262  Drive cable group B: Connects the disk drives to the device adapters in the first expansion model 84E
1263  Drive cable group C: Connects the disk drives from a second expansion model 84E to the base model 984 and first expansion model 84E
1266  Extended drive cable group C: For model 84E, 20 meter cable to connect disk drives from a second expansion model 84E to the base model 984 and first expansion model 84E
1303  Gen2 I/O enclosure pair
1320  PCIe cable group 1: Connects device and host adapters in an I/O enclosure pair to the processor.
1321  PCIe cable group 2: Connects device and host adapters in I/O enclosure pairs to the processor.
1400  Top-exit bracket for Fibre Channel cable
1410  Fibre Channel cable: 40 m (131 ft), 50 micron OM3 or higher, multimode LC
1411  Fibre Channel cable: 31 m (102 ft), 50 micron OM3 or higher, multimode LC/SC
1412  Fibre Channel cable: 2 m (6.5 ft), 50 micron OM3 or higher, multimode LC/SC Jumper
1420  Fibre Channel cable: 31 m (102 ft), 9 micron OS1 or higher, single mode LC
1421  Fibre Channel cable: 31 m (102 ft), 9 micron OS1 or higher, single mode LC/SC
1422  Fibre Channel cable: 2 m (6.5 ft), 9 micron OS1 or higher, single mode LC/SC Jumper
1600  High Performance Flash Enclosure Gen2 pair with flash RAID controllers: For flash drives
1602  High Performance Flash Enclosure Gen2 pair: For DS8888F (models 988 and 88E); requires feature code 1604
1604  Flash RAID adapter pair: For DS8888F (models 988 and 88E), required after first 8 feature code 1600
1610  400 GB 2.5-inch Flash Tier 0 drive set: Flash drive set (16 drives)
1611  800 GB 2.5-inch Flash Tier 0 drive set: Flash drive set (16 drives)
1612  1.6 TB 2.5-inch Flash Tier 0 drive set: Flash drive set (16 drives)
1613  3.2 TB 2.5-inch Flash Tier 0 drive set: Flash drive set (16 drives)
1623  3.8 TB 2.5-inch Flash Tier 1 drive set: Flash drive set (16 drives); requires feature code 1602; no intermix with Flash Tier 0 or Flash Tier 2 drive sets
1624  7.6 TB 2.5-inch Flash Tier 2 drive set: Flash drive set (16 drives); no intermix with Flash Tier 0 or Flash Tier 1 drive sets
1699  High Performance Flash Enclosure Gen2 filler set: Includes 16 fillers
1761  External SKLM isolated-key appliance: Model AP1 single server configuration
1762  Secondary external SKLM isolated-key appliance: Model AP1 dual server configuration
1884  DS8000 Licensed Machine Code R8.4: Microcode bundle 88.x.xx.x for base models 984, 985, 986, and 988
1885  DS8000 Licensed Machine Code R8.5: Microcode bundle 88.x.xx.x for base model 984, 985, 986, and 988
1906  Earthquake resistance kit
1984  DS8000 Licensed Machine Code R8.4: Microcode bundle 88.x.xx.x for expansion models 84E, 85E, 86E, and 88E
1985  DS8000 Licensed Machine Code R8.5: Microcode bundle 88.x.xx.x for expansion models 84E, 85E, 86E, and 88E
2997  Disk enclosure filler set: For 3.5-in. DDMs; includes eight fillers
2999  Disk enclosure filler set: For 2.5-in. DDMs; includes 16 fillers
3053  Device adapter pair: 4-port, 8 Gb
3153  Fibre Channel host-adapter: 4-port, 8 Gbps shortwave FCP and FICON host adapter PCIe
3157  Fibre Channel host-adapter: 8-port, 8 Gbps shortwave FCP and FICON host adapter PCIe
3253  Fibre Channel host-adapter: 4-port, 8 Gbps longwave FCP and FICON host adapter PCIe
3257  Fibre Channel host-adapter: 8-port, 8 Gbps longwave FCP and FICON host adapter PCIe
3353  Fibre Channel host-adapter: 4-port, 16 Gbps shortwave FCP and FICON host adapter PCIe
3453  Fibre Channel host-adapter: 4-port, 16 Gbps longwave FCP and FICON host adapter PCIe; requires feature code 3065 or 3066
3500  zHyperLink I/O-adapter: Required for feature code 1450 and 1451
3600  Transparent cloud tiering adapter pair for 2U processor complex (optional): 2-port 10 Gbps SFP+ optical/2-port 1 Gbps RJ-45 copper longwave adapter pair for model 984
3601  Transparent cloud tiering adapter pair for 4U processor complex (optional): 2-port 10 Gbps SFP+ optical/2-port 1 Gbps RJ-45 copper longwave adapter pair for models 985, 986, and 988
4233  64 GB system memory (6-core)
4234  128 GB system memory (6-core)
4235  256 GB system memory (6-core and 12-core)
4334  128 GB system memory (8-core)
4335  256 GB system memory (8-core and 16-core)
4336  512 GB system memory (16-core)
4337  1 TB system memory (24-core)
4338  2 TB system memory (24-core)
4421  6-core POWER8 processors: Requires feature code 4233, 4234, or 4235
4422  8-core POWER8 processors: Requires feature code 4334, or 4335
4423  16-core POWER8 processors: Requires feature code 4335, or 4336
4424  24-core POWER8 processors: Requires feature code 4337, or 4338
4425  12-core POWER8 processors: For zHyperLink support on model 984; requires feature code 4235
4497  1 TB system memory (24-core)
4498  2 TB system memory (48-core)
4878  24-core POWER8+ processors: Requires feature code 4497
4898  48-core POWER8+ processors: Requires feature code 4498
5308  300 GB 15 K FDE disk-drive set: SAS
5618  600 GB 15 K FDE disk-drive set: SAS
5708  600 GB 10K FDE disk-drive set: SAS
5768  1.2 TB 10K FDE disk-drive set: SAS
5778  1.8 TB 10K FDE disk-drive set: SAS
5868  4 TB 7.2 K FDE disk-drive set: SAS
5878  6 TB 7.2 K FDE disk-drive set: SAS
6158  400 GB FDE flash-drive set: SAS
6258  800 GB FDE flash-drive set: SAS
6358  1.6 TB SSD FDE drive set: SAS
6458  3.2 TB SSD FDE drive set: SAS

Storage complexes

A storage complex is a set of storage units that are managed by management console units.
You can associate one or two management console units with a storage complex. Each storage complex must use at least one of the management console units in one of the storage units. You can add a second management console for redundancy.

Management console

The management console supports storage system hardware and firmware installation and maintenance activities.
The management console is a dedicated processor unit that is located inside your storage system, and can automatically monitor the state of your system, and notify you and IBM when service is required.
To provide continuous availability of your access to the management-console functions, use an additional management console, especially for storage environments that use encryption. Both management consoles share a keyboard and display that are stored on the left side of the base frame.

Hardware specifics

The storage system models offer a high degree of availability and performance through the use of redundant components that can be replaced while the system is operating. You can use a storage system model with a mix of different operating systems and clustered and nonclustered variants of the same operating systems.
Contributors to the high degree of availability and reliability include the structure of the storage unit, the host systems that are supported, and the memory and speed of the processors.

Storage system structure

The design of the storage system contributes to the high degree of availability. The primary components that support high availability within the storage unit are the storage server, the processor complex, and the power control card.
Storage system
The storage unit contains a storage server and one or more pairs of storage enclosures that are packaged in one or more frames with associated power supplies, batteries, and cooling.
Storage server
The storage server consists of two processor complexes, two or more I/O enclosures, and a pair of power control cards.
Processor complex
The processor complex controls and manages the storage server functions in the storage system. The two processor complexes form a redundant pair such that if either processor complex fails, the remaining processor complex controls and manages all storage server functions.
Power control card
A redundant pair of power control cards coordinate the power management within the storage unit. The power control cards are attached to the service processors in each processor complex, the primary power supplies in each frame, and indirectly to the fan/sense cards and storage enclosures in each frame.

Disk drives and flash drives

The storage system provides you with a choice of drives.
The following drives are available:
v 2.5-inch Flash Tier 0 drives with FDE
  – 400 GB
  – 800 GB
  – 1.6 TB
  – 3.2 TB
v 2.5-inch Flash Tier 1 drives with FDE
  – 3.8 TB
v 2.5-inch Flash Tier 2 drives with FDE
  – 7.6 TB
v 2.5-inch flash drives with FDE
  – 400 GB
  – 800 GB
  – 1.6 TB
v 2.5-inch disk drives with FDE
  – 300 GB, 15 K RPM
  – 600 GB, 15 K RPM
  – 600 GB, 10 K RPM
  – 1.2 TB, 10 K RPM
  – 1.8 TB, 10 K RPM
v 3.5-inch disk drives with FDE
  – 4 TB, 7.2 K RPM
  – 6 TB, 7.2 K RPM
Note: Intermix of Flash Tier 0, Flash Tier 1, and Flash Tier 2 drives is not supported.

Drive maintenance policy

The internal maintenance functions use an Enhanced Sparing process that delays a service call for drive replacement if there are sufficient spare drives. All drive repairs are managed according to Enhanced Sparing rules.
A minimum of two spare drives are allocated in a device adapter loop. Internal maintenance functions continuously monitor and report (by using the call home feature) to IBM when the number of drives in a spare pool reaches a preset threshold. This design ensures continuous availability of devices while protecting data and minimizing any service disruptions.
Do not replace a drive unless an error is generated indicating that service is needed.

Host attachment overview

The storage system provides various host attachments so that you can consolidate storage capacity and workloads for open-systems hosts and IBM Z.
The storage system provides extensive connectivity using Fibre Channel adapters across a broad range of server environments.
Host adapter intermix support
Both 4-port and 8-port host adapters (HAs) are available in systems with frames. These systems can have a maximum of four host adapters per I/O enclosure including 4-port 16 Gbps adapters, 4- or 8-port 8 Gbps adapters, or a combination of each.
Models 984, 985, 986, and 988
A maximum of 16 ports per I/O enclosure is supported, which provides for a maximum of 128 ports in a system. Eight-port 8 Gbps adapters are allowed only in slots C1 and C4. If an 8-port adapter is present in slot C1, no adapter can be installed in slot C2. If an 8-port adapter is present in slot C4, no adapter can be installed in slot C5.
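The placement rules above can be checked mechanically. The following Python sketch is purely illustrative; the slot names and the dictionary layout are just for this example and are not part of any DS8000 interface.

# Illustrative check of the 8-port adapter placement rules described above.
def valid_ha_layout(slots):
    """slots maps 'C1'..'C6' to None, '4-port', or '8-port'."""
    # 8-port adapters are allowed only in slots C1 and C4.
    if any(v == "8-port" for k, v in slots.items() if k not in ("C1", "C4")):
        return False
    # An 8-port adapter in C1 blocks C2; an 8-port adapter in C4 blocks C5.
    for eight_slot, blocked_slot in (("C1", "C2"), ("C4", "C5")):
        if slots.get(eight_slot) == "8-port" and slots.get(blocked_slot) is not None:
            return False
    return True

print(valid_ha_layout({"C1": "8-port", "C2": None, "C4": "4-port", "C5": "4-port"}))  # True
print(valid_ha_layout({"C1": "8-port", "C2": "4-port"}))                              # False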
The following table shows the host adapter plug order.
Table 22. Plug order for 4- and 8-port HA slots for two and four I/O enclosures (plug order shown for HA slots C1, C2, C4, and C5)
For two I/O enclosures:
  Top I/O enclosure 1: C1=3, C2=7, C4=1, C5=5
  Bottom I/O enclosure 3: C1=2, C2=8, C4=4, C5=6
For four I/O enclosures in a DS8884 or DS8886 configuration:
  Top I/O enclosure 1: C1=7, C2=15, C4=3, C5=11
  Bottom I/O enclosure 3: C1=5, C2=13, C4=1, C5=9
  Top I/O enclosure 2: C1=4, C2=12, C4=8, C5=16
  Bottom I/O enclosure 4: C1=2, C2=10, C4=6, C5=14
The following HA-type plug order is used during manufacturing when different types of HA cards are installed.
48 DS8880 Introduction and Planning Guide
Page 61
1. 8-port 8 Gbps longwave host adapters
2. 8-port 8 Gbps shortwave host adapters
3. 4-port 16 Gbps longwave host adapters
4. 4-port 16 Gbps shortwave host adapters
5. 4-port 8 Gbps longwave host adapters
6. 4-port 8 Gbps shortwave host adapters
Open-systems host attachment with Fibre Channel adapters
You can attach a storage system to an open-systems host with Fibre Channel adapters.
The storage system supports SAN speeds of up to 16 Gbps with the current 16 Gbps host adapters, or up to 8 Gbps with the 8 Gbps host adapters. The storage system detects and operates at the greatest available link speed that is shared by both sides of the system.
Fibre Channel technology transfers data between the sources and the users of the information. Fibre Channel connections are established between Fibre Channel ports that reside in I/O devices, host systems, and the network that interconnects them. The network consists of elements like switches, bridges, and repeaters that are used to interconnect the Fibre Channel ports.
FICON attached IBM Z hosts overview
The storage system can be attached to FICON attached IBM Z host operating systems under specified adapter configurations.
Each storage system Fibre Channel adapter has four ports or eight ports, depending on the adapter type. Each port has a unique worldwide port name (WWPN). You can configure the port to operate with the FICON upper-layer protocol.
With Fibre Channel adapters that are configured for FICON, the storage system provides the following configurations:
v Either fabric or point-to-point topologies
v A maximum of 64 ports on DS8884F storage systems, 64 ports on DS8884 storage systems, and a maximum of 128 ports on DS8886, DS8886F, and DS8888F storage systems
v A maximum of 509 logins per Fibre Channel port
v A maximum of 8,192 logins per storage system
v A maximum of 1,280 logical paths on each Fibre Channel port
v Access to all 255 control-unit images (65,280 CKD devices) over each FICON port
v A maximum of 512 logical paths per control unit image
Note: IBM z13® and IBM z14™ servers support 32,768 devices per FICON host channel, while IBM zEnterprise® EC12 and IBM zEnterprise BC12 servers support 24,576 devices per FICON host channel. Earlier IBM Z servers support 16,384 devices per FICON host channel. To fully access 65,280 devices, it is necessary to connect multiple FICON host channels to the storage system. You can access the devices through a Fibre Channel switch or FICON director to a single storage system FICON port.
The storage system supports the following operating systems for IBM Z hosts:
v Linux
v Transaction Processing Facility (TPF)
v Virtual Storage Extended/Enterprise Storage Architecture
v z/OS
v z/VM
v z/VSE
For the most current information on supported hosts, operating systems, adapters, and switches, go to the IBM System Storage Interoperation Center (SSIC) website (www.ibm.com/systems/support/storage/config/ssic).

I/O load balancing

You can maximize the performance of an application by spreading the I/O load across processor nodes, arrays, and device adapters in the storage system.
During an attempt to balance the load within the storage system, placement of application data is the determining factor. The following resources are the most important to balance, roughly in order of importance:
v Activity to the RAID drive groups. Use as many RAID drive groups as possible for the critical applications. Most performance bottlenecks occur because a few drives are overloaded. Spreading an application across multiple RAID drive groups ensures that as many drives as possible are available. This is extremely important for open-system environments where cache-hit ratios are usually low.
v Activity to the nodes. When selecting RAID drive groups for a critical
application, spread them across separate nodes. Because each node has separate memory buses and cache memory, this maximizes the use of those resources.
v Activity to the device adapters. When selecting RAID drive groups within a
cluster for a critical application, spread them across separate device adapters.
v Activity to the Fibre Channel ports. Use the IBM Multipath Subsystem Device
Driver (SDD) or similar software for other platforms to balance I/O activity across Fibre Channel ports.
Note: For information about SDD, see IBM Multipath Subsystem Device Driver User's Guide (http://www-01.ibm.com/support/docview.wss?uid=ssg1S7000303). This document also describes the product engineering tool, the ESSUTIL tool, which is supported in the pcmpath commands and the datapath commands.

Storage consolidation

When you use a storage system, you can consolidate data and workloads from different types of independent hosts into a single shared resource.
You can mix production and test servers in an open systems environment or mix open systems and IBM Z hosts. In this type of environment, servers rarely, if ever, contend for the same resource.
Although sharing resources in the storage system has advantages for storage administration and resource sharing, there are more implications for workload planning. The benefit of sharing is that a larger resource pool (for example, drives or cache) is available for critical applications. However, you must ensure that uncontrolled or unpredictable applications do not interfere with critical work. This requires the same workload planning that you use when you mix various types of work on a server.

If your workload is critical, consider isolating it from other workloads. To isolate the workloads, place the data as follows:
v On separate RAID drive groups. Data for open systems or IBM Z hosts is automatically placed on separate arrays, which reduces the contention for drive use.
v On separate device adapters.
v In separate processor nodes, which isolates the use of memory buses, microprocessors, and cache resources. Before you decide, verify that the isolation of your data to a single node provides adequate data access performance for your application.

Count key data

In count-key-data (CKD) disk data architecture, the data field stores the user data.
Because data records can be variable in length, in CKD they all have an associated count field that indicates the user data record size. The key field enables a hardware search on a key. The commands used in the CKD architecture for managing the data and the storage devices are called channel command words.

Fixed block

In fixed block (FB) architecture, the data (the logical volumes) are mapped over fixed-size blocks or sectors.
With an FB architecture, the location of any block can be calculated to retrieve that block. This architecture uses tracks and cylinders. A physical disk contains multiple blocks per track, and a cylinder is the group of tracks that exists under the disk heads at one point in time without performing a seek operation.

T10 DIF support

The American National Standards Institute (ANSI) T10 Data Integrity Field (DIF) standard is supported on IBM Z for SCSI end-to-end data protection on fixed block (FB) LUN volumes. This support applies to the IBM DS8880 unit (98x models). IBM Z support applies to FCP channels only.
IBM Z provides added end-to-end data protection between the operating system and the DS8880 unit. This support adds protection information consisting of CRC (Cyclic Redundancy Checking), LBA (Logical Block Address), and host application tags to each sector of FB data on a logical volume.
Data protection using the T10 Data Integrity Field (DIF) on FB volumes includes the following features:
v Ability to convert logical volume formats between standard and protected formats supported through PPRC between standard and protected volumes
v Support for earlier versions of T10-protected volumes on the DS8880 with non-T10 DIF-capable hosts
v Allows end-to-end checking at the application level of data stored on FB disks
v Additional metadata stored by the storage facility image (SFI) allows host adapter-level end-to-end checking data to be stored on FB disks independently of whether the host uses the DIF format.
Notes:
v This feature requires changes in the I/O stack to take advantage of all the capabilities the protection offers.
v T10 DIF volumes can be used by any type of Open host with the exception of iSeries, but active protection is supported only for Linux on IBM Z or AIX on IBM Power Systems™. The protection can only be active if the host server has T10 DIF enabled.
v T10 DIF volumes can accept SCSI I/O of either T10 DIF or standard type, but if the FB volume type is standard, then only standard SCSI I/O is accepted.

Logical volumes

A logical volume is the storage medium that is associated with a logical disk. It typically resides on two or more hard disk drives.
For the storage unit, the logical volumes are defined at logical configuration time. For count-key-data (CKD) servers, the logical volume size is defined by the device emulation mode and model. For fixed block (FB) hosts, you can define each FB volume (LUN) with a minimum size of a single block (512 bytes) to a maximum size of 2^32 blocks or 16 TB.
A logical device that has nonremovable media has one and only one associated logical volume. A logical volume is composed of one or more extents. Each extent is associated with a contiguous range of addressable data units on the logical volume.

Allocation, deletion, and modification of volumes

Extent allocation methods (namely, rotate volumes and pool striping) determine the means by which actions are completed on storage system volumes.
All extents of the ranks assigned to an extent pool are independently available for allocation to logical volumes. The extents for a LUN or volume are logically ordered, but they do not have to come from one rank and the extents do not have to be contiguous on a rank. This construction method of using fixed extents to form a logical volume in the storage system allows flexibility in the management of the logical volumes. You can delete volumes, resize volumes, and reuse the extents of those volumes to create other volumes of different sizes. One logical volume can be deleted without affecting the other logical volumes defined on the same extent pool.
Because the extents are cleaned after you delete a volume, it can take some time until these extents are available for reallocation. The reformatting of the extents is a background process.
There are two extent allocation methods used by the storage system: rotate volumes and storage pool striping (rotate extents).
Storage pool striping: extent rotation
The default storage allocation method is storage pool striping. The extents of a volume can be striped across several ranks. The storage system keeps a sequence of ranks. The first rank in the list is randomly picked at each power on of the storage subsystem. The storage system tracks the rank in which the last allocation started. The allocation of a first extent for the next volume starts from the next rank in that sequence. The next extent for that volume is taken from the next rank in sequence, and so on. The system rotates the extents across the ranks.
If you migrate an existing non-striped volume to the same extent pool with a rotate extents allocation method, then the volume is "reorganized." If you add more ranks to an existing extent pool, then reorganizing the existing striped volumes spreads them across both existing and new ranks.
You can configure and manage storage pool striping by using the DS Storage Manager, DS CLI, and DS Open API. The default extent allocation method (EAM) option that is allocated to a logical volume is now rotate extents. The rotate extents option is designed to provide the best performance by striping volume extents across ranks in an extent pool.
Managed EAM: Once a volume is managed by Easy Tier, the EAM of the volume is changed to managed EAM, which can result in placement of the extents differing from the rotate volume and rotate extent rules. The EAM only changes when a volume is manually migrated to a non-managed pool.
Rotate volumes allocation method
Extents can be allocated sequentially. In this case, all extents are taken from the same rank until there are enough extents for the requested volume size or the rank is full, in which case the allocation continues with the next rank in the extent pool.
If more than one volume is created in one operation, the allocation for each volume starts in another rank. When several volumes are allocated, the allocation rotates through the ranks. You might want to consider this allocation method when you prefer to manage performance manually. The workload of one volume goes to one rank. This method makes the identification of performance bottlenecks easier; however, by putting all the volume's data onto just one rank, you might introduce a bottleneck, depending on your actual workload.
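The difference between the two methods can be pictured with a small sketch. The following Python example is a simplified illustration only; the real allocation logic in the storage system also accounts for rank capacities, randomized starting ranks, and Easy Tier management.

# Toy illustration of the two extent allocation methods described above.
def rotate_extents(n_extents, n_ranks, start=0):
    # Storage pool striping: successive extents go to successive ranks.
    return [(start + i) % n_ranks for i in range(n_extents)]

def rotate_volumes(n_extents, n_ranks, rank_free, start=0):
    # Rotate volumes: fill one rank before continuing with the next rank.
    placement, rank = [], start
    for _ in range(n_extents):
        if rank_free[rank] == 0:
            rank = (rank + 1) % n_ranks
        placement.append(rank)
        rank_free[rank] -= 1
    return placement

print(rotate_extents(6, n_ranks=4))                          # [0, 1, 2, 3, 0, 1]
print(rotate_volumes(6, n_ranks=4, rank_free=[4, 4, 4, 4]))  # [0, 0, 0, 0, 1, 1]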

LUN calculation

The storage system uses a volume capacity algorithm (calculation) to provide a logical unit number (LUN).
In the storage system, physical storage capacities are expressed in powers of 10. Logical or effective storage capacities (logical volumes, ranks, extent pools) and processor memory capacities are expressed in powers of 2. Both of these conventions are used for logical volume effective storage capacities.
On open volumes with 512 byte blocks (including T10-protected volumes), you can specify an exact block count to create a LUN. You can specify a standard LUN size (which is expressed as an exact number of binary GiB, 2^30 bytes) or you can specify an ESS volume size (which is expressed in decimal GB, 10^9 bytes, accurate to 0.1 GB). The unit of storage allocation for fixed block open systems volumes is one extent. The extent size for open volumes is either exactly 1 GiB, or 16 MiB. Any logical volume that is not an exact multiple of 1 GiB does not use all the capacity in the last extent that is allocated to the logical volume. Supported block counts are from 1 to 4 194 304 blocks (2 binary TiB) in increments of one block. Supported sizes are from 1 to 16 TiB in increments of 1 GiB. The supported ESS LUN sizes are limited to the exact sizes that are specified from 0.1 to 982.2 GB (decimal) in increments of 0.1 GB and are rounded up to the next larger 32 K byte boundary. The ESS LUN sizes do not result in standard LUN sizes. Therefore, they can waste capacity. However, the unused capacity is less than one full extent. ESS LUN sizes are typically used when volumes must be copied between the storage system and ESS.
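The following Python fragment illustrates the extent arithmetic described above for open volumes; the figures are examples only.

# Extents needed for an FB volume and the capacity left unused in the last extent.
BLOCK = 512                 # bytes per block on open volumes
GIB = 2**30                 # 1 GiB extent
MIB16 = 16 * 2**20          # 16 MiB small extent

def extents_needed(blocks, extent_bytes):
    volume_bytes = blocks * BLOCK
    extents = -(-volume_bytes // extent_bytes)       # ceiling division
    unused = extents * extent_bytes - volume_bytes   # wasted space in the last extent
    return extents, unused

# A 100.5 GiB request with 1 GiB extents needs 101 extents and wastes half an extent;
# with 16 MiB extents the same request fits exactly.
blocks = int(100.5 * GIB) // BLOCK
print(extents_needed(blocks, GIB))    # (101, 536870912)
print(extents_needed(blocks, MIB16))  # (6432, 0)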
Chapter 2. Hardware features 53
Page 66
On open volumes with 520 byte blocks, you can select one of the supported LUN sizes that are used on IBM i processors to create a LUN. The operating system uses 8 of the bytes in each block. This leaves 512 bytes per block for your data. Variable volume sizes are also supported.
Table 23 shows the disk capacity for the protected and unprotected models. Logically unprotecting a storage LUN allows the IBM i host to start system level mirror protection on the LUN. The IBM i system level mirror protection allows normal system operations to continue running in the event of a failure in an HBA, fabric, connection, or LUN on one of the LUNs in the mirror pair.
Note: On IBM i, logical volume sizes in the range 17.5 GB to 141.1 GB are supported as load source units. Logical volumes smaller than 17.5 GB or larger than 141.1 GB cannot be used as load source units.
Table 23. Capacity and models of disk volumes for IBM i hosts running IBM i operating system
Size Protected model Unprotected model
8.5 GB A01 A81
17.5 GB A02 A82
35.1 GB A05 A85
70.5 GB A04 A84
141.1 GB A06 A86
282.2 GB A07 A87
1 GB to 2000 GB 099 050
On CKD volumes, you can specify an exact cylinder count or a standard volume size to create a LUN. The standard volume size is expressed as an exact number of Mod 1 equivalents (which is 1113 cylinders). The unit of storage allocation for CKD volumes is one CKD extent. The extent size for a CKD volume is either exactly a Mod-1 equivalent (which is 1113 cylinders), or it is 21 cylinders when using the small-extents option. Any logical volume that is not an exact multiple of 1113 cylinders (1 extent) does not use all the capacity in the last extent that is allocated to the logical volume. For CKD volumes that are created with 3380 track formats, the number of cylinders (or extents) is limited to either 2226 (1 extent) or 3339 (2 extents). For CKD volumes that are created with 3390 track formats, you can specify the number of cylinders in the range of 1 - 65520 (x'0001' - x'FFF0') in increments of one cylinder, for a standard (non-EAV) 3390. The allocation of an EAV volume is expressed in increments of 3390 mod1 capacities (1113 cylinders) and can be expressed as integral multiples of 1113 between 65,667 - 1,182,006 cylinders or as the number of 3390 mod1 increments in the range of 59 - 1062.

Extended address volumes for CKD

Count key data (CKD) volumes now support the additional capacity of 1 TB. The 1 TB capacity is an increase in volume size from the previous 223 GB.
This increased volume capacity is referred to as extended address volumes (EAV) and is supported by the 3390 Model A. Use a maximum size volume of up to 1,182,006 cylinders for IBM z/OS. This support is available for z/OS version 1.12 and later.
54 DS8880 Introduction and Planning Guide
Page 67
You can create a 1 TB IBM Z CKD volume. An IBM Z CKD volume is composed of one or more extents from a CKD extent pool. CKD extents are 1113 cylinders in size. When you define an IBM Z CKD volume, you must specify the number of cylinders that you want for the volume. The storage system and z/OS have limits for the CKD EAV sizes. You can define CKD volumes with up to 1,182,006 cylinders, about 1 TB on the DS8880.
If the number of cylinders that you specify is not an exact multiple of 1113 cylinders, then some space in the last allocated extent is wasted. For example, if you define 1114 or 3340 cylinders, 1112 cylinders are wasted. For maximum storage efficiency, consider allocating volumes that are exact multiples of 1113 cylinders. In fact, multiples of 3339 cylinders should be considered for future compatibility. If you want to use the maximum number of cylinders for a volume (that is 1,182,006 cylinders), you are not wasting cylinders, because it is an exact multiple of 1113 (1,182,006 divided by 1113 is exactly 1062). This size is also an even multiple (354) of 3339, a model 3 size.
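The same arithmetic can be expressed in a few lines of Python; this sketch only restates the cylinder examples given above.

# CKD extents are 1113 cylinders (or 21 cylinders with the small-extents option).
CKD_EXTENT = 1113
SMALL_EXTENT = 21

def ckd_extents(requested_cylinders, extent=CKD_EXTENT):
    extents = -(-requested_cylinders // extent)      # ceiling division
    wasted = extents * extent - requested_cylinders  # cylinders unused in the last extent
    return extents, wasted

print(ckd_extents(1114))      # (2, 1112) -> matches the example above
print(ckd_extents(3339))      # (3, 0)    -> exact multiple, no waste
print(ckd_extents(1182006))   # (1062, 0) -> maximum EAV size, no waste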

Quick initialization

Quick initialization improves device initialization speed and allows a Copy Services relationship to be established after a device is created.
Quick volume initialization for IBM Z environments is supported. This support helps users who frequently delete volumes by reconfiguring capacity without waiting for initialization. Quick initialization initializes the data logical tracks or blocks within a specified extent range on a logical volume with the appropriate initialization pattern for the host.
Normal read and write access to the logical volume is allowed during the initialization process. Therefore, the extent metadata must be allocated and initialized before the quick initialization function is started. Depending on the operation, the quick initialization can be started for the entire logical volume or for an extent range on the logical volume.

Chapter 3. Data management features

The storage system is designed with many management features that allow you to securely process and access your data according to your business needs, 24 hours a day and 7 days a week.
This section contains information about the data management features in your storage system. Use the information in this section to assist you in planning, ordering licenses, and managing your storage system's data management features.

Transparent cloud tiering

Transparent cloud tiering is a licensed function that enables volume data to be copied and transferred to cloud storage. DS8000 transparent cloud tiering works in conjunction with z/OS and DFSMShsm to provide server-less movement of archive and backup data directly to an object storage solution. Offloading the movement of the data from the host to the DS8000 unlocks DFSMShsm efficiencies and saves z/OS CPU cycles.
DFSMShsm has been the leading z/OS data archive solution for over 30 years. Its architecture is designed and optimized for tape, the medium to which the data is transferred and archived.
Due to this architectural design point, there are inherent inefficiencies that consume host CPU cycles, including the following examples:
Movement of data through the host
All of the data must move from the disk through the host and out to the tape device.
Dual Data Movement
DSS must read the data from the disk and then pass the data from DSS to HSM, which then moves the data from the host to the tape.
16K block sizes
HSM separates the data within z/OS into small 16K blocks.
Recycle
When a tape is full, HSM must continually read the valid data from that tape volume and write it to a new tape.
HSM inventory
Reorgs, audits, and backups of the HSM inventory via the OCDS.
Transparent cloud tiering resolves these inefficiencies by moving the data directly from the DS8000 to the cloud object storage. This process eliminates the movement of data through the host, dual data movement, and the small 16K block size requirement. This process also eliminates recycle processing and the OCDS.
Transparent cloud tiering translates into significant savings in CPU utilization within z/OS, specifically when you are using both DFSMShsm and transparent cloud tiering.
Modern enterprises adopted cloud storage to overcome the massive amount of data growth. The transparent cloud tiering system supports creating connections to cloud service providers to store data in private or public cloud storage. With transparent cloud tiering, administrators can move older data to cloud storage to free up capacity on the system. Point-in-time snapshots of data can be created on the system and then copied and stored on the cloud storage.
An external cloud service provider manages the cloud storage, which helps to reduce storage costs for the system. Before data can be copied to cloud storage, a connection to the cloud service provider must be created from the system. A cloud account is an object on the system that represents a connection to a cloud service provider by using a particular set of credentials. These credentials differ depending on the type of cloud service provider that is being specified. Most cloud service providers require the host name of the cloud service provider and an associated password, and some cloud service providers also require certificates to authenticate users of the cloud storage.
Public clouds use certificates that are signed by well-known certificate authorities. Private cloud service providers can use either self-signed certificate or a certificate that is signed by a trusted certificate authority. These credentials are defined on the cloud service provider and passed to the system through the administrators of the cloud service provider. A cloud account defines whether the system can successfully communicate and authenticate with the cloud service provider by using the account credentials. If the system is authenticated, it can then access cloud storage to either copy data to the cloud storage or restore data that is copied to cloud storage back to the system. The system supports one cloud account to a single cloud service provider. Migration between providers is not supported.
Client-side encryption for transparent cloud tiering ensures that data is encrypted before it is transferred to cloud storage. The data remains encrypted in cloud storage and is decrypted after it is transferred back to the storage system. You can use client-side encryption for transparent cloud tiering to download and decrypt data on any DS8000 storage system that uses the same set of key servers as the system that first encrypted the data.
Notes:
v Client-side encryption for transparent cloud tiering requires IBM Security Key Lifecycle Manager v3.0.0.2 or higher. For more information, see the IBM Security Key Lifecycle Manager online product documentation (www.ibm.com/support/knowledgecenter/SSWPVP/).
v Transparent cloud tiering supports the Key Management Interoperability Protocol (KMIP) only.
Cloud object storage is inherently multi-tenant, which allows multiple users to store data on the device, segregated from the other users. Each cloud service provider divides cloud storage into segments for each client that uses the cloud storage. These objects store only data specific to that client. Within the segment that is controlled by the user’s name, DFSMShsm and its inventory system controls the creation and segregation of containers that it uses to store the client data objects.
The storage system supports the OpenStack Swift and Amazon S3 APIs. The storage system also supports the IBM TS7700 as an object storage target and the following cloud service providers:
v Amazon S3
v IBM Bluemix - Cloud Object Storage
v OpenStack Swift Based Private Cloud

Dynamic volume expansion

Dynamic volume expansion is the capability to increase volume capacity up to a maximum size while volumes are online to a host and not in a Copy Services relationship.
Dynamic volume expansion increases the capacity of open systems and IBM Z volumes, while the volume remains connected to a host system. This capability simplifies data growth by providing volume expansion without taking volumes offline.
Some operating systems do not support a change in volume size. Therefore, a host action is required to detect the change after the volume capacity is increased.
The following volume sizes are the maximum that are supported for each storage type.
v Open systems FB volumes: 16 TB
v IBM Z CKD volume types 3390 model 9 and custom: 65520 cylinders
v IBM Z CKD volume type 3390 model 3: 3339 cylinders
v IBM Z CKD volume types 3390 model A: 1,182,006 cylinders
Note: Volumes cannot be in Copy Services relationships (point-in-time copy, FlashCopy SE, Metro Mirror, Global Mirror, Metro/Global Mirror, and z/OS Global Mirror) during expansion.

Count key data and fixed block volume deletion prevention

By default, DS8000 attempts to prevent volumes that are online and in use from being deleted. The DS CLI and DS Storage Manager provide an option to force the deletion of count key data (CKD) and fixed block (FB) volumes that are in use.
For CKD volumes, in use means that the volumes are participating in a Copy Services relationship or are in a path group. For FB volumes, in use means that the volumes are participating in a Copy Services relationship or there is I/O access to the volume in the last five minutes.
If you specify the -safe option when you delete an FB volume, the system determines whether the volumes are assigned to non-default volume groups. If the volumes are assigned to a non-default (user-defined) volume group, the volumes are not deleted.
If you specify the -force option when you delete a volume, the storage system deletes volumes regardless of whether the volumes are in use.

Thin provisioning

Thin provisioning defines logical volume sizes that are larger than the physical capacity installed on the system. The volume allocates capacity on an as-needed basis as a result of host-write actions.
The thin provisioning feature enables the creation of extent space efficient logical volumes. Extent space efficient volumes are supported for FB and CKD volumes and are supported for all Copy Services functionality, including FlashCopy targets where they provide a space efficient FlashCopy capability.
Releasing space on CKD volumes that use thin provisioning
On an IBM Z host, the DFSMSdss SPACEREL utility can release space from thin provisioned CKD volumes that are used by either Global Copy or Global Mirror.
For Global Copy, space is released on the primary and secondary copies. If the secondary copy is the primary copy of another Global Copy relationship, space is also released on secondary copies of that relationship.
For Global Mirror, space is released on the primary copy after a new consistency group is formed. Space is released on the secondary copy after the next consistency group is formed and a FlashCopy commit is performed. If the secondary copy is the primary copy of another Global Mirror relationship, space is also released on secondary copies of that relationship.

Extent Space Efficient (ESE) capacity controls for thin provisioning

Use of thin provisioning can affect the amount of storage capacity that you choose to order. ESE capacity controls allow you to allocate storage appropriately.
With the mixture of thin-provisioned (ESE) and fully-provisioned (non-ESE) volumes in an extent pool, a method is needed to dedicate some of the extent-pool storage capacity for ESE user data usage, as well as to limit the ESE user data usage within the extent pool. It is also necessary to detect when the extent pool is running out of storage that is available for ESE user data.
ESE capacity controls provide extent pool attributes to limit the maximum extent pool storage available for ESE user data usage, and to guarantee a proportion of the extent pool storage to be available for ESE user data usage.
An SNMP trap that is associated with the ESE capacity controls notifies you when the ESE extent usage in the pool exceeds an ESE extent threshold set by you. You are also notified when the extent pool is out of storage available for ESE user data usage.
ESE capacity controls include the following attributes:
ESE Extent Threshold
The percentage that is compared to the actual percentage of storage capacity available for ESE customer extent allocation when determining the extent pool ESE extent status.
ESE Extent Status
One of the three following values:
v 0: the percent of the available ESE capacity is greater than the ESE extent threshold
v 1: the percent of the available ESE capacity is greater than zero but less than or equal to the ESE extent threshold
v 10: the percent of the available ESE capacity is zero
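The status values can be summarized in a small sketch. The following Python function is illustrative only; its name and parameters are not part of any DS8000 interface.

# Sketch of the ESE extent status logic described above.
def ese_extent_status(available_pct, threshold_pct):
    if available_pct == 0:
        return 10    # no storage left in the pool for ESE user data
    if available_pct <= threshold_pct:
        return 1     # available ESE capacity is at or below the threshold
    return 0         # available ESE capacity is above the threshold

print(ese_extent_status(40, 15))  # 0
print(ese_extent_status(10, 15))  # 1
print(ese_extent_status(0, 15))   # 10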
Note: When the size of the extent pool remains fixed or is only increased, the allocatable physical capacity remains greater than or equal to the allocated physical capacity. However, a reduction in the size of the extent pool can cause the allocatable physical capacity to become less than the allocated physical capacity in some cases.
For example, if the user requests that one of the ranks in an extent pool be depopulated, the data on that rank is moved to the remaining ranks in the pool, causing the rank to become unallocated and removed from the pool. The user is advised to inspect the limits and threshold on the extent pool following any changes to the size of the extent pool to ensure that the specified values are still consistent with the user's intentions.

IBM Easy Tier

Easy Tier is an optional feature that is provided at no cost. It can greatly increase the performance of your system by ensuring frequently accessed data is put on faster storage. Its capabilities include manual volume capacity rebalance, auto performance rebalancing in both homogeneous and hybrid pools, hot spot management, rank depopulation, manual volume migration, and thin provisioning support (ESE volumes only). Easy Tier determines the appropriate tier of storage that is based on data access requirements and then automatically and non-disruptively moves data, at the subvolume or sub-LUN level, to the appropriate tier in the storage system.
Use Easy Tier to dynamically move your data to the appropriate drive tier in your storage system with its automatic performance monitoring algorithms. You can use this feature to increase the efficiency of your flash drives and the efficiency of all the tiers in your storage system.
You can use the features of Easy Tier between three tiers of storage within a DS8880.
Easy Tier features help you to effectively manage your system health, storage performance, and storage capacity automatically. Easy Tier uses system configuration and workload analysis with warm demotion to achieve effective overall system health. Simultaneously, data promotion and auto-rebalancing address performance while cold demotion works to address capacity.
Easy Tier data in memory persists in local storage or storage in the peer server, ensuring the Easy Tier configurations are available at failover, cold start, or Easy Tier restart.
With Easy Tier Application, you can also assign logical volumes to a specific tier. This feature can be useful when certain data is accessed infrequently, but needs to always be highly available.
Easy Tier Application is enhanced by two related functions:
v Easy Tier Application for IBM Z provides comprehensive data-placement management policy support from application to storage.
v Easy Tier Application controls over workload learning and data migration provide granular pool-level and volume-level Easy Tier control, as well as volume-level tier restriction where a volume can be excluded from the Nearline tier.
The Easy Tier Heat Map Transfer utility replicates Easy Tier primary storage workload learning results to secondary storage sites, synchronizing performance characteristics across all storage systems. In the event of data recovery, storage system performance is not sacrificed.
You can also use Easy Tier to help with the management of your ESE thin provisioning on fixed block (FB) or count key data (CKD) volumes.
An additional feature provides the capability for you to use Easy Tier manual processing for thin provisioning. Rank depopulation is supported on ranks with ESE volumes allocated (extent space-efficient) or auxiliary volumes.
Use the capabilities of Easy Tier to support:
Drive classes
The following drive classes are available, in order from highest to lowest performance. A pool can contain up to three drive classes.
Flash Tier 0 drives
The highest performance drives, which provide high I/O throughput and low latency.
Flash Tier 1 drives
The first tier of high capacity drives.
Flash Tier 2 drives
The second tier of high capacity drives.
Enterprise drives
SAS (10-K or 15-K RPM) disk drives.
Nearline drives
Nearline (7.2-K RPM) disk drives, which provide large data capacity but lower performance.
Three tiers
Using three tiers (each representing a separate drive class) and efficient algorithms improves system performance and cost effectiveness.
You can select from four drive classes to create up to three tiers. The drives within a tier must be homogeneous.
The following tables list the possible tier assignments for the drive classes. The tiers are listed according to the following values:
0 Hot data tier, which contains the most active data. This tier can also serve as the home tier for new data allocations.
1 Mid-data tier, which can be combined with one or both of the other tiers and will contain data not moved to either of these tiers. This is by default the home tier for new data allocations.
2 Cold data tier, which contains the least active data.
Table 24. Drive class combinations and tiers for systems with Flash Tier 0 drives as the highest performance drive class
v Flash Tier 0 only: Flash Tier 0 = tier 0
v Flash Tier 0 + Flash Tier 1: Flash Tier 0 = 0, Flash Tier 1 = 1
v Flash Tier 0 + Flash Tier 2: Flash Tier 0 = 0, Flash Tier 2 = 1
v Flash Tier 0 + Enterprise: Flash Tier 0 = 0, Enterprise = 1
v Flash Tier 0 + Nearline: Flash Tier 0 = 0, Nearline = 1
v Flash Tier 0 + Flash Tier 1 + Flash Tier 2: Flash Tier 0 = 0, Flash Tier 1 = 1, Flash Tier 2 = 2
v Flash Tier 0 + Flash Tier 1 + Enterprise: Flash Tier 0 = 0, Flash Tier 1 = 1, Enterprise = 2
v Flash Tier 0 + Flash Tier 2 + Enterprise: Flash Tier 0 = 0, Flash Tier 2 = 1, Enterprise = 2
v Flash Tier 0 + Flash Tier 1 + Nearline: Flash Tier 0 = 0, Flash Tier 1 = 1, Nearline = 2
v Flash Tier 0 + Flash Tier 2 + Nearline: Flash Tier 0 = 0, Flash Tier 2 = 1, Nearline = 2
v Flash Tier 0 + Enterprise + Nearline: Flash Tier 0 = 0, Enterprise = 1, Nearline = 2

Table 25. Drive class combinations and tiers for systems with Flash Tier 1 drives as the highest performance drive class
v Flash Tier 1 only: Flash Tier 1 = tier 0
v Flash Tier 1 + Flash Tier 2: Flash Tier 1 = 0, Flash Tier 2 = 1
v Flash Tier 1 + Enterprise: Flash Tier 1 = 0, Enterprise = 1
v Flash Tier 1 + Nearline: Flash Tier 1 = 0, Nearline = 1
v Flash Tier 1 + Flash Tier 2 + Enterprise: Flash Tier 1 = 0, Flash Tier 2 = 1, Enterprise = 2
v Flash Tier 1 + Flash Tier 2 + Nearline: Flash Tier 1 = 0, Flash Tier 2 = 1, Nearline = 2
v Flash Tier 1 + Enterprise + Nearline: Flash Tier 1 = 0, Enterprise = 1, Nearline = 2

Table 26. Drive class combinations and tiers for systems with Flash Tier 2 drives as the highest performance drive class
v Flash Tier 2 only: Flash Tier 2 = tier 0
v Flash Tier 2 + Enterprise: Flash Tier 2 = 0, Enterprise = 1
v Flash Tier 2 + Nearline: Flash Tier 2 = 0, Nearline = 1
v Flash Tier 2 + Enterprise + Nearline: Flash Tier 2 = 0, Enterprise = 1, Nearline = 2

Table 27. Drive class combinations and tiers for systems with Enterprise or Nearline drives as the highest performance drive class
v Enterprise only: Enterprise = tier 1
v Enterprise + Nearline: Enterprise = 1, Nearline = 2
v Nearline only: Nearline = tier 2
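The assignments in Tables 24 through 27 follow a simple pattern, summarized in the sketch below. This is an illustration of the tables only, not a DS8000 interface, and it assumes the tier values shown above.

# Illustrative mapping from the drive classes in a pool to Easy Tier tiers.
PERFORMANCE_ORDER = ["Flash Tier 0", "Flash Tier 1", "Flash Tier 2",
                     "Enterprise", "Nearline"]

def tier_assignments(classes):
    present = [c for c in PERFORMANCE_ORDER if c in classes]
    if not any(c.startswith("Flash") for c in present):
        # Table 27: without flash, Enterprise is tier 1 and Nearline is tier 2.
        return {c: (1 if c == "Enterprise" else 2) for c in present}
    # Tables 24-26: the classes present take tiers 0, 1, 2 in performance order.
    return {c: tier for tier, c in enumerate(present)}

print(tier_assignments({"Flash Tier 0", "Enterprise", "Nearline"}))
# {'Flash Tier 0': 0, 'Enterprise': 1, 'Nearline': 2}
print(tier_assignments({"Enterprise", "Nearline"}))
# {'Enterprise': 1, 'Nearline': 2}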
Cold demotion
Cold data (or extents) stored on a higher-performance tier is demoted to a more appropriate tier. Easy Tier is available with two-tier disk-drive pools and three-tier pools. Sequential bandwidth is moved to the lower tier to increase the efficient use of your tiers.
Warm demotion
Active data that has larger bandwidth is demoted to the next lowest tier. Warm demotion is triggered whenever the higher tier is over its bandwidth capacity. Selected warm extents are demoted to allow the higher tier to operate at its optimal load. Warm demotes do not follow a predetermined schedule.
Warm promotion
Active data that has higher IOPS is promoted to the next highest tier. Warm promotion is triggered whenever the lower tier is over its IOPS capacity. Selected warm extents are promoted to allow the lower tier to operate at its optimal load. Warm promotes do not follow a predetermined schedule.
Manual volume or pool rebalance
Volume rebalancing relocates the smallest number of extents of a volume and restripes those extents on all available ranks of the extent pool.
Auto-rebalancing
Automatically balances the workload within the same storage tier of both homogeneous and hybrid pools, based on usage, to improve system performance and resource use. Use the auto-rebalancing functions of Easy Tier to manage a combination of homogeneous and hybrid pools, including relocating hot spots on ranks. With homogeneous pools, systems with only one tier can use Easy Tier technology to optimize their RAID array usage.
Rank depopulations
Allows ranks that have extents (data) allocated to them to be unassigned from an extent pool by using extent migration to move extents from the specified ranks to other ranks within the pool.
Thin provisioning
Support for the use of thin provisioning is available on ESE and standard volumes. The use of TSE volumes (FB and CKD) is not supported.
Easy Tier provides a performance monitoring capability, regardless of whether the Easy Tier feature is activated. Easy Tier uses the monitoring process to determine what data to move and when to move it when you use automatic mode. You can enable monitoring independently (with or without the Easy Tier feature activated) for information about the behavior and benefits that can be expected if automatic mode were enabled.
Data from the monitoring process is included in a summary report that you can download to your local system.
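For illustration only, the following DS CLI sketch shows how Easy Tier monitoring and automatic mode might be enabled on a storage image. The storage image ID and the -etmonitor and -etautomode parameter values are assumptions; verify the exact syntax in the IBM DS8000 Command-Line Interface User's Guide.

dscli> chsi -etmonitor all IBM.2107-75FA120
dscli> chsi -etautomode tiered IBM.2107-75FA120
dscli> showsi IBM.2107-75FA120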

VMware vStorage API for Array Integration support

The storage system provides support for the VMware vStorage API for Array Integration (VAAI).
VAAI offloads storage processing functions from the server to the storage system, reducing the workload on the host server hardware for improved performance on both the network and host servers.
The following operations are supported:
Atomic test and set or VMware hardware-assisted locking
The hardware-assisted locking feature uses the VMware Compare and
Write command for reading and writing the volume's metadata within a single operation. With the Compare and Write command, the storage system provides a faster mechanism that appears to the host as an atomic action that does not require locking the entire volume.
The Compare and Write command is supported on all open systems fixed block volumes, including Metro Mirror and Global Mirror primary volumes and FlashCopy source and target volumes.
XCOPY or Full Copy
The XCOPY (or extended copy) command copies multiple files from one directory to another or across a network.
Full Copy copies data from one storage array to another without writing to the VMware ESX Server (VMware vStorage API).
The following restrictions apply to XCOPY:
v XCOPY is not supported on Extent Space Efficient (ESE) volumes.
v XCOPY is not supported on volumes greater than 2 TB.
v The target of an XCOPY cannot be a Metro Mirror or Global Mirror primary volume.
v The Copy Services license is required.
Block Zero (Write Same)
The SCSI Write Same command is supported on all volumes. This command efficiently writes each block, faster than standard SCSI write commands, and is optimized for network bandwidth usage.
IBM vCenter plug-in for ESX 4.x
The IBM vCenter plug-in for ESX 4.x provides support for the VAAI interfaces on ESX 4.x.
For information on how to attach a VMware ESX Server host to a DS8880 with Fibre Channel adapters, see the IBM DS8000 series online product documentation (http://www.ibm.com/support/knowledgecenter/ST5GLJ_8.1.0/com.ibm.storage.ssic.help.doc/f2c_securitybp.html) and select Attaching and configuring hosts > VMware ESX Server host attachment.
VMware vCenter Site Recovery Manager 5.0
VMware vCenter Site Recovery Manager (SRM) provides methods to simplify and automate disaster recovery processes. IBM Site Replication Adapter (SRA) communicates between SRM and the storage replication interface. SRA support for SRM 5.0 includes the new features for planned migration, reprotection, and failback. The supported Copy Services are Metro Mirror, Global Mirror, Metro-Global Mirror, and FlashCopy.
The IBM Storage Management Console plug-in enables VMware administrators to manage their systems from within the VMware management environment. This plug-in provides an integrated view of IBM storage for the VMware datastores that VMware administrators require. For information, see the IBM Storage Management Console for VMware vCenter online documentation (http://www.ibm.com/support/knowledgecenter/en/STAV45/hsg/hsg_vcplugin_kcwelcome_sonas.html).

Performance for IBM Z

The storage system supports the following IBM performance enhancements for IBM Z environments.
v Parallel Access Volumes (PAVs)
v Multiple allegiance
v z/OS Distributed Data Backup
v z/HPF extended distance capability
v zHyperLink
Parallel Access Volumes
A PAV capability represents a significant performance improvement by the storage unit over traditional I/O processing. With PAVs, your system can access a single volume from a single host with multiple concurrent requests.
You must configure both your storage unit and operating system to use PAVs. You can use the logical configuration definition to define PAV-bases, PAV-aliases, and their relationship in the storage unit hardware. This unit address relationship creates a single logical volume, allowing concurrent I/O operations.
Static PAV associates the PAV-base address and its PAV aliases in a predefined and fixed method. That is, the PAV-aliases of a PAV-base address remain unchanged. Dynamic PAV, on the other hand, dynamically associates the PAV-base address and its PAV aliases. The device number types (PAV-alias or PAV-base) must match the unit address types as defined in the storage unit hardware.
You can further enhance PAV by adding the IBM HyperPAV feature. IBM HyperPAV associates the volumes with either an alias address or a specified base logical volume number. When a host system requests IBM HyperPAV processing and the processing is enabled, aliases on the logical subsystem are placed in an IBM HyperPAV alias access state on all logical paths with a specific path group ID. IBM HyperPAV is only supported on FICON channel paths.
PAV can improve the performance of large volumes. You get better performance with one base and two aliases on a 3390 Model 9 than from three 3390 Model 3 volumes with no PAV support. With one base, it also reduces storage management costs that are associated with maintaining large numbers of volumes. The alias provides an alternate path to the base device. For example, a 3380 or a 3390 with one alias has only one device to write to, but can use two paths.
The storage unit supports concurrent or parallel data transfer operations to or from the same volume from the same system or system image for IBM Z or S/390® hosts. PAV software support enables multiple users and jobs to simultaneously access a logical volume. Read and write operations can be accessed simultaneously to different domains. (The domain of an I/O operation is the specified extents to which the I/O operation applies.)
Multiple allegiance
With multiple allegiance, the storage unit can run concurrent, multiple requests from multiple hosts.
Traditionally, IBM storage subsystems allow only one channel program to be active to a disk volume at a time. This means that, after the subsystem accepts an I/O request for a particular unit address, this unit address appears "busy" to
subsequent I/O requests. This single allegiance capability ensures that additional requesting channel programs cannot alter data that is already being accessed.
By contrast, the storage unit is capable of multiple allegiance (or the concurrent execution of multiple requests from multiple hosts). That is, the storage unit can queue and concurrently run multiple requests for the same unit address, if no extent conflict occurs. A conflict refers to either the inclusion of a Reserve request by a channel program or a Write request to an extent that is in use.
z/OS Distributed Data Backup
z/OS Distributed Data Backup (zDDB) allows hosts, which are attached through a FICON interface, to access data on fixed block (FB) volumes through a device address on FICON interfaces.
If the zDDB LIC feature key is installed and enabled and a volume group type specifies FICON interfaces, this volume group has implicit access to all FB logical volumes that are configured, in addition to all CKD volumes specified in the volume group. In addition, this optional feature enables data backup of open systems from distributed server platforms through an IBM Z host. The feature helps you manage multiple data protection environments and consolidate those into one environment that is managed by IBM Z. For more information, see “z/OS Distributed Data Backup” on page 122.
z/HPF extended distance
z/HPF extended distance reduces the impact that is associated with supported commands on current adapter hardware, improving FICON throughput on the I/O ports. The storage system also supports the new zHPF I/O commands for multitrack I/O operations.
zHyperLink
zHyperLink is a short distance link technology that is designed for up to 10 times lower latency than zHPF. It can speed up transaction processing and improve active log throughput. zHyperLink is intended to complement FICON technology to accelerate I/O requests that are typically used for transaction processing.

Copy Services

Copy Services functions can help you implement storage solutions to keep your business running 24 hours a day, 7 days a week. Copy Services include a set of disaster recovery, data migration, and data duplication functions.
The storage system supports Copy Service functions that contribute to the protection of your data. These functions are also supported on the IBM TotalStorage Enterprise Storage Server®.
Notes:
v If you are creating paths between an older release of the DS8000 (Release 5.1 or
earlier), which supports only 4-port host adapters, and a newer release of the DS8000 (Release 6.0 or later), which supports 8-port host adapters, the paths connect only to the lower four ports on the newer storage system.
v The maximum number of FlashCopy relationships that are allowed on a volume
is 65534. If that number is exceeded, the FlashCopy operation fails.
v The size limit for volumes or extents in a Copy Service relationship is 2 TB.
v Thin provisioning functions in open-system environments are supported for the following Copy Services functions:
– FlashCopy relationships
– Global Mirror relationships if the Global Copy A and B volumes are Extent Space Efficient (ESE) volumes. The FlashCopy target volume (Volume C) in the Global Mirror relationship can be an ESE volume or standard volume.
v PPRC supports any intermix of T10-protected or standard volumes. FlashCopy does not support intermix.
v PPRC supports copying from standard volumes to ESE volumes, or ESE volumes to standard volumes, to allow migration with PPRC failover when both source and target volumes are on a DS8000 version 8.2 or higher.
The following Copy Services functions are available as optional features:
v Point-in-time copy, which includes IBM FlashCopy.
The FlashCopy function allows you to make point-in-time, full volume copies of data so that the copies are immediately available for read or write access. In IBM Z environments, you can also use the FlashCopy function to perform data set level copies of your data.
v Remote mirror and copy, which includes the following functions:
– Metro Mirror
Metro Mirror provides real-time mirroring of logical volumes between two storage systems that can be located up to 300 km from each other. It is a synchronous copy solution where write operations are completed on both copies (local and remote site) before they are considered to be done.
– Global Copy
Global Copy is a nonsynchronous long-distance copy function where incremental updates are sent from the local to the remote site on a periodic basis.
– Global Mirror
Global Mirror is a long-distance remote copy function across two sites by using asynchronous technology. Global Mirror processing is designed to provide support for unlimited distance between the local and remote sites, with the distance typically limited only by the capabilities of the network and the channel extension technology.
– Metro/Global Mirror (a combination of Metro Mirror and Global Mirror)
Metro/Global Mirror is a three-site remote copy solution. It uses synchronous replication to mirror data between a local site and an intermediate site, and asynchronous replication to mirror data from an intermediate site to a remote site.
– Multiple Target PPRC
Multiple Target PPRC builds and extends the capabilities of Metro Mirror and Global Mirror. It allows data to be mirrored from a single primary site to two secondary sites simultaneously. You can define any of the sites as the primary site and then run Metro Mirror replication from the primary site to either of the other sites individually or both sites simultaneously.
v Remote mirror and copy for IBM Z environments, which includes z/OS Global
Mirror.
Note: When FlashCopy is used on FB (open) volumes, the source and the target volumes must have the same protection type of either T10 DIF or standard.
The point-in-time and remote mirror and copy features are supported across various IBM server environments such as IBM i, System p, and IBM Z, as well as servers from Oracle and Hewlett-Packard.
You can manage these functions through a command-line interface that is called the DS CLI. You can use the DS8000 Storage Management GUI to set up and manage the following types of data-copy functions from any point where network access is available:
Point-in-time copy (FlashCopy)
You can use the FlashCopy function to make point-in-time, full volume copies of data, with the copies immediately available for read or write access. In IBM Z environments, you can also use the FlashCopy function to perform data set level copies of your data. You can use the copy with standard backup tools that are available in your environment to create backup copies on tape.
FlashCopy is an optional function.
The FlashCopy function creates a copy of a source volume on the target volume. This copy is called a point-in-time copy. When you initiate a FlashCopy operation, a FlashCopy relationship is created between a source volume and target volume. A FlashCopy relationship is a mapping of the FlashCopy source volume and a FlashCopy target volume. This mapping allows a point-in-time copy of that source volume to be copied to the associated target volume. The FlashCopy relationship exists between the volume pair in either case:
v From the time that you initiate a FlashCopy operation until the storage system
copies all data from the source volume to the target volume.
v Until you explicitly delete the FlashCopy relationship if it was created as a
persistent FlashCopy relationship.
One of the main benefits of the FlashCopy function is that the point-in-time copy is immediately available for creating a backup of production data. The target volume is available for read and write processing so it can be used for testing or backup purposes. Data is physically copied from the source volume to the target volume by using a background process. (A FlashCopy operation without a background copy is also possible, which allows only data modified on the source to be copied to the target volume.) The amount of time that it takes to complete the background copy depends on the following criteria:
v The amount of data to be copied
v The number of background copy processes that are occurring
v The other activities that are occurring on the storage systems
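As an illustration only, the following DS CLI sketch creates a persistent FlashCopy relationship with change recording, lists it, and later removes it. The volume IDs 0100 and 0101 are hypothetical; confirm the parameters in the IBM DS8000 Command-Line Interface User's Guide.

dscli> mkflash -persist -record 0100:0101
dscli> lsflash 0100:0101
dscli> rmflash 0100:0101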
The FlashCopy function supports the following copy options:
Consistency groups
Creates a consistent point-in-time copy of multiple volumes, with negligible host impact. You can enable FlashCopy consistency groups from the DS CLI.
Change recording
Activates the change recording function on the volume pair that is participating in a FlashCopy relationship. This function enables a subsequent refresh to the target volume.
Establish FlashCopy on existing Metro Mirror source
Establish a FlashCopy relationship, where the target volume is also the
source of an existing remote mirror and copy source volume. This allows you to create full or incremental point-in-time copies at a local site and then use remote mirroring commands to copy the data to the remote site.
Fast reverse
Reverses the FlashCopy relationship without waiting for the finish of the background copy of the previous FlashCopy. This option applies to the Global Mirror mode.
Inhibit writes to target
Ensures that write operations are inhibited on the target volume until a refresh FlashCopy operation is complete.
Multiple Incremental FlashCopy
Allows a source volume to establish incremental flash copies to a maximum of 12 targets.
Multiple Relationship FlashCopy
Allows a source volume to have multiple (up to 12) target volumes at the same time.
Persistent FlashCopy
Allows the FlashCopy relationship to remain even after the FlashCopy operation completes. You must explicitly delete the relationship.
Refresh target volume
Refresh a FlashCopy relationship, without recopying all tracks from the source volume to the target volume.
Resynchronizing FlashCopy volume pairs
Update an initial point-in-time copy of a source volume without having to recopy your entire volume.
Reverse restore
Reverses the FlashCopy relationship and copies data from the target volume to the source volume.
Reset SCSI reservation on target volume
If there is a SCSI reservation on the target volume, the reservation is released when the FlashCopy relationship is established. If this option is not specified and a SCSI reservation exists on the target volume, the FlashCopy operation fails.
Remote Pair FlashCopy
Figure 11 on page 71 illustrates how Remote Pair FlashCopy works. If Remote Pair FlashCopy is used to copy data from Local A to Local B, an equivalent operation is also performed from Remote A to Remote B. FlashCopy can be performed as described for a Full Volume FlashCopy, Incremental FlashCopy, and Dataset Level FlashCopy.
The Remote Pair FlashCopy function prevents the Metro Mirror relationship from changing states and the resulting momentary period where Remote A is out of synchronization with Remote B. This feature provides a solution for data replication, data migration, remote copy, and disaster recovery tasks.
Without Remote Pair FlashCopy, when you established a FlashCopy relationship from Local A to Local B, by using a Metro Mirror primary volume as the target of that FlashCopy relationship, the corresponding Metro Mirror volume pair went from “full duplex” state to “duplex pending” state if the FlashCopy data was being transferred to the Local B.
The time that it took to complete the copy of the FlashCopy data until all Metro Mirror volumes were synchronous again depended on the amount of data transferred. During this time, the Local B would be inconsistent if a disaster were to have occurred.

Figure 11. Remote Pair FlashCopy. (The figure shows the local storage server with volumes Local A and Local B and the remote storage server with volumes Remote A and Remote B, Metro Mirror full duplex relationships between the sites, and the FlashCopy that is established from Local A to Local B with the equivalent FlashCopy from Remote A to Remote B.)

Note: Previously, if you created a FlashCopy relationship with the Preserve Mirror, Required option, by using a Metro Mirror primary volume as the target of that FlashCopy relationship, and if the status of the Metro Mirror volume pair was not in a “full duplex” state, the FlashCopy relationship failed. That restriction is now removed. The Remote Pair FlashCopy relationship completes successfully with the “Preserve Mirror, Required” option, even if the status of the Metro Mirror volume pair is either in a suspended or duplex pending state.
Note: The storage system supports Incremental FlashCopy and Metro Global Mirror Incremental Resync on the same volume.
Safeguarded Copy
The Safeguarded Copy feature creates safeguarded backups that are not accessible by the host system and protects these backups from corruption that can occur in the production environment. You can define a Safeguarded Copy schedule to create multiple backups on a regular basis, such as hourly or daily. You can also restore a backup to the source volume or to a different volume. A backup contains the same metadata as the safeguarded source volume.
Safeguarded Copy can create backups with more frequency and capacity in comparison to FlashCopy volumes. The creation of safeguarded backups also impacts performance less than the multiple target volumes that are created by FlashCopy.
With backups that are outside of the production environment, you can use the backups to restore your environment back to a specified point in time. You can also extract and restore specific data from the backup or use the backup to diagnose production issues.
You cannot delete a safeguarded source volume before the safeguarded backups are deleted. The maximum size of a backup is 16 TB.
Copy Services Manager (available on the Hardware Management Console) is required to facilitate the use and management of Safeguarded Copy functions.
Remote mirror and copy
The remote mirror and copy feature is a flexible data mirroring technology that allows replication between a source volume and a target volume on one or two disk storage systems. You can also issue remote mirror and copy operations to a group of source volumes on one logical subsystem (LSS) and a group of target volumes on another LSS. (An LSS is a logical grouping of up to 256 logical volumes for which the volumes must have the same disk format, either count key data or fixed block.)
Remote mirror and copy is an optional feature that provides data backup and disaster recovery.
Note: You must use Fibre Channel host adapters with remote mirror and copy functions. To see a current list of environments, configurations, networks, and products that support remote mirror and copy functions, click Interoperability Matrix at the following location IBM System Storage Interoperation Center (SSIC) website (www.ibm.com/systems/support/storage/config/ssic).
The remote mirror and copy feature provides synchronous (Metro Mirror) and asynchronous (Global Copy) data mirroring. The main difference is that the Global Copy feature can operate at long distances, even continental distances, with minimal impact on applications. Distance is limited only by the network and channel extenders technology capabilities. The maximum supported distance for Metro Mirror is 300 km.
With Metro Mirror, application write performance depends on the available bandwidth. Global Copy enables better use of available bandwidth capacity to allow you to include more of your data to be protected.
The enhancement to Global Copy is Global Mirror, which uses Global Copy and the benefits of FlashCopy to form consistency groups. (A consistency group is a set of volumes that contain consistent and current data to provide a true data backup at a remote site.) Global Mirror uses a master storage system (along with optional subordinate storage systems) to internally, without external automation software, manage data consistency across volumes by using consistency groups.
Consistency groups can also be created by using the freeze and run functions of Metro Mirror. The freeze and run functions, when used with external automation software, provide data consistency for multiple Metro Mirror volume pairs.
The following sections describe the remote mirror and copy functions.
Synchronous mirroring (Metro Mirror)
Provides real-time mirroring of logical volumes (a source and a target)
between two storage systems that can be located up to 300 km from each other. With Metro Mirror copying, the source and target volumes can be on the same storage system or on separate storage systems. You can locate the storage system at another site, some distance away.
Metro Mirror is a synchronous copy feature where write operations are completed on both copies (local and remote site) before they are considered to be complete. Synchronous mirroring means that a storage server constantly updates a secondary copy of a volume to match changes that are made to a source volume.
The advantage of synchronous mirroring is that there is minimal host impact for performing the copy. The disadvantage is that since the copy operation is synchronous, there can be an impact to application performance because the application I/O operation is not acknowledged as complete until the write to the target volume is also complete. The longer the distance between primary and secondary storage systems, the greater this impact to application I/O, and therefore, application performance.
Asynchronous mirroring (Global Copy)
Copies data nonsynchronously and over longer distances than is possible with the Metro Mirror feature. When operating in Global Copy mode, the source volume sends a periodic, incremental copy of updated tracks to the target volume instead of a constant stream of updates. This function causes less impact to application writes for source volumes and less demand for bandwidth resources. It allows for a more flexible use of the available bandwidth.
The updates are tracked and periodically copied to the target volumes. As a consequence, there is no guarantee that data is transferred in the same sequence that was applied to the source volume.
To get a consistent copy of your data at your remote site, periodically switch from Global Copy to Metro Mirror mode, then either stop the application I/O or freeze data to the source volumes by using a manual process with freeze and run commands. The freeze and run functions can be used with external automation software such as Geographically Dispersed Parallel Sysplex™(GDPS®), which is available for IBM Z environments, to ensure data consistency to multiple Metro Mirror volume pairs in a specified logical subsystem.
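For illustration, a hedged DS CLI sketch of establishing a Metro Mirror path and volume pair and then driving the freeze and run (unfreeze) sequence on the logical subsystem follows. The storage image ID, WWNN, port IDs, LSS 01, and volume IDs are hypothetical; see the IBM DS8000 Command-Line Interface User's Guide for the exact syntax.

dscli> mkpprcpath -remotedev IBM.2107-75ABCD1 -remotewwnn 5005076303FFD123 -srclss 01 -tgtlss 01 I0010:I0110
dscli> mkpprc -remotedev IBM.2107-75ABCD1 -type mmir 0100:0100
dscli> freezepprc -remotedev IBM.2107-75ABCD1 01:01
dscli> unfreezepprc -remotedev IBM.2107-75ABCD1 01:01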
Common options for Metro Mirror/Global Mirror and Global Copy include the following modes:
Suspend and resume
If you schedule a planned outage to perform maintenance at your remote site, you can suspend Metro Mirror/Global Mirror or Global Copy processing on specific volume pairs during the duration of the outage. During this time, data is no longer copied to the target volumes. Because the primary storage system tracks all changed data on the source volume, you can resume operations later to synchronize the data between the volumes.
Copy out-of-synchronous data
You can specify that only data updated on the source volume while the volume pair was suspended is copied to its associated target volume.
Copy an entire volume or not copy the volume
You can copy an entire source volume to its associated target volume to guarantee that the source and target volume contain the same data. When you establish volume pairs and choose not to copy a volume, a relationship is established between the volumes but no data is sent from the source volume to the target volume. In this case, it is assumed that the volumes contain the same data and are consistent, so copying the entire volume is not necessary or required. Only new updates are copied from the source to target volumes.
Global Mirror
Provides a long-distance remote copy across two sites by using asynchronous technology. Global Mirror processing is most often associated with disaster recovery or disaster recovery testing. However, it can also be used for everyday processing and data migration.
Global Mirror integrates both the Global Copy and FlashCopy functions. The Global Mirror function mirrors data between volume pairs of two storage systems over greater distances without affecting overall performance. It also provides application-consistent data at a recovery (or remote) site in a disaster at the local site. By creating a set of remote volumes every few seconds, the data at the remote site is maintained to be a point-in-time consistent copy of the data at the local site.
Global Mirror operations periodically start point-in-time FlashCopy operations at the recovery site, at regular intervals, without disrupting the I/O to the source volume, thus giving a continuous, near up-to-date data backup. By grouping many volumes into a session that is managed by the master storage system, you can copy multiple volumes to the recovery site simultaneously maintaining point-in-time consistency across those volumes. (A session contains a group of source volumes that are mirrored asynchronously to provide a consistent copy of data at the remote site. Sessions are associated with Global Mirror relationships and are defined with an identifier [session ID] that is unique across the enterprise. The ID identifies the group of volumes in a session that are related and that can participate in the Global Mirror consistency group.)
Global Mirror supports up to 32 Global Mirror sessions per storage facility image. Previously, only one session was supported per storage facility image.
You can use multiple Global Mirror sessions to fail over only data assigned to one host or application instead of forcing you to fail over all data if one host or application fails. This process provides increased flexibility to control the scope of a failover operation and to assign different options and attributes to each session.
The DS CLI and DS Storage Manager display information about the sessions, including the copy state of the sessions.
Practice copying and consistency groups
To get a consistent copy of your data, you can pause Global Mirror on a consistency group boundary. Use the pause command with the secondary storage option. (For more information, see the DS CLI Commands reference.) After verifying that Global Mirror is paused on a consistency boundary (state is Paused with Consistency), the secondary storage system and the FlashCopy target storage system or device are consistent. You can then issue either a FlashCopy or Global Copy command to make a practice copy on another storage system or device. You can immediately resume Global Mirror, without the need to wait for the practice copy operation to
finish. Global Mirror then starts forming consistency groups again. The entire pause and resume operation generally takes just a few seconds.
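A hedged DS CLI sketch of the pause and resume sequence that is described above follows. The LSS ID, session number, and the parameter used to pause with the secondary storage option are assumptions; see the DS CLI Commands reference for the actual syntax.

dscli> pausegmir -lss 10 -session 01 -withsecondary
dscli> showgmir 10
dscli> resumegmir -lss 10 -session 01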
Metro/Global Mirror
Provides a three-site, long-distance disaster recovery replication that combines Metro Mirror with Global Mirror replication for both IBM Z and open systems data. Metro/Global Mirror uses synchronous replication to mirror data between a local site and an intermediate site, and asynchronous replication to mirror data from an intermediate site to a remote site.
In a three-site Metro/Global Mirror, if an outage occurs, a backup site is maintained regardless of which one of the sites is lost. For example, if an outage occurs at the local site, Global Mirror continues to mirror updates between the intermediate and remote sites, maintaining the recovery capability at the remote site. If an outage occurs at the intermediate site, data at the local storage system is not affected. If an outage occurs at the remote site, data at the local and intermediate sites is not affected. Applications continue to run normally in any of these cases.
With the incremental resynchronization function enabled on a Metro/Global Mirror configuration, if the intermediate site is lost, the local and remote sites can be connected, and only a subset of changed data is copied between the volumes at the two sites. This process reduces the amount of data needing to be copied from the local site to the remote site and the time it takes to do the copy.
Multiple Target PPRC
Provides an enhancement to disaster recovery solutions by allowing data to be mirrored from a single primary site to two secondary sites simultaneously. The function builds on and extends Metro Mirror and Global Mirror capabilities. Various interfaces and operating systems support the function. Disaster recovery scenarios depend on support from controlling software such as Geographically Dispersed Parallel Sysplex (GDPS) and IBM Copy Services Manager.
z/OS Global Mirror
If workload peaks, which might temporarily overload the bandwidth of the Global Mirror configuration, the enhanced z/OS Global Mirror function initiates a Global Mirror suspension that preserves primary site application performance. If you are installing new high-performance z/OS Global Mirror primary storage subsystems, this function provides improved capacity and application performance during heavy write activity. This enhancement can also allow Global Mirror to be configured to tolerate longer periods of communication loss with the primary storage subsystems. This enables the Global Mirror to stay active despite transient channel path recovery events. In addition, this enhancement can provide fail-safe protection against application system impact that is related to unexpected data mover system events.
The z/OS Global Mirror function is an optional function.
z/OS Metro/Global Mirror Incremental Resync
z/OS Metro/Global Mirror Incremental Resync is an enhancement for z/OS Metro/Global Mirror. z/OS Metro/Global Mirror Incremental Resync can eliminate the need for a full copy after a HyperSwap®situation in 3-site z/OS Metro/Global Mirror configurations. The storage system supports z/OS Metro/Global Mirror that is a 3-site mirroring solution that uses IBM System Storage Metro Mirror and z/OS Global Mirror (XRC).
The z/OS Metro/Global Mirror Incremental Resync capability is intended to enhance this solution by enabling resynchronization of data between sites by using only the changed data from the Metro Mirror target to the z/OS Global Mirror target after a HyperSwap operation.
If an unplanned failover occurs, you can use the z/OS Soft Fence function to prevent any system from accessing data from an old primary PPRC site. For more information, see the GDPS/PPRC Installation and Customization
Guide, or the GDPS/PPRC HyperSwap Manager Installation and Customization Guide.
z/OS Global Mirror Multiple Reader (enhanced readers)
z/OS Global Mirror Multiple Reader provides multiple Storage Device Manager readers that allow improved throughput for remote mirroring configurations in IBM Z environments. z/OS Global Mirror Multiple Reader helps maintain constant data consistency between mirrored sites and promotes efficient recovery. This function is supported on the storage system running in an IBM Z environment with version 1.7 or later at no additional charge.
Interoperability with existing and previous generations of the DS8000 series
All of the remote mirroring solutions that are documented in the sections above use Fibre Channel as the communications link between the primary and secondary storage systems. The Fibre Channel ports that are used for remote mirror and copy can be configured as either a dedicated remote mirror link or as a shared port between remote mirroring and Fibre Channel Protocol (FCP) data traffic.
The remote mirror and copy solutions are optional capabilities and are compatible with previous generations of DS8000 series. They are available as follows:
v Metro Mirror indicator feature numbers 75xx and 0744 and corresponding
DS8000 series function authorization (2396-LFA MM feature numbers 75xx)
v Global Mirror indicator feature numbers 75xx and 0746 and corresponding
DS8000 series function authorization (2396-LFA GM feature numbers 75xx).
Global Copy is a non-synchronous long-distance copy option for data migration and backup.

Disaster recovery through Copy Services

Through Copy Services functions, you can prepare for a disaster by backing up, copying, and mirroring your data at local and remote sites.
Having a disaster recovery plan can ensure that critical data is recoverable at the time of a disaster. Because most disasters are unplanned, your disaster recovery plan must provide a way to recover your applications quickly, and more importantly, to access your data. Consistent data to the same point-in-time across all storage units is vital before you can recover your data at a backup (normally your remote) site.
Most users use a combination of remote mirror and copy and point-in-time copy (FlashCopy) features to form a comprehensive enterprise solution for disaster recovery. In an event of a planned event or unplanned disaster, you can use failover and failback modes as part of your recovery solution. Failover and failback modes can reduce the synchronization time of remote mirror and copy volumes after you switch between local (or production) and intermediate (or remote) sites
during an outage. Although failover transmits no data, it changes the status of a device, and the status of the secondary volume changes to a suspended primary volume. The device that initiates the failback command determines the direction of the transmitted data.
Recovery procedures that include failover and failback modes use remote mirror and copy functions, such as Metro Mirror, Global Copy, Global Mirror, Metro/Global Mirror, Multiple Target PPRC, and FlashCopy.
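As an illustration only, a DS CLI sketch of a failover to the remote site followed by a later failback is shown below. The storage image ID and volume pair are hypothetical; consult the IBM DS8000 Command-Line Interface User's Guide for the complete recovery procedure.

dscli> failoverpprc -remotedev IBM.2107-75ABCD1 -type mmir 0100:0100
dscli> failbackpprc -remotedev IBM.2107-75ABCD1 -type mmir 0100:0100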
Note: See the IBM DS8000 Command-Line Interface User's Guide for specific disaster recovery tasks.
Data consistency can be achieved through the following methods:
Manually using external software (without Global Mirror)
You can use Metro Mirror, Global Copy, and FlashCopy functions to create a consistent and restartable copy at your recovery site. These functions require a manual and periodic suspend operation at the local site. For instance, you can enter the freeze and run commands with external automated software. Then, you can initiate a FlashCopy function to make a consistent copy of the target volume for backup or recovery purposes. Automation software is not provided with the storage system; it must be supplied by the user.
Note: The freeze operation occurs at the same point-in-time across all links and all storage systems.
Automatically (with Global Mirror and FlashCopy)
You can automatically create a consistent and restartable copy at your intermediate or remote site with minimal or no interruption of applications. This automated process is available for two-site Global Mirror or three-site Metro / Global Mirror configurations. Global Mirror operations automate the process of continually forming consistency groups. It combines Global Copy and FlashCopy operations to provide consistent data at the remote site. A master storage unit (along with subordinate storage units) internally manages data consistency through consistency groups within a Global Mirror configuration. Consistency groups can be created many times per hour to increase the currency of data that is captured in the consistency groups at the remote site.
Note: A consistency group is a collection of session-grouped volumes across multiple storage systems. Consistency groups are managed together in a session during the creation of consistent copies of data. The formation of these consistency groups is coordinated by the master storage unit, which sends commands over remote mirror and copy links to its subordinate storage units.
If a disaster occurs at a local site with a two or three-site configuration, you can continue production on the remote (or intermediate) site. The consistent point-in-time data from the remote site consistency group enables recovery at the local site when it becomes operational.

Resource groups for Copy Services scope limiting

Resource groups are used to define a collection of resources and associate a set of policies relative to how the resources are configured and managed. You can define a network user account so that it has authority to manage a specific set of resources groups.
Copy Services scope limiting overview
Copy services scope limiting is the ability to specify policy-based limitations on Copy Services requests. With the combination of policy-based limitations and other inherent volume-addressing limitations, you can control which volumes can be in a Copy Services relationship, which network users or host LPARs issue Copy Services requests on which resources, and other Copy Services operations.
Use these capabilities to separate and protect volumes in a Copy Services relationship from each other. This can assist you with multitenancy support by assigning specific resources to specific tenants, limiting Copy Services relationships so that they exist only between resources within each tenant's scope of resources, and limiting a tenant's Copy Services operators to an "operator only" role.
When managing a single-tenant installation, the partitioning capability of resource groups can be used to isolate various subsets of an environment as if they were separate tenants. For example, to separate mainframes from distributed system servers, Windows from UNIX, or accounting departments from telemarketing.
Using resource groups to limit Copy Service operations
Figure 12 on page 79 illustrates one possible implementation of an exemplary environment that uses resource groups to limit Copy Services operations. Two tenants (Client A and Client B) are illustrated that are concurrently operating on shared hosts and storage systems.
Each tenant has its own assigned LPARs on these hosts and its own assigned volumes on the storage systems. For example, a user cannot copy a Client A volume to a Client B volume.
Resource groups are configured to ensure that one tenant cannot cause any Copy Services relationships to be initiated between its volumes and the volumes of another tenant. These controls must be set by an administrator as part of the configuration of the user accounts or access-settings for the storage system.
Figure 12. Implementation of multiple-client volume administration. (The figure shows two sites, each with hosts with LPARs and switches; Client A and Client B have separately assigned LPARs and volumes at both Site 1 and Site 2.)
Resource groups functions provide additional policy-based limitations to users or the DS8000 storage systems, which in conjunction with the inherent volume addressing limitations support secure partitioning of Copy Services resources between user-defined partitions. The process of specifying the appropriate limitations is completed by an administrator using resource groups functions.
Note: User and administrator roles for resource groups are the same user and administrator roles used for accessing your DS8000 storage system. For example, those roles include storage administrator, Copy Services operator, and physical operator.
The process of planning and designing the use of resource groups for Copy Services scope limiting can be complex. For more information on the rules and policies that must be considered in implementing resource groups, see topics about resource groups. For specific DS CLI commands used to implement resource groups, see the IBM DS8000 Command-Line Interface User's Guide.
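As a hedged illustration, resource groups might be created and volumes assigned to them with DS CLI commands similar to the following. The labels, resource group IDs, volume ranges, and the -resgrp parameter are assumptions; see the resource groups topics and the IBM DS8000 Command-Line Interface User's Guide for the actual commands and policies.

dscli> mkresgrp -label client_A RG1
dscli> mkresgrp -label client_B RG2
dscli> chfbvol -resgrp RG1 0100-010F
dscli> chfbvol -resgrp RG2 0200-020F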

Comparison of Copy Services features

The Copy Services features aid in planning for disaster recovery.
Table 28 provides a brief summary of the characteristics of the Copy Services features that are available for the storage system.
Table 28. Comparison of features

Multiple Target PPRC
Description: Synchronous and asynchronous replication.
Advantages: Mirrors data from a single primary site to two secondary sites simultaneously.
Considerations: Disaster recovery scenarios depend on support from controlling software such as Geographically Dispersed Parallel Sysplex (GDPS) and IBM Copy Services Manager.

Metro/Global Mirror
Description: Three-site, long distance disaster recovery replication.
Advantages: A backup site is maintained regardless of which one of the sites is lost.
Considerations: Recovery point objective (RPO) might grow if bandwidth capability is exceeded.

Metro Mirror
Description: Synchronous data copy at a distance.
Advantages: No data loss, rapid recovery time for distances up to 300 km.
Considerations: Slight performance impact.

Global Copy
Description: Continuous copy without data consistency.
Advantages: Nearly unlimited distance, suitable for data migration, only limited by network and channel extenders capabilities.
Considerations: Copy is normally fuzzy but can be made consistent through synchronization.

Global Mirror
Description: Asynchronous copy.
Advantages: Nearly unlimited distance, scalable, and low RPO. The RPO is the time needed to recover from a disaster; that is, the total system downtime.
Considerations: RPO might grow when link bandwidth capability is exceeded.

z/OS Global Mirror
Description: Asynchronous copy controlled by IBM Z host software.
Advantages: Nearly unlimited distance, highly scalable, and very low RPO.
Considerations: Additional host server hardware and software is required. The RPO might grow if bandwidth capability is exceeded or host performance might be impacted.

I/O Priority Manager

The performance group attribute associates the logical volume with a performance group object. Each performance group has an associated performance policy which determines how the I/O Priority Manager processes I/O operations for the logical volume.
Note: The default setting for this feature is “disabled” and must be enabled for
use through either the DS8000 Storage Management GUI or the DS CLI. If I/O Priority Manager is enabled on your storage system, you cannot use a zHyperLink connection.
The I/O Priority Manager maintains statistics for the set of logical volumes in each performance group that can be queried. If management is performed for the performance policy, the I/O Priority Manager controls the I/O operations of all managed performance groups to achieve the goals of the associated performance policies. The performance group defaults to 0 if not specified. Table 29 lists performance groups that are predefined and have the associated performance policies:
Table 29. Performance groups and policies

Performance group¹   Performance policy   Performance policy description
0                    0                    No management
1-5                  1                    Fixed block high priority
6-10                 2                    Fixed block medium priority
11-15                3                    Fixed block low priority
16-18                0                    No management
19                   19                   CKD high priority 1
20                   20                   CKD high priority 2
21                   21                   CKD high priority 3
22                   22                   CKD medium priority 1
23                   23                   CKD medium priority 2
24                   24                   CKD medium priority 3
25                   25                   CKD medium priority 4
26                   26                   CKD low priority 1
27                   27                   CKD low priority 2
28                   28                   CKD low priority 3
29                   29                   CKD low priority 4
30                   30                   CKD low priority 5
31                   31                   CKD low priority 6

Note: ¹ Performance group settings can be managed using the DS CLI.
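As an illustrative sketch only, I/O Priority Manager might be enabled and a volume assigned to a performance group with DS CLI commands along the following lines. The -iopmmode and -perfgrp parameter names, the mode value, and the IDs are assumptions; verify them against the DS CLI documentation before use.

dscli> chsi -iopmmode manage IBM.2107-75FA120
dscli> chfbvol -perfgrp PG1 0100
dscli> lsperfgrprpt PG1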

Securing data

You can secure data with the encryption features that are supported by the storage system.
Encryption technology has a number of considerations that are critical to understand to maintain the security and accessibility of encrypted data. For example, encryption must be enabled by feature code and configured to protect data in your environment. Encryption also requires access to at least two external key servers.
It is important to understand how to manage IBM encrypted storage and comply with IBM encryption requirements. Failure to follow these requirements might
cause a permanent encryption deadlock, which might result in the permanent loss of all key-server-managed encrypted data at all of your installations.
The storage system automatically tests access to the encryption keys every 8 hours and access to the key servers every 5 minutes. You can verify access to key servers manually, initiate key retrieval, and monitor the status of attempts to access the key server.

Chapter 4. Planning the physical configuration

Physical configuration planning is your responsibility. Your technical support representative can help you to plan for the physical configuration and to select features.
This section includes the following information:
v Explanations for available features that can be added to the physical configuration of your system model
v Feature codes to use when you order each feature
v Configuration rules and guidelines

Configuration controls

Indicator features control the physical configuration of the storage system.
These indicator features are for administrative use only. The indicator features ensure that each storage system (the base frame plus any expansion frames) has a valid configuration. There is no charge for these features.
Your storage system can include the following indicators:
Expansion-frame position indicators
Expansion-frame position indicators flag models that are attached to expansion frames. They also flag the position of each expansion frame within the storage system. For example, a position 1 indicator flags the expansion frame as the first expansion frame within the storage system.
Administrative indicators
If applicable, models also include the following indicators:
v IBM / Openwave alliance
v IBM / EPIC attachment
v IBM systems, including System p and IBM Z
v Lenovo System x and BladeCenter
v IBM storage systems, including IBM System Storage ProtecTIER®, IBM Storwize® V7000, and IBM System Storage N series
v IBM SAN Volume Controller
v Linux
v VMware VAAI indicator
v Storage Appliance

Determining physical configuration features

You must consider several guidelines for determining and then ordering the features that you require to customize your storage system. Determine the feature codes for the optional features you select and use those feature codes to complete your configuration.
Procedure
1. Calculate your overall storage needs, including the licensed functions.
The Copy Services and z-Synergy Services licensed functions are based on usage requirements.
2. Determine the base and expansion models of which your storage system is to
be comprised.
3. Determine the management console configuration that supports the storage system by using the following steps:
   a. Order one management console for each storage system. The management console feature code must be ordered for the base model within the storage system.
   b. Decide whether a secondary management console is to be installed for the storage system. Adding a secondary management console ensures that you maintain a highly available environment.
4. For each base and expansion model, determine the storage features that you need.
   a. Select the drive set feature codes and determine the amount of each feature code that you must order for each model.
   b. Select the storage enclosure feature codes and determine the amount that you must order to enclose the drive sets that you are ordering.
   c. Select the disk cable feature codes and determine the amount that you need of each.
5. Determine the I/O adapter features that you need for your storage system.
   a. Select the device, flash RAID, and host adapters feature codes to order, and choose a model to contain the adapters. All base models can contain adapters, but only the first attached expansion model can contain adapters.
   b. For each model chosen to contain adapters, determine the number of each I/O enclosure feature code that you must order.
   c. Select the cables that you require to support the adapters.
6. Based on the disk storage and adapters that the base model and expansion
models support, determine the appropriate processor memory feature code that is needed by each base model.
7. Decide which power features that you must order to support each model.
8. Review the other features and determine which feature codes to order.

Management console features

Management consoles are required features for your storage system configuration.
Customize your management consoles by specifying the following different features:
v A primary management console
v A secondary management console

Primary and secondary management consoles

The management console is the focal point for configuration, Copy Services functions, remote support, and maintenance of your storage system.
The management console (also known as the Hardware Management Console or HMC) is a dedicated appliance that is physically located inside your storage system. It can proactively monitor the state of your storage system, notifying you and IBM when service is required. It also can be connected to your network for centralized management of your storage system by using the IBM DS
command-line interface (DS CLI) or storage management software through the IBM DS Open API. (The DS8000 Storage Management GUI cannot be started from the HMC.)
You can also use the DS CLI to control the remote access of your technical support representative to the HMC.
A secondary management console is available as an optional feature. The secondary HMC is a redundant management console for environments with high-availability requirements. If you use Copy Services, a redundant management console configuration is especially important.
The management console is included with every base frame along with a monitor and keyboard. An optional secondary management console is also available in the base frame.
Note: To preserve console function, the management consoles are not available as a general-purpose computing resource.
Feature codes for management consoles
Use these feature codes to order management consoles (MCs) for each storage system.
Table 30. Feature codes for management consoles

Feature code   Description                                  Models
1141           Primary management console                   A primary management console is required for models 984, 985, 986, and 988.
1151           Secondary management console (redundant      For models 984, 985, 986, and 988, this feature is optional.
               management console for high availability)

Configuration rules for management consoles

The management console is a dedicated appliance in your storage system that can proactively monitor the state of your storage system. You must order an internal management console each time that you order a base frame.
You can also order a second management console for your storage system.

Storage features

You must select the storage features that you want on your storage system.
The storage features are separated into the following categories:
v Drive-set features and storage-enclosure features
v Enclosure filler features
v Device adapter features

Storage enclosures and drives

DS8880 supports various storage enclosures and drive options.
Standard drive enclosures and drives
Standard drive enclosures and drives are required components of your storage system configuration.
Each standard drive enclosure feature contains two enclosures.
Each drive set feature contains 16 disk drives or flash drives (SSDs) and is installed with eight drives in each standard drive-enclosure pair.
The 3.5-inch storage enclosure slots are numbered left to right, and then top to bottom. The top row of drives is D01 - D04. The second row of drives is D05 - D08. The third row of drives is D09 - D12.
The 2.5-inch storage enclosure slots are numbered from left to right as slots D01 - D24. For full SFF (2.5-inch) drive sets, the first installation group populates D01 - D08 for both standard drive enclosures in the pair. The second installation group populates D09 - D16. The third installation group populates D17 - D24.
Note: Storage enclosures are installed in the frame from the bottom up.
Table 31 provide information on the placement of drive sets in the storage enclosure.
Table 31. Placement of full drive sets in the storage enclosure

Standard drive-enclosure type | Set 1 | Set 2 | Set 3
3.5-inch disk drives | D01 - D04 | D05 - D08 | D09 - D12
2.5-inch disk and flash drives | D01 - D08 | D09 - D16 | D17 - D24
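The slot ranges in Table 31 follow directly from the drives-per-enclosure counts: each installation group adds eight contiguous 2.5-inch slots (or four 3.5-inch slots) to each enclosure of the pair. The following Python sketch is illustrative only (the function is not part of any IBM tool) and simply reproduces the table:

def slot_range(enclosure_type, set_number):
    """Return the slot range (for example, 'D09 - D16') that a full drive-set
    installation group occupies in each enclosure of a standard pair.
    enclosure_type: '2.5-inch' (8 drives per enclosure per set)
                    or '3.5-inch' (4 drives per enclosure per set)
    set_number: installation group 1, 2, or 3
    """
    drives_per_enclosure = 8 if enclosure_type == "2.5-inch" else 4
    first = (set_number - 1) * drives_per_enclosure + 1
    last = set_number * drives_per_enclosure
    return "D{:02d} - D{:02d}".format(first, last)

print(slot_range("2.5-inch", 2))  # D09 - D16
print(slot_range("3.5-inch", 3))  # D09 - D12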
Feature codes for drive sets
Use these feature codes to order sets of encryption disk drives and flash drives for DS8880.
All drives that are installed in a standard drive enclosure pair or High Performance Flash Enclosure Gen2 pair must be of the same drive type, capacity, and speed.
The flash drives can be installed only in High Performance Flash Enclosures Gen2. See Table 34 on page 87 for the feature codes. Each High Performance Flash Enclosure Gen2 pair can contain 16, 32, or 48 flash drives. All flash drives in a High Performance Flash Enclosure Gen2 must be the same type and same capacity.
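As an illustrative check only (the helper below is not an IBM tool; it simply restates the constraints in the two preceding paragraphs), a planned High Performance Flash Enclosure Gen2 pair population could be validated like this:

def validate_hpfe_gen2_pair(drives):
    """Check a planned flash-drive population for one HPFE Gen2 pair.
    drives: list of (drive_type, capacity) tuples, one entry per drive.
    Rules restated from the text: 16, 32, or 48 drives per pair, and all
    drives in the pair must be the same type and capacity.
    """
    if len(drives) not in (16, 32, 48):
        return False, "An HPFE Gen2 pair contains 16, 32, or 48 flash drives."
    if len(set(drives)) != 1:
        return False, "All flash drives in the pair must be the same type and capacity."
    return True, "Configuration is valid."

ok, message = validate_hpfe_gen2_pair([("Flash Tier 0", "1.6 TB")] * 32)
print(ok, message)  # True Configuration is valid.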
Table 32, Table 33 on page 87, and Table 34 on page 87 list the feature codes for encryption drive sets based on drive size and speed.
Table 32. Feature codes for disk-drive sets

Feature code | Disk size | Drive type | Drives per set | Drive speed in RPM (K=1000) | Encryption drive | RAID support
5308 | 300 GB | 2.5-in. disk drives | 16 | 15 K | Yes | 5¹, 6, 10
5618 | 600 GB | 2.5-in. disk drives | 16 | 15 K | Yes | 5¹, 6, 10
5708 | 600 GB | 2.5-in. disk drives | 16 | 10 K | Yes | 5¹, 6, 10
5768 | 1.2 TB | 2.5-in. disk drives | 16 | 10 K | Yes | 6, 10
5778 | 1.8 TB | 2.5-in. disk drives | 16 | 10 K | Yes | 6, 10
5868 | 4 TB | 3.5-in. NL disk drives | 8 | 7.2 K | Yes | 6, 10
5878 | 6 TB | 3.5-in. NL disk drives | 8 | 7.2 K | Yes | 6, 10

Notes:
1. RAID 5 is not supported for drives larger than 1 TB and requires a request for price quote (RPQ). For information, contact your service representative.
2. Drives are full disk encryption (FDE) self-encrypting drive (SED) capable.
Table 33. Feature codes for flash-drive (SSD) sets for standard enclosures

Feature code | Disk size | Drive type | Drives per set | Drive speed in RPM (K=1000) | Encryption drive | RAID support
6158 | 400 GB | 2.5-in. flash drives | 16 | N/A | Yes | 5¹, 6, 10
6258 | 800 GB | 2.5-in. flash drives | 16 | N/A | Yes | 5¹, 6, 10
6358 | 1.6 TB | 2.5-in. flash drives | 16 | N/A | Yes | 6, 10

Note:
1. RAID 5 is not supported for drives larger than 1 TB and requires a request for price quote (RPQ). For information, contact your service representative.
Table 34. Feature codes for flash-drive sets for High Performance Flash Enclosures Gen2

Feature code | Disk size | Drive type | Drives per set | Drive speed in RPM (K=1000) | Encryption drive | RAID support
1610 | 400 GB | 2.5-in. Flash Tier 0 drives | 16 | N/A | Yes | 5, 6, 10
1611 | 800 GB | 2.5-in. Flash Tier 0 drives | 16 | N/A | Yes | 5, 6, 10
1612 | 1.6 TB | 2.5-in. Flash Tier 0 drives | 16 | N/A | Yes | 6, 10¹
1613 | 3.2 TB | 2.5-in. Flash Tier 0 drives | 16 | N/A | Yes | 6, 10¹
1623 | 3.8 TB | 2.5-in. Flash Tier 1 drives | 16 | N/A | Yes | 6, 10²
1624 | 7.6 TB | 2.5-in. Flash Tier 2 drives | 16 | N/A | Yes | 6³

Notes:
1. RAID 5 is not supported for 1.6 TB and 3.2 TB Flash Tier 0 drives without a request for price quote (RPQ). For information, contact your sales representative.
2. RAID 5 is not supported for 3.8 TB Flash Tier 1 drives, and no RPQ is available.
3. RAID 5 and RAID 10 are not supported for 7.6 TB Flash Tier 2 drives, and no RPQ is available.
Feature codes for storage enclosures
Use these feature codes to identify the type of drive enclosures for your storage system.
Table 35. Feature codes for storage enclosures

Feature code | Description | Models
1241 | Standard drive-enclosure pair (Note: This feature contains two filler sets in each enclosure.) | 984, 985, 986, 84E, 85E, 86E
1242 | Standard drive-enclosure pair for 2.5-inch disk drives | 984, 985, 986, 84E, 85E, 86E
1244 | Standard drive-enclosure pair for 3.5-inch disk drives | 984, 985, 986, 84E, 85E, 86E
1245 | Standard drive-enclosure pair for 400 GB flash drives | 984, 985, 986, 84E, 85E, 86E
1256 | Standard drive-enclosure pair for 800 GB flash drives | 984, 985, 986, 84E, 85E, 86E
1257 | Standard drive-enclosure pair for 1.6 TB flash drives | 984, 985, 986, 84E, 85E, 86E
1600 | High Performance Flash Enclosure Gen2 pair with flash RAID controllers for 400 GB, 800 GB, 1.6 TB, 3.2 TB, 3.8 TB, and 7.6 TB flash drives | 984, 985, 986, 988, 84E, 85E, 86E, 88E
1602 | High Performance Flash Enclosure Gen2 pair for 400 GB, 800 GB, 1.6 TB, 3.2 TB, 3.8 TB, and 7.6 TB flash drives | 988 and 88E
1604 | Flash RAID adapter pair for 400 GB, 800 GB, 1.6 TB, 3.2 TB, 3.8 TB, and 7.6 TB flash drives | 988 and 88E

Storage-enclosure fillers

Storage-enclosure fillers fill empty drive slots in the storage enclosures. The fillers ensure sufficient airflow across populated storage.
For standard drive enclosures, one filler feature provides a set of 8 or 16 fillers. Two filler features are required if only one drive set feature is in the standard drive-enclosure pair. One filler feature is required if two drive-set features are in the standard drive-enclosure pair.
For High Performance Flash Enclosures Gen2, one filler feature provides a set of 16 fillers.
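The standard drive-enclosure rule above can be restated as a small lookup. The sketch below is illustrative only and encodes just the counts stated in this section, assuming a pair with three drive-set features is fully populated and needs no fillers:

def filler_features_for_standard_pair(drive_set_features):
    """Return how many storage-enclosure filler features to order for one
    standard drive-enclosure pair, per the rule stated above:
    1 drive-set feature -> 2 filler features
    2 drive-set features -> 1 filler feature
    3 drive-set features -> 0 (assumed: the pair is fully populated)
    """
    rule = {1: 2, 2: 1, 3: 0}
    if drive_set_features not in rule:
        raise ValueError("A standard drive-enclosure pair holds 1 to 3 drive-set features.")
    return rule[drive_set_features]

print(filler_features_for_standard_pair(1))  # 2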