Before using this information and the product it supports, read the information in “Safety and environmental notices” on
page 133 and “Notices” on page 131.
This edition applies to version 8, release 5 of the DS8882F Rack Mounted storage system.
z/OS Metro/Global Mirror Incremental Resync ...... 86
Copy Services Manager on the Hardware Management Console license ...... 86
Chapter 6. Delivery and installation requirements ...... 89
Acclimation ...... 89
Shipment weights and dimensions ...... 90
Receiving delivery ...... 90
Installation site requirements ...... 90
Planning the rack configuration ...... 90
Planning for floor and space requirements ...... 92
Planning for power requirements ...... 93
Planning for environmental requirements ...... 95
Planning for safety ...... 99
Planning for network and communications requirements ...... 100
Chapter 7. Planning your storage complex setup ...... 103
Company information ...... 103
Management console network settings ...... 103
Remote support settings ...... 104
Notification settings ...... 105
Power control settings ...... 105
Control switch settings ...... 105
Chapter 8. Planning data migration ...... 107
Selecting a data migration method ...... 108
Chapter 9. Planning for security ...... 111
Planning for data encryption ...... 111
Planning for encryption-key servers ...... 111
Planning for key lifecycle managers ...... 112
Planning for full-disk encryption activation ...... 113
Planning for user accounts and passwords ...... 113
Managing secure user accounts ...... 113
Managing secure service accounts ...... 114
Planning for NIST SP 800-131A security conformance ...... 114
Chapter 10. License activation and management ...... 117
Planning your licensed functions ...... 117
Activation of licensed functions ...... 118
Activating licensed functions ...... 118
Scenarios for managing licensing ...... 119
Adding storage to your machine ...... 119
Managing a licensed feature ...... 120
Appendix A. Accessibility features ...... 121
Appendix B. Warranty information ...... 123
Appendix C. IBM equipment and documents ...... 125
Installation components ...... 125
Customer components ...... 126
Service components ...... 126
Appendix D. Customization worksheets ...... 127
Appendix E. Compliance standards ...... 129
Notices ...... 131
Trademarks ...... 132
Homologation statement ...... 133
Safety and environmental notices ...... 133
Safety notices and labels ...... 133
Vendor-specific uninterruptible power supply safety statements ...... 142
Environmental notices ...... 143
Electromagnetic compatibility notices ...... 144
Index ...... 149
About this book
This book describes how to plan for a new installation of the DS8882F Rack
Mounted storage system. It includes information about planning considerations,
customization guidance, and configuration.
Who should use this book
This book is intended for personnel who are involved in planning. Such personnel
include IT facilities managers and individuals who are responsible for power, cooling,
wiring, network, and general site environmental planning and setup.
Conventions and terminology
Different typefaces are used in this guide to show emphasis, and various notices
are used to highlight key information.
The following typefaces are used to show emphasis:
Bold
    Text in bold represents menu items.
bold monospace
    Text in bold monospace represents command names.
Italics
    Text in italics is used to emphasize a word. In command syntax, it is used for variables for which you supply actual values, such as a default directory or the name of a system.
Monospace
    Text in monospace identifies the data or commands that you type, samples of command output, examples of program code or messages from the system, or names of command flags, parameters, arguments, and name-value pairs.
These notices are used to highlight key information:
Note
    These notices provide important tips, guidance, or advice.
Important
    These notices provide information or advice that might help you avoid inconvenient or difficult situations.
Attention
    These notices indicate possible damage to programs, devices, or data. An attention notice is placed before the instruction or situation in which damage can occur.
Publications and related information
Product guides, other IBM® publications, and websites contain information that
relates to the IBM DS8000® series.
To view a PDF file, you need Adobe Reader. You can download it at no charge
from the Adobe website (get.adobe.com/reader/).
The IBM DS8000 series online product documentation (http://www.ibm.com/
support/knowledgecenter/ST5GLJ_8.1.0/com.ibm.storage.ssic.help.doc/
f2c_securitybp.html) contains all of the information that is required to install,
configure, and manage DS8000 storage systems. The online documentation is
updated between product releases to provide the most current documentation.
Publications
You can order or download individual publications (including previous versions)
that have an order number from the IBM Publications Center website
(www.ibm.com/shop/publications/order/). Publications without an order number
are available on the documentation CD or can be downloaded.
Table 1. DS8000 series product publications

DS8882F Introduction and Planning Guide
    This publication provides an overview of the DS8882F, the latest storage system in the DS8000 series. The DS8882F provides the new model 983. This publication provides an overview of the product and technical concepts for DS8882F.
DS8880 Introduction and Planning Guide
    This publication provides an overview of the product and technical concepts for DS8880. It also describes the ordering features and how to plan for an installation and initial configuration of the storage system.
DS8870 Introduction and Planning Guide
    This publication provides an overview of the product and technical concepts for DS8870. It also describes the ordering features and how to plan for an installation and initial configuration of the storage system.
DS8800 and DS8700 Introduction and Planning Guide
    This publication provides an overview of the product and technical concepts for DS8800 and DS8700. It also describes ordering features and how to plan for an installation and initial configuration of the storage system.
Table 1. DS8000 series product publications (continued)

Command-Line Interface User's Guide
    This publication describes how to use the DS8000 command-line interface (DS CLI) to manage DS8000 configuration and Copy Services relationships, and write customized scripts for a host system. It also includes a complete list of CLI commands with descriptions and example usage.
Host Systems Attachment Guide
    This publication provides information about attaching hosts to the storage system. You can use various host attachments to consolidate storage capacity and workloads for open systems and IBM Z hosts.
IBM Storage System Multipath Subsystem Device Driver User's Guide
    This publication provides information regarding the installation and use of the Subsystem Device Driver (SDD), Subsystem Device Driver Path Control Module (SDDPCM), and Subsystem Device Driver Device Specific Module (SDDDSM) on open systems hosts.
Application Programming Interface Reference
    This publication provides reference information for the DS8000 Open application programming interface (DS Open API) and instructions for installing the Common Information Model Agent, which provides a CIM-compliant interface.

See the Agreements and License Information CD that was included with the
DS8000 series for the following documents:
v License Information
v Notices and Information
v Supplemental Notices and Information
Related websites
View the websites in the following table to get more information about DS8000
series.
Table 3. DS8000 series related websites

IBM website (ibm.com®)
    Find more information about IBM products and services.
IBM Support Portal website (www.ibm.com/storage/support)
    Find support-related information such as downloads, documentation, troubleshooting, and service requests and PMRs.
IBM Directory of Worldwide Contacts website (www.ibm.com/planetwide)
    Find contact information for general inquiries, technical support, and hardware and software support by country.
IBM DS8000 series website (www.ibm.com/servers/storage/disk/ds8000)
    Find product overviews, details, resources, and reviews for the DS8000 series.
IBM Redbooks website (www.redbooks.ibm.com/)
    Find technical information developed and published by IBM International Technical Support Organization (ITSO).
IBM System Storage® Interoperation Center (SSIC) website (www.ibm.com/systems/support/storage/config/ssic)
    Find information about host system models, operating systems, adapters, and switches that are supported by the DS8000 series.
Table 3. DS8000 series related websites (continued)

IBM Storage SAN (www.ibm.com/systems/storage/san)
    Find information about IBM SAN products and solutions, including SAN Fibre Channel switches.
IBM Data storage feature activation (DSFA) website (www.ibm.com/storage/dsfa)
    Download licensed machine code (LMC) feature keys that you ordered for your DS8000 storage systems.
IBM Fix Central (www-933.ibm.com/support/fixcentral)
    Download utilities such as the IBM Easy Tier® Heat Map Transfer utility and Storage Tier Advisor tool.
IBM Java™ SE (JRE) (www.ibm.com/developerworks/java/jdk)
    Download IBM versions of the Java SE Runtime Environment (JRE), which is often required for IBM products.
IBM Security Key Lifecycle Manager online product documentation (www.ibm.com/support/knowledgecenter/SSWPVP/)
    This online documentation provides information about IBM Security Key Lifecycle Manager, which you can use to manage encryption keys and certificates.
IBM Spectrum Control™ online product documentation in IBM Knowledge Center (www.ibm.com/support/knowledgecenter)
    This online documentation provides information about IBM Spectrum Control, which you can use to centralize, automate, and simplify the management of complex and heterogeneous storage environments, including DS8000 storage systems and other components of your data storage infrastructure.
DS8700 Code Bundle Information website (www.ibm.com/support/docview.wss?uid=ssg1S1003593)
    Find information about code bundles for DS8700. See section 3 for web links to SDD information. The version of the currently active installed code bundle displays with the DS CLI ver command when you specify the -l parameter.
DS8800 Code Bundle Information website (www.ibm.com/support/docview.wss?uid=ssg1S1003740)
    Find information about code bundles for DS8800. See section 3 for web links to SDD information. The version of the currently active installed code bundle displays with the DS CLI ver command when you specify the -l parameter.
DS8870 Code Bundle Information website (www.ibm.com/support/docview.wss?uid=ssg1S1004204)
    Find information about code bundles for DS8870. See section 3 for web links to SDD information. The version of the currently active installed code bundle displays with the DS CLI ver command when you specify the -l parameter.
DS8880 Code Bundle Information website (www.ibm.com/support/docview.wss?uid=ssg1S1005392)
    Find information about code bundles for DS8880. The version of the currently active installed code bundle displays with the DS CLI ver command when you specify the -l parameter.
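For example, you can check the currently active code bundle from an interactive DS CLI session; the command below is shown only as an illustration, and its output varies by system and code release:

   dscli> ver -l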
IBM Publications Center
The IBM Publications Center is a worldwide central repository for IBM product
publications and marketing material.
Procedure
The IBM Publications Center website (ibm.com/shop/publications/order) offers
customized search functions to help you find the publications that you need. You
can view or download publications at no charge.
Sending comments
Your feedback is important in helping to provide the most accurate and highest
quality information.
Procedure
To submit any comments about this publication or any other IBM storage product
documentation:
Send your comments by email to ibmkc@us.ibm.com. Be sure to include the
following information:
v Exact publication title and version
v Publication form number (for example, GA32-1234-00)
v Page, table, or illustration numbers that you are commenting on
v A detailed description of any information that should be changed
Chapter 1. Overview
The IBM DS8882F Rack Mounted is a high-performance storage system that
supports continuous operation, data security, and data resiliency. For
high availability, the hardware components are redundant.
DS8882F adds a modular rack-mountable enterprise storage system to the 533x
all-flash machine type family. The modular system can be integrated into 16U
contiguous space of an existing IBM z14™ Model ZR1 (z14 Model ZR1), IBM
LinuxONE Rockhopper™ II (z14 Model LR1), or other standard 19-inch wide rack
that conforms to EIA 310D specifications. The DS8882F allows you to take
advantage of the DS8880 advanced features while limiting datacenter footprint and
power infrastructure requirements. The modular system contains processor nodes,
an I/O enclosure, High Performance Flash Enclosures Gen2, a management
enclosure (which includes the HMCs, Ethernet Switches, and RPCs), and battery
backup modules to power the DS8882F modules. The HMCs are small form factor
computers.
The DS8884, DS8886, DS8884F, DS8886F, and DS8888F systems are not documented
in this publication. For information about those systems, see the DS8880 Introduction and Planning Guide (GC27-8525-16).
Licensed functions are available in four groups:
Base Function
The Base Function license is required for each DS8882F storage system. The
licensed functions include Database Protection, Encryption Authorization,
Easy Tier, I/O Priority Manager, the Operating Environment License, and
Thin Provisioning.
z-synergy Services
The z-synergy Services include z/OS® functions that are supported on the
storage system. The licensed functions include transparent cloud tiering,
High Performance FICON® for z Systems®, HyperPAV, PAV, and z/OS
Distributed Data Backup.
Copy Services
Copy Services features help you implement storage solutions to keep your
business running 24 hours a day, 7 days a week by providing data
duplication, data migration, and disaster recovery functions. The licensed
functions include Global Mirror, Metro Mirror, Metro/Global Mirror,
Point-in-Time Copy/FlashCopy®, z/OS Global Mirror, Safeguarded Copy,
and z/OS Metro/Global Mirror Incremental Resync (RMZ).
Copy Services Manager on Hardware Management Console
The Copy Services Manager on Hardware Management Console (CSM on
HMC) license enables IBM Copy Services Manager to run on the Hardware
Management Console, which eliminates the need to maintain a separate
server for Copy Services functions.
DS8880 also includes features such as:
v POWER8® processors
v Power®-usage reporting
v National Institute of Standards and Technology (NIST) SP 800-131A enablement
Other functions that are supported in both the DS8000 Storage Management GUI
and the DS command-line interface (DS CLI) include:
v Easy Tier
v Data encryption
v Thin provisioning
You can use the DS8000 Storage Management GUI and the DS command-line
interface (DS CLI) to manage and logically configure the storage system.
Functions that are supported in only the DS command-line interface (DS CLI)
include:
v Point-in-time copy functions with IBM FlashCopy
v Remote Mirror and Copy functions, including
– Metro Mirror
– Global Copy
– Global Mirror
– Metro/Global Mirror
– z/OS Global Mirror
– z/OS Metro/Global Mirror
– Multiple Target PPRC
v I/O Priority Manager
The IBM Security Key Lifecycle Manager stores data keys that are used to secure
the key hierarchy that is associated with the data encryption functions of various
devices, including the DS8000 series. It can be used to provide, protect, and
maintain encryption keys that are used to encrypt information that is written to
and decrypt information that is read from encryption-enabled disks. IBM Security
Key Lifecycle Manager operates on various operating systems.
Machine types overview
There are several machine type options available for the DS8882F. Order a
hardware machine type for the storage system and a corresponding function
authorization machine type for the licensed functions that are planned for use.
The following table lists the available hardware machine types and their
corresponding function authorization machine types.
Table 4. Available hardware and function-authorization machine types

Hardware
    Hardware machine type: 5331 (1-year warranty period), 5332 (2-year warranty period), 5333 (3-year warranty period), or 5334 (4-year warranty period)
    Available hardware models: 983
Licensed functions
    Corresponding function authorization machine type: 9046 (1-year warranty period), 9047 (2-year warranty period), 9048 (3-year warranty period), or 9049 (4-year warranty period)
    Available function authorization models: LF8
The machine types for the DS8882F specify the service warranty period. The
warranty is used for service entitlement checking when notifications for service are
called home. The model 983 reports 2107 as the machine type to attached host
systems.
Hardware
The architecture of the DS8882F is based on three major elements that provide
function specialization and three tiers of processing power.
Figure 1. DS8882F architecture (host adapters with adapter processors and protocol management, shared processors and cache on Power servers, and flash RAID adapters with adapter processors and RAID and sparing management)
The DS8882F architecture has the following major benefits.
v Server foundation
– Promotes high availability and high performance by using field-proven Power
servers
– Reduces custom components and design complexity
– Positions the storage system to reap the benefits of server technology
advances
v Operating environment
– Promotes high availability and provides a high-quality base for the storage
system software through a field-proven AIX operating-system kernel
– Provides an operating environment that is optimized for Power servers,
including performance and reliability, availability, and serviceability
– Provides shared processor (SMP) efficiency
– Reduces custom code and design complexity
– Uses Power firmware and software support for networking and service
functions
DS8882F (machine type 533x model 983)
The DS8882F is an entry-level, high-performance storage system that includes only
High Performance Flash Enclosures Gen2.
The DS8882F storage system features 6-core processors and supports one
High Performance Flash Enclosure Gen2 pair with up to 48 Flash Tier 0, Flash Tier
1, or Flash Tier 2 drives. This modular rack-mountable enterprise storage system
can be integrated into 16U contiguous space of an existing IBM z14 Model ZR1
(z14 Model ZR1), IBM LinuxONE Rockhopper II (z14 Model LR1), or other
standard 19-inch wide rack that conforms to EIA 310D specifications to take
advantage of the DS8880 advanced features while limiting datacenter footprint and
power infrastructure requirements. The modular system contains processor nodes,
an I/O enclosure, High Performance Flash Enclosures Gen2, a management
enclosure (which includes the HMCs, Ethernet Switches, and RPCs), and battery
backup modules to power the DS8882F modules. The HMCs are small form factor
computers.
Note: The standard 19-inch wide rack installation (feature code 0939) supports an
optional 1U keyboard and display (feature code 1765). The 16U contiguous space
requirement does not include space for the optional keyboard and display, but they
are not required to reside contiguously with the DS8882F model 983. However, if
you add the keyboard and display, ensure that you provide adequate space to
accommodate them.
The DS8882F uses 16 Gbps Fibre Channel host adapters that run Fibre Channel
Protocol (FCP) or FICON protocol. The High Performance FICON (HPF) feature is
also supported.
The DS8882F supports single-phase power.
For more specifications, see the IBM DS8000 series specifications web
site (www.ibm.com/systems/storage/disk/ds8000/specifications.html).
The following tables list the hardware components and maximum capacities that
are supported for the DS8882F, depending on the amount of memory that is
available.
The DS8882F (model 983) consists of eight 2U modules for installation in an
existing rack.
The model 983 includes the following components:
v High Performance Flash Enclosure Gen2 pair
v I/O enclosure
v Two processor nodes (available with POWER8 processors)
v Management enclosure
v Two 3 kVA battery backup modules
High Performance Flash Enclosure Gen2 pairs: 1
Figure 2. DS8882F model 983 modules (battery backup modules, management enclosure, I/O enclosure, High Performance Flash Enclosures Gen2, processor nodes, and PDUs in 16U of contiguous space in an existing rack)
High Performance Flash Enclosures Gen2 pair
The High Performance Flash Enclosure Gen2 is a 2U storage enclosure that is
installed in pairs. For version 8.5, the model 983 supports only one High
Performance Flash Enclosure Gen2 pair.
The High Performance Flash Enclosure Gen2 pair provides two 2U storage
enclosures. This combination of components forms a high-performance,
fully-redundant flash storage array.
The High Performance Flash Enclosure Gen2 pair contains the following hardware
components:
v Two 2U 24-slot SAS flash drive enclosures. Each of the two enclosures contains
the following components:
– Two power supplies with integrated cooling fans
– Two SAS Expander Modules with two SAS ports each
– One midplane or backplane for plugging components, which provides for
maintenance of flash drives, Expander Modules, and power supplies
Management enclosure
The model 983 contains a management enclosure.
The management enclosure contains the following components:
v Two Hardware Management Consoles (HMCs)
v Two Ethernet switches
v Two power control cards
v Two power supply units (PSUs) to power the management enclosure
v One Local/Remote switch assembly
Management console
The management console is also referred to as the Hardware Management Console
(or HMC). It supports storage system hardware and firmware installation and
maintenance activities.
The HMC connects to the customer network and provides access to functions that
can be used to manage the storage system. Management functions include logical
configuration, problem notification, call home for service, remote service, and Copy
Services management. You can perform management functions from the DS8000
Storage Management GUI, DS command-line interface (DS CLI), or other storage
management software that supports the storage system.
Ethernet switches
The Ethernet switches provide internal communication between the management
consoles and the processor complexes. Two redundant Ethernet switches are
provided.
Processor nodes
The processor nodes drive all functions in the storage system. Each node consists
of a Power server that contains POWER8 processors and memory.
I/O enclosure
The I/O enclosure provides connectivity between the adapters and the processor
complex.
The I/O enclosure uses PCIe interfaces to connect I/O adapters in the I/O
enclosure to both processor nodes. A PCIe device is an I/O adapter.
To improve I/O operations per second (IOPS) and sequential read/write
throughput, the I/O enclosure is connected to each processor node with a
point-to-point connection.
The I/O enclosure contains the following adapters:
Flash RAID adapters
PCIe-attached adapter with four SAS ports. These adapters connect the
processor nodes to enclosures and provide RAID controllers for RAID
support.
Host adapters
An I/O enclosure can support 8 or 16 host ports.
Each of the four 16 Gbps Fibre Channel ports on a PCIe-attached adapter
can be independently configured to use SCSI/FCP or FICON/zHPF
protocols. Both longwave and shortwave adapter versions that support
different maximum cable lengths are available. The host-adapter ports can
be directly connected to attached host systems or storage systems, or
connected to a storage area network. SCSI/FCP ports are used for
connections between storage systems. SCSI/FCP ports that are attached to
a SAN can be used for both host and storage system connections.
The High Performance FICON Extension (zHPF) protocol can be used by
FICON host channels that have zHPF support. The use of zHPF protocols
provides a significant reduction in channel usage. This reduction improves
I/O input on a single channel and reduces the number of FICON channels
that are required to support the workload.
Power
Two redundant 3 kVA battery backup modules supply 230 V AC power to the
DS8882F storage system. Each battery backup module receives input power from a
single-phase line cord.
If both battery backup modules lose input power, they have sufficient capacity to
continue to supply AC power to the DS8882F until it has completed a fire hose
dump to protect modified data. The system will then gracefully power off.
Functional overview
The following list provides an overview of some of the features that are associated
with DS8882F.
Note: Some storage system functions are not available or are not supported in all
environments. See the IBM System Storage Interoperation Center (SSIC) website
(www.ibm.com/systems/support/storage/config/ssic) for the most current
information on supported hosts, operating systems, adapters, and switches.
Nondisruptive and disruptive activities
DS8882F supports full redundancy, but some components are a single point
of repair. It is designed to support nondisruptive changes, such as repairs and
licensed function upgrades. In addition, logical configuration changes can
be made nondisruptively. For example:
v An increase in license scope is nondisruptive and takes effect
immediately. A decrease in license scope is also nondisruptive but does
not take effect until the next IML.
v Easy Tier helps keep performance optimized by periodically
redistributing data to help eliminate drive hot spots that can degrade
performance. This function helps balance I/O activity across the drives
in an existing drive tier. It can also automatically redistribute some data
to new empty drives added to a tier to help improve performance by
taking advantage of the new resources. Easy Tier does this I/O activity
rebalancing automatically without disrupting access to your data.
Energy reporting
You can use the DS8882F to display the following energy measurements
through the DS CLI:
v Average inlet temperature in Celsius
v Total data transfer rate in MB/s
v Timestamp of the last update for values
The derived values are averaged over a 5-minute period. For more
information about energy-related commands, see the commands reference.
You can also query power usage and data usage with the showsu
command. For more information, see the showsu description in the
Command-Line Interface User's Guide.
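A minimal DS CLI sketch of such a query follows; the storage image ID is a placeholder, and the exact parameters that showsu accepts depend on your code level, so verify them against the showsu description in the Command-Line Interface User's Guide:

   dscli> showsu IBM.2107-75XY123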
National Institute of Standards and Technology (NIST) SP 800-131A security
enablement
NIST SP 800-131A requires the use of cryptographic algorithms that have
security strengths of 112 bits to provide data security and data integrity for
secure data that is created in the cryptoperiod starting in 2014. The DS8880
is enabled for NIST SP 800-131A. Conformance with NIST SP 800-131A
depends on the use of appropriate prerequisite management software
versions and appropriate configuration of the DS8880 and other
network-related entities.
Storage pool striping (rotate capacity)
Storage pool striping is supported on the DS8000 series, providing
improved performance. The storage pool striping function stripes new
volumes across all arrays in a pool. The striped volume layout reduces
workload skew in the system without requiring manual tuning by a
storage administrator. This approach can increase performance with
minimal operator effort. With storage pool striping support, the system
automatically performs close to highest efficiency, which requires little or
no administration. The effectiveness of performance management tools is
also enhanced because imbalances tend to occur as isolated problems.
When performance administration is required, it is applied more precisely.
You can configure and manage storage pool striping by using the DS8000
Storage Management GUI, DS CLI, and DS Open API. The rotate volumes
allocation method is an alternative allocation method that tends to prefer
volumes that are allocated to a single managed array, and is not
recommended. The rotate extents option
(storage pool striping) is designed to provide the best performance by
striping volumes across arrays in the pool. Existing volumes can be
reconfigured nondisruptively by using manual volume migration and
volume rebalance.
The storage pool striping function is provided with the DS8000 series at no
additional charge.
Performance statistics
You can use usage statistics to monitor your I/O activity. For example, you
can monitor how busy the I/O ports are and use that data to help manage
your SAN. For more information, see documentation about performance
monitoring in the DS8000 Storage Management GUI.
Sign-on support that uses Lightweight Directory Access Protocol (LDAP)
The DS8882F system provides support for both unified sign-on functions
(available through the DS8000 Storage Management GUI), and the ability
to specify an existing Lightweight Directory Access Protocol (LDAP) server.
The LDAP server can have existing users and user groups that can be used
for authentication on the DS8882F system.
Setting up unified sign-on support for the DS8882F system is achieved by
using IBM Copy Services Manager or IBM Spectrum Control.
Note: Other supported user directory servers include IBM Directory Server
and Microsoft Active Directory.
Easy Tier
Easy Tier is an optional feature that offers enhanced capabilities through
features such as auto-rebalancing, hot spot management, rank
depopulation, and manual volume migration.
Easy Tier enables the DS8882F system to automatically balance I/O access
to drives to avoid hot spots on arrays.
Easy Tier can benefit homogeneous drive pools because it can move data
away from over-utilized arrays to under-utilized arrays to eliminate hot
spots and peaks in drive response times.
z-synergy
The DS8882F storage system can work in cooperation with IBM Z hosts to
provide the following performance enhancement functions.
v Extended Address Volumes
v High Performance FICON for IBM Z
v I/O Priority Manager with z/OS Workload Manager
v Parallel Access Volumes and HyperPAV (also referred to as aliases)
v Quick initialization for IBM Z
v Transparent cloud tiering
Copy Services
The DS8882F storage system supports a wide variety of Copy Services
functions, including Remote Mirror, Remote Copy, and Point-in-Time
functions. Key Copy Services functions include:
v FlashCopy
v Remote Pair FlashCopy (Preserve Mirror)
v Safeguarded Copy
v Remote Mirror and Copy:
– Metro Mirror
– Global Copy
– Global Mirror
– Metro/Global Mirror
– Multiple Target PPRC
– z/OS Global Mirror
– z/OS Metro/Global Mirror
Multitenancy support (resource groups)
Resource groups provide additional policy-based limitations. Resource
groups, together with the inherent volume addressing limitations, support
secure partitioning of Copy Services resources between user-defined
partitions. The process of specifying the appropriate limitations is
performed by an administrator using resource groups functions. DS CLI
support is available for resource groups functions.
Multitenancy can be supported in certain environments without the use of
resource groups, if the following constraints are met:
v Either Copy Services functions are disabled on all DS8000 systems that
share the same SAN (local and remote sites) or the landlord configures
the operating system environment on all hosts (or host LPARs) attached
to a SAN, which has one or more DS8000 systems, so that no tenant can
issue Copy Services commands.
v The z/OS Distributed Data Backup feature is disabled on all DS8000
systems in the environment (local and remote sites).
v Thin provisioned volumes (ESE or TSE) are not used on any DS8000
systems in the environment (local and remote sites).
v On zSeries systems there is only one tenant running in an LPAR, and the
volume access is controlled so that a CKD base volume or alias volume
is only accessible by a single tenant’s LPAR or LPARs.
I/O Priority Manager
The I/O Priority Manager function can help you effectively manage quality
of service levels for each application running on your system. This function
aligns distinct service levels to separate workloads in the system to help
maintain the efficient performance of each volume. The I/O Priority
Manager detects when a higher-priority application is hindered by a
lower-priority application that is competing for the same system resources.
This detection might occur when multiple applications request data from
the same drives. When I/O Priority Manager encounters this situation, it
delays lower-priority I/O data to assist the more critical I/O data in
meeting its performance targets.
Use this function to consolidate more workloads on your system and to
ensure that your system resources are aligned to match the priority of your
applications.
The default setting for this feature is disabled.
Note: If the I/O Priority Manager LIC key is activated, you can enable
I/O Priority Manager on the Advanced tab of the System settings page in
the DS8000 Storage Management GUI.

Restriction of hazardous substances (RoHS)
The DS8882F system meets RoHS requirements. It conforms to the
following EC directives:
v Directive 2011/65/EU of the European Parliament and of the Council of
8 June 2011 on the restriction of the use of certain hazardous substances
in electrical and electronic equipment. It has been demonstrated that the
requirements specified in Article 4 have been met.
v EN 50581:2012 technical documentation for the assessment of electrical
and electronic products with respect to the restriction of hazardous
substances.

Logical configuration
You can use either the DS8000 Storage Management GUI or the DS CLI to
configure storage. Although the end result of storage configuration is similar, each
interface has specific terminology, concepts and procedures.

Note: LSS is synonymous with logical control unit (LCU) and subsystem
identification (SSID).
Logical configuration with DS8000 Storage Management GUI
Before you configure your storage system, it is important to understand the storage
concepts and sequence of system configuration.
Figure 3 on page 12 illustrates the concepts of configuration.
Figure 3. Logical configuration sequence (arrays, pools, and volumes, with CKD volumes and LSSs attached to z Systems hosts and FB volumes attached to open systems hosts)
The following concepts are used in storage configuration.
Arrays
An array, also referred to as a managed array, is a group of storage devices
that provides capacity for a pool. An array generally consists of 8 drives
that are managed as a Redundant Array of Independent Disks (RAID).
Pools
A storage pool is a collection of storage that identifies a set of storage
resources. These resources provide the capacity and management
requirements for arrays and volumes that have the same storage type,
either fixed block (FB) or count key data (CKD).
Volumes
A volume is a fixed amount of storage on a storage device.
LSS
The logical subsystem (LSS) enables one or more host I/O interfaces to
access a set of devices.
Hosts
A host is the computer system that interacts with the storage system. Hosts
defined on the storage system are configured with a user-designated host
type that enables the storage system to recognize and interact with the
host. Only hosts that are mapped to volumes can access those volumes.
Logical configuration of the storage system begins with managed arrays. When
you create storage pools, you assign the arrays to pools and then create volumes in
the pools. FB volumes are connected through host ports to an open systems host.
CKD volumes require that logical subsystems (LSSs) be created as well so that they
can be accessed by an IBM Z host.
Pools must be created in pairs to balance the storage workload. Each pool in the
pool pair is controlled by a processor node (either Node 0 or Node 1). Balancing
the workload helps to prevent one node from doing most of the work and results
in more efficient I/O processing, which can improve overall system performance.
Both pools in the pair must be formatted for the same storage type, either FB or
CKD storage. You can create multiple pool pairs to isolate workloads.
When you create a pair of pools, you can choose to automatically assign all
available arrays to the pools, or assign them manually afterward. If the arrays are
assigned automatically, the system balances them across both pools so that the
workload is distributed evenly across both nodes. Automatic assignment also
ensures that spares and device adapter (DA) pairs are distributed equally between
the pools.
If you are connecting to an IBM Z host, you must create a logical subsystem (LSS)
before you can create CKD volumes.
You can create a set of volumes that share characteristics, such as capacity and
storage type, in a pool pair. The system automatically balances the volumes
between both pools. If the pools are managed by Easy Tier, the capacity in the
volumes is automatically distributed among the arrays. If the pools are not
managed by Easy Tier, you can choose to use the rotate capacity allocation method,
which stripes capacity across the arrays.
If the volumes are connecting to an IBM Z host, the next steps of the configuration
process are completed on the host.
If the volumes are connecting to an open systems host, map the volumes to the
host, add host ports to the host, and then map the ports to the I/O ports on the
storage system.
FB volumes can only accept I/O from the host ports of hosts that are mapped to
the volumes. Host ports are zoned to communicate only with certain I/O ports on
the storage system. Zoning is configured either within the storage system by using
I/O port masking, or on the switch. Zoning ensures that the workload is spread
properly over I/O ports and that certain workloads are isolated from one another,
so that they do not interfere with each other.
The workload enters the storage system through I/O ports, which are on the host
adapters. The workload is then fed into the processor nodes, where it can be
cached for faster read/write access. If the workload is not cached, it is stored on
the arrays in the storage enclosures.
Logical configuration with DS CLI
Before you configure your storage system with the DS CLI, it is important to
understand IBM terminology for storage concepts and the storage hierarchy.
In the storage hierarchy, you begin with a physical disk. Logical groupings of eight
disks form an array site. Logical groupings of one array site form an array. After
you define your array storage type as CKD or fixed block, you can create a rank. A
rank is divided into a number of fixed-size extents. If you work with an
open-systems host, a large extent is 1 GiB, and a small extent is 16 MiB. If you
work in an IBM Z environment, a large extent is the size of an IBM 3390 Mod 1
disk drive (1113 cylinders), and a small extent is 21 cylinders.
After you create ranks, your physical storage can be considered virtualized.
Virtualization dissociates your physical storage configuration from your logical
configuration, so that volume sizes are no longer constrained by the physical size
of your arrays.
The available space on each rank is divided into extents. The extents are the
building blocks of the logical volumes. An extent is striped across all disks of an
array.
Extents of the same storage type are grouped to form an extent pool. Multiple
extent pools can create storage classes that provide greater flexibility in storage
allocation through a combination of RAID types, DDM size, DDM speed, and
DDM technology. This configuration allows a differentiation of logical volumes by
assigning them to the appropriate extent pool for the needed characteristics.
Different extent sizes for the same device type (for example, count-key-data or
fixed block) can be supported on the same storage unit. The different extent types
must be in different extent pools.
A logical volume is composed of one or more extents. A volume group specifies a
set of logical volumes. Identify different volume groups for different uses or
functions (for example, SCSI target, remote mirror and copy secondary volumes,
FlashCopy targets, and Copy Services). Access to the set of logical volumes that are
identified by the volume group can be controlled. Volume groups map hosts to
volumes. Figure 4 on page 15 shows a graphic representation of the logical
configuration sequence.
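The following DS CLI sketch mirrors that hierarchy for a fixed block configuration. The array site, pool, volume, and host names are placeholders, and the parameters that are required can differ by release, so treat this as an outline rather than an exact procedure:

   dscli> mkarray -raidtype 6 -arsite S1
   dscli> mkrank -array A0 -stgtype fb
   dscli> mkextpool -rankgrp 0 -stgtype fb fb_pool_0
   dscli> chrank -extpool P0 R0
   dscli> mkfbvol -extpool P0 -cap 100 -name open_vol_#h 0000-0003
   dscli> mkvolgrp -type scsimask -volume 0000-0003 open_vg
   dscli> mkhostconnect -wwname 10000000C912345F -profile "IBM pSeries - AIX" -volgrp V0 aix_host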
When volumes are created, you must initialize logical tracks from the host before
the host is allowed read and write access to the logical tracks on the volumes. The
Quick Initialization feature for open systems on FB ESE volumes allows quicker
access to logical volumes. The volumes include host volumes and source volumes
that can be used in Copy Services relationships, such as FlashCopy or Remote Mirror
and Copy relationships. This process dynamically initializes logical volumes when
they are created or expanded, allowing them to be configured and placed online
more quickly.
You can specify LUN ID numbers through the graphical user interface (GUI) for
volumes in a map-type volume group. You can create a new volume group, add
volumes to an existing volume group, or add a volume group to a new or existing
host. Previously, gaps or holes in LUN ID numbers might result in a "map error"
status. The Status field is eliminated from the volume groups main page in the
GUI and the volume groups accessed table on the Manage Host Connections
page. You can also assign host connection nicknames and host port nicknames.
Host connection nicknames can be up to 28 characters, which is expanded from the
previous maximum of 12. Host port nicknames can be 32 characters, which is
expanded from the previous maximum of 16.
Figure 4. Logical configuration sequence (disk, array site, array, rank, and extents are virtualized into extent pools and logical volumes; volume groups map hosts to volumes. A CKD extent is an IBM 3390 Mod 1 extent in IBM Z environments; an FB extent is 1 GB in an open systems host.)
RAID implementation
RAID implementation improves data storage reliability and performance.
Redundant array of independent disks (RAID) is a method of configuring multiple
drives in a storage subsystem for high availability and high performance. The
collection of two or more drives presents the image of a single drive to the system.
If a single device failure occurs, data can be read or regenerated from the other
drives in the array.
RAID implementation provides fault-tolerant data storage by storing the data in
different places on multiple drives. By placing data on multiple drives, I/O
operations can overlap in a balanced way to improve the basic reliability and
performance of the attached storage devices.
Physical capacity for the storage system can be configured as RAID 5, RAID 6, or
RAID 10. RAID 5 can offer excellent performance for some applications, while
RAID 10 can offer better performance for selected applications, in particular, high
random write content applications in the open systems environment. RAID 6
increases data protection by adding an extra layer of parity over the RAID 5
implementation.
RAID 6 is the recommended and default RAID type for all drives over 1 TB. RAID
6 and RAID 10 are the only supported RAID types for 3.8 TB Flash Tier 1 drives.
RAID 6 is the only supported RAID type for 7.6 TB Flash Tier 2 drives.
RAID 5 overview
RAID 5 is a method of spreading volume data across multiple drives.
RAID 5 increases performance by supporting concurrent accesses to the multiple
drives within each logical volume. Data protection is provided by parity, which is
stored throughout the drives in the array. If a drive fails, the data on that drive can
be restored using all the other drives in the array along with the parity bits that
were created when the data was stored.
RAID 5 is not supported for drives larger than 1 TB and requires a request for
price quote (RPQ). For information, contact your sales representative.
Note: RAID 6 is the recommended and default RAID type for all drives over 1 TB.
RAID 6 and RAID 10 are the only supported RAID types for 3.8 TB Flash Tier 1
drives. RAID 6 is the only supported RAID type for 7.6 TB Flash Tier 2 drives.
RAID 6 overview
RAID 6 is a method of increasing the data protection of arrays with volume data
spread across multiple disk drives.
RAID 6 increases data protection by adding an extra layer of parity over the RAID
5 implementation. By adding this protection, RAID 6 can restore data from an
array with up to two failed drives. The calculation and storage of extra parity
slightly reduces the capacity and performance compared to a RAID 5 array.
The default RAID type for all drives over 1 TB is RAID 6. RAID 6 and RAID 10
are the only supported RAID types for 3.8 TB Flash Tier 1 drives. RAID 6 is the
only supported RAID type for 7.6 TB Flash Tier 2 drives.
RAID 10 overview
RAID 10 provides high availability by combining features of RAID 0 and RAID 1.
RAID 0 increases performance by striping volume data across multiple disk drives.
RAID 1 provides disk mirroring, which duplicates data between two disk drives.
By combining the features of RAID 0 and RAID 1, RAID 10 provides a second
optimization for fault tolerance.
RAID 10 implementation provides data mirroring from one disk drive to another
disk drive. RAID 10 stripes data across half of the disk drives in the RAID 10
configuration. The other half of the array mirrors the first set of disk drives. Access
to data is preserved if one disk in each mirrored pair remains available. In some
cases, RAID 10 offers faster data reads and writes than RAID 5 because it is not
required to manage parity. However, with half of the disk drives in the group used
for data and the other half used to mirror that data, RAID 10 arrays have less
capacity than RAID 5 arrays.
Note: RAID 6 is the recommended and default RAID type for all drives over 1 TB.
RAID 6 and RAID 10 are the only supported RAID types for 3.8 TB Flash Tier 1
drives. RAID 6 is the only supported RAID type for 7.6 TB Flash Tier 2 drives.
Logical subsystems
To facilitate configuration of a storage system, volumes are partitioned into groups
of volumes. Each group is referred to as a logical subsystem (LSS).
As part of the storage configuration process, you can configure the maximum
number of LSSs that you plan to use. The storage system can contain up to 255
LSSs and each LSS can be connected to 16 other LSSs using a logical path. An LSS
is a group of up to 256 volumes that have the same storage type, either count key
data (CKD) for IBM Z hosts or fixed block (FB) for open systems hosts.
An LSS is uniquely identified within the storage system by an identifier that
consists of two hex characters (0-9 or uppercase A-F) with which the volumes are
associated. A fully qualified LSS is designated using the storage system identifier
and the LSS identifier, such as IBM.2107-921-12FA123/1E. The LSS identifiers are
important for Copy Services operations. For example, for FlashCopy operations,
you specify the LSS identifier when choosing source and target volumes because
the volumes can span LSSs in a storage system.
The storage system has a 64K volume address space that is partitioned into 255
LSSs, where each LSS contains 256 logical volume numbers. The 255 LSS units are
assigned to one of 16 address groups, where each address group contains 16 LSSs,
or 4K volume addresses.
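For example, each address group holds 16 LSSs with 256 volume numbers each (16 x 256 = 4096, or 4K volume addresses), and the 16 address groups together account for 16 x 4K = 64K volume addresses.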
Storage system functions, including some that are associated with FB volumes,
might have dependencies on LSS partitions. For example:
v The LSS partitions and their associated volume numbers must identify volumes
that are specified for storage system Copy Services operations.
v To establish Remote Mirror and Copy pairs, a logical path must be established
between the associated LSS pair.
v FlashCopy pairs must reside within the same storage system.
If you increase storage system capacity, you can increase the number of LSSs that
you have defined. This modification to increase the maximum is a nonconcurrent
action. If you might need capacity increases in the future, leave the number of
LSSs set to the maximum of 255.
Note: If you reduce the CKD LSS limit to zero for IBM Z hosts, the storage system
does not process Remote Mirror and Copy functions. The FB LSS limit must be no
lower than eight to support Remote Mirror and Copy functions for open-systems
hosts.
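As a hedged illustration, a CKD LSS (logical control unit) can be created with the DS CLI before CKD volumes are defined. The LSS ID and subsystem ID in this sketch are placeholders; check the mklcu description in the Command-Line Interface User's Guide for the parameters that your release requires:

   dscli> mklcu -qty 1 -id 1E -ss 2300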
Allocation methods
Allocation methods (also referred to as extent allocation methods) determine the
means by which volume capacity is allocated within a pool.
All extents of the ranks that are assigned to an extent pool are independently
available for allocation to logical volumes. The extents for a LUN or volume are
logically ordered, but they do not have to come from one rank and the extents do
not have to be contiguous on a rank. This construction method of using fixed
extents to form a logical volume in the storage system allows flexibility in the
management of the logical volumes. You can delete volumes, resize volumes, and
reuse the extents of those volumes to create other volumes of different sizes. One
logical volume can be deleted without affecting the other logical volumes that are
defined on the same extent pool.
Because the extents are cleaned after you delete a volume, it can take some time
until these extents are available for reallocation. The reformatting of the extents is a
background process.
There are three allocation methods that are used by the storage system: rotate
capacity (also referred to as storage pool striping), rotate volumes, and managed.
Rotate capacity allocation method
The default allocation method is rotate capacity, which is also referred to as storage
pool striping. The rotate capacity allocation method is designed to provide the best
performance by striping volume extents across arrays in a pool. The storage system
keeps a sequence of arrays. The first array in the list is randomly picked at each
power-on of the storage subsystem. The storage system tracks the array in which
the last allocation started. The allocation of a first extent for the next volume starts
from the next array in that sequence. The next extent for that volume is taken from
the next rank in sequence, and so on. The system rotates the extents across the
arrays.
If you migrate a volume with a different allocation method to a pool that has the
rotate capacity allocation method, then the volume is reallocated. If you add arrays
to a pool, the rotate capacity allocation method reallocates the volumes by
spreading them across both existing and new arrays.
You can configure and manage this allocation method by using the DS8000 Storage
Management GUI, DS CLI, and DS Open API.
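For example, when you create volumes with the DS CLI, you can set the extent allocation method explicitly. The pool, capacity, and volume IDs in this sketch are placeholders, and it is intended only to illustrate the -eam parameter:

   dscli> mkfbvol -extpool P1 -cap 200 -eam rotateexts 1000-1003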
Rotate volumes allocation method
Volume extents can be allocated sequentially. In this case, all extents are taken from
the same array until there are enough extents for the requested volume size or the
array is full, in which case the allocation continues with the next array in the pool.
If more than one volume is created in one operation, the allocation for each
volume starts in another array. You might want to consider this allocation method
when you prefer to manage performance manually. The workload of one volume is
allocated to one array. This method makes the identification of performance
bottlenecks easier; however, by putting all the volume data onto just one array, you
might introduce a bottleneck, depending on your actual workload.
Managed allocation method
When a volume is managed by Easy Tier, the allocation method of the volume is
referred to as managed. Easy Tier allocates the capacity in ways that might differ
from both the rotate capacity and rotate volume allocation methods.
Management interfaces
You can use various IBM storage management interfaces to manage your storage
system.
These interfaces include DS8000 Storage Management GUI, DS Command-Line
Interface (DS CLI), the DS Open Application Programming Interface, DS8000
RESTful API, IBM Storage Mobile Dashboard, IBM Spectrum Control, and IBM
Copy Services Manager.
DS8000 Storage Management GUI
Use the DS8000 Storage Management GUI to configure and manage storage and
monitor performance and Copy Services functions.
DS8000 Storage Management GUI is a web-based GUI that is installed on the
Hardware Management Console (HMC). You can access the DS8000 Storage
Management GUI from any network-attached system by using a supported web
browser. For a list of supported browsers, see “DS8000 Storage Management GUI
supported web browsers” on page 22.
You can access the DS8000 Storage Management GUI from a browser by using the
following web address, where HMC_IP is the IP address or host name of the HMC.
https://HMC_IP
If the DS8000 Storage Management GUI does not display as anticipated, clear the
cache for your browser, and try to log in again.
Notes:
v If the storage system is configured for NIST SP 800-131A security conformance, a
version of Java that is NIST SP 800-131A compliant must be installed on all
systems that run the DS8000 Storage Management GUI. For more information
about security requirements, see information about configuring your
environment for NIST SP 800-131A compliance in the IBM DS8000 series online
product documentation ( http://www.ibm.com/support/knowledgecenter/
ST5GLJ_8.1.0/com.ibm.storage.ssic.help.doc/f2c_securitybp.html).
v User names and passwords are encrypted for HTTPS protocol. You cannot access
the DS8000 Storage Management GUI over the non-secure HTTP protocol (port
8451).
DS command-line interface
The IBM DS command-line interface (DS CLI) can be used to create, delete, modify,
and view Copy Services functions and the logical configuration of a storage
system. These tasks can be performed either interactively, in batch processes
(operating system shell scripts), or in DS CLI script files. A DS CLI script file is a
text file that contains one or more DS CLI commands and can be issued as a single
command. DS CLI can be used to manage logical configuration, Copy Services
configuration, and other functions for a storage system, including managing
security settings, querying point-in-time performance information or status of
physical resources, and exporting audit logs.
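As an illustration of these modes, the following sketch shows an interactive session, a single command, and a script invocation. The HMC address, credentials, and script name are placeholders; the options that apply to your environment are described in the Command-Line Interface User's Guide:

   dscli -hmc1 9.11.22.33 -user admin -passwd passw0rd
   dscli -hmc1 9.11.22.33 -user admin -passwd passw0rd lsfbvol -l
   dscli -hmc1 9.11.22.33 -user admin -passwd passw0rd -script myvols.cli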
Note: Java™ 1.8 must be installed on systems that run the DS CLI.
The DS CLI provides a full-function set of commands to manage logical
configurations and Copy Services configurations. The DS CLI is available in the
DS8000 Storage Management GUI. The DS CLI client can also be installed on and
is supported in many different environments, including the following platforms:
v AIX® 6.1, 7.1, 7.2
v Linux (Red Hat Enterprise Linux [RHEL] 6 and 7)
v Linux (SUSE Linux Enterprise Server [SLES] 11 and 12)
v VMware ESX 5.5, 6 Console
v IBM i 7.1, 7.2
v Oracle Solaris 10 and 11
v Microsoft Windows Server 2008, 2012 and Windows 7, 8, 8.1, 10
Note: If the storage system is configured for NIST SP 800-131A security
conformance, a version of Java that is NIST SP 800-131A compliant must be
installed on all systems that run DS CLI client. For more information about
security requirements, see documentation about configuring your environment for
NIST SP 800-131A compliance in IBM Knowledge Center (https://www.ibm.com/
support/knowledgecenter/ST5GLJ_8.5.0/com.ibm.storage.ssic.help.doc/
f2c_securitybp_nist.html).
DS Open Application Programming Interface
The DS Open Application Programming Interface (API) is a nonproprietary storage
management client application that supports routine LUN management activities.
Activities that are supported include: LUN creation, mapping and masking, and
the creation or deletion of RAID 5, RAID 6, and RAID 10 volume spaces.
The DS Open API helps integrate configuration management support into storage
resource management (SRM) applications, which help you to use existing SRM
applications and infrastructures. The DS Open API can also be used to automate
configuration management through customer-written applications. Either way, the
DS Open API presents another option for managing storage units by
complementing the use of the IBM Storage Management GUI web-based interface
and the DS command-line interface.
Note: The DS Open API supports the storage system and is an embedded
component.
You can implement the DS Open API without using a separate middleware
application. For example, you can implement it with the IBM Common
Information Model (CIM) agent, which provides a CIM-compliant interface. The
DS Open API uses the CIM technology to manage proprietary devices as open
system devices through storage management applications. The DS Open API is
used by storage management applications to communicate with a storage unit.
RESTful API
The RESTful API is an application on the HMC for initiating simple storage
operations through the Web.
The RESTful (Representational State Transfer) API is a platform independent
means by which to initiate create, read, update, and delete operations in the
storage system and supporting storage devices. These operations are initiated with
the HTTP commands: POST, GET, PUT, and DELETE.
The RESTful API is intended for use in the development, testing, and debugging of
client management infrastructures. You can use the RESTful API with a CURL
command or through standard web browsers. For instance, you can access the
storage system by using the RESTClient browser add-on.
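As a hedged illustration of this create, read, update, and delete style, the following Python sketch issues a GET request to the HMC. The port, resource path, and response handling are hypothetical placeholders rather than documented DS8000 RESTful API routes; consult the DS8000 RESTful API reference for the actual URIs and authentication flow.

    # Minimal sketch of a RESTful GET against the HMC.
    # The port and resource path below are hypothetical placeholders.
    import requests

    HMC_IP = "198.51.100.10"                       # IP address or host name of the HMC
    url = f"https://{HMC_IP}:8452/api/v1/systems"  # hypothetical resource URI

    # verify=False is shown only for a lab HMC with a self-signed certificate;
    # point verify at a CA bundle in production.
    response = requests.get(url, verify=False, timeout=30)
    response.raise_for_status()
    print(response.json())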
IBM Spectrum Control
IBM Spectrum Control is an integrated software solution that can help you
improve and centralize the management of your storage environment through the
integration of products. With IBM Spectrum Control, it is possible to manage
multiple DS8000 systems from a single point of control.
Note: IBM Spectrum Control is not required for the operation of a storage system.
However, it is recommended. IBM Spectrum Control can be ordered and installed
as a software product on various servers and operating systems. When you install
IBM Spectrum Control, ensure that the selected version supports the current
system functions. Optionally, you can order a server on which IBM Spectrum
Control is preinstalled.
IBM Spectrum Control simplifies storage management by providing the following
benefits:
v Centralizing the management of heterogeneous storage network resources with
IBM storage management software
v Providing greater synergy between storage management software and
IBM storage devices
v Reducing the number of servers that are required to manage your software
infrastructure
v Migrating from basic device management to storage management applications
that provide higher-level functions
For more information, see IBM Spectrum Control online product documentation in
IBM Knowledge Center (www.ibm.com/support/knowledgecenter).
IBM Copy Services Manager
IBM Copy Services Manager controls Copy Services in storage environments. Copy
Services are features that are used by storage systems, such as DS8000, to
configure, manage, and monitor data-copy functions.
IBM Copy Services Manager provides both a graphical interface and command line
that you can use for configuring and managing Copy Services functions across
storage units. Copy Services include the point-in-time function – IBM FlashCopy
and Safeguarded Copy, and the remote mirror and copy functions – Metro Mirror,
Global Mirror, and Metro Global Mirror. Copy Services Manager can automate the
administration and configuration of these services; and monitor and manage copy
sessions.
You can use Copy Services Manager to complete the following data replication
tasks and help reduce the downtime of critical applications:
v Plan for replication when you are provisioning storage
v Keep data on multiple related volumes consistent across storage systems for a
planned or unplanned outage
v Monitor and track replication operations
v Automate the mapping of source volumes to target volumes
Starting with DS8000 Version 8.1, Copy Services Manager also comes preinstalled
on the Hardware Management Console (HMC). Therefore, you can enable the
Copy Services Manager software that is already on the hardware system. Doing so
results in less setup time and eliminates the need to maintain a separate server for
Copy Services functions.
You can also use Copy Services Manager to connect to an LDAP repository for
remote authentication. For more information, see the DS8000 online product
documentation at http://www.ibm.com/support/knowledgecenter/ST5GLJ/
ds8000_kcwelcome.html and search for topics that are related to remote authentication.
For more information, see the Copy Services Manager online product
documentation at http://www.ibm.com/support/knowledgecenter/SSESK4/
csm_kcwelcome.html. The "What's new" topic provides details on the features
added for each version of Copy Services Manager that can be used by DS8000,
including HyperSwap for multi-target sessions, and incremental FlashCopy
support.
DS8000 Storage Management GUI supported web browsers
To access the DS8000 Storage Management GUI, you must ensure that your web
browser is supported and has the appropriate settings enabled.
The DS8000 Storage Management GUI supports the following web browsers:
Table 7. Supported web browsers

DS8000 series version | Supported browsers
8.5 | Mozilla Firefox 38; Mozilla Firefox Extended Support Release (ESR) 38; Microsoft Internet Explorer 11; Google Chrome 43
IBM supports higher versions of the browsers as long as the vendors do not
remove or disable functionality that the product relies upon. For browser levels
higher than the versions that are certified with the product, customer support
accepts usage-related and defect-related service requests. As with operating system
and virtualization environments, if the support center cannot re-create the issue in
our lab, we might ask the client to re-create the problem on a certified browser
version to determine whether a product defect exists. Defects are not accepted for
cosmetic differences between browsers or browser versions that do not affect the
functional behavior of the product. If a problem is identified in the product, defects
are accepted. If a problem is identified with the browser, IBM might investigate
potential solutions or workarounds that the client can implement until a permanent
solution becomes available.
Enabling TLS 1.2 support
If the security requirements for your storage system require conformance with
NIST SP 800-131A, enable transport layer security (TLS) 1.2 on web browsers that
use SSL/TLS to access the DS8000 Storage Management GUI. See your web
browser documentation for instructions on enabling TLS 1.2. For Internet Explorer,
complete the following steps to enable TLS 1.2.
1. On the Tools menu, click Internet Options.
2. On the Advanced tab, under Settings, select Use TLS 1.2.
Note: Firefox, Release 24 and later, supports TLS 1.2. However, you must configure
Firefox to enable TLS 1.2 support.
For more information about security requirements, see the information about
configuring your environment for NIST SP 800-131A compliance in IBM Knowledge
Center.
Selecting browser security settings
You must select the appropriate web browser security settings to access the DS8000
Storage Management GUI. In Internet Explorer, use the following steps.
1. On the Tools menu, click Internet Options.
2. On the Security tab, select Internet and click Custom level.
3. Scroll to Miscellaneous, and select Allow META REFRESH.
4. Scroll to Scripting, and select Active scripting.
Configuring Internet Explorer to access the DS8000 Storage
Management GUI
If DS8000 Storage Management GUI is accessed through IBM Spectrum Control
with Internet Explorer, complete the following steps to properly configure the web
browser.
1. Disable the Pop-up Blocker.
Note: If a message indicates that content is blocked because it does not have a
valid security certificate, click the Information Bar at the top and select Show
blocked content.
2. Add the IP address of the DS8000 Hardware Management Console (HMC) to
the Internet Explorer list of trusted sites.
For more information, see your browser documentation.
Chapter 2. Hardware features
Use this information to assist you with planning, ordering, and managing your
storage system.
The following table lists feature codes that are used to order hardware features for
your system.
Table 8. Feature codes for hardware features

Feature code | Feature | Description
0400 | BSMI certification documents | Required when the storage system model is shipped to Taiwan.
0403 | Non-encryption certification key | Required when the storage system model is shipped to China or Russia.
0937 | zFlex Frame field merge | Indicates that the DS8882F will be installed in an existing IBM z14 model ZR1 rack.
0938 | Rockhopper II field merge | Indicates that the DS8882F will be installed in an existing IBM z14 model LR1 rack.
0939 | Customer Rack field merge | Indicates that the DS8882F will be installed in an existing standard 19-inch wide rack that conforms to EIA 310D specifications.
1021 | Single-phase power cord, 250 V, 20 A | NEMA L6-20P
1022 | Single-phase power cord, 250 V, 16 A | CEE 7 VII
1023 | Single-phase power cord, 250 V, 16 A | SANS 164
1024 | Single-phase power cord, 250 V, 16 A | CEI 23-16
1025 | Single-phase power cord, 250 V, 20 A | RS 3720DP
1026 | Single-phase power cord, 250 V, 16 A | IEC 309
1027 | Single-phase power cord, 250 V, 15 A | AS/NZS 3112
1028 | Single-phase power cord, 250 V, 15 A | JIS C 8303 6-20P
1029 | Single-phase power cord, 125 - 250 V, 16 A |
1030 | Single-phase power cord, 250 V, 20 A | IRAM 2073
1031 | Single-phase power cord, 250 V, 16 A | KSC 8305
1032 | Single-phase power cord, 250 V, 16 A | IS 6538
1033 | Single-phase power cord, 250 V, 16 A | GB 2099.1, 1002
1034 | Single-phase power cord, 250 V, 20 A | NBR 14136
1035 | Single-phase power cord, 250 V, 20 A | CNS 10917-3
1036 | Single-phase power cord, 250 V, 16 A | SI 32
1037 | Single-phase power cord, 250 V, 16 A | SEV 1011
1057 | Battery backup module (one) | Two battery backup modules are ...
... | ... | No intermix with Flash Tier 0 or Flash Tier 2 drive sets
... | ... | No intermix with Flash Tier 0 or Flash Tier 1 drive sets
... | ... Gen2 filler set | Includes 16 fillers
1765 | Optional 1U keyboard and display | Not available with ZR1 feature code 0937 or LR1 feature code 0938
1885 | DS8000 Licensed Machine Code R8.5 | Microcode bundle 88.x.xx.x for base model 983
3065 | Base I/O expander for High Performance Flash Enclosures Gen2 and host adapters (required) | Required to support one High Performance Flash Enclosure Gen2 pair and two host adapters
3066 | I/O expander for additional host adapters (optional) |
3354 | Fibre Channel host adapter | 4-port, 16 Gbps shortwave FCP and FICON host adapter
3454 | Fibre Channel host adapter | 4-port, 16 Gbps longwave FCP and FICON host adapter
3600 | Transparent cloud tiering adapter pair for 2U processor complex (optional) |
4233 | 64 GB system memory | (6-core)
4234 | 128 GB system memory | (6-core)
4235 | 256 GB system memory | (6-core)
4421 | 6-core POWER8 processors | Requires feature code 4233, 4234, or 4235
Storage complex
A storage complex is a set of storage units that are managed by management
console units.
You can associate one or two management console units with a storage complex.
Each storage complex must use at least one of the management console units in
one of the storage units. Model 983 provides a second management console.
Management console
The management console supports storage system hardware and firmware
installation and maintenance activities.
The management console is a dedicated processor unit that can automatically
monitor the state of your system, and notify you and IBM when service is
required.
To provide continuous availability of your access to the management-console
functions, use an additional management console (already provided with model
983).
Hardware specifics
The storage system models offer a high degree of availability and performance
through the use of redundant components that can be replaced while the system is
operating. You can use a storage system model with a mix of different operating
systems and clustered and nonclustered variants of the same operating systems.
Contributors to the high degree of availability and reliability include the structure
of the storage unit, the host systems that are supported, and the memory and
speed of the processors.
Storage system structure
The design of the storage system contributes to the high degree of availability. The
primary components that support high availability within the storage unit are the
storage server, the processor complex, and the power control card.
Storage system
The storage unit contains a storage server and one pair of storage
enclosures.
Storage server
The storage server consists of two processor complexes and a pair of
power control cards.
Processor complex
The processor complex controls and manages the storage server functions
in the storage system. The two processor complexes form a redundant pair
such that if either processor complex fails, the remaining processor
complex controls and manages all storage server functions.
Power control card
A redundant pair of power control cards (for model 983, cards are located
in the Management enclosure) coordinate the power management within
the storage unit. The power control cards are attached to the service
processors in each processor complex.
Flash drives
The storage system provides you with a choice of drives.
The following drives are available:
v 2.5-inch Flash Tier 0 drives with FDE
– 400 GB
– 800 GB
– 1.6 TB
– 3.2 TB
v 2.5-inch Flash Tier 1 drives with FDE
– 3.8 TB
v 2.5-inch Flash Tier 2 drives with FDE
– 7.6 TB
Note: Intermix of Flash Tier 0, Flash Tier 1, and Flash Tier 2 drives is not
supported.
Drive maintenance policy
The internal maintenance functions use an Enhanced Sparing process that delays a
service call for drive replacement if there are sufficient spare drives. All drive
repairs are managed according to Enhanced Sparing rules.
A minimum of two spare drives are allocated in a device adapter loop. Internal
maintenance functions continuously monitor and report (by using the call home
feature) to IBM when the number of drives in a spare pool reaches a preset
threshold. This design ensures continuous availability of devices while protecting
data and minimizing any service disruptions.
It is not recommended to replace a drive unless an error is generated indicating
that service is needed.
Host attachment overview
The storage system provides various host attachments so that you can consolidate
storage capacity and workloads for open-systems hosts and IBM Z.
The storage system provides extensive connectivity using Fibre Channel adapters
across a broad range of server environments.
Host adapter intermix support
The DS8882F model 983 provides only 4-port 16 Gbps host adapters, and a
maximum of 16 ports is available.
DS8882F model 983
The following table shows the host adapter plug order.
Table 9. Plug order for 4-port HA slots for the I/O enclosure

Host adapter pair | Slot number (C3 - C6)
First host adapter pair (required feature code 3354) | 1, 2
Second host adapter pair (optional feature code 3454) | 2, 1
Open-systems host attachment with Fibre Channel adapters
You can attach a storage system to an open-systems host with Fibre Channel
adapters.
The storage system supports SAN speeds of up to 16 Gbps with the current 16
Gbps host adapters. The storage system detects and operates at the greatest
available link speed that is shared by both sides of the system.
Fibre Channel technology transfers data between the sources and the users of the
information. Fibre Channel connections are established between Fibre Channel
ports that reside in I/O devices, host systems, and the network that interconnects
them. The network consists of elements like switches, bridges, and repeaters that
are used to interconnect the Fibre Channel ports.
FICON attached IBM Z hosts overview
The storage system can be attached to FICON attached IBM Z host operating
systems under specified adapter configurations.
Each storage system Fibre Channel adapter has four ports. Each port has a unique
worldwide port name (WWPN). You can configure the port to operate with the
FICON upper-layer protocol.
With Fibre Channel adapters that are configured for FICON, the storage system
provides the following configurations:
v Either fabric or point-to-point topologies
v A maximum of 509 logins per Fibre Channel port
v A maximum of 8,192 logins per storage system
v A maximum of 1,280 logical paths on each Fibre Channel port
v Access to all 255 control-unit images (65,280 CKD devices) over each
FICON port
v A maximum of 512 logical paths per control unit image
Note: IBM z13® and IBM z14 servers support 32,768 devices per FICON host
channel, while IBM zEnterprise® EC12 and IBM zEnterprise BC12 servers support
24,576 devices per FICON host channel. Earlier IBM Z servers support 16,384
devices per FICON host channel. To fully access 65,280 devices, it is necessary to
connect multiple FICON host channels to the storage system. You can access the
devices through a Fibre Channel switch or FICON director to a single storage
system FICON port.
The storage system supports the following operating systems for IBM Z hosts:
v Linux
v Transaction Processing Facility (TPF)
v Virtual Storage Extended/Enterprise Storage Architecture
v z/OS
v z/VM
v z/VSE
For the most current information on supported hosts, operating systems, adapters,
and switches, go to the IBM System Storage Interoperation Center (SSIC) website
(www.ibm.com/systems/support/storage/config/ssic).
I/O load balancing
You can maximize the performance of an application by spreading the I/O load
across processor nodes, arrays, and device adapters in the storage system.
During an attempt to balance the load within the storage system, placement of
application data is the determining factor. The following resources are the most
important to balance, roughly in order of importance:
v Activity to the RAID drive groups. Use as many RAID drive groups as possible
for the critical applications. Most performance bottlenecks occur because a few
drives are overloaded. Spreading an application across multiple RAID drive
groups ensures that as many drives as possible are available. This is extremely
important for open-system environments where cache-hit ratios are usually low.
v Activity to the nodes. When selecting RAID drive groups for a critical
application, spread them across separate nodes. Because each node has separate
memory buses and cache memory, this maximizes the use of those resources.
v Activity to the device adapters. When selecting RAID drive groups within a
cluster for a critical application, spread them across separate device adapters.
v Activity to the Fibre Channel ports.
Storage consolidation
When you use a storage system, you can consolidate data and workloads from
different types of independent hosts into a single shared resource.
You can mix production and test servers in an open systems environment or mix
open systems and IBM Z hosts. In this type of environment, servers rarely, if ever,
contend for the same resource.
Although sharing resources in the storage system has advantages for storage
administration and resource sharing, there are more implications for workload
planning. The benefit of sharing is that a larger resource pool (for example, drives
or cache) is available for critical applications. However, you must ensure that
uncontrolled or unpredictable applications do not interfere with critical work. This
requires the same workload planning that you use when you mix various types of
work on a server.
Count key data
In count-key-data (CKD) disk data architecture, the data field stores the user data.
Because data records can be variable in length, in CKD they all have an associated
count field that indicates the user data record size. The key field enables a
hardware search on a key. The commands used in the CKD architecture for
managing the data and the storage devices are called channel command words.
Fixed block
In fixed block (FB) architecture, the data (the logical volumes) are mapped over
fixed-size blocks or sectors.
With an FB architecture, the location of any block can be calculated to retrieve that
block. This architecture uses tracks and cylinders. A physical disk contains multiple
blocks per track, and a cylinder is the group of tracks that exists under the disk
heads at one point in time without performing a seek operation.
T10 DIF support
American National Standards Institute (ANSI) T10 Data Integrity Field (DIF)
standard is supported on IBM Z for SCSI end-to-end data protection on fixed block
(FB) LUN volumes. This support applies to the IBM DS8880 unit (98x models).
IBM Z support applies to FCP channels only.
IBM Z provides added end-to-end data protection between the operating system
and the DS8880 unit. This support adds protection information consisting of CRC
(Cyclic Redundancy Checking), LBA (Logical Block Address), and host application
tags to each sector of FB data on a logical volume.
Data protection using the T10 Data Integrity Field (DIF) on FB volumes includes
the following features:
v Ability to convert logical volume formats between standard and protected
formats supported through PPRC between standard and protected volumes
v Support for earlier versions of T10-protected volumes on the DS8880 with
non-T10 DIF-capable hosts
v Allows end-to-end checking at the application level of data stored on FB disks
v Additional metadata stored by the storage facility image (SFI) allows host
adapter-level end-to-end checking data to be stored on FB disks independently
of whether the host uses the DIF format.
Notes:
v This feature requires changes in the I/O stack to take advantage of all the
capabilities the protection offers.
v T10 DIF volumes can be used by any type of Open host with the exception of
iSeries, but active protection is supported only for Linux on IBM Z or AIX on
IBM Power Systems™. The protection can only be active if the host server has
T10 DIF enabled.
v T10 DIF volumes can accept SCSI I/O of either T10 DIF or standard type, but if
the FB volume type is standard, then only standard SCSI I/O is accepted.
Logical volumes
A logical volume is the storage medium that is associated with a logical disk. It
typically resides on two or more hard disk drives.
For the storage unit, the logical volumes are defined at logical configuration time.
For count-key-data (CKD) servers, the logical volume size is defined by the device
emulation mode and model. For fixed block (FB) hosts, you can define each FB
volume (LUN) with a minimum size of a single block (512 bytes) to a maximum
size of 2³² blocks or 16 TB.
A logical device that has nonremovable media has one and only one associated
logical volume. A logical volume is composed of one or more extents. Each extent
is associated with a contiguous range of addressable data units on the logical
volume.
Allocation, deletion, and modification of volumes
Extent allocation methods (namely, rotate volumes and pool striping) determine
the means by which actions are completed on storage system volumes.
All extents of the ranks assigned to an extent pool are independently available for
allocation to logical volumes. The extents for a LUN or volume are logically
ordered, but they do not have to come from one rank and the extents do not have
to be contiguous on a rank. This construction method of using fixed extents to
form a logical volume in the storage system allows flexibility in the management
of the logical volumes. You can delete volumes, resize volumes, and reuse the
extents of those volumes to create other volumes of different sizes. One logical
volume can be deleted without affecting the other logical volumes defined on the
same extent pool.
Because the extents are cleaned after you delete a volume, it can take some time
until these extents are available for reallocation. The reformatting of the extents is a
background process.
There are two extent allocation methods used by the storage system: rotate
volumes and storage pool striping (rotate extents).
Storage pool striping: extent rotation
The default storage allocation method is storage pool striping. The extents of a
volume can be striped across several ranks. The storage system keeps a sequence
of ranks. The first rank in the list is randomly picked at each power on of the
storage subsystem. The storage system tracks the rank in which the last allocation
started. The allocation of a first extent for the next volume starts from the next
rank in that sequence. The next extent for that volume is taken from the next rank
in sequence, and so on. The system rotates the extents across the ranks.
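This rotation is essentially a round-robin walk over the ranks in the extent pool. The following Python sketch is only a conceptual illustration of that behavior, not the storage system's allocation code; the rank names and pool size are arbitrary assumptions.

    # Conceptual sketch of storage pool striping (rotate extents).
    # Illustrates the round-robin idea only; not the product's algorithm.
    ranks = ["R0", "R1", "R2", "R3"]   # ranks in the extent pool (example)
    next_rank = 0                      # rank where the next allocation starts

    def allocate_volume(extent_count):
        """Assign each extent of a new volume to the next rank in sequence."""
        global next_rank
        placement = []
        for _ in range(extent_count):
            placement.append(ranks[next_rank])
            next_rank = (next_rank + 1) % len(ranks)
        return placement

    # A 6-extent volume is striped across all four ranks:
    print(allocate_volume(6))   # ['R0', 'R1', 'R2', 'R3', 'R0', 'R1']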
If you migrate an existing non-striped volume to the same extent pool with a
rotate extents allocation method, then the volume is "reorganized." If you add more
ranks to an existing extent pool, then reorganizing the existing striped volumes
spreads them across both the existing and new ranks.
You can configure and manage storage pool striping by using the DS Storage
Manager, DS CLI, and DS Open API. The default extent allocation method (EAM)
option that is allocated to a logical volume is now rotate extents. The rotate extents
option is designed to provide the best performance by striping volume extents
across the ranks in an extent pool.
Managed EAM: Once a volume is managed by Easy Tier, the EAM of the volume
is changed to managed EAM, which can result in placement of the extents
differing from the rotate volume and rotate extent rules. The EAM only changes
when a volume is manually migrated to a non-managed pool.
Rotate volumes allocation method
Extents can be allocated sequentially. In this case, all extents are taken from the
same rank until there are enough extents for the requested volume size or the rank
is full, in which case the allocation continues with the next rank in the extent pool.
If more than one volume is created in one operation, the allocation for each
volume starts in another rank. When allocating several volumes, rotate through the
ranks. You might want to consider this allocation method when you prefer to
manage performance manually. The workload of one volume is going to one rank.
This method makes the identification of performance bottlenecks easier; however,
by putting all the volume data onto just one rank, you might introduce a
bottleneck, depending on your actual workload.
LUN calculation
The storage system uses a volume capacity algorithm (calculation) to provide a
logical unit number (LUN).
In the storage system, physical storage capacities are expressed in powers of 10.
Logical or effective storage capacities (logical volumes, ranks, extent pools) and
processor memory capacities are expressed in powers of 2. Both of these
conventions are used for logical volume effective storage capacities.
On open volumes with 512 byte blocks (including T10-protected volumes), you can
specify an exact block count to create a LUN. You can specify a standard LUN size
(which is expressed as an exact number of binary GiBs (2³⁰)) or you can specify an
ESS volume size (which is expressed in decimal GiBs (10⁹) accurate to 0.1 GB). The
unit of storage allocation for fixed block open systems volumes is one extent. The
extent size for open volumes is either exactly 1 GiB or 16 MiB. Any logical
volume that is not an exact multiple of 1 GiB does not use all the capacity in the
last extent that is allocated to the logical volume. Supported block counts are from
1 to 4 194 304 blocks (2 binary TiB) in increments of one block. Supported sizes are
from 1 to 16 TiB in increments of 1 GiB. The supported ESS LUN sizes are limited
to the exact sizes that are specified from 0.1 to 982.2 GB (decimal) in increments of
0.1 GB and are rounded up to the next larger 32 K byte boundary. The ESS LUN
sizes do not result in standard LUN sizes. Therefore, they can waste capacity.
However, the unused capacity is less than one full extent. ESS LUN sizes are
typically used when volumes must be copied between the storage system and ESS.
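The following Python sketch works through the arithmetic described above for a standard open-systems LUN with 512-byte blocks and 1 GiB extents: how many extents a requested block count consumes and how much of the last extent is left unused. It is a worked example of the stated rules, not output from the storage system.

    # Worked example of the open-volume capacity rules (512-byte blocks, 1 GiB extents).
    import math

    BLOCK_SIZE = 512          # bytes per block
    EXTENT_BYTES = 2**30      # 1 GiB extent

    requested_blocks = 5_000_000                   # example request (about 2.38 GiB)
    requested_bytes = requested_blocks * BLOCK_SIZE

    extents = math.ceil(requested_bytes / EXTENT_BYTES)
    unused = extents * EXTENT_BYTES - requested_bytes

    print(f"{extents} extents allocated")                         # 3 extents
    print(f"{unused / 2**20:.1f} MiB unused in the last extent")  # about 630.6 MiB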
On open volumes with 520 byte blocks, you can select one of the supported LUN
sizes that are used on IBM i processors to create a LUN. The operating system uses
8 of the bytes in each block. This leaves 512 bytes per block for your data. Variable
volume sizes are also supported.
Table 10 shows the disk capacity for the protected and unprotected models.
Logically unprotecting a storage LUN allows the IBM i host to start system level
mirror protection on the LUN. The IBM i system level mirror protection allows
normal system operations to continue running in the event of a failure in an HBA,
fabric, connection, or LUN on one of the LUNs in the mirror pair.
Note: On IBM i, logical volume sizes in the range 17.5 GB to 141.1 GB are
supported as load source units. Logical volumes smaller than 17.5 GB or larger
than 141.1 GB cannot be used as load source units.
Table 10. Capacity and models of disk volumes for IBM i hosts running IBM i operating system

Size | Protected model | Unprotected model
8.5 GB | A01 | A81
17.5 GB | A02 | A82
35.1 GB | A05 | A85
70.5 GB | A04 | A84
141.1 GB | A06 | A86
282.2 GB | A07 | A87
1 GB to 2000 GB | 099 | 050
On CKD volumes, you can specify an exact cylinder count or a standard volume
size to create a LUN. The standard volume size is expressed as an exact number of
Mod 1 equivalents (which is 1113 cylinders). The unit of storage allocation for CKD
volumes is one CKD extent. The extent size for a CKD volume is either exactly a
Mod-1 equivalent (which is 1113 cylinders), or it is 21 cylinders when using the
small-extents option. Any logical volume that is not an exact multiple of 1113
cylinders (1 extent) does not use all the capacity in the last extent that is allocated
to the logical volume. For CKD volumes that are created with 3380 track formats,
the number of cylinders (or extents) is limited to either 2226 (1 extent) or 3339 (2
extents). For CKD volumes that are created with 3390 track formats, you can
specify the number of cylinders in the range of 1 - 65520 (x'0001' - x'FFF0') in
increments of one cylinder, for a standard (non-EAV) 3390. The allocation of an
EAV volume is expressed in increments of 3390 mod1 capacities (1113 cylinders)
and can be expressed as integral multiples of 1113 between 65,667 - 1,182,006
cylinders or as the number of 3390 mod1 increments in the range of 59 - 1062.
Extended address volumes for CKD
Count key data (CKD) volumes now support the additional capacity of 1 TB. The 1
TB capacity is an increase in volume size from the previous 223 GB.
This increased volume capacity is referred to as extended address volumes (EAV)
and is supported by the 3390 Model A. Use a maximum size volume of up to
1,182,006 cylinders for IBM z/OS. This support is available for z/OS version 12.1
and later.
You can create a 1 TB IBM Z CKD volume. An IBM Z CKD volume is composed of
one or more extents from a CKD extent pool. CKD extents are 1113 cylinders in
size. When you define a IBM Z CKD volume, you must specify the number of
cylinders that you want for the volume. The storage system and the z/OS have
limits for the CKD EAV sizes. You can define CKD volumes with up to 1,182,006
cylinders, about 1 TB on the DS8880.
If the number of cylinders that you specify is not an exact multiple of 1113
cylinders, then some space in the last allocated extent is wasted. For example, if
you define 1114 or 3340 cylinders, 1112 cylinders are wasted. For maximum storage
efficiency, consider allocating volumes that are exact multiples of 1113 cylinders. In
fact, multiples of 3339 cylinders should be considered for future compatibility. If
you want to use the maximum number of cylinders for a volume (that is 1,182,006
cylinders), you are not wasting cylinders, because it is an exact multiple of 1113
(1,182,006 divided by 1113 is exactly 1062). This size is also an even multiple (354)
of 3339, a model 3 size.
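As a check on the cylinder arithmetic above, the following Python sketch computes how many 1113-cylinder extents a requested CKD volume consumes and how many cylinders are wasted in the last extent. It is only a worked example of the figures quoted in this section.

    # Worked example of CKD extent arithmetic (1113 cylinders per extent).
    import math

    CYL_PER_EXTENT = 1113

    def wasted_cylinders(requested_cylinders):
        extents = math.ceil(requested_cylinders / CYL_PER_EXTENT)
        return extents * CYL_PER_EXTENT - requested_cylinders

    print(wasted_cylinders(1114))       # 1112 cylinders wasted, as noted above
    print(wasted_cylinders(3339))       # 0 - an exact multiple of 1113
    print(wasted_cylinders(1_182_006))  # 0 - the maximum EAV size (1062 extents)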
Quick initialization
Quick initialization improves device initialization speed and allows a Copy
Services relationship to be established after a device is created.
Quick volume initialization for IBM Z environments is supported. This support
helps users who frequently delete volumes by reconfiguring capacity without
waiting for initialization. Quick initialization initializes the data logical tracks or
block within a specified extent range on a logical volume with the appropriate
initialization pattern for the host.
Normal read and write access to the logical volume is allowed during the
initialization process. Therefore, the extent metadata must be allocated and
initialized before the quick initialization function is started. Depending on the
operation, the quick initialization can be started for the entire logical volume or for
an extent range on the logical volume.
Chapter 3. Data management features
The storage system is designed with many management features that allow you to
securely process and access your data according to your business needs, 24 hours
a day, 7 days a week.
This section contains information about the data management features in your
storage system. Use the information in this section to assist you in planning,
ordering licenses, and in the management of your storage system data
management features.
Transparent cloud tiering
Transparent cloud tiering is a licensed function that enables volume data to be
copied and transferred to cloud storage. DS8000 transparent cloud tiering is a
feature in conjunction with z/OS and DFSMShsm that provides server-less
movement of archive and backup data directly to an object storage solution.
Offloading the movement of the data from the host to the DS8000 unlocks
DFSMShsm efficiencies and saves z/OS CPU cycles.
DFSMShsm has been the leading z/OS data archive solution for over 30 years. Its
architecture is designed and optimized for tape as the medium to which the
data is transferred and archived.
Due to this architectural design point, there are inherent inefficiencies that
consume host CPU cycles, including the following examples:
Movement of data through the host
All of the data must move from the disk through the host and out to the
tape device.
Dual Data Movement
DSS must read the data from the disk and then pass the data from DSS to
HSM, which then moves the data from the host to the tape.
16K block sizes
HSM separates the data within z/OS into small 16K blocks.
Recycle
When a tape is full, HSM must continually read the valid data from that
tape volume and write it to a new tape.
HSM inventory
Reorgs, audits, and backups of the HSM inventory via the OCDS.
Transparent cloud tiering resolves these inefficiencies by moving the data directly
from the DS8000 to the cloud object storage. This process eliminates the movement
of data through the host, dual data movement, and the small 16K block size
requirement. This process also eliminates recycle processing and the OCDS.
Transparent cloud tiering translates into significant savings in CPU utilization
within z/OS, specifically when you are using both DFSMShsm and transparent
cloud tiering.
Modern enterprises adopted cloud storage to overcome the massive amount of
data growth. The transparent cloud tiering system supports creating connections to
cloud service providers to store data in private or public cloud storage. With
transparent cloud tiering, administrators can move older data to cloud storage to
free up capacity on the system. Point-in-time snapshots of data can be created on
the system and then copied and stored on the cloud storage.
An external cloud service provider manages the cloud storage, which helps to
reduce storage costs for the system. Before data can be copied to cloud storage, a
connection to the cloud service provider must be created from the system. A cloud
account is an object on the system that represents a connection to a cloud service
provider by using a particular set of credentials. These credentials differ depending
on the type of cloud service provider that is being specified. Most cloud service
providers require the host name of the cloud service provider and an associated
password, and some cloud service providers also require certificates to authenticate
users of the cloud storage.
Public clouds use certificates that are signed by well-known certificate authorities.
Private cloud service providers can use either self-signed certificate or a certificate
that is signed by a trusted certificate authority. These credentials are defined on the
cloud service provider and passed to the system through the administrators of the
cloud service provider. A cloud account defines whether the system can
successfully communicate and authenticate with the cloud service provider by
using the account credentials. If the system is authenticated, it can then access
cloud storage to either copy data to the cloud storage or restore data that is copied
to cloud storage back to the system. The system supports one cloud account to a
single cloud service provider. Migration between providers is not supported.
Client-side encryption for transparent cloud tiering ensures that data is encrypted
before it is transferred to cloud storage. The data remains encrypted in cloud
storage and is decrypted after it is transferred back to the storage system. You can
use client-side encryption for transparent cloud tiering to download and decrypt
data on any DS8000 storage system that uses the same set of key servers as the
system that first encrypted the data.
Notes:
v Client-side encryption for transparent cloud tiering requires IBM Security Key
Lifecycle Manager v3.0.0.2 or higher. For more information, see the IBM Security
Key Lifecycle Manager online product documentation (www.ibm.com/support/
knowledgecenter/SSWPVP/).
v Transparent cloud tiering supports the Key Management Interoperability
Protocol (KMIP) only.
Cloud object storage is inherently multi-tenant, which allows multiple users to
store data on the device, segregated from the other users. Each cloud service
provider divides cloud storage into segments for each client that uses the cloud
storage. These objects store only data specific to that client. Within the segment
that is controlled by the user’s name, DFSMShsm and its inventory system controls
the creation and segregation of containers that it uses to store the client data
objects.
The storage system supports the OpenStack Swift and Amazon S3 APIs. The
storage system also supports the IBM TS7700 as an object storage target and the
following cloud service providers:
v Amazon S3
v IBM Bluemix - Cloud Object Storage
v OpenStack Swift Based Private Cloud
Dynamic volume expansion
Dynamic volume expansion is the capability to increase volume capacity up to a
maximum size while volumes are online to a host and not in a Copy Services
relationship.
Dynamic volume expansion increases the capacity of open systems and IBM Z
volumes, while the volume remains connected to a host system. This capability
simplifies data growth by providing volume expansion without taking volumes
offline.
Some operating systems do not support a change in volume size. Therefore, a host
action is required to detect the change after the volume capacity is increased.
The following volume sizes are the maximum that are supported for each storage
type.
v Open systems FB volumes: 16 TB
v IBM Z CKD volume types 3390 model 9 and custom: 65520 cylinders
v IBM Z CKD volume type 3390 model 3: 3339 cylinders
v IBM Z CKD volume types 3390 model A: 1,182,006 cylinders
Note: Volumes cannot be in Copy Services relationships (point-in-time copy,
FlashCopy SE, Metro Mirror, Global Mirror, Metro/Global Mirror, and z/OS Global
Mirror) during expansion.
Count key data and fixed block volume deletion prevention
By default, DS8000 attempts to prevent volumes that are online and in use from
being deleted. The DS CLI and DS Storage Manager provide an option to force
the deletion of count key data (CKD) and fixed block (FB) volumes that are in use.
For CKD volumes, in use means that the volumes are participating in a Copy
Services relationship or are in a path group. For FB volumes, in use means that the
volumes are participating in a Copy Services relationship or there is I/O access to
the volume in the last five minutes.
If you specify the -safe option when you delete an FB volume, the system
determines whether the volumes are assigned to non-default volume groups. If the
volumes are assigned to a non-default (user-defined) volume group, the volumes
are not deleted.
If you specify the -force option when you delete a volume, the storage system
deletes volumes regardless of whether the volumes are in use.
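A hedged sketch of how these options might be passed on a DS CLI delete command follows. The connection options are omitted, the volume ID is a hypothetical example, and the rmfbvol command name and its -safe and -force flags should be confirmed in the DS CLI reference for your release.

    # Sketch of deleting an FB volume with the options described above.
    # Connection options (-hmc1, -user, -passwd) are omitted for brevity, and the
    # volume ID 0100 is a hypothetical example; confirm the rmfbvol syntax and its
    # -safe / -force flags in the DS CLI reference for your release.
    import subprocess

    volume_id = "0100"

    # -safe: the volume is not deleted if it belongs to a user-defined volume group.
    subprocess.run(["dscli", "rmfbvol", "-safe", volume_id], check=True)

    # -force would delete the volume even if it is in use (use with care):
    # subprocess.run(["dscli", "rmfbvol", "-force", volume_id], check=True)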
Thin provisioning
Thin provisioning defines logical volume sizes that are larger than the physical
capacity installed on the system. The volume allocates capacity on an as-needed
basis as a result of host-write actions.
The thin provisioning feature enables the creation of extent space efficient logical
volumes. Extent space efficient volumes are supported for FB and CKD volumes
and are supported for all Copy Services functionality, including FlashCopy targets
where they provide a space efficient FlashCopy capability.
Releasing space on CKD volumes that use thin provisioning
On an IBM Z® host, the DFSMSdss SPACEREL utility can release space
from thin provisioned CKD volumes that are used by either Global Copy
or Global Mirror.
For Global Copy, space is released on the primary and secondary copies. If
the secondary copy is the primary copy of another Global Copy
relationship, space is also released on secondary copies of that relationship.
For Global Mirror, space is released on the primary copy after a new
consistency group is formed. Space is released on the secondary copy after
the next consistency group is formed and a FlashCopy commit is
performed. If the secondary copy is the primary copy of another Global
Mirror relationship, space is also released on secondary copies of that
relationship.
Extent Space Efficient (ESE) capacity controls for thin
provisioning
Use of thin provisioning can affect the amount of storage capacity that you choose
to order. ESE capacity controls allow you to allocate storage appropriately.
With the mixture of thin-provisioned (ESE) and fully-provisioned (non-ESE)
volumes in an extent pool, a method is needed to dedicate some of the extent-pool
storage capacity for ESE user data usage, and to limit the ESE user data usage
within the extent pool. You also need the ability to detect when the available
storage space within the extent pool for ESE volumes is running out.
ESE capacity controls provide extent pool attributes to limit the maximum extent
pool storage available for ESE user data usage, and to guarantee a proportion of
the extent pool storage to be available for ESE user data usage.
An SNMP trap that is associated with the ESE capacity controls notifies you when
the ESE extent usage in the pool exceeds an ESE extent threshold set by you. You
are also notified when the extent pool is out of storage available for ESE user data
usage.
ESE capacity controls include the following attributes:
ESE Extent Threshold
The percentage that is compared to the actual percentage of storage
capacity available for ESE customer extent allocation when determining the
extent pool ESE extent status.
ESE Extent Status
One of the three following values:
v 0: the percent of the available ESE capacity is greater than the ESE extent
threshold
v 1: the percent of the available ESE capacity is greater than zero but less
than or equal to the ESE extent threshold
v 10: the percent of the available ESE capacity is zero
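The three status values above follow directly from comparing the available ESE capacity with the threshold. The following Python sketch shows that mapping as a plain function; it is an illustration of the stated rules, not the DS8000 implementation.

    # Illustration of how the ESE extent status follows from the threshold comparison.
    def ese_extent_status(available_percent, threshold_percent):
        """Return 0, 1, or 10 according to the rules listed above."""
        if available_percent == 0:
            return 10
        if available_percent <= threshold_percent:
            return 1
        return 0

    print(ese_extent_status(40, 15))  # 0: available capacity is above the threshold
    print(ese_extent_status(10, 15))  # 1: above zero but at or below the threshold
    print(ese_extent_status(0, 15))   # 10: no ESE capacity is available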
Note: When the size of the extent pool remains fixed or is only increased, the
allocatable physical capacity remains greater than or equal to the allocated physical
capacity. However, a reduction in the size of the extent pool can cause the
allocatable physical capacity to become less than the allocated physical capacity in
some cases.
For example, if the user requests that one of the ranks in an extent pool be
depopulated, the data on that rank is moved to the remaining ranks in the pool,
causing the rank to become unallocated and removed from the pool. The user is
advised to inspect the limits and threshold on the extent pool following any
changes to the size of the extent pool to ensure that the specified values are still
consistent with the user's intentions.
IBM Easy Tier
Easy Tier is an optional feature that is provided at no cost. It can greatly increase
the performance of your system by ensuring frequently accessed data is put on
faster storage. Its capabilities include manual volume capacity rebalance, auto
performance rebalancing in homogeneous pools, hot spot management, rank
depopulation, manual volume migration, and thin provisioning support (ESE
volumes only).
Easy Tier features help you to effectively manage your system health, storage
performance, and storage capacity automatically. Easy Tier uses system
configuration and workload analysis with warm demotion to achieve effective
overall system health. Simultaneously, data promotion and auto-rebalancing
address performance while cold demotion works to address capacity.
Easy Tier data in memory persists in local storage or storage in the peer server,
ensuring the Easy Tier configurations are available at failover, cold start, or Easy
Tier restart.
The Easy Tier Heat Map Transfer utility replicates Easy Tier primary storage
workload learning results to secondary storage sites, synchronizing performance
characteristics across all storage systems. In the event of data recovery, storage
system performance is not sacrificed.
You can also use Easy Tier to help with the management of your ESE thin
provisioning on fixed block (FB) or count key data (CKD) volumes.
An additional feature provides the capability for you to use Easy Tier manual
processing for thin provisioning. Rank depopulation is supported on ranks with
ESE volumes allocated (extent space-efficient) or auxiliary volumes.
Use the capabilities of Easy Tier to support:
Drive classes
The following drive classes are available, in order from highest to lowest
performance.
Flash Tier 0 drives
The highest performance drives, which provide high I/O
throughput and low latency.
Flash Tier 1 drives
The first tier of high capacity drives.
Flash Tier 2 drives
The second tier of high capacity drives.
Three tiers
Using three tiers (each representing a separate drive class) and efficient
algorithms improves system performance and cost effectiveness.
You can select from four drive classes to create up to three tiers. The drives
within a tier must be homogeneous.
Manual volume or pool rebalance
Volume rebalancing relocates the smallest number of extents of a volume
and restripes those extents on all available ranks of the extent pool.
Auto-rebalancing
Automatically balances the workload of the same storage tier within both
the homogeneous and the hybrid pool that is based on usage to improve
system performance and resource use. Use the auto-rebalancing functions
of Easy Tier to manage a combination of homogeneous and hybrid pools,
including relocating hot spots on ranks. With homogeneous pools, systems
with only one tier can use Easy Tier technology to optimize their RAID
array usage.
Rank depopulations
Allows ranks that have extents (data) allocated to them to be unassigned
from an extent pool by using extent migration to move extents from the
specified ranks to other ranks within the pool.
Thin provisioning
Support for the use of thin provisioning is available on ESE and standard
volumes. The use of TSE volumes (FB and CKD) is not supported.
Easy Tier provides a performance monitoring capability, regardless of whether the
Easy Tier feature is activated. Easy Tier uses the monitoring process to determine
what data to move and when to move it when you use automatic mode. You can
enable monitoring independently (with or without the Easy Tier feature activated)
for information about the behavior and benefits that can be expected if automatic
mode were enabled.
Data from the monitoring process is included in a summary report that you can
download to your local system.
VMware vStorage API for Array Integration support
The storage system provides support for the VMware vStorage API for Array
Integration (VAAI).
The VAAI API offloads storage processing functions from the server to the storage
system, reducing the workload on the host server hardware for improved
performance on both the network and host servers.
The following operations are supported:
Atomic test and set or VMware hardware-assisted locking
The hardware-assisted locking feature uses the VMware Compare and
Write command for reading and writing the volume's metadata within a
single operation. With the Compare and Write command, the storage
system provides a faster mechanism that is displayed to the volume as an
atomic action that does not require locking the entire volume.
The Compare and Write command is supported on all open systems fixed
block volumes, including Metro Mirror and Global Mirror primary
volumes and FlashCopy source and target volumes.
XCOPY or Full Copy
The XCOPY (or extended copy) command copies multiple files from one
directory to another or across a network.
Full Copy copies data from one storage array to another without writing to
the VMware ESX Server (VMware vStorage API).
The following restrictions apply to XCOPY:
v XCOPY is not supported on Extent Space Efficient (ESE) volumes
v XCOPY is not supported on volumes greater than 2 TB
v The target of an XCOPY cannot be a Metro Mirror or Global Mirror
primary volume
v The Copy Services license is required
Block Zero (Write Same)
The SCSI Write Same command is supported on all volumes. This
command efficiently writes each block, faster than standard SCSI write
commands, and is optimized for network bandwidth usage.
IBM vCenter plug-in for ESX 4.x
The IBM vCenter plug-in for ESX 4.x provides support for the VAAI
interfaces on ESX 4.x.
For information on how to attach a VMware ESX Server host to a DS8880
with Fibre Channel adapters, see IBM DS8000 series online product
documentation ( http://www.ibm.com/support/knowledgecenter/
ST5GLJ_8.1.0/com.ibm.storage.ssic.help.doc/f2c_securitybp.html) and select
Attaching and configuring hosts > VMware ESX Server host attachment.
VMware vCenter Site Recovery Manager 5.0
VMware vCenter Site Recovery Manager (SRM) provides methods to
simplify and automate disaster recovery processes. IBM Site Replication
Adapter (SRA) communicates between SRM and the storage replication
interface. SRA support for SRM 5.0 includes the new features for planned
migration, reprotection, and failback. The supported Copy Services are
Metro Mirror, Global Mirror, Metro-Global Mirror, and FlashCopy.
The IBM Storage Management Console plug-in enables VMware administrators to
manage their systems from within the VMware management environment. This
plug-in provides an integrated view of IBM storage to the VMware virtualized
datastores that are required by VMware administrators. For information, see the
IBM Storage Management Console for VMware vCenter (http://www.ibm.com/
support/knowledgecenter/en/STAV45/hsg/hsg_vcplugin_kcwelcome_sonas.html)
online documentation.
Performance for IBM Z
The storage system supports the following IBM performance enhancements for
IBM Z environments.
v Parallel Access Volumes (PAVs)
v Multiple allegiance
v z/OS Distributed Data Backup
v z/HPF extended distance capability
Parallel Access Volumes
A PAV capability represents a significant performance improvement by the storage
unit over traditional I/O processing. With PAVs, your system can access a single
volume from a single host with multiple concurrent requests.
You must configure both your storage unit and operating system to use PAVs. You
can use the logical configuration definition to define PAV-bases, PAV-aliases, and
their relationship in the storage unit hardware. This unit address relationship
creates a single logical volume, allowing concurrent I/O operations.
Static PAV associates the PAV-base address and its PAV aliases in a predefined and
fixed method. That is, the PAV-aliases of a PAV-base address remain unchanged.
Dynamic PAV, on the other hand, dynamically associates the PAV-base address and
its PAV aliases. The device number types (PAV-alias or PAV-base) must match the
unit address types as defined in the storage unit hardware.
You can further enhance PAV by adding the IBM HyperPAV feature. IBM
HyperPAV associates the volumes with either an alias address or a specified base
logical volume number. When a host system requests IBM HyperPAV processing
and the processing is enabled, aliases on the logical subsystem are placed in an
IBM HyperPAV alias access state on all logical paths with a specific path group ID.
IBM HyperPAV is only supported on FICON channel paths.
PAV can improve the performance of large volumes. You get better performance
with one base and two aliases on a 3390 Model 9 than from three 3390 Model 3
volumes with no PAV support. With one base, it also reduces storage management
costs that are associated with maintaining large numbers of volumes. The alias
provides an alternate path to the base device. For example, a 3380 or a 3390 with
one alias has only one device to write to, but can use two paths.
The storage unit supports concurrent or parallel data transfer operations to or from
the same volume from the same system or system image for IBM Z or S/390
hosts. PAV software support enables multiple users and jobs to simultaneously
access a logical volume. Read and write operations can be accessed simultaneously
to different domains. (The domain of an I/O operation is the specified extents to
which the I/O operation applies.)
Multiple allegiance
With multiple allegiance, the storage unit can run concurrent, multiple requests
from multiple hosts.
Traditionally, IBM storage subsystems allow only one channel program to be active
to a disk volume at a time. This means that, after the subsystem accepts an I/O
request for a particular unit address, this unit address appears "busy" to
subsequent I/O requests. This single allegiance capability ensures that additional
requesting channel programs cannot alter data that is already being accessed.
By contrast, the storage unit is capable of multiple allegiance (or the concurrent
execution of multiple requests from multiple hosts). That is, the storage unit can
queue and concurrently run multiple requests for the same unit address, if no
extent conflict occurs. A conflict refers to either the inclusion of a Reserve request
by a channel program or a Write request to an extent that is in use.
z/OS Distributed Data Backup
z/OS Distributed Data Backup (zDDB) allows hosts, which are attached through a
FICON interface, to access data on fixed block (FB) volumes through a device
address on FICON interfaces.
If the zDDB LIC feature key is installed and enabled, and a volume group type
specifies FICON interfaces, this volume group has implicit access to all FB logical
volumes that are configured, in addition to all CKD volumes specified in the
volume group. In addition, this optional feature enables data backup of open
systems from distributed server platforms through an IBM Z host. The feature helps
you manage multiple data protection environments and consolidate those into one
environment that is managed by IBM Z. For more information, see “z/OS
Distributed Data Backup” on page 84.
z/HPF extended distance
z/HPF extended distance reduces the impact that is associated with supported
commands on current adapter hardware, improving FICON throughput on the I/O
ports. The storage system also supports the new zHPF I/O commands for
multitrack I/O operations.
Copy Services
Copy Services functions can help you implement storage solutions to keep your
business running 24 hours a day, 7 days a week. Copy Services include a set of
disaster recovery, data migration, and data duplication functions.
The storage system supports Copy Service functions that contribute to the
protection of your data. These functions are also supported on the IBM
TotalStorage Enterprise Storage Server®.
Notes:
v If you are creating paths from a DS8882F 4-port host adapter to a previous
release DS8000 (Release 6.0 or later), which supports 8-port host adapters, you
can only connect the lower four ports of the 8-port host adapter.
v The maximum number of FlashCopy relationships that are allowed on a volume
is 65534. If that number is exceeded, the FlashCopy operation fails.
v The size limit for volumes or extents in a Copy Service relationship is 2 TB.
v Thin provisioning functions in open-system environments are supported for the
following Copy Services functions:
– FlashCopy relationships
– Global Mirror relationships if the Global Copy A and B volumes are Extent
Space Efficient (ESE) volumes. The FlashCopy target volume (Volume C) in
the Global Mirror relationship can be an ESE volume or standard volume.
v PPRC supports any intermix of T10-protected or standard volumes. FlashCopy
does not support intermix.
v PPRC supports copying from standard volumes to ESE volumes, or ESE
volumes to standard volumes, to allow migration with PPRC failover when both
source and target volumes are on a DS8000 version 8.2 or higher.
The following Copy Services functions are available as optional features:
v Point-in-time copy, which includes IBM FlashCopy.
The FlashCopy function allows you to make point-in-time, full volume copies of
data so that the copies are immediately available for read or write access. In IBM
Z environments, you can also use the FlashCopy function to perform data set
level copies of your data.
v Remote mirror and copy, which includes the following functions:
– Metro Mirror
Metro Mirror provides real-time mirroring of logical volumes between two
storage systems that can be located up to 300 km from each other. It is a
synchronous copy solution where write operations are completed on both
copies (local and remote site) before they are considered to be done.
– Global Copy
Global Copy is a nonsynchronous long-distance copy function where
incremental updates are sent from the local to the remote site on a periodic
basis.
– Global Mirror
Global Mirror is a long-distance remote copy function across two sites by
using asynchronous technology. Global Mirror processing is designed to
provide support for unlimited distance between the local and remote sites,
with the distance typically limited only by the capabilities of the network and
the channel extension technology.
– Metro/Global Mirror (a combination of Metro Mirror and Global Mirror)
Metro/Global Mirror is a three-site remote copy solution. It uses synchronous
replication to mirror data between a local site and an intermediate site, and
asynchronous replication to mirror data from an intermediate site to a remote
site.
– Multiple Target PPRC
Multiple Target PPRC builds and extends the capabilities of Metro Mirror and
Global Mirror. It allows data to be mirrored from a single primary site to two
secondary sites simultaneously. You can define any of the sites as the primary
site and then run Metro Mirror replication from the primary site to either of
the other sites individually or both sites simultaneously.
v Remote mirror and copy for IBM Z environments, which includes z/OS Global
Mirror.
Note: When FlashCopy is used on FB (open) volumes, the source and the target
volumes must have the same protection type of either T10 DIF or standard.
The point-in-time and remote mirror and copy features are supported across
various IBM server environments such as IBM i, System p, and IBM Z, as well as
servers from Oracle and Hewlett-Packard.
You can manage these functions through a command-line interface that is called
the DS CLI. You can use the DS8000 Storage Management GUI to set up and
manage the following types of data-copy functions from any point where network
access is available:
Point-in-time copy (FlashCopy)
You can use the FlashCopy function to make point-in-time, full volume copies of
data, with the copies immediately available for read or write access. In IBM Z
environments, you can also use the FlashCopy function to perform data set level
copies of your data. You can use the copy with standard backup tools that are
available in your environment to create backup copies on tape.
FlashCopy is an optional function.
The FlashCopy function creates a copy of a source volume on the target volume.
This copy is called a point-in-time copy. When you initiate a FlashCopy operation,
a FlashCopy relationship is created between a source volume and target volume. A
FlashCopy relationship is a mapping of the FlashCopy source volume and a
FlashCopy target volume. This mapping allows a point-in-time copy of that source
volume to be copied to the associated target volume. The FlashCopy relationship
exists between the volume pair in either case:
v From the time that you initiate a FlashCopy operation until the storage system
copies all data from the source volume to the target volume.
v Until you explicitly delete the FlashCopy relationship if it was created as a
persistent FlashCopy relationship.
One of the main benefits of the FlashCopy function is that the point-in-time copy is
immediately available for creating a backup of production data. The target volume
is available for read and write processing so it can be used for testing or backup
purposes. Data is physically copied from the source volume to the target volume
by using a background process. (A FlashCopy operation without a background
copy is also possible, which allows only data modified on the source to be copied
to the target volume.) The amount of time that it takes to complete the background
copy depends on the following criteria:
v The amount of data to be copied
v The number of background copy processes that are occurring
v The other activities that are occurring on the storage systems
The FlashCopy function supports the following copy options:
Consistency groups
Creates a consistent point-in-time copy of multiple volumes, with
negligible host impact. You can enable FlashCopy consistency groups from
the DS CLI.
Change recording
Activates the change recording function on the volume pair that is
participating in a FlashCopy relationship. This function enables a
subsequent refresh to the target volume.
Establish FlashCopy on existing Metro Mirror source
Establish a FlashCopy relationship, where the target volume is also the
source of an existing remote mirror and copy source volume. This allows
you to create full or incremental point-in-time copies at a local site and
then use remote mirroring commands to copy the data to the remote site.
Fast reverse
Reverses the FlashCopy relationship without waiting for the finish of the
background copy of the previous FlashCopy. This option applies to the
Global Mirror mode.
Inhibit writes to target
Ensures that write operations are inhibited on the target volume until a
refresh FlashCopy operation is complete.
Multiple Incremental FlashCopy
Allows a source volume to establish incremental flash copies to a
maximum of 12 targets.
Multiple Relationship FlashCopy
Allows a source volume to have multiple (up to 12) target volumes at the
same time.
Persistent FlashCopy
Allows the FlashCopy relationship to remain even after the FlashCopy
operation completes. You must explicitly delete the relationship.
Refresh target volume
Refresh a FlashCopy relationship, without recopying all tracks from the
source volume to the target volume.
Resynchronizing FlashCopy volume pairs
Update an initial point-in-time copy of a source volume without having to
recopy your entire volume.
Reverse restore
Reverses the FlashCopy relationship and copies data from the target
volume to the source volume.
Reset SCSI reservation on target volume
If there is a SCSI reservation on the target volume, the reservation is
released when the FlashCopy relationship is established. If this option is
not specified and a SCSI reservation exists on the target volume, the
FlashCopy operation fails.
Remote Pair FlashCopy
Figure 5 on page 49 illustrates how Remote Pair FlashCopy works. If
Remote Pair FlashCopy is used to copy data from Local A to Local B, an
equivalent operation is also performed from Remote A to Remote B.
FlashCopy can be performed as described for a Full Volume FlashCopy,
Incremental FlashCopy, and Dataset Level FlashCopy.
The Remote Pair FlashCopy function prevents the Metro Mirror
relationship from changing states and the resulting momentary period
where Remote A is out of synchronization with Remote B. This feature
provides a solution for data replication, data migration, remote copy, and
disaster recovery tasks.
Without Remote Pair FlashCopy, when you established a FlashCopy
relationship from Local A to Local B, by using a Metro Mirror primary
volume as the target of that FlashCopy relationship, the corresponding
Metro Mirror volume pair went from “full duplex” state to “duplex
pending” state if the FlashCopy data was being transferred to Local B.
The time that it took to complete the copy of the FlashCopy data, until all
Metro Mirror volumes were synchronous again, depended on the amount
of data transferred. During this time, Local B would be inconsistent if a
disaster occurred.
Note: Previously, if you created a FlashCopy relationship with the
Preserve Mirror, Required option, by using a Metro Mirror primary
volume as the target of that FlashCopy relationship, and if the status of the
Metro Mirror volume pair was not in a “full duplex” state, the FlashCopy
relationship failed. That restriction is now removed. The Remote Pair
FlashCopy relationship completes successfully with the “Preserve Mirror,
Required” option, even if the status of the Metro Mirror volume pair is
either in a suspended or duplex pending state.
Figure 5. Remote Pair FlashCopy (the figure shows FlashCopy established from Local A to Local B on the local storage server and from Remote A to Remote B on the remote storage server, with the Metro Mirror pairs remaining in full duplex)
Note: The storage system supports Incremental FlashCopy and Metro Global
Mirror Incremental Resync on the same volume.
Safeguarded Copy
The Safeguarded Copy feature creates safeguarded backups that are not accessible
by the host system and protects these backups from corruption that can occur in
the production environment. You can define a Safeguarded Copy schedule to create
multiple backups on a regular basis, such as hourly or daily. You can also restore a
backup to the source volume or to a different volume. A backup contains the same
metadata as the safeguarded source volume.
Safeguarded Copy can create backups with more frequency and capacity in
comparison to FlashCopy volumes. The creation of safeguarded backups also
impacts performance less than the multiple target volumes that are created by
FlashCopy.
With backups that are outside of the production environment, you can use the
backups to restore your environment back to a specified point in time. You can
also extract and restore specific data from the backup or use the backup to
diagnose production issues.
You cannot delete a safeguarded source volume before the safeguarded backups
are deleted. The maximum size of a backup is 16 TB.
Copy Services Manager (available on the Hardware Management Console) is
required to facilitate the use and management of Safeguarded Copy functions.
Remote mirror and copy
The remote mirror and copy feature is a flexible data mirroring technology that
allows replication between a source volume and a target volume on one or two
disk storage systems. You can also issue remote mirror and copy operations to a
group of source volumes on one logical subsystem (LSS) and a group of target
volumes on another LSS. (An LSS is a logical grouping of up to 256 logical
volumes for which the volumes must have the same disk format, either count key
data or fixed block.)
Remote mirror and copy is an optional feature that provides data backup and
disaster recovery.
Note: You must use Fibre Channel host adapters with remote mirror and copy
functions. To see a current list of environments, configurations, networks, and
products that support remote mirror and copy functions, click Interoperability
Matrix at the following location: IBM System Storage Interoperation Center (SSIC)
website (www.ibm.com/systems/support/storage/config/ssic).
The remote mirror and copy feature provides synchronous (Metro Mirror) and
asynchronous (Global Copy) data mirroring. The main difference is that the Global
Copy feature can operate at long distances, even continental distances, with
minimal impact on applications. Distance is limited only by the network and
channel extender technology capabilities. The maximum supported distance for
Metro Mirror is 300 km.
With Metro Mirror, application write performance depends on the available
bandwidth. Global Copy enables better use of available bandwidth capacity to
allow you to include more of your data to be protected.
The enhancement to Global Copy is Global Mirror, which uses Global Copy and
the benefits of FlashCopy to form consistency groups. (A consistency group is a set
of volumes that contain consistent and current data to provide a true data backup
at a remote site.) Global Mirror uses a master storage system (along with optional
subordinate storage systems) to internally, without external automation software,
manage data consistency across volumes by using consistency groups.
Consistency groups can also be created by using the freeze and run functions of
Metro Mirror. The freeze and run functions, when used with external automation
software, provide data consistency for multiple Metro Mirror volume pairs.
The following sections describe the remote mirror and copy functions.
Synchronous mirroring (Metro Mirror)
Provides real-time mirroring of logical volumes (a source and a target)
between two storage systems that can be located up to 300 km from each
other. With Metro Mirror copying, the source and target volumes can be on
the same storage system or on separate storage systems. You can locate the
storage system at another site, some distance away.
Metro Mirror is a synchronous copy feature where write operations are
completed on both copies (local and remote site) before they are considered
to be complete. Synchronous mirroring means that a storage server
constantly updates a secondary copy of a volume to match changes that
are made to a source volume.
The advantage of synchronous mirroring is that there is minimal host
impact for performing the copy. The disadvantage is that since the copy
operation is synchronous, there can be an impact to application
performance because the application I/O operation is not acknowledged as
complete until the write to the target volume is also complete. The longer
the distance between primary and secondary storage systems, the greater
this impact to application I/O, and therefore, application performance.
Asynchronous mirroring (Global Copy)
Copies data nonsynchronously and over longer distances than is possible
with the Metro Mirror feature. When operating in Global Copy mode, the
source volume sends a periodic, incremental copy of updated tracks to the
target volume instead of a constant stream of updates. This function causes
less impact to application writes for source volumes and less demand for
bandwidth resources. It allows for a more flexible use of the available
bandwidth.
The updates are tracked and periodically copied to the target volumes. As
a consequence, there is no guarantee that data is transferred in the same
sequence that was applied to the source volume.
To get a consistent copy of your data at your remote site, periodically
switch from Global Copy to Metro Mirror mode, then either stop the
application I/O or freeze data to the source volumes by using a manual
process with freeze and run commands. The freeze and run functions can
be used with external automation software such as Geographically
Dispersed Parallel Sysplex™ (GDPS®), which is available for IBM Z
environments, to ensure data consistency to multiple Metro Mirror volume
pairs in a specified logical subsystem.
Common options for Metro Mirror/Global Mirror and Global Copy
include the following modes:
Suspend and resume
If you schedule a planned outage to perform maintenance at your
remote site, you can suspend Metro Mirror/Global Mirror or
Global Copy processing on specific volume pairs during the
duration of the outage. During this time, data is no longer copied
to the target volumes. Because the primary storage system tracks
all changed data on the source volume, you can resume operations
later to synchronize the data between the volumes.
Copy out-of-synchronous data
You can specify that only data updated on the source volume
while the volume pair was suspended is copied to its associated
target volume.
Copy an entire volume or not copy the volume
You can copy an entire source volume to its associated target
volume to guarantee that the source and target volume contain the
same data. When you establish volume pairs and choose not to
copy a volume, a relationship is established between the volumes
but no data is sent from the source volume to the target volume. In
this case, it is assumed that the volumes contain the same data and
are consistent, so copying the entire volume is not necessary or
required. Only new updates are copied from the source to target
volumes.
Global Mirror
Provides a long-distance remote copy across two sites by using
asynchronous technology. Global Mirror processing is most often associated
with disaster recovery or disaster recovery testing. However, it can also be
used for everyday processing and data migration.
Global Mirror integrates both the Global Copy and FlashCopy functions.
The Global Mirror function mirrors data between volume pairs of two
storage systems over greater distances without affecting overall
performance. It also provides application-consistent data at a recovery (or
remote) site in a disaster at the local site. By creating a set of remote
volumes every few seconds, the data at the remote site is maintained to be
a point-in-time consistent copy of the data at the local site.
Global Mirror operations periodically start point-in-time FlashCopy
operations at the recovery site, at regular intervals, without disrupting the
I/O to the source volume, thus giving a continuous, near up-to-date data
backup. By grouping many volumes into a session that is managed by the
master storage system, you can copy multiple volumes to the recovery site
simultaneously maintaining point-in-time consistency across those
volumes. (A session contains a group of source volumes that are mirrored
asynchronously to provide a consistent copy of data at the remote site.
Sessions are associated with Global Mirror relationships and are defined
with an identifier [session ID] that is unique across the enterprise. The ID
identifies the group of volumes in a session that are related and that can
participate in the Global Mirror consistency group.)
Global Mirror supports up to 32 Global Mirror sessions per storage facility
image. Previously, only one session was supported per storage facility
image.
You can use multiple Global Mirror sessions to fail over only data assigned
to one host or application instead of forcing you to fail over all data if one
host or application fails. This process provides increased flexibility to
control the scope of a failover operation and to assign different options and
attributes to each session.
The DS CLI and DS Storage Manager display information about the
sessions, including the copy state of the sessions.
Practice copying and consistency groups
To get a consistent copy of your data, you can pause Global Mirror on a
consistency group boundary. Use the pause command with the secondary
storage option. (For more information, see the DS CLI Commands
reference.) After verifying that Global Mirror is paused on a consistency
boundary (state is Paused with Consistency), the secondary storage system
and the FlashCopy target storage system or device are consistent. You can
then issue either a FlashCopy or Global Copy command to make a practice
copy on another storage system or device. You can immediately resume
Global Mirror, without the need to wait for the practice copy operation to
finish. Global Mirror then starts forming consistency groups again. The
entire pause and resume operation generally takes just a few seconds.
Metro/Global Mirror
Provides a three-site, long-distance disaster recovery replication that
combines Metro Mirror with Global Mirror replication for both IBM Z and
open systems data. Metro/Global Mirror uses synchronous replication to
mirror data between a local site and an intermediate site, and
asynchronous replication to mirror data from an intermediate site to a
remote site.
In a three-site Metro/Global Mirror, if an outage occurs, a backup site is
maintained regardless of which one of the sites is lost. Suppose that an
outage occurs at the local site, Global Mirror continues to mirror updates
between the intermediate and remote sites, maintaining the recovery
capability at the remote site. If an outage occurs at the intermediate site,
data at the local storage system is not affected. If an outage occurs at the
remote site, data at the local and intermediate sites is not affected.
Applications continue to run normally in either case.
With the incremental resynchronization function enabled on a
Metro/Global Mirror configuration, if the intermediate site is lost, the local
and remote sites can be connected, and only a subset of changed data is
copied between the volumes at the two sites. This process reduces the
amount of data needing to be copied from the local site to the remote site
and the time it takes to do the copy.
Multiple Target PPRC
Provides an enhancement to disaster recovery solutions by allowing data
to be mirrored from a single primary site to two secondary sites
simultaneously. The function builds on and extends Metro Mirror and
Global Mirror capabilities. Various interfaces and operating systems
support the function. Disaster recovery scenarios depend on support from
controlling software such as Geographically Dispersed Parallel Sysplex
(GDPS) and IBM Copy Services Manager.
z/OS Global Mirror
If workload peaks, which might temporarily overload the bandwidth of the
Global Mirror configuration, the enhanced z/OS Global Mirror function
initiates a Global Mirror suspension that preserves primary site application
performance. If you are installing new high-performance z/OS Global
Mirror primary storage subsystems, this function provides improved
capacity and application performance during heavy write activity. This
enhancement can also allow Global Mirror to be configured to tolerate
longer periods of communication loss with the primary storage
subsystems. This enables the Global Mirror to stay active despite transient
channel path recovery events. In addition, this enhancement can provide
fail-safe protection against application system impact that is related to
unexpected data mover system events.
The z/OS Global Mirror function is an optional function.
z/OS Metro/Global Mirror Incremental Resync
z/OS Metro/Global Mirror Incremental Resync is an enhancement for
z/OS Metro/Global Mirror. z/OS Metro/Global Mirror Incremental Resync
can eliminate the need for a full copy after a HyperSwap® situation in
3-site z/OS Metro/Global Mirror configurations. The storage system
supports z/OS Metro/Global Mirror that is a 3-site mirroring solution that
uses IBM System Storage Metro Mirror and z/OS Global Mirror (XRC).
The z/OS Metro/Global Mirror Incremental Resync capability is intended
to enhance this solution by enabling resynchronization of data between
sites by using only the changed data from the Metro Mirror target to the
z/OS Global Mirror target after a HyperSwap operation.
If an unplanned failover occurs, you can use the z/OS Soft Fence function
to prevent any system from accessing data from an old primary PPRC site.
For more information, see the GDPS/PPRC Installation and Customization
Guide, or the GDPS/PPRC HyperSwap Manager Installation and Customization
Guide.
z/OS Global Mirror Multiple Reader (enhanced readers)
z/OS Global Mirror Multiple Reader provides multiple Storage Device
Manager readers that allow improved throughput for remote mirroring
configurations in IBM Z environments. z/OS Global Mirror Multiple
Reader helps maintain constant data consistency between mirrored sites
and promotes efficient recovery. This function is supported on the storage
system running in a IBM Z environment with version 1.7 or later at no
additional charge.
Interoperability with existing and previous generations of the
DS8000 series
All of the remote mirroring solutions that are documented in the sections above
use Fibre Channel as the communications link between the primary and secondary
storage systems. The Fibre Channel ports that are used for remote mirror and copy
can be configured as either a dedicated remote mirror link or as a shared port
between remote mirroring and Fibre Channel Protocol (FCP) data traffic.
The remote mirror and copy solutions are optional capabilities and are compatible
with previous generations of DS8000 series. They are available as follows:
v Metro Mirror indicator feature numbers 75xx and 0744 and corresponding
DS8000 series function authorization (2396-LFA MM feature numbers 75xx)
v Global Mirror indicator feature numbers 75xx and 0746 and corresponding
DS8000 series function authorization (2396-LFA GM feature numbers 75xx).
Global Copy is a non-synchronous long-distance copy option for data migration
and backup.
Disaster recovery through Copy Services
Through Copy Services functions, you can prepare for a disaster by backing up,
copying, and mirroring your data at local and remote sites.
Having a disaster recovery plan can ensure that critical data is recoverable at the
time of a disaster. Because most disasters are unplanned, your disaster recovery
plan must provide a way to recover your applications quickly, and more
importantly, to access your data. Consistent data to the same point-in-time across
all storage units is vital before you can recover your data at a backup (normally
your remote) site.
Most users use a combination of remote mirror and copy and point-in-time copy
(FlashCopy) features to form a comprehensive enterprise solution for disaster
recovery. In the event of a planned outage or an unplanned disaster, you can use
failover and failback modes as part of your recovery solution. Failover and failback
modes can reduce the synchronization time of remote mirror and copy volumes
after you switch between local (or production) and intermediate (or remote) sites
during an outage. Although failover transmits no data, it changes the status of a
device, and the status of the secondary volume changes to a suspended primary
volume. The device that initiates the failback command determines the direction of
the transmitted data.
Recovery procedures that include failover and failback modes use remote mirror
and copy functions, such as Metro Mirror, Global Copy, Global Mirror,
Metro/Global Mirror, Multiple Target PPRC, and FlashCopy.
Note: See the IBM DS8000 Command-Line Interface User's Guide for specific disaster
recovery tasks.
Data consistency can be achieved through the following methods:
Manually using external software (without Global Mirror)
You can use Metro Mirror, Global Copy, and FlashCopy functions to create
a consistent and restartable copy at your recovery site. These functions
require a manual and periodic suspend operation at the local site. For
instance, you can enter the freeze and run commands with external
automated software. Then, you can initiate a FlashCopy function to make a
consistent copy of the target volume for backup or recovery purposes.
Automation software is not provided with the storage system; it must be
supplied by the user.
Note: The freeze operation occurs at the same point-in-time across all
links and all storage systems.
Automatically (with Global Mirror and FlashCopy)
You can automatically create a consistent and restartable copy at your
intermediate or remote site with minimal or no interruption of
applications. This automated process is available for two-site Global Mirror
or three-site Metro/Global Mirror configurations. Global Mirror
operations automate the process of continually forming consistency groups.
It combines Global Copy and FlashCopy operations to provide consistent
data at the remote site. A master storage unit (along with subordinate
storage units) internally manages data consistency through consistency
groups within a Global Mirror configuration. Consistency groups can be
created many times per hour to increase the currency of data that is
captured in the consistency groups at the remote site.
Note: A consistency group is a collection of session-grouped volumes
across multiple storage systems. Consistency groups are managed together
in a session during the creation of consistent copies of data. The formation
of these consistency groups is coordinated by the master storage unit,
which sends commands over remote mirror and copy links to its
subordinate storage units.
If a disaster occurs at a local site with a two or three-site configuration,
you can continue production on the remote (or intermediate) site. The
consistent point-in-time data from the remote site consistency group
enables recovery at the local site when it becomes operational.
Resource groups for Copy Services scope limiting
Resource groups are used to define a collection of resources and associate a set of
policies relative to how the resources are configured and managed. You can define
a network user account so that it has authority to manage a specific set of
resources groups.
Copy Services scope limiting overview
Copy services scope limiting is the ability to specify policy-based limitations on
Copy Services requests. With the combination of policy-based limitations and other
inherent volume-addressing limitations, you can control which volumes can be in a
Copy Services relationship, which network users or host LPARs issue Copy
Services requests on which resources, and other Copy Services operations.
Use these capabilities to separate and protect volumes in a Copy Services
relationship from each other. This can assist you with multitenancy support by
assigning specific resources to specific tenants, limiting Copy Services relationships
so that they exist only between resources within each tenant's scope of resources,
and limiting a tenant's Copy Services operators to an "operator only" role.
When managing a single-tenant installation, the partitioning capability of resource
groups can be used to isolate various subsets of an environment as if they were
separate tenants. For example, to separate mainframes from distributed system
servers, Windows from UNIX, or accounting departments from telemarketing.
Using resource groups to limit Copy Service operations
Figure 6 illustrates one possible implementation of an exemplary environment that
uses resource groups to limit Copy Services operations. Two tenants (Client A and
Client B) are illustrated that are concurrently operating on shared hosts and
storage systems.
Each tenant has its own assigned LPARs on these hosts and its own assigned
volumes on the storage systems. For example, a user cannot copy a Client A
volume to a Client B volume.
Resource groups are configured to ensure that one tenant cannot cause any Copy
Services relationships to be initiated between its volumes and the volumes of
another tenant. These controls must be set by an administrator as part of the
configuration of the user accounts or access-settings for the storage system.
Figure 6. Implementation of multiple-client volume administration
Resource group functions provide additional policy-based limitations for users of
the DS8000 storage systems, which, in conjunction with the inherent
volume-addressing limitations, support secure partitioning of Copy Services
resources between user-defined partitions. The process of specifying the
appropriate limitations is completed by an administrator using resource group
functions.
Note: User and administrator roles for resource groups are the same user and
administrator roles used for accessing your DS8000 storage system. For example,
those roles include storage administrator, Copy Services operator, and physical
operator.
The process of planning and designing the use of resource groups for Copy
Services scope limiting can be complex. For more information on the rules and
policies that must be considered in implementing resource groups, see topics about
resource groups. For specific DS CLI commands used to implement resource
groups, see the IBM DS8000 Command-Line Interface User's Guide.
Comparison of Copy Services features
The Copy Services features aid in planning for disaster recovery.
Table 11 provides a brief summary of the characteristics of the Copy Services
features that are available for the storage system.
Table 11. Comparison of features

Multiple Target PPRC
    Description: Synchronous and asynchronous replication
    Advantages: Mirrors data from a single primary site to two secondary sites simultaneously.
    Considerations: Disaster recovery scenarios depend on support from controlling software such as Geographically Dispersed Parallel Sysplex (GDPS) and IBM Copy Services Manager.
Metro/Global Mirror
    Description: Three-site, long distance disaster recovery replication
    Advantages: A backup site is maintained regardless of which one of the sites is lost.
    Considerations: Recovery point objective (RPO) might grow if bandwidth capability is exceeded.
Metro Mirror
    Description: Synchronous data copy at a distance
    Advantages: No data loss, rapid recovery time for distances up to 300 km.
    Considerations: Slight performance impact.
Global Copy
    Description: Continuous copy without data consistency
    Advantages: Nearly unlimited distance, suitable for data migration, only limited by network and channel extenders capabilities.
    Considerations: Copy is normally fuzzy but can be made consistent through synchronization.
Global Mirror
    Description: Asynchronous copy
    Advantages: Nearly unlimited distance, scalable, and low RPO. The RPO is the time needed to recover from a disaster; that is, the total system downtime.
    Considerations: RPO might grow when link bandwidth capability is exceeded.
z/OS Global Mirror
    Description: Asynchronous copy controlled by IBM Z host software
    Advantages: Nearly unlimited distance, highly scalable, and very low RPO.
    Considerations: Additional host server hardware and software is required. The RPO might grow if bandwidth capability is exceeded or host performance might be impacted.

I/O Priority Manager
The performance group attribute associates the logical volume with a performance
group object. Each performance group has an associated performance policy which
determines how the I/O Priority Manager processes I/O operations for the logical
volume.
Note: The default setting for this feature is “disabled” and must be enabled for
use through either the DS8000 Storage Management GUI or the DS CLI.
The I/O Priority Manager maintains statistics for the set of logical volumes in each
performance group that can be queried. If management is performed for the
performance policy, the I/O Priority Manager controls the I/O operations of all
managed performance groups to achieve the goals of the associated performance
policies. The performance group defaults to 0 if not specified. Table 12 lists
performance groups that are predefined and have the associated performance
policies:

Table 12. Performance groups and policies
Performance group (1)   Performance policy   Performance policy description
0                       0                    No management
1-5                     1                    Fixed block high priority
6-10                    2                    Fixed block medium priority
11-15                   3                    Fixed block low priority
16-18                   0                    No management
19                      19                   CKD high priority 1
20                      20                   CKD high priority 2
21                      21                   CKD high priority 3
22                      22                   CKD medium priority 1
23                      23                   CKD medium priority 2
24                      24                   CKD medium priority 3
25                      25                   CKD medium priority 4
26                      26                   CKD low priority 1
27                      27                   CKD low priority 2
28                      28                   CKD low priority 3
29                      29                   CKD low priority 4
30                      30                   CKD low priority 5
31                      31                   CKD low priority 6
Note: (1) Performance group settings can be managed using the DS CLI.

Securing data
You can secure data with the encryption features that are supported by the storage
system.
Encryption technology has a number of considerations that are critical to
understand to maintain the security and accessibility of encrypted data. For
example, encryption must be enabled by feature code and configured to protect
data in your environment. Encryption also requires access to at least two external
key servers.
It is important to understand how to manage IBM encrypted storage and comply
with IBM encryption requirements. Failure to follow these requirements might
cause a permanent encryption deadlock, which might result in the permanent loss
of all key-server-managed encrypted data at all of your installations.
The storage system automatically tests access to the encryption keys every 8 hours
and access to the key servers every 5 minutes. You can verify access to key servers
manually, initiate key retrieval, and monitor the status of attempts to access the
key server.
Chapter 4. Planning the physical configuration
Physical configuration planning is your responsibility. Your technical support
representative can help you to plan for the physical configuration and to select
features.
This section includes the following information:
v Explanations for available features that can be added to the physical
configuration of your system model
v Feature codes to use when you order each feature
v Configuration rules and guidelines
Configuration controls
Indicator features control the physical configuration of the storage system.
These indicator features are for administrative use only. The indicator features
ensure that each storage system has a valid configuration. There is no charge for
these features.
Your storage system can include the following indicators:
Administrative indicators
If applicable, models also include the following indicators:
v IBM / Openwave alliance
v IBM / EPIC attachment
v IBM systems, including System p and IBM Z
v Lenovo System x and BladeCenter
v IBM storage systems, including IBM System Storage ProtecTIER®, IBM
Storwize®V7000, and IBM System Storage N series
v IBM SAN Volume Controller
v Linux
v VMware VAAI indicator
v Storage Appliance
Determining physical configuration features
You must consider several guidelines for determining and then ordering the
features that you require to customize your storage system. Determine the feature
codes for the optional features you select and use those feature codes to complete
your configuration.
Procedure
1. Calculate your overall storage needs, including the licensed functions.
The Copy Services and z-Synergy Services licensed functions are based on
usage requirements.
2. Determine the models that make up your storage system.
3. Order a primary and secondary management console for each storage system.
4. For each storage system, determine the storage features that you need.
a. Select the drive set feature codes and determine the amount of each feature that you need.
b. Select the storage enclosure feature codes and determine the amount that
you must order to enclose the drive sets that you are ordering.
5. Determine the I/O adapter features that you need for your storage system.
6. Determine the appropriate processor memory feature code that is needed.
7. Decide which power features that you must order.
8. Review the other features and determine which feature codes to order.
Management console features
Management consoles are required features for your storage system configuration.
The primary and secondary management console are included in the DS8882F
model 983.
Primary and secondary management consoles
The management console is the focal point for configuration, Copy Services
functions, remote support, and maintenance of your storage system.
The management consoles (also known as Hardware Management Consoles, or
HMCs) are dedicated appliances physically located inside the management
enclosure. The HMC proactively monitors the state of your storage system and
notifies you and IBM when service is required. It can also be connected to your
network for centralized management of your storage system by using the IBM DS
command-line interface (DS CLI) or storage management software through the
IBM DS Open API. (The DS8000 Storage Management GUI cannot be started from
the HMC.)
You can also use the DS CLI to control the remote access of your technical support
representative to the HMC.
The secondary HMC is a redundant management console for environments with
high-availability requirements, and is required for model 983.
Feature codes for management consoles
Use these feature codes to order management consoles (MCs) for each storage
system.
The primary management console is included by default in DS8882F storage
systems.
Storage features
You must select the storage features that you want on your storage system.
The storage features are separated into the following categories:
v Drive-set features
v Enclosure filler features
Feature codes for drive sets
Use these feature codes to order sets of encryption flash drives.
The flash drives can be installed only in High Performance Flash Enclosures Gen2.
The High Performance Flash Enclosure Gen2 pair can contain 16, 32, or 48 flash
drives. All flash drives in a High Performance Flash Enclosure Gen2 pair must be
the same type.
Table 14. Feature codes for flash-drive sets for High Performance Flash Enclosures Gen2
Feature code   Disk size   Drive type                    Drives per set   Drive speed in RPM (K=1000)   Encryption drive   RAID support
1610           400 GB      2.5-in. Flash Tier 0 drives   16               N/A                           Yes                5, 6, 10
1611           800 GB      2.5-in. Flash Tier 0 drives   16               N/A                           Yes                5, 6, 10
1612           1.6 TB      2.5-in. Flash Tier 0 drives   16               N/A                           Yes                6, 10 (notes 1, 2)
1613           3.2 TB      2.5-in. Flash Tier 0 drives   16               N/A                           Yes                6, 10 (notes 1, 2)
1623           3.8 TB      2.5-in. Flash Tier 1 drives   16               N/A                           Yes                6, 10 (notes 1, 2)
1624           7.6 TB      2.5-in. Flash Tier 2 drives   16               N/A                           Yes                6 (notes 1, 2)
Notes:
1. RAID 5 is not supported for drives larger than 1 TB, and requires a request for price quote (RPQ). For information, contact your sales representative.
2. RAID 6 is the default RAID type for all drives larger than 1 TB, and it is the only supported RAID type for 7.6 TB drives.
Storage-enclosure fillers
Storage-enclosure fillers fill empty drive slots in the storage enclosures. The fillers
ensure sufficient airflow across populated storage.
For High Performance Flash Enclosures Gen2, one filler feature provides a set of 16
fillers.
Feature codes for storage enclosure fillers
Use these feature codes to order filler sets for High Performance Flash Enclosures
Gen2.
Table 15. Feature codes for storage enclosures
Feature code   Description
1699           Filler set for 2.5-in. High Performance Flash Enclosures Gen2; includes 16 fillers
Configuration rules for storage features
Use the following general configuration rules and ordering information to help you
order storage features.
High Performance Flash Enclosures Gen2
Follow these configuration rules when you order storage features for storage
systems with High Performance Flash Enclosures Gen2.
Flash drive sets
The High Performance Flash Enclosure Gen2 pair requires a minimum of
one 16 flash-drive set.
Storage enclosure fillers
For the High Performance Flash Enclosures Gen2, one filler feature
provides a set of 16 fillers. If only one flash-drive set is ordered, then two
storage enclosure fillers are needed to fill the remaining 32 slots in the
High Performance Flash Enclosures Gen2 pair. If two drive sets are ordered
(32 drives), one filler set is required to fill the remaining 16 slots. Each drive
slot in a High Performance Flash Enclosures Gen2 must have either a flash
drive or a filler.
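As a quick planning aid, this rule can be expressed as a short calculation. The following Python sketch is illustrative only (the function name and error handling are assumptions, not part of any IBM tool); it assumes one High Performance Flash Enclosure Gen2 pair with 48 slots, populated by 16-drive sets and 16-filler sets as described above:

```python
def filler_sets_needed(drive_sets: int) -> int:
    """Return the number of 16-filler sets needed for one HPFE Gen2 pair (48 slots)."""
    if not 1 <= drive_sets <= 3:
        raise ValueError("an enclosure pair holds one to three 16-drive sets")
    return 3 - drive_sets

print(filler_sets_needed(1))  # 2 filler sets cover the remaining 32 slots
print(filler_sets_needed(2))  # 1 filler set covers the remaining 16 slots
print(filler_sets_needed(3))  # a fully populated pair needs no fillers
```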
Physical and effective capacity
Use the following information to calculate the physical and effective capacity of a
storage system.
To calculate the total physical capacity of a storage system, multiply each drive-set
feature by its total physical capacity and sum the values. For High Performance
Flash Enclosures Gen2, there are 16 identical flash drives per drive set, up to three
drive sets per enclosure pair.
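As an illustration of this calculation (not an IBM-provided tool), the following Python sketch multiplies each ordered drive-set feature by its per-set capacity and sums the values; the per-set capacities are taken from Table 16 later in this chapter, and the function name is hypothetical:

```python
# Physical capacity (TB) of one 16-drive flash-drive set, per Table 16 in this chapter.
SET_CAPACITY_TB = {
    "400 GB": 6.4, "800 GB": 12.8, "1.6 TB": 25.6,
    "3.2 TB": 51.2, "3.8 TB": 61.4, "7.6 TB": 123.0,
}

def total_physical_capacity_tb(ordered_sets: dict) -> float:
    """Sum the physical capacity contributed by each ordered drive-set feature."""
    return sum(SET_CAPACITY_TB[size] * quantity for size, quantity in ordered_sets.items())

# Example: a fully populated enclosure pair with three 1.6 TB drive sets.
print(total_physical_capacity_tb({"1.6 TB": 3}))  # 76.8 TB of physical capacity
```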
The logical configuration of your storage affects the effective capacity of the drive
set.
Specifically, effective capacities vary depending on the following configurations:
RAID type and spares
Drives in the DS8882F must be configured as RAID 5, RAID 6, or RAID 10
arrays before they can be used, and then spare drives are assigned. RAID
10 can offer better performance for selected applications, in particular, high
random, write content applications in the open systems environment.
RAID 6 increases data protection by adding an extra layer of parity over
the RAID 5 implementation.
Data format
Arrays are logically configured and formatted as fixed block (FB) or count
key data (CKD) ranks. Data that is accessed by open systems hosts or
Linux on IBM Z that support Fibre Channel protocol must be logically
configured as FB. Data that is accessed by IBM Z hosts with z/OS or
z/VM must be configured as CKD. Each RAID rank is divided into
equal-sized segments that are known as extents.
The storage administrator has the choice to create extent pools of different
extent sizes. The supported extent sizes for FB volumes are 1 GB or 16 MB. For
CKD volumes, the supported extent sizes are one 3390 Mod 1 (1113 cylinders) or
21 cylinders. An extent pool cannot have a mix of different extent sizes.
On prior models of DS8000 series, a fixed area on each rank was assigned to be
used for volume metadata, which reduced the amount of space available for use
by volumes. In the DS8880, there is no fixed area for volume metadata, and this
capacity is added to the space available for use. The metadata is allocated in the
storage pool when volumes are created and is referred to as the pool overhead.
The amount of space that can be allocated by volumes is variable and depends on
both the number of volumes and the logical capacity of these volumes. If thin
provisioning is used, then the metadata is allocated for the entire volume when the
volume is created, and not when extents are used, so over-provisioned
environments have more metadata.
Metadata is allocated in units that are called metadata extents, which are 16 MB for
FB data and 21 cylinders for CKD data. There are 64 metadata extents in each user
extent for FB and 53 for CKD. The metadata space usage is as follows:
v Each volume takes one metadata extent.
v Ten extents (or part thereof) for the volume take one metadata extent.
For example, both a 3390-3 and a 3390-9 volume each take two metadata extents
and a 128 GB FB volume takes 14 metadata extents.
A simple way of estimating the maximum space that might be used by volume
metadata is to use the following calculations:
FB Pool Overhead = (#volumes*2 + total volume extents/10) / 64, rounded up to the nearest integer
CKD Pool Overhead = (#volumes*2 + total volume extents/10) / 53, rounded up to the nearest integer
These calculations overestimate the space that is used by metadata by a small
amount, but the precise details of each volume do not need to be known.
Examples:
v For an FB storage pool with 6,190 extents in which you expect to use thin
provisioning and allocate up to 12,380 extents (2:1 overprovisioning) on 100
volumes, you would have a pool overhead of 23 extents: (100*2 + 12380/10)/64 = 22.46, rounded up to 23.
v For a CKD storage pool with 6,190 extents in which you expect to allocate all the
space on 700 volumes, you would have a pool overhead of 39 extents: (700*2 + 6190/10)/53 = 38.09, rounded up to 39.
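The two worked examples above can be reproduced with a short calculation. The following Python sketch simply encodes the estimation formulas given above; it is not a DS8000 utility, and the function and parameter names are assumptions:

```python
import math

def pool_overhead_extents(volumes: int, volume_extents: int, rank_type: str = "FB") -> int:
    """Estimate the maximum pool overhead, in extents, using the planning formulas above.

    FB pools have 64 metadata extents per user extent; CKD pools have 53.
    """
    metadata_extents_per_user_extent = 64 if rank_type == "FB" else 53
    return math.ceil((volumes * 2 + volume_extents / 10) / metadata_extents_per_user_extent)

# Examples from the text:
print(pool_overhead_extents(100, 12380, "FB"))   # 23 extents for the FB pool
print(pool_overhead_extents(700, 6190, "CKD"))   # 39 extents for the CKD pool
```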
RAID capacities for DS8882F
Use the following information to calculate the physical and effective capacity for
High Performance Flash Enclosures Gen2.
The default RAID type for all drives larger than 1 TB is RAID 6, and it is the only
RAID type supported for 7.6 TB drives. RAID 5 is not supported for drives larger
than 1 TB, and requires a request for price quote (RPQ). For information, contact
your sales representative.
Table 16. RAID capacities for High Performance Flash Enclosures Gen2
Effective capacity of one rank in number of extents. Columns show the array configuration: RAID-10 arrays (3+3 and 4+4), RAID-5 arrays (6+P and 7+P), and RAID-6 arrays (5+P+Q and 6+P+Q).

400 GB flash drives (6.4 TB physical capacity per flash drive set)
  Rank type    3+3      4+4      6+P      7+P      5+P+Q    6+P+Q
  FB Lg Ext    1049     1410     2132     2493     1771     2132
  FB Sm Ext    67170    90285    136507   159607   113393   136495
  CKD Lg Ext   1177     1582     2392     2797     1987     2392
  CKD Sm Ext   62388    83858    126797   148256   105328   126787

800 GB flash drives (12.8 TB physical capacity per flash drive set)
  Rank type    3+3      4+4      6+P      7+P      5+P+Q    6+P+Q
  FB Lg Ext    2133     2855     4300     5023     3578     4300
  FB Sm Ext    136542   182781   275254   321475   229015   275239
  CKD Lg Ext   2392     3203     4823     5633     4013     4823
  CKD Sm Ext   126821   169768   255651   298601   212705   255655

1.6 TB flash drives (25.6 TB physical capacity per flash drive set)
  Rank type    3+3      4+4      6+P      7+P      5+P+Q    6+P+Q
  FB Lg Ext    4301     5746     n/a      n/a      7191     8636
  FB Sm Ext    275284   367771   n/a      n/a      460243   552727
  CKD Lg Ext   4824     6445     n/a      n/a      8065     9686
  CKD Sm Ext   255684   341586   n/a      n/a      427475   513372

3.2 TB flash drives (51.2 TB physical capacity per flash drive set)
  Rank type    3+3      4+4      6+P      7+P      5+P+Q    6+P+Q
  FB Lg Ext    8637     11527    n/a      n/a      14417    17307
  FB Sm Ext    552771   737753   n/a      n/a      922733   1107703
  CKD Lg Ext   9687     12928    n/a      n/a      16170    19412
  CKD Sm Ext   513414   685225   n/a      n/a      857029   1028843

3.8 TB flash drives (61.4 TB physical capacity per flash drive set)
  Rank type    3+3      4+4      6+P      7+P      5+P+Q    6+P+Q
  FB Lg Ext    10371    13839    n/a      n/a      17308    20776
  FB Sm Ext    663766   885747   n/a      n/a      1107725  1329703
  CKD Lg Ext   11632    15522    n/a      n/a      19412    23302
  CKD Sm Ext   616506   822682   n/a      n/a      1028848  1235028

7.6 TB flash drives (123 TB physical capacity per flash drive set)
  Rank type    3+3      4+4      6+P      7+P      5+P+Q    6+P+Q
  FB Lg Ext    n/a      n/a      n/a      n/a      34650    41587
  FB Sm Ext    n/a      n/a      n/a      n/a      2217663  2661631
  CKD Lg Ext   n/a      n/a      n/a      n/a      38863    46643
  CKD Sm Ext   n/a      n/a      n/a      n/a      2059760  2472118
I/O adapter features
You must select the I/O adapter features that you want for your storage system.
The I/O adapter features are separated into the following categories:
v 2U I/O enclosure
v Flash RAID adapter pair
v Host adapters
v Host adapters Fibre Channel cables
I/O enclosure
The I/O enclosure holds the I/O adapters and provides connectivity between the
I/O adapters and the storage processors.
The I/O adapters in the I/O enclosures can be either Flash RAID adapters or host
adapters. Each I/O enclosure can support up to two Flash RAID adapters (one
pair), which are embedded into the PCIe adapter, and two or four host adapters
installed in pairs (not to exceed 16 host adapter ports).
Feature code for 2U I/O enclosure
Use this feature code to identify the 2U I/O enclosure for your storage system.
This I/O enclosure feature includes one 2U I/O enclosure. This feature supports
two or four 16 Gbps 4-port host adapters (installed in pairs).
Table 17. Feature codes for I/O enclosures
Feature code   Description
1305           2U I/O enclosure
Feature codes for Flash RAID adapter pairs
Use these feature codes to identify Flash RAID adapters for your storage system.
Table 18. Feature codes for Flash RAID adapter pairs
Feature code   Feature name                                               Description
3065           Base I/O expander for High Performance Flash Enclosures    Required to support one High Performance Flash
               Gen2 and host adapters (required)                          Enclosure Gen2 pair and two host adapters
3066           I/O expander for additional host adapters (optional)       Required to support more than two host adapters
Fibre Channel (SCSI-FCP and FICON) host adapters and
cables
You can order Fibre Channel host adapters for your storage-system configuration.
The Fibre Channel host adapters enable the storage system to attach to Fibre
Channel (SCSI-FCP) and FICON servers, and SAN fabric components. They are
also used for remote mirror and copy control paths between DS8000 series storage
systems. Fibre Channel host adapters are installed in the I/O enclosure.
Adapters are 4-port 16 Gbps.
Supported protocols include the following types:
v SCSI-FCP upper layer protocol (ULP) on point-to-point and fabric.
Note: The 16 Gbps adapter does not support arbitrated loop topology at any
speed.
v FICON ULP on point-to-point and fabric topologies.
A Fibre Channel cable is required to attach each Fibre Channel adapter port to a
server or fabric component port. The Fibre Channel cables can be 50-micron
(OM3 or higher grade, multimode) or 9-micron (single mode) cables.
Feature codes for Fibre Channel host adapters
Use these feature codes to order Fibre Channel host adapters for your storage
system.
Table 19. Feature codes for Fibre Channel host adapters
Feature code   Description                                                   Receptacle type
3354           4-port, 16 Gbps shortwave FCP and FICON host adapter, PCIe    LC
3454           4-port, 16 Gbps longwave FCP and FICON host adapter, PCIe     LC
Feature codes for Fibre Channel cables
Use these feature codes to order Fibre Channel cables to connect Fibre Channel
host adapters. Take note of the distance capabilities for cable types.
Fibre cable type     Distance limits relative to 16 Gbps
OM1 (62.5 micron)    Not recommended
OM2 (50 micron)      35 m, but not recommended
OM3 (50 micron)      100 m
OM4 (50 micron)      125 m
Configuration rules for host adapters
Use the following configuration rules and ordering information to help you order
host adapters.
When you configure your storage system, consider the following issues when you
order the host adapters:
v How many host adapters will I install? You can install either two or four in
fixed locations in pairs.
v How can I balance the host adapters across the storage system to help ensure
optimum performance?
v What host adapter configurations help ensure high availability of my data?
v How many and what type of cables do I need to order to support the host
adapters?
In addition, consider the following host adapter guideline.
v You can include a combination of Fibre Channel host adapters in one I/O
enclosure.
Ordering host adapter cables
For each host adapter, you must provide the appropriate fiber-optic cables.
Typically, to connect Fibre Channel host adapters to a server or fabric port, provide
the following cables:
v For shortwave Fibre Channel host adapters, provide a 50-micron multimode
OM3 or higher fiber-optic cable that ends in an LC connector.
v For longwave Fibre Channel host adapters, provide a 9-micron single mode OS1
or higher fiber-optic cable that ends in an LC connector.
These fiber-optic cables are available for order from IBM.
IBM Global Services Networking Services can assist with any unique cabling and
installation requirements.
Processor complex features
These features specify the number and type of core processors in the processor
complex. The DS8882F contains two processor enclosures (POWER8 servers) that
contain the processors and memory that drives all functions in the storage system.
Feature codes for Transparent cloud tiering adapters
Use these feature codes to order adapter pairs to enhance Transparent cloud tiering
connectivity for your storage system.
Transparent cloud tiering connectivity can be enhanced with 10 Gbps adapter pairs
to improve bandwidth for a native cloud storage tier in IBM Z environments.
Table 22. Feature codes for Transparent cloud tiering adapter pairs
Feature code    Description                                              Models
3600            2-port 10 Gbps SFP+ optical / 2-port 1 Gbps RJ-45        983
                copper longwave adapter pair for 2U processor complex
Feature codes for processor licenses
Use these feature codes to plan for and order processor licenses for your storage
system. You can order only one processor license per system.
Table 23. Feature codes for processor licenses
Feature code    Description                         Corequisite feature code for memory
4421            6-core POWER8 processor feature     4233, 4234, or 4235
Processor memory features
These features specify the amount of memory that you need depending on the
processors in the storage system.
Feature codes for system memory
Use these feature codes to order system memory for the DS8882F.
Note: Memory is not the same as cache. The amount of cache is less than the
amount of available memory. See the DS8000 Storage Management GUI.
The following list provides standard plug types and the countries in which they
are commonly used. You can use a plug standard that is not identified here as
common to your country. For example, NEMA L6-20P, RS 3720DP, or IEC309
locking plugs might be preferred and can be used in most countries.
    Bangladesh, Lesotho, Macao, Maldives, Namibia, Nepal, Pakistan, Samoa,
    South Africa, Sri Lanka, Swaziland
CEI 23-16
    Chile, Italy, Libyan Arab Jamahiriya
RS 3720DP
    United States, Canada
IEC 309
    Denmark, Liechtenstein
AS/NZS 3112
    Australia, Fiji, Kiribati, Nauru, New Zealand, Papua New Guinea
JIS C 8303 6-20P
    Japan
IEC 60320-2-2
    Worldwide
IRAM 2073
    Argentina, Paraguay, Uruguay
KSC 8305
    Korea (Democratic Peoples Republic of), Korea (Republic of)
IS 6538
    India
GB 2099.1, 1002
    China (SAR)
NBR 14136
    Brazil
CNS 10917-3
    Taiwan
SI 32
    Israel
SEV 1011
    Switzerland
Input voltage
The battery backup module distributes power that ranges from 200 V AC to 240 V
AC.
Feature codes for battery backup modules
Use these feature codes to identify battery backup modules.
Table 26. Feature codes for battery backup modules
Feature code    Description              Requirements
1057            Battery backup module    Two battery backup modules are included
                                         in the DS8882F model 983.
Configuration rules for power features
Ensure that you are familiar with the configuration rules and feature codes before
you order power features.
When you order power cord features, the following rules apply.
v You must order a minimum of two power cord features.
v You must select the power cord that is appropriate for the input voltage and
outlet type at the location of the frame in which the model 983 is installed.
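The two rules above, together with the 200 - 240 V AC input range noted earlier, can
be captured in a simple planning check. The following Python sketch is illustrative
only; the function name and inputs are assumptions, not part of any IBM
configuration tool.
    def check_power_plan(power_cord_features, site_voltage_vac):
        """Return a list of planning problems; an empty list means no issues found."""
        problems = []
        if power_cord_features < 2:
            problems.append("Order a minimum of two power cord features.")
        # The battery backup module distributes power in the 200 - 240 V AC range.
        if not 200 <= site_voltage_vac <= 240:
            problems.append("Site input voltage must be within 200 - 240 V AC.")
        return problems

    print(check_power_plan(power_cord_features=1, site_voltage_vac=120))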
Other configuration features
Features are available for shipping and setting up the storage system.
You can select shipping and setup options for the storage system. The following
list identifies optional feature codes that you can specify to customize or to receive
your storage system.
v BSMI certificate (Taiwan)
v Encryption not capable (China or Russia)
BSMI certificate (Taiwan)
The BSMI certificate for Taiwan option provides the required Bureau of Standards,
Metrology, and Inspection (BSMI) ISO 9001 certification documents for storage
system shipments to Taiwan.
If the storage system that you order is shipped to Taiwan, you must order this
option for each model that is shipped.
Feature code for BSMI certification documents (Taiwan)
Use this feature code to order the Bureau of Standards, Metrology, and
Inspection (BSMI) certification documents that are required when the storage
system is shipped to Taiwan.
Table 27. Feature code for the BSMI certification documents (Taiwan)
Feature code    Description
0400            BSMI certification documents
Non-encryption certification key (China and Russia)
The encryption not capable feature for China and Russia disables the encryption
capabilities of the storage system.
If the storage system that you order is shipped to China or Russia, you must order
this option to ensure that the non-encryption certification key is applied to comply
with government encryption requirements.
Feature code for non-encryption certification key (China and
Russia)
Use this feature code to order the non-encryption certification key that is
required when the storage system is shipped to China or Russia.
Table 28. Feature code for non-encryption certification key (China and Russia)
Feature code    Description
0403            Non-encryption certification key
Chapter 5. Planning use of licensed functions
Licensed functions are the operating system and functions of the storage system.
Required features and optional features are included.
IBM authorization for licensed functions is purchased as 533x or 904x machine
function authorizations. However, the licensed functions are storage models. For
example, the Base Function license is listed as a 533x or 904x model LF8. The 533x
or 904x machine function authorization features are for billing purposes only.
The following licensed functions are available:
Base Function
The Base Function license is required for each storage system.
z-synergy Services
The z-synergy Services include z/OS licensed features that are supported
on the storage system.
Copy Services
Copy Services features help you implement storage solutions to keep your
business running 24 hours a day, 7 days a week by providing data
duplication, data migration, and disaster recovery functions.
Copy Services Manager on Hardware Management Console
The Copy Services Manager on Hardware Management Console (CSM on
HMC) license enables IBM Copy Services Manager to run on the Hardware
Management Console, which eliminates the need to maintain a separate
server for Copy Services functions.
Licensed function indicators
Each licensed function indicator feature that you order on a base frame enables
that function at the system level.
After you receive and apply the feature activation codes for the licensed function
indicators, the licensed functions are enabled for you to use. The licensed function
indicators are also used for maintenance billing purposes.
Note: Retrieving feature activation codes is part of managing and activating your
licenses. Before you can logically configure your storage system, you must first
manage and activate your licenses.
Each licensed function indicator requires a corequisite 283x or 904x function
authorization. Function authorization establishes the extent of IBM authorization
for the licensed function before the feature activation code is provided by IBM.
Each function authorization applies only to the specific storage system (by serial
number) for which it was acquired. The function authorization cannot be
transferred to another storage system (with a different serial number).
License scope
Licensed functions are activated and enforced within a defined license scope.
License scope refers to the following types of storage and types of servers with
which the function can be used:
Fixed block (FB)
The function can be used only with data from Fibre Channel attached
servers. The Base Function, Copy Services, and Copy Services Manager on
the Hardware Management Console licensed functions are available within
this scope.
Count key data (CKD)
The function can be used only with data from FICON attached servers.
The Copy Services, Copy Services Manager on the Hardware Management
Console, and z-synergy Services licensed functions are available within this
scope.
Both FB and CKD (ALL)
The function can be used with data from all attached servers. The Base
Function, Copy Services, and Copy Services Manager on the Hardware
Management Console licensed functions are available within this scope.
Some licensed functions have multiple license scope options, while other functions
have only a single license scope.
You do not specify the license scope when you order function authorization feature
numbers. Feature numbers establish only the extent of the IBM authorization (in
terms of physical capacity), regardless of the storage type. However, if a licensed
function has multiple license scope options, you must select a license scope when
you initially retrieve the feature activation codes for your storage system. This
activity is performed by using the IBM Data storage feature activation (DSFA)
website (www.ibm.com/storage/dsfa).
Note: Retrieving feature activation codes is part of managing and activating your
licenses. Before you can logically configure your storage system, you must first
manage and activate your licenses.
When you use the DSFA website to change the license scope after a licensed
function is activated, a new feature activation code is generated. When you install
the new feature activation code into the storage system, the function is activated
and enforced by using the newly selected license scope. The increase in the license
scope (changing FB or CKD to ALL) is a nondisruptive activity. A reduction of the
license scope (changing ALL to FB or CKD) is a disruptive activity, which takes
effect at the next restart.
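The scope-change rule can be summarized in a few lines of code. The following
Python sketch is illustrative only; the function name and return strings are
examples and do not represent the DSFA website or any DS8000 interface.
    def scope_change_impact(current_scope, new_scope):
        """Classify a license scope change as nondisruptive or disruptive."""
        if new_scope == "ALL" and current_scope in ("FB", "CKD"):
            # Increasing the scope is a nondisruptive activity.
            return "nondisruptive"
        if current_scope == "ALL" and new_scope in ("FB", "CKD"):
            # Reducing the scope takes effect at the next restart.
            return "disruptive (takes effect at the next restart)"
        return "change not described by these rules"

    print(scope_change_impact("FB", "ALL"))   # nondisruptive
    print(scope_change_impact("ALL", "CKD"))  # disruptive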
Ordering licensed functions
After you decide which licensed functions to use with your storage system, you
are ready to order the functions.
About this task
Licensed functions are purchased as function authorization features.
To order licensed functions, use the following general steps:
Procedure
1. Required. Order the Base Function license to support the total physical capacity
of your storage system.
2. Optional. Order the z-synergy Services license to support the physical capacity
of all ranks that are formatted as CKD.
3. Optional. Order the Copy Services license to support the total usable capacity
of all volumes that are involved in one or more copy services functions.
Note: The Copy Services license is based on the usable capacity of volumes
and not on physical capacity. If overprovisioning is used on the DS8880 with a
significant amount of Copy Services functionality, then the Copy Services
license needs only to be equal to the total rank usable capacity (even if the
logical volume capacity of volumes in Copy Services is greater). For example, if
the total rank usable capacity of a DS8880 is 100 TB but there are 200 TB of thin
provisioning volumes in Metro Mirror, then only 100 TB of Copy Services
license is needed.
4. Optional. Order the Copy Services Manager on the Hardware Management
Console license to support the total usable capacity of all volumes that are
involved in one or more copy services functions.
Rules for ordering licensed functions
A Base Function license is required for every base frame. All other licensed
functions are optional and must have a capacity that is equal to or less than the
Base Function license.
For all licensed functions, you can combine feature codes to order the exact
capacity that you need. For example, if you require 160 TB of Base Function license
capacity, order 10 of feature code 8151 (10 TB each up to 100 TB capacity) and 4 of
feature code 8152 (15 TB each, for an extra 60 TB).
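The arithmetic in the 160 TB example can be generalized as shown in the following
Python sketch. The sketch is illustrative only; it assumes the two feature tiers from
the example (8151 at 10 TB up to 100 TB, 8152 at 15 TB above 100 TB) and does not
cover higher capacity tiers, so confirm the exact feature codes and tier boundaries
with IBM before ordering.
    import math

    def base_function_features(required_tb):
        """Split a required Base Function capacity into 8151/8152 feature counts."""
        counts = {}
        first_tier_tb = min(required_tb, 100)        # feature 8151 covers up to 100 TB
        counts["8151 (10 TB)"] = math.ceil(first_tier_tb / 10)
        if required_tb > 100:
            counts["8152 (15 TB)"] = math.ceil((required_tb - 100) / 15)
        return counts

    print(base_function_features(160))  # {'8151 (10 TB)': 10, '8152 (15 TB)': 4}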
When you calculate usable capacity for the Copy Services license, use the size of
each volume involved in a copy services relationship and multiply by the size of
each extent.
When you calculate physical capacity, consider the capacity across the entire
storage system, including the base frame and any expansion frames. To calculate
the physical capacity, use the following table to determine the total size of each
regular drive feature in your storage system, and then add all the values.
Table 29. Total physical capacity for drive-set features
Drive sizes    Total physical capacity    Drives per feature
The initial enablement of any optional DS8000 licensed function is a concurrent
activity (assuming that the appropriate level of microcode is installed on the
machine for the specific function). The removal of a DS8000 licensed function is a
nondisruptive activity but takes effect at the next machine IML.
If you have a licensed function and no longer want to use it, you can deactivate
the license in one of the following ways:
v Order an inactive or disabled license and replace the active license activation key
with the new inactive license activation key at the IBM Data storage feature
activation (DSFA) website (www.ibm.com/storage/dsfa).
v Go to the DSFA website and change the assigned value from the current number
of terabytes (TB) to 0 TB. This value, in effect, makes the feature inactive. If this
change is made, you can go back to DSFA and reactivate the feature, up to the
previously purchased level, without having to repurchase the feature.
Regardless of which method is used, the deactivation of a licensed function is a
nondisruptive activity, but takes effect at the next machine IML.
Note: Although you do not need to specify how the licenses are to be applied
when you order them, you must allocate the licenses to the storage image when
you obtain your license keys on the IBM Data storage feature activation (DSFA)
website (www.ibm.com/storage/dsfa).
Base Function license
The Base Function license provides essential functions for your storage system. A
Base Function license is required for each storage system.
The Base Function license is available for the following license scopes: FB and ALL
(both FB and CKD).
The Base Function license includes the following features:
v Database Protection
v Encryption Authorization
v Easy Tier
v I/O Priority Manager
v Operating Environment License (OEL)
v Thin Provisioning
The Base Function license feature codes are ordered in increments up to a specific
capacity. For example, if you require 160 TB of capacity, order 10 of feature code
8151 (10 TB each up to 100 TB capacity) and 4 of feature code 8152 (15 TB each, for
an extra 60 TB).
The Base Function license includes the following feature codes.
Table 30. Base Function license feature codes
Feature Code    Feature code for licensed function indicator
The Base Function license authorizes you to use the model configuration at a
specific capacity level. The Base Function license must cover the full physical
capacity of your storage system, which includes the physical capacity of any
expansion frames within the storage system. The license capacity must cover both
open systems data (fixed block data) and IBM Z data (count key data). All other
licensed functions must have a capacity that is equal to or less than the Base
Function license.
Note: Your storage system cannot be logically configured until you activate the
Base Function license. On activation, drives can be logically configured up to the
extent of the Base Function license authorization level.
As you add more drives to your storage system, you must increase the Base
Function license authorization level for the storage system by purchasing more
license features. Otherwise, you cannot logically configure the additional drives for
use.
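The coverage rule can be restated as a simple check. The following Python sketch is
illustrative only; the function name is an example and is not an IBM interface.
    def drives_configurable(physical_capacity_tb, base_license_tb):
        """True if the installed physical capacity is covered by the Base Function license."""
        return physical_capacity_tb <= base_license_tb

    # After adding drives, recheck coverage before attempting logical configuration.
    print(drives_configurable(physical_capacity_tb=180, base_license_tb=160))  # False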
Database Protection
The IBM Database Protection feature provides the highest level of protection for
Oracle databases by detecting corrupted Oracle data and preventing it from being
processed to storage.
The IBM Database Protection feature complies with the Oracle Hardware Assisted
Resilient Data (HARD) initiative, which provides end-to-end data protection
between an Oracle database and permanent storage devices.
Data must pass through many software and hardware layers on its way to storage.
On rare occasions, the data can become corrupted by a
malfunction in an intermediate layer. With the IBM Database Protection feature, an
IBM DS8000 model can validate whether Oracle data blocks are consistent using
the same logic that Oracle uses. This validation is done before the write request is
processed. You can designate how the transaction is managed: either rejected and
reported, or processed and reported.
Encryption Authorization
The Encryption Authorization feature provides data encryption by using IBM Full
Disk Encryption (FDE) and key managers, such as IBM Security Key Lifecycle
Manager.
The Encryption Authorization feature secures data at rest and offers a simple,
cost-effective solution for securely erasing any disk drive that is being retired or
re-purposed (cryptographic erasure). The storage system uses disks that have FDE
encryption hardware and can perform symmetric encryption and decryption of
data at full disk speed with no impact on performance.
IBM Easy Tier
Support for IBM Easy Tier is available with the IBM Easy Tier feature.
The Easy Tier feature enables the following modes:
v Easy Tier: automatic mode
v Easy Tier: manual mode
The feature enables the following functions for the storage type:
v Easy Tier application
v Easy Tier heat map transfer
v The capability to migrate volumes for logical volumes
v The reconfigure extent pool function of the extent pool
v The dynamic extent relocation with an Easy Tier managed extent pool
I/O Priority Manager
The I/O Priority Manager function can help you effectively manage quality of
service levels for each application running on your system. This function aligns
distinct service levels to separate workloads in the system to help maintain the
efficient performance of each DS8000 volume.
The I/O Priority Manager detects when a higher-priority application is hindered
by a lower-priority application that is competing for the same system resources.
This detection might occur when multiple applications request data from the same
drives. When I/O Priority Manager encounters this situation, it delays
lower-priority I/O data to assist the more critical I/O data in meeting its
performance targets.
Operating environment license
The operating environment model and features establish the extent of IBM
authorization for the use of the IBM DS operating environment.
Thin provisioning
Thin provisioning defines logical volume sizes that are larger than the physical
capacity installed on the system. The volume allocates capacity on an as-needed
basis as a result of host-write actions.
The thin provisioning feature enables the creation of extent space efficient logical
volumes. Extent space efficient volumes are supported for FB and CKD volumes
and are supported for all Copy Services functionality, including FlashCopy targets
where they provide a space efficient FlashCopy capability.
z-synergy Services license
The z-synergy Services license includes z/OS® features that are supported on the
storage system.
The z-synergy Services license is available for the following license scope: CKD.
The z-synergy Services license includes the following features:
v High Performance FICON for z Systems
v HyperPAV
v Parallel Access Volumes (PAV)
v Transparent cloud tiering
v z/OS Distributed Data Backup
The z-synergy Services license also includes the ability to attach FICON channels.
The z-synergy Services license feature codes are ordered in increments up to a
specific capacity. For example, if you require 160 TB of capacity, order 10 of feature
code 8351 (10 TB each up to 100 TB capacity), and 4 of feature code 8352 (15 TB
each, for an extra 60 TB).
The z-synergy Services license includes the feature codes listed in the following
table.
A z-synergy Services license is required for only the total physical capacity that is
logically configured as count key data (CKD) ranks for use with IBM Z host
systems.
Note: If z/OS Distributed Data Backup is being used on a system with no CKD
ranks, a 10 TB z-synergy Services license must be ordered to enable the FICON
attachment functionality.
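The sizing rule and the zDDB exception in the note can be combined as in the
following Python sketch. The sketch is illustrative only; the function name and
parameters are assumptions.
    def zsynergy_license_tb(ckd_physical_tb, uses_zddb):
        """Minimum z-synergy Services license capacity, in TB."""
        if ckd_physical_tb > 0:
            # License only the physical capacity that is configured as CKD ranks.
            return ckd_physical_tb
        # With no CKD ranks, zDDB still requires a 10 TB license for FICON attachment.
        return 10 if uses_zddb else 0

    print(zsynergy_license_tb(ckd_physical_tb=0, uses_zddb=True))    # 10
    print(zsynergy_license_tb(ckd_physical_tb=40, uses_zddb=False))  # 40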
High Performance FICON for z Systems
High Performance FICON for z Systems (zHPF) is an enhancement to the IBM
FICON architecture to offload I/O management processing from the z Systems
channel subsystem to the DS8880 Host Adapter and controller.
zHPF is an optional feature of the z Systems server and of the DS8880. Recent
enhancements to zHPF include Extended Distance Facility zHPF List Pre-fetch
support for IBM DB2® and utility operations, and zHPF support for sequential
access methods. All DB2 I/O is now zHPF-capable.
IBM HyperPAV
IBM HyperPAV associates the volumes with either an alias address or a specified
base logical volume number. When a host system requests IBM HyperPAV
processing and the processing is enabled, aliases on the logical subsystem are
placed in an IBM HyperPAV alias access state on all logical paths with a given
path group ID.
Parallel Access Volumes
The parallel access volumes (PAV) features establish the extent of IBM
authorization for the use of the parallel access volumes function.
Parallel Access Volumes (PAVs), also referred to as aliases, provide your system
with access to volumes in parallel when you use an IBM Z host.
A PAV capability represents a significant performance improvement by the storage
unit over traditional I/O processing. With PAVs, your system can access a single
volume from a single host with multiple concurrent requests.
Transparent cloud tiering
Transparent cloud tiering provides a native cloud storage tier for IBM Z
environments. Transparent cloud tiering moves data directly from the storage
system to cloud object storage, without sending data through the host.
Transparent cloud tiering provides cloud object storage (public, private, or
on-premises) as a secure, reliable, transparent storage tier that is natively integrated
with the storage system. Transparent cloud tiering on the storage system is fully
integrated with DFSMShsm, which reduces CPU utilization on the host when you
are migrating and recalling data in cloud storage. You can use the IBM Z host to
manage transparent cloud tiering and attach metadata to cloud objects.
The storage system supports the OpenStack Swift and Amazon S3 APIs. The
storage system also supports the IBM TS7700 as an object storage target and the
following cloud service providers:
v Amazon S3
v IBM Bluemix - Cloud Object Storage
v OpenStack Swift Based Private Cloud
z/OS Distributed Data Backup
z/OS Distributed Data Backup (zDDB) is a licensed feature on the base frame that
allows hosts, which are attached through a FICON interface, to access data on
fixed block (FB) volumes through a device address on FICON interfaces.
If zDDB is installed and enabled, and a volume group type specifies a FICON
interface, the volume group has implicit access to all FB logical volumes that are
configured, in addition to all CKD volumes that are specified in the volume group. Then,
with appropriate software, a z/OS host can complete backup and restore functions
for FB logical volumes that are configured on a storage system image for open
systems hosts.
Copy Services license
Copy Services features help you implement storage solutions to keep your business
running 24 hours a day, 7 days a week by providing data duplication, data
migration, and disaster recovery functions. The Copy Services license is based on
usable capacity of the volumes involved in Copy Services functionality.
The Copy Services license is available for the following license scopes: FB and ALL
(both FB and CKD).
The Copy Services license includes the following features:
v Global Mirror
v Metro Mirror
v Metro/Global Mirror
v Point-in-Time Copy/FlashCopy
v Safeguarded Copy
v z/OS Global Mirror
v z/OS Metro/Global Mirror Incremental Resync (RMZ)
The Copy Services license feature codes are ordered in increments up to a specific
capacity. For example, if you require 160 TB of capacity, order 10 of feature code
8251 (10 TB each up to 100 TB capacity), and 4 of feature code 8252 (15 TB each,
for an extra 60 TB).
The Copy Services license includes the following feature codes.
Table 32. Copy Services license feature codes
Feature Code    Feature code for licensed function indicator
The following ordering rules apply when you order the Copy Services license:
v The Copy Services license should be ordered based on the total usable capacity
of all volumes involved in one or more Copy Services relationships.
v The licensed authorization must be equal to or less than the total usable capacity
allocated to the volumes that participate in Copy Services operations.
v You must purchase features for both the source (primary) and target (secondary)
storage system.
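These rules, combined with the overprovisioning note under "Ordering licensed
functions", can be sketched as follows. The Python code is illustrative only; the
names are assumptions, and the same capacity must be licensed on both the source
and target systems.
    def copy_services_license_tb(cs_volume_usable_tb, rank_usable_tb):
        """Minimum Copy Services license per system (order on both source and target)."""
        # With overprovisioning, the license needs to cover only the rank usable
        # capacity, even if the logical capacity of the participating volumes is greater.
        return min(cs_volume_usable_tb, rank_usable_tb)

    print(copy_services_license_tb(cs_volume_usable_tb=200, rank_usable_tb=100))  # 100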
Remote mirror and copy functions
The Copy Services license establishes the extent of IBM authorization for the use of
the remote mirror and copy functions on your storage system.
The following functions are included:
v Metro Mirror
v Global Mirror
v Global Copy
v Metro/Global Mirror
v Multiple Target PPRC
FlashCopy function (point-in-time copy)
FlashCopy creates a copy of a source volume on the target volume. This copy is
called a point-in-time copy.
When you initiate a FlashCopy operation, a FlashCopy relationship is created
between a source volume and target volume. A FlashCopy relationship is a
"mapping" of the FlashCopy source volume and a FlashCopy target volume. This
mapping allows a point-in-time copy of that source volume to be copied to the
associated target volume. The FlashCopy relationship exists between this volume
pair from the time that you initiate a FlashCopy operation until the storage unit
copies all data from the source volume to the target volume or you delete the
FlashCopy relationship, if it is a persistent FlashCopy.
Safeguarded Copy
The Safeguarded Copy feature, available with the Copy Services license, creates
backups of data that you can restore to the source volume or a different volume.
The Safeguarded Copy feature creates safeguarded backups that are not accessible
by the host system and protects these backups from corruption that can occur in
the production environment. You can define a Safeguarded Copy schedule to create
multiple backups on a regular basis, such as hourly or daily. You can also restore a
backup to the source volume or to a different volume. A backup contains the same
metadata as the safeguarded source volume.
Safeguarded Copy can create backups with more frequency and capacity in
comparison to FlashCopy volumes. The creation of safeguarded backups also
impacts performance less than the multiple target volumes that are created by
FlashCopy.
With backups that are outside of the production environment, you can use the
backups to restore your environment back to a specified point in time. You can
also extract and restore specific data from the backup or use the backup to
diagnose production issues.
You cannot delete a safeguarded source volume before the safeguarded backups
are deleted. The maximum size of a backup is 16 TB.
z/OS Global Mirror
z/OS Global Mirror (previously known as Extended Remote Copy or XRC)
provides a long-distance remote copy solution across two sites for open systems
and IBM Z data with asynchronous technology.
z/OS Metro/Global Mirror Incremental Resync
z/OS Metro/Global Mirror Incremental Resync (RMZ) is an enhancement for z/OS
Global Mirror. z/OS Metro/Global Mirror Incremental Resync can eliminate the
need for a full copy after a HyperSwap situation in 3-site z/OS Global Mirror
configurations.
The storage system supports z/OS Metro/Global Mirror, which is a 3-site mirroring solution
that uses IBM System Storage Metro Mirror and z/OS Global Mirror (XRC). The
z/OS Metro/Global Mirror Incremental Resync capability is intended to enhance
this solution by enabling resynchronization of data between sites by using only the
changed data from the Metro Mirror target to the z/OS Global Mirror target after a
HyperSwap operation.
Copy Services Manager on the Hardware Management Console license
IBM Copy Services Manager facilitates the use and management of Copy Services
functions such as the remote mirror and copy functions (Metro Mirror and Global
Mirror) and the point-in-time function (FlashCopy). IBM Copy Services Manager is
available on the Hardware Management Console (HMC), which eliminates the
need to maintain a separate server for Copy Services functions.
The Copy Services Manager on Hardware Management Console (CSM on HMC)
license is available for the following license scopes: FB and ALL (both FB and
CKD).
The CSM on HMC license feature codes are ordered in increments up to a specific
capacity. For example, if you require 160 TB of capacity, order 10 of feature code
8451 (10 TB each up to 100 TB capacity), and 4 of feature code 8452 (15 TB each,
for an extra 60 TB).
The CSM on HMC license includes the following feature codes.
Feature Code    Feature code for licensed function indicator
8450            CSM on HMC - inactive
8451            CSM on HMC - 10 TB (up to 100 TB capacity)
8452            CSM on HMC - 15 TB (from 100.1 TB to 250 TB capacity)
8453            CSM on HMC - 25 TB (from 250.1 TB to 500 TB capacity)
Chapter 6. Delivery and installation requirements
You must ensure that you properly plan for the delivery and installation of your
storage system.
This information provides the following planning information for the delivery and
installation of your storage system:
v Planning for delivery of your storage system
v Planning the physical installation site
v Planning for power requirements
v Planning for network and communication requirements
For more information about the equipment and documents that IBM includes with
storage system shipments, see Appendix C, “IBM equipment and documents,” on
page 125.
Acclimation
Server and storage equipment must be gradually acclimated to the surrounding
environment to prevent condensation.
When server and storage equipment is shipped in a climate where the outside
temperature is below the dew point of the destination (indoor location), there is a
possibility that water condensation can form on the cooler inside and outside
surfaces of the equipment when the equipment is brought indoors.
Sufficient time must be allowed for the shipped equipment to gradually reach
thermal equilibrium with the indoor environment before you remove the shipping
bag and energize the equipment. Follow these guidelines to properly acclimate
your equipment:
v Leave the system in the shipping bag. If the installation or staging environment
allows it, leave the product in the full package to minimize condensation on or
within the equipment.
v Allow the packaged product to acclimate for 24 hours.1 If there are visible signs
of condensation (either external or internal to the product) after 24 hours,
acclimate the system without the shipping bag for an additional 12 - 24 hours or
until no visible condensation remains.
v Acclimate the product away from perforated tiles or other direct sources of
forced air convection to minimize excessive condensation on or within the
equipment.
1. Unless otherwise stated by product-specific installation instructions.
Note: Condensation is a normal occurrence, especially when you ship equipment
in cold-weather climates. All IBM® products are tested and verified to withstand
condensation that is produced under these circumstances. When sufficient time is
provided to allow the hardware to gradually acclimate to the indoor environment,
there should be no issues with long-term reliability of the product.
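The acclimation guidance can be restated as a small helper for installation
checklists. The following Python sketch is illustrative only; the function name is an
example.
    def acclimation_time(visible_condensation_after_24h):
        """Suggested acclimation period before the equipment is energized."""
        if visible_condensation_after_24h:
            # Continue without the shipping bag until no visible condensation remains.
            return "24 hours, then an additional 12 - 24 hours without the shipping bag"
        return "24 hours in the shipping bag"

    print(acclimation_time(visible_condensation_after_24h=True))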
Shipment weights and dimensions
You must ensure that your loading dock and receiving area can support the weight
and dimensions of the packaged storage system shipments.
The following table shows the final packaged dimensions and maximum packaged
weight of the storage system component shipments.
Table 34. Packaged dimensions and weight for storage systems (all countries)
Container            Packaged dimensions           Maximum packaged weight
DS8882F model 983    Height 1.49 m (58.7 in.)      338 kg (745 lb)
                     Width 1.05 m (41.3 in.)
                     Depth 1.30 m (51.2 in.)
Receiving delivery
The shipping carrier is responsible for delivering and unloading the storage system
as close to its final destination as possible. You must ensure that your loading
ramp and your receiving area can accommodate your storage system shipment.
About this task
Use the following steps to ensure that your receiving area and loading ramp can
safely accommodate the delivery of your storage system:
Procedure
1. Find out the packaged weight and dimensions of the shipping containers in
your shipment.
2. Ensure that your loading dock, receiving area, and elevators can safely support
the packaged weight and dimensions of the shipping containers.
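As a planning aid, the packaged dimensions and weight from Table 34 can be
compared with the limits of your receiving area. The following Python sketch is
illustrative only; the door and floor limits shown are example values, not
requirements.
    PACKAGED = {"height_m": 1.49, "width_m": 1.05, "depth_m": 1.30, "weight_kg": 338}

    def receiving_area_ok(door_height_m, door_width_m, floor_limit_kg):
        """True if the packaged shipment fits through the door and the floor can hold it."""
        return (door_height_m >= PACKAGED["height_m"]
                and door_width_m >= PACKAGED["width_m"]
                and floor_limit_kg >= PACKAGED["weight_kg"])

    print(receiving_area_ok(door_height_m=2.0, door_width_m=0.9, floor_limit_kg=500))  # False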
Installation site requirements
You must ensure that the location where you plan to install your storage system
meets all requirements.
Planning the rack configuration
Ensure that the rack where you plan to install your storage system meets the rack
requirements.
About this task
When you are planning the rack for your storage system, you must answer the
following questions that relate to rack specifications and available space:
v Where are you installing the storage system? The DS8882F model 983 is a rack
mountable system consisting of eight 2U modules. There are three different rack
scenarios:
– An existing IBM z14 Model ZR1 (z14 Model ZR1)
– An existing IBM LinuxONE Rockhopper II (z14 Model LR1)
– Other standard 19-inch wide rack that conforms to EIA 310D specifications: