Before using this information and the product it supports, read the information in “Safety and environmental notices” on page 136 and “Notices” on page 135.
This edition applies to version 9 of IBM® DS8000® and to all subsequent releases and modifications until otherwise
indicated in new editions.
Copyright International Business Machines Corporation 2019.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
Contents
About this book....................................................................................................vii
Who should use this book.......................................................................................................................... vii
Conventions and terminology.................................................................................................................... vii
Publications and related information.........................................................................................................vii
IBM Publications Center............................................................................................................................. xi
Feedback..................................................................................................................................................... xi
Summary of changes .......................................................................................... xiii
ESE capacity controls for thin provisioning......................................................................................... 37
IBM Easy Tier............................................................................................................................................. 38
VMware vStorage API for Array Integration support ...............................................................................40
Performance for IBM Z ..............................................................................................................................41
System memory................................................................................................................................... 64
Selecting power features...........................................................................................................................65
Power cords..........................................................................................................................................65
Power feature configuration rules....................................................................................... 66
Selecting other configuration features......................................................................................66
Rules for ordering licensed functions....................................................................................................... 71
Base Function license................................................................................................................................71
IBM Easy Tier........................................................................................................................................72
This book describes how to plan for a new installation of DS8900F. It includes information about planning
requirements and considerations, customization guidance, and configuration worksheets.
Who should use this book
This book is intended for personnel who are involved in planning. Such personnel include IT facilities managers and individuals responsible for power, cooling, wiring, network, and general site environmental planning and setup.
Conventions and terminology
Different typefaces are used in this guide to show emphasis, and various notices are used to highlight key
information.
The following typefaces are used to show emphasis:

Typeface | Description
Bold | Text in bold represents menu items.
bold monospace | Text in bold monospace represents command names.
Italics | Text in italics is used to emphasize a word. In command syntax, it is used for variables for which you supply actual values, such as a default directory or the name of a system.
Monospace | Text in monospace identifies the data or commands that you type, samples of command output, examples of program code or messages from the system, or names of command flags, parameters, arguments, and name-value pairs.

These notices are used to highlight key information:

Notice | Description
Note | These notices provide important tips, guidance, or advice.
Important | These notices provide information or advice that might help you avoid inconvenient or difficult situations.
Attention | These notices indicate possible damage to programs, devices, or data. An attention notice is placed before the instruction or situation in which damage can occur.
Publications and related information
Product guides, other IBM publications, and websites contain information that relates to the IBM DS8000
series.
To view a PDF file, you need Adobe Reader. You can download it at no charge from the Adobe website
(get.adobe.com/reader/).
The IBM DS8000 series online product documentation ( http://www.ibm.com/support/knowledgecenter/
ST5GLJ_8.1.0/com.ibm.storage.ssic.help.doc/f2c_securitybp.html) contains all of the information that is
required to install, configure, and manage DS8000 storage systems. The online documentation is updated
between product releases to provide the most current documentation.
Publications
You can order or download individual publications (including previous versions) that have an order
number from the IBM Publications Center website (https://www.ibm.com/e-business/linkweb/
publications/servlet/pbi.wss). Publications without an order number are available on the documentation
CD or can be downloaded here.
Table 1. DS8000 series product publications

Title | Description
IBM DS8900F Introduction and Planning Guide | This publication provides an overview of the new DS8900F, the latest storage system in the DS8000 series. The DS8900F provides two system types: DS8910F Flexibility Class models 993 and 994 and DS8950F Agility Class models 996 and E96.
IBM DS8882F Introduction and Planning Guide | This publication provides an overview of the DS8882F, the latest storage system in the DS8000 series. The DS8882F provides the new model 983. This publication provides an overview of the product and technical concepts for DS8882F.
IBM DS8880 Introduction and Planning Guide | This publication provides an overview of the product and technical concepts for DS8880. It also describes the ordering features and how to plan for an installation and initial configuration of the storage system.
IBM DS8870 Introduction and Planning Guide | This publication provides an overview of the product and technical concepts for DS8870. It also describes the ordering features and how to plan for an installation and initial configuration of the storage system.
Table 1. DS8000 series product publications (continued)

Title | Description
IBM DS8000 Command-Line Interface User's Guide | This publication describes how to use the DS8000 command-line interface (DS CLI) to manage DS8000 configuration and Copy Services relationships, and write customized scripts for a host system. It also includes a complete list of CLI commands with descriptions and example usage.
IBM DS8000 Host Systems Attachment Guide | This publication provides information about attaching hosts to the storage system. You can use various host attachments to consolidate storage capacity and workloads for open systems and IBM Z hosts.

The product publications also include a guide that provides an overview of the Representational State Transfer (RESTful) API, which provides a platform-independent means by which to initiate create, read, update, and delete operations in the DS8000 and supporting storage devices.
Table 2. DS8000 series warranty, notices, and licensing publications

Title | Location
IBM Warranty Information for DS8000 series |
IBM Safety Notices | IBM Systems Safety Notices
IBM Systems Environmental Notices |
International Agreement for Acquisition of Software Maintenance (Not all software will offer Software Maintenance under this agreement.) | IBM Support Portal website
IBM License Agreement for Machine Code | IBM Support Portal website

See the Agreements and License Information CD that was included with the DS8000 series for the following documents:
• License Information
• Notices and Information
• Supplemental Notices and Information
Related websites
View the websites in the following table to get more information about DS8000 series.

Table 3. DS8000 series related websites

Title | Description
IBM website (ibm.com®) | Find more information about IBM products and services.
IBM Support Portal website (https://www.ibm.com/support/docview.wss?uid=isg3T1025361) | Find support-related information such as downloads, documentation, troubleshooting, and service requests and PMRs.
IBM Directory of Worldwide Contacts website (www.ibm.com/planetwide) | Find contact information for general inquiries, technical support, and hardware and software support by country.
IBM DS8000 series website (www.ibm.com/servers/storage/disk/ds8000) | Find product overviews, details, resources, and reviews for the DS8000 series.
IBM Redbooks (www.redbooks.ibm.com/) | Find technical information developed and published by IBM International Technical Support Organization (ITSO).
IBM System Storage® Interoperation Center (SSIC) website (www.ibm.com/systems/support/storage/config/ssic) | Find information about host system models, operating systems, adapters, and switches that are supported by the DS8000 series.
IBM Data storage feature activation (DSFA) website (www.ibm.com/storage/dsfa) | Download licensed machine code (LMC) feature keys that you ordered for your DS8000 storage systems.
IBM Fix Central (www-933.ibm.com/support/fixcentral) | Download utilities such as the IBM Easy Tier® Heat Map Transfer utility and Storage Tier Advisor tool.
IBM Java™ SE (JRE) (www.ibm.com/developerworks/java/jdk) | Download IBM versions of the Java SE Runtime Environment (JRE), which is often required for IBM products.
Table 3. DS8000 series related websites (continued)

Title | Description
IBM Security Key Lifecycle Manager online product documentation (www.ibm.com/support/knowledgecenter/SSWPVP/) | This online documentation provides information about IBM Security Key Lifecycle Manager, which you can use to manage encryption keys and certificates.
IBM Spectrum Control online product documentation in IBM Knowledge Center (www.ibm.com/support/knowledgecenter) | This online documentation provides information about IBM Spectrum Control, which you can use to centralize, automate, and simplify the management of complex and heterogeneous storage environments including DS8000 storage systems and other components of your data storage infrastructure.
DS8880 Code Bundle Information website (www.ibm.com/support/docview.wss?uid=ssg1S1005392) | Find information about code bundles for DS8880. The version of the currently active installed code bundle displays with the DS CLI ver command when you specify the -l parameter.
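For example, you can display the currently active code bundle from the DS CLI (the fields that are returned depend on the installed release):

dscli> ver -l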
IBM Publications Center
The IBM Publications Center is a worldwide central repository for IBM product publications and
marketing material.
Procedure
• The IBM Publications Center website (ibm.com/shop/publications/order) offers customized search functions to help you find the publications that you need. You can view or download publications at no charge.
Sending comments
Your feedback is important in helping to provide the most accurate and highest quality information.
Procedure
To submit any comments about this publication or any other IBM storage product documentation:
• Send your comments by email to ibmkc@us.ibm.com. Be sure to include the following information:
– Exact publication title and version
– Publication form number (for example, GA32-1234-00)
– Page, table, or illustration numbers that you are commenting on
– A detailed description of any information that should be changed
Summary of changes
DS8000 Version 9 introduces the following new features.
Version 9
This table provides the current technical changes and enhancements to the IBM DS8000 as of October 25, 2019. Changed and new information is indicated by a vertical bar (|) to the left of the change.
Function | Description
DS8910F Flexibility Class models 993 and 994 | See “DS8910F Rack Mounted model 993” on page 4 and “DS8910F model 994” on page 6 for more information.
DS8950F Agility Class models 996 and E96 | See “DS8950F model 996” on page 6 for more information.
Support for POWER9 processors | See “Feature codes for processor licenses” on page 64 for more information.
New power cords | See “Feature codes for power cords” on page 65 for more information.
Support for 32 Gbps Fibre Channel host adapters | See “Feature codes for Fibre Channel host adapters” on page 61 for more information.
New Fibre Channel cables | See “Feature codes for Fibre Channel cables” on page 62 for more information.
Support for IBM Fibre Channel Endpoint Security | See “IBM Fibre Channel Endpoint Security” on page 75 for more information.
Chapter 1. Overview
The IBM DS8900F is a high-performance, high-capacity storage system that supports continuous
operation, data security, and data resiliency. For high-availability, the hardware components are
redundant.
The DS8900F provides two system types within the 533x all-flash machine type family:
DS8910F Flexibility Class
The DS8910F Flexibility Class Rack Mounted model 993 and model 994 strike the perfect balance of
performance, capacity, and cost, all delivered within a flexible space-saving footprint.
Rack Mounted model 993
DS8910F Flexibility Class Rack Mounted model 993 provides a modular rack-mountable
enterprise storage system within the 533x all-flash machine type family. The modular system can
be integrated into an existing IBM Z model ZR1, IBM LinuxONE Rockhopper II model LR1, or other
standard 19-inch wide rack that conforms to EIA 310D specifications. The DS8910F allows you to
take advantage of the DS8900F advanced features while limiting datacenter footprint and power
infrastructure requirements. The modular system contains processor nodes, an I/O Enclosure,
High Performance Flash Enclosures Gen2, and a Management Enclosure (which includes the
HMCs, Ethernet Switches, and RPCs). The DS8910F model 993 includes 8-core processors and is
scalable with up to 96 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives, up to 512 GB system
memory, and up to 32 host adapter ports.
Model 994
The DS8910F Flexibility Class model 994 includes 8-core processors and is scalable with up to
192 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives, up to 512 GB system memory, and up to 64
host adapter ports. The DS8910F model 994 includes a base frame with 40U capacity.
DS8950F Agility Class
The DS8950F Agility Class consolidates all your workloads for IBM Z®, IBM LinuxONE, IBM Power
System and distributed environments under a single all-flash storage solution.
Model 996 and E96
The DS8950F Agility Class model 996 and E96 includes up to 20-core processors and is scalable
with up to 384 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives, up to 2048 GB system memory,
and up to 128 host adapter ports. The DS8950F includes a base frame (model 996) and a one
expansion frame (model E96).
The DS8900F models 994, 996, and E96 feature new 19-inch wide frames with reduced footprint and
height.
Licensed functions are available in four groups:
Base Function
The Base Function license is required for each storage system. The licensed functions include
Encryption Authorization, Easy Tier, the Operating Environment License, and Thin Provisioning.
z-synergy Services
The z-synergy Services include z/OS® functions that are supported on the storage system. The
licensed functions include zHyperLink, transparent cloud tiering, High Performance FICON® for z
Systems®, HyperPAV, PAV, SuperPAV, zHyperWrite, and z/OS Distributed Data Backup.
Copy Services
Copy Services features help you implement storage solutions to keep your business running 24 hours
a day, 7 days a week by providing data duplication, data migration, and disaster recovery functions.
The licensed functions include Global Mirror, Metro Mirror, Metro/Global Mirror, Point-in-Time Copy/
FlashCopy®, z/OS Global Mirror, Safeguarded Copy, and z/OS Metro/Global Mirror Incremental Resync
(RMZ).
Copy Services Manager on Hardware Management Console
The Copy Services Manager on Hardware Management Console (CSM on HMC) license enables IBM
Copy Services Manager to run on the Hardware Management Console, which eliminates the need to
maintain a separate server for Copy Services functions. The CSM software license is required to
enable CSM usage on the HMC.
The storage system also includes features such as:
• POWER9™ processors
• Power-usage reporting
• National Institute of Standards and Technology (NIST) SP 800-131A enablement
Other functions that are supported in both the DS8000 Storage Management GUI and the DS command-line interface (DS CLI) include:
• Easy Tier
• Data encryption
• Thin provisioning
You can use the DS8000 Storage Management GUI and the DS command-line interface (DS CLI) to manage and logically configure the storage system.
Functions that are supported in only the DS command-line interface (DS CLI) include:
• Point-in-time copy functions with IBM FlashCopy
• Remote Mirror and Copy functions, including
– Metro Mirror
– Global Copy
– Global Mirror
– Metro/Global Mirror
– z/OS Global Mirror
– z/OS Metro/Global Mirror
– Multiple Target PPRC
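As an illustration only, the following DS CLI sketch shows the general form of commands that establish the point-in-time copy and Remote Mirror and Copy relationships listed above. The storage system ID and volume pairs are hypothetical, additional parameters might be required, and the exact syntax for each command is documented in the IBM DS8000 Command-Line Interface User's Guide.

dscli> mkflash 0100:0200
dscli> mkpprc -remotedev IBM.2107-75ABCD1 -type mmir 0100:0100

The first command creates a FlashCopy (point-in-time copy) relationship from source volume 0100 to target volume 0200; the second establishes a Metro Mirror pair between volume 0100 and a volume on a remote storage system.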
The storage system meets hazardous substances (RoHS) requirements by conforming to the following EC directives:
• Directive 2011/65/EU of the European Parliament and of the Council of 8 June 2011 on the restriction of the use of certain hazardous substances in electrical and electronic equipment. It has been demonstrated that the requirements specified in Article 4 are met.
• EN 50581:2012 technical documentation for the assessment of electrical and electronic products
regarding the restriction of hazardous substances.
The IBM Security Key Lifecycle Manager stores data keys that are used to secure the key hierarchy that is
associated with the data encryption functions of various devices, including the DS8000 series. It can be
used to provide, protect, and maintain encryption keys that are used to encrypt information that is written
to and decrypt information that is read from encryption-enabled disks. IBM Security Key Lifecycle
Manager operates on various operating systems.
Machine types overview
There are several machine type options available for the DS8000 series. Order a hardware machine type
for the storage system and a corresponding function authorization machine type for the licensed
functions that are planned for use.
The following tables list the available hardware machine types and their corresponding function
authorization machine types.
Table 4. Available hardware and function-authorization machine types that support all-flash system types

Hardware machine type | Available hardware models | Corresponding function authorization machine type | Available function authorization models
5331 (1-year warranty period) | 993, 994, 996 and E96 | 9046 (1-year warranty period) | FF8
5332 (2-year warranty period) | 993, 994, 996 and E96 | 9047 (2-year warranty period) | FF8
5333 (3-year warranty period) | 993, 994, 996 and E96 | 9048 (3-year warranty period) | FF8
5334 (4-year warranty period) | 993, 994, 996 and E96 | 9049 (4-year warranty period) | FF8

The machine types for the DS8000 series specify the service warranty period. The warranty is used for service entitlement checking when notifications for service are called home. All DS8000 series models report 2107 as the machine type to attached host systems.
Hardware
The architecture of the IBM DS8000 series is based on three major elements that provide function specialization and three tiers of processing power.
Figure 1 on page 4 illustrates the following elements:
• Host adapters manage external I/O interfaces that use Fibre Channel protocols for host-system
attachment and for replicating data between storage systems.
• Flash RAID adapters manage the internal storage devices. They also manage the SAS paths to drives,
RAID protection, and drive sparing.
• A pair of high-performance redundant active-active Power servers is functionally positioned between the adapters and is a key feature of the architecture.
The internal Power servers support the bulk of the processing to be done in the storage system. Each
Power server has multiple processor cores. The cores are managed as a symmetric multiprocessing
(SMP) pool of shared processing power to process the work that is done on the Power server. Each
Power server runs an AIX® kernel that manages the processors, manages processor memory as a data
cache, and more. For more information, see IBM DS8000 Architecture and Implementation on the IBM Redbooks website (www.redbooks.ibm.com/).
Figure 1. DS8000 series architecture
The DS8000 series architecture has the following major benets.
• Server foundation
– Promotes high availability and high performance by using field-proven Power servers
– Reduces custom components and design complexity
– Positions the storage system to reap the benets of server technology advances
• Operating environment
– Promotes high availability and provides a high-quality base for the storage system software through a
field-proven AIX operating-system kernel
– Provides an operating environment that is optimized for Power servers, including performance and
reliability, availability, and serviceability
– Provides shared processor (SMP) efficiency
– Reduces custom code and design complexity
– Uses Power firmware and software support for networking and service functions
DS8910F Rack Mounted model 993
The DS8910F Rack Mounted storage system is an entry-level, high-performance storage system that
includes only High Performance Flash Enclosures Gen2.
The DS8910F Rack Mounted storage system can be integrated into an IBM Z model ZR1 or IBM LinuxONE
Rockhopper II model LR1, or another standard 19-inch wide rack that conforms to EIA 310D specifications to take advantage of the DS8910F advanced features while limiting datacenter footprint
and power infrastructure requirements. This modular rack-mountable enterprise storage system features
8-core processors and supports one High Performance Flash Enclosure Gen2 pair with the model ZR1 or
LR1 installation, or up to two High Performance Flash Enclosure Gen2 pairs with the standard 19-inch wide
rack installation, with up to 96 Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives. The modular system
contains processor nodes, an I/O Enclosure, High Performance Flash Enclosures Gen2, and a
Management Enclosure (which includes the HMCs, Ethernet Switches, and RPCs). The HMCs are small
form factor computers. The DS8910F Rack Mounted storage system requires a minimum of 15U
contiguous rack space.
The standard 19-inch wide rack installation (feature code 0939) supports optional features that require
additional contiguous space:
• A second High Performance Flash Enclosure Gen2 pair (feature code 1605) requires an additional 4U
contiguous space.
• A 1U keyboard and display (feature code 1765) requires an additional 1U contiguous space. For
accessibility, the keyboard and display must be mounted at a height of 15 - 46 inches. If you add the
keyboard and display, ensure that you provide adequate space to accommodate them.
The DS8910F model 993 uses 16 and 32 Gbps Fibre Channel host adapters that run Fibre Channel
Protocol (FCP) or FICON protocol. The High Performance FICON (HPF) feature is also supported.
The DS8910F model 993 supports single-phase and three-phase power. The model 993 components all
have associated C13/C14 power cords for connection to iPDU features or to customer supplied rack
PDUs. If the location of the model 993 in relation to the rack C13 PDU receptacles requires 1 m jumper C13/C14 power cables, they are also provided. These cords are used to attach to the model ZR1 or LR1 PDU receptacles available in the rack.
The following tables list the hardware components and maximum capacities that are supported for the DS8910F model 993.

Table 5 lists the components for the DS8910F model 993, including the host adapters (4 port) and High Performance Flash Enclosure Gen2 pairs for each configuration.
Notes:
1. High Performance Flash Enclosures Gen2 are installed in pairs.
2. The standard 19-inch wide rack installation (feature code 0939) supports an optional second High Performance Flash Enclosure Gen2 pair (feature code 1605). The optional second High Performance Flash Enclosure Gen2 pair is not available with the model ZR1 installation (feature code 0937) or the model LR1 installation (feature code 0938).

Table 6. Maximum capacity for the DS8910F model 993

Processors | System memory | Maximum 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives | Maximum storage capacity for 2.5-in. flash drives
8-core | 192 GB | 96 | 1.47 PB
8-core | 512 GB | 96 | 1.47 PB
DS8910F (model 993) overview
The DS8910F Rack Mounted storage system model 993 consists of modules for installation in an existing
rack.
The model 993 includes the following components:
• High Performance Flash Enclosure Gen2 pair
Note: The standard 19-inch wide rack installation (feature code 0939) supports an optional second
High Performance Flash Enclosure Gen2 pair (feature code 1605). The optional second High
Performance Flash Enclosure Gen2 pair is not available with the model ZR1 installation (feature code
0937) or the model LR1 installation (feature code 0938).
• I/O enclosure pair
• Two processor nodes (available with POWER9 processors)
• Management enclosure
DS8910F model 994
The DS8910F model 994 is an entry-level, high-performance, high-capacity storage system that includes
only High Performance Flash Enclosures Gen2.
Model 994 features 8-core processors and is scalable and supports up to 192 Flash Tier 0, Flash Tier 1, or
Flash Tier 2 drives. The frame is 19 inches wide and 40U high. It supports up to four High Performance Flash Enclosure Gen2 pairs.
The DS8910F model 994 uses 16 or 32 Gbps Fibre Channel host adapters that run Fibre Channel Protocol
(FCP) or FICON®. The High Performance FICON (HPF) feature is also supported.
Model 994 supports single-phase and three-phase power.
The following tables list the hardware components and maximum capacities that are supported for the DS8910F model 994, depending on the amount of memory that is available.

Table 7 lists the components for the DS8910F model 994, including the host adapters (4 port) and High Performance Flash Enclosure Gen2 pairs for each configuration.
Note: High Performance Flash Enclosures Gen2 are installed in pairs.

Table 8. Maximum capacity for the DS8910F model 994

Processors | System memory | Maximum 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives | Maximum storage capacity for 2.5-in. flash drives (see note 1)
8-core | 192 GB | 192 | 2 PB
8-core | 512 GB | 192 | 2 PB

1. With 512 GB or less system memory, the maximum physical capacity is 2 PB when using large extents, or 512 TB when using small extents. With 512 GB or less system memory, the maximum virtual capacity for volume allocation (including overprovisioning) is 4 PB when using large extents, or 1 PB when using small extents (including Safeguarded Copy virtual capacity).
DS8910F (model 994) overview
The DS8910F model 994 includes a base frame.
The base frame includes the following components:
• High Performance Flash Enclosures Gen2
• I/O enclosure pair (optional second I/O enclosure pair)
• Processor nodes (available with POWER9 processors)
• Management enclosure
• Intelligent rack PDU (iPDU) pair
DS8950F model 996
DS8950F Agility Class consolidates all your workloads for IBM Z®, IBM LinuxONE, IBM Power® System and
distributed environments under a single all-flash storage solution.
DS8950F storage systems are scalable with up to 20-core processors, and up to 384 Flash Tier 0, Flash
Tier 1, or Flash Tier 2 drives. They are optimized and configured for cost. The frame is 19 inches wide and
40U high. They support the following High Performance Flash Enclosure Gen2 pairs:
• Up to four High Performance Flash Enclosure Gen2 pairs in the base frame (model 996).
• Up to four High Performance Flash Enclosure Gen2 pairs each in the first expansion frame (model E96).
The DS8950F uses 16 or 32 Gbps Fibre Channel host adapters that run Fibre Channel Protocol (FCP) or
FICON®. The High Performance FICON (HPF) feature is also supported.
The DS8950F supports single-phase and three-phase power.
The following tables list the hardware components and maximum capacities that are supported for the DS8950F models 996 and E96, depending on the amount of memory that is available.

Table 9, Components for the DS8950F models 996 and E96, lists the host adapters (4 port), High Performance Flash Enclosure Gen2 pairs, and expansion frames for each configuration.
Note: High Performance Flash Enclosures Gen2 are installed in pairs.

Table 10. Maximum capacity for the DS8950F models 996 and E96

Processors | System memory | Maximum 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives | Maximum storage capacity for 2.5-in. Flash Tier 0, Flash Tier 1, or Flash Tier 2 drives (see note 1)
10-core | 512 GB | 192 | 2 PB
20-core | 1024 GB | 384 | 5.9 PB
20-core | 2048 GB | 384 | 5.9 PB

1. With 512 GB or less system memory, the maximum physical capacity is 2 PB when using large extents, or 512 TB when using small extents. With 512 GB or less system memory, the maximum virtual capacity for volume allocation (including overprovisioning) is 4 PB when using large extents, or 1 PB when using small extents (including Safeguarded Copy virtual capacity).
DS8950F base frame (model 996) overview
The DS8950F includes a base frame (model 996).
The base frame includes the following components:
• High Performance Flash Enclosures Gen2
• I/O enclosure pair (optional second I/O enclosure pair)
• Processor nodes (available with POWER9 processors)
• Management enclosure
• Intelligent rack PDU (iPDU) pair (optional second iPDU pair)
DS8950F expansion frame (model E96) overview
The DS8950F supports an expansion frame (model E96) that can be added to a base frame (model 996).
An expansion frame requires a 20-core processor in the base frame. The expansion frame includes an
intelligent rack PDU (iPDU) pair, I/O enclosures, and you can add up to four High Performance Flash
Enclosure Gen2 pairs.
High Performance Flash Enclosures Gen2 pair
The High Performance Flash Enclosure Gen2 is a 2U storage enclosure that is installed in pairs.
The High Performance Flash Enclosure Gen2 pair contains the following hardware components:
• Two 2U 24-slot SAS flash drive enclosures. Each of the two enclosures contains the following
components:
– Two power supplies with integrated cooling fans
– Two SAS Expander Modules with two SAS ports each
– One midplane or backplane for plugging components, which provides maintenance of flash drives, Expander Modules, and power supplies
– Space for 24 2.5-inch flash drives or drive fillers
I/O enclosures
The I/O enclosure provides connectivity between the adapters and the processor node.
The I/O enclosure uses PCIe interfaces to connect I/O adapters in the I/O enclosure to both processor
nodes. A PCIe device is an I/O adapter or a processor node.
To improve I/O operations per second (IOPS) and sequential read/write throughput, the I/O enclosure is
connected to each processor node with a point-to-point connection.
The I/O enclosure contains the following adapters:
Flash interface connectors
Interface connector that provides PCIe cable connection from the I/O enclosure to the High
Performance Flash Enclosure Gen2.
Host adapters
An I/O enclosure can support up to 16 host ports.
Each of the four 16 or 32 Gbps Fibre Channel ports on a PCIe-attached host adapter can be
independently configured to use SCSI/FCP or FICON/zHPF protocols. Both longwave and shortwave adapter versions that support different maximum cable lengths are available. The host-adapter ports can be directly connected to attached host systems or storage systems, or connected to a storage
area network. SCSI/FCP ports are used for connections between storage systems. SCSI/FCP ports
that are attached to a SAN can be used for both host and storage system connections.
The High Performance FICON Extension (zHPF) protocol can be used by FICON host channels that
have zHPF support. The use of zHPF protocols provides a significant reduction in channel usage. This
reduction improves I/O input on a single channel and reduces the number of FICON channels that are
required to support the workload.
Processor nodes
The processor nodes drive all functions in the storage system. Each node consists of a Power server that
contains POWER9 processors and memory.
Management enclosure
The management enclosure houses the HMCs and the switch and power components that support them.
The management enclosure contains the following components:
• Two Hardware Management Consoles (HMCs)
• Two Ethernet switches
• Two power control cards
• Two power supply units (PSUs) to power the management enclosure
• One Local/Remote switch assembly
Management console
The management console is also referred to as the Hardware Management Console (or HMC). It supports
storage system hardware and firmware installation and maintenance activities.
The HMC connects to the customer network and provides access to functions that can be used to manage
the storage system. Management functions include logical configuration, problem notification, call home
for service, remote service, and Copy Services management. You can perform management functions
from the DS8000 Storage Management GUI, DS command-line interface (DS CLI), or other storage
management software that supports the storage system.
Ethernet switches
The Ethernet switches provide internal communication between the management consoles and the processor nodes. Two redundant Ethernet switches are provided.
Power
Intelligent rack PDUs (iPDUs) supply power to the storage system, and backup power modules (BPMs) provide power to the non-volatile dual in-line memory module (NVDIMM) when electrical power is removed.
Note: iPDUs are included with models 994, 996, and E96. The Rack Mounted model 993 standard 19-inch wide rack installation (feature code 0939) supports an optional iPDU.
iPDUs provide several benefits.
• IBM Active Energy Manager (AEM) support
• IBM Power Line Disturbance (PLD) compliance up to 20 milliseconds
• Individual outlet monitoring and control
• Firmware updates
• Circuit breaker protection
• Worldwide voltage/power connector support
BPMs retain NVDIMM data when electrical power is removed, either from an unexpected power loss, or
from a normal system shutdown. This improves data security, reliability, and recovery time.
Functional overview
The following list provides an overview of some of the features that are associated with DS8900F.
Note: Some storage system functions are not available or are not supported in all environments. See the
IBM System Storage Interoperation Center (SSIC) website (www.ibm.com/systems/support/storage/config/ssic) for the most current information on supported hosts, operating systems, adapters, and
switches.
Nondisruptive and disruptive activities
DS8900F supports hardware redundancy. It is designed to support nondisruptive changes: hardware
upgrades, repair, and licensed function upgrades. In addition, logical configuration changes can be
made nondisruptively. For example:
• The flexibility and modularity means that expansion frames can be added and usable storage
capacity can be increased within a frame without disrupting your applications.
• An increase in license scope is nondisruptive and takes effect immediately. A decrease in license
scope is also nondisruptive but does not take effect until the next IML.
• Easy Tier helps keep performance optimized by periodically redistributing data to help eliminate
drive hot spots that can degrade performance. This function helps balance I/O activity across the
drives in an existing drive tier. It can also automatically redistribute some data to new empty drives
added to a tier to help improve performance by taking advantage of the new resources. Easy Tier
does this I/O activity rebalancing automatically without disrupting access to your data.
The following examples include activities that are disruptive:
• The removal of an expansion frame from the base frame.
Energy reporting
You can use DS8900F to display the following energy measurements through the DS CLI:
• Average inlet temperature in Celsius
• Total data transfer rate in MB/s
• Timestamp of the last update for values
The derived values are averaged over a 5-minute period. For more information about energy-related
commands, see the commands reference.
You can also query power usage and data usage with the showsu command. For more information,
see the showsu description in the Command-Line Interface User's Guide.
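For example, a minimal sketch of querying these values from the DS CLI (any required storage-unit ID argument and the exact fields that are returned depend on your release):

dscli> showsu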
National Institute of Standards and Technology (NIST) SP 800-131A security enablement
NIST SP 800-131A requires the use of cryptographic algorithms that have security strengths of 112
bits to provide data security and data integrity for secure data that is created in the cryptoperiod
starting in 2014. The DS8900F is enabled for NIST SP 800-131A. Conformance with NIST SP
800-131A depends on the use of appropriate prerequisite management software versions and
appropriate configuration of the DS8900F and other network-related entities.
Storage pool striping (rotate capacity)
Storage pool striping is supported on the DS8000 series, providing improved performance. The
storage pool striping function stripes new volumes across all arrays in a pool. The striped volume
layout reduces workload skew in the system without requiring manual tuning by a storage
administrator. This approach can increase performance with minimal operator effort. With storage
pool striping support, the system automatically performs close to highest efficiency, which requires
little or no administration. The effectiveness of performance management tools is also enhanced
because imbalances tend to occur as isolated problems. When performance administration is
required, it is applied more precisely.
You can configure and manage storage pool striping by using the DS8000 Storage Management GUI,
DS CLI, and DS Open API. The rotate volumes allocation method is an alternative allocation method that tends to prefer volumes that are allocated to a single
managed array, and is not recommended. The rotate extents option (storage pool striping) is designed
to provide the best performance by striping volumes across arrays in the pool. Existing volumes can
be reconfigured nondisruptively by using manual volume migration and volume rebalance.
The storage pool striping function is provided with the DS8000 series at no additional charge.
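As an illustration only, the following DS CLI sketch creates fixed block volumes that use the rotate extents (storage pool striping) allocation method. The extent pool, capacity, and volume IDs are hypothetical, and the supported -eam values and other parameters should be confirmed in the Command-Line Interface User's Guide for your release.

dscli> mkfbvol -extpool P0 -cap 100 -eam rotateexts 1000-1003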
Performance statistics
You can use usage statistics to monitor your I/O activity. For example, you can monitor how busy the
I/O ports are and use that data to help manage your SAN. For more information, see documentation
about performance monitoring in the DS8000 Storage Management GUI.
Sign-on support that uses Lightweight Directory Access Protocol (LDAP)
The DS8000 system provides support for both unified sign-on functions (available through the
DS8000 Storage Management GUI), and the ability to specify an existing Lightweight Directory Access
Protocol (LDAP) server. The LDAP server can have existing users and user groups that can be used for
authentication on the DS8000 system.
Setting up unified sign-on support for the DS8000 system is achieved by using IBM Copy Services
Manager or IBM Spectrum Control.
Note: Other supported user directory servers include IBM Directory Server and Microsoft Active
Directory.
Easy Tier
Easy Tier is designed to determine the appropriate tier of storage based on data access requirements
and then automatically and nondisruptively move data, at the subvolume or sub-LUN level, to the
appropriate tier on the DS8000 system. Easy Tier is an optional feature that offers enhanced
capabilities through features such as auto-rebalancing, hot spot management, rank depopulation, and
manual volume migration.
Easy Tier enables the DS8900F system to automatically balance I/O access to drives to avoid hot
spots on arrays. Easy Tier can place data in the storage tier that best suits the access frequency of the
data. Highly accessed data can be moved nondisruptively to a higher tier, and likewise cooler data is
moved to a lower tier.
Easy Tier also can benefit homogeneous drive pools because it can move data away from over-utilized
arrays to under-utilized arrays to eliminate hot spots and peaks in drive response times.
z-synergy
The DS8900F storage system can work in cooperation with IBM Z hosts to provide the following
performance enhancement functions.
• Extended Address Volumes
• High Performance FICON for IBM Z
• Parallel Access Volumes and HyperPAV (also referred to as aliases)
• Quick initialization for IBM Z
• Transparent cloud tiering
• zHyperLink technology that speeds up transaction processing and improves active log throughput
• IBM Fibre Channel Endpoint Security
zHyperLink read and write function
zHyperLink is a short distance, mainframe-attached link that provides up to 10 times lower latency
than high-performance FICON. zHyperLink speeds Db2 for z/OS transaction processing and improves
active log throughput. zHyperLink technology delivers low latency when traditional SAN-attached
storage systems are used with its complementary short-distance link technology.
Low latencies are provided for read and write operations for storage systems within the 150-meter
distance requirement by using a point-to-point link from the z14 to the storage system I/O bay. Low
I/O latencies deliver value through improved workload-elapsed times and faster transactional
response times, and contribute to lower scaling costs. This storage system implementation of
zHyperLink I/O delivers service times fast enough to enable a synchronous I/O model in high-performance IBM Z servers.
Copy Services
The DS8900F storage system supports a wide variety of Copy Service functions, including Remote
Mirror, Remote Copy, and Point-in-Time functions. Key Copy Services functions include:
• FlashCopy
• Remote Pair FlashCopy (Preserve Mirror)
• Safeguarded Copy
• Remote Mirror and Copy:
– Metro Mirror
– Global Copy
– Global Mirror
– Metro/Global Mirror
– Multi-Target PPRC
– z/OS Global Mirror
– z/OS Metro/Global Mirror
Multitenancy support (resource groups)
Resource groups provide additional policy-based limitations. Resource groups, together with the
inherent volume addressing limitations, support secure partitioning of Copy Services resources
between user-defined partitions. The process of specifying the appropriate limitations is performed
by an administrator using resource groups functions. DS CLI support is available for resource groups
functions.
Multitenancy can be supported in certain environments without the use of resource groups, if the
following constraints are met:
• Either Copy Services functions are disabled on all DS8000 systems that share the same SAN (local
and remote sites) or the landlord configures the operating system environment on all hosts (or host
LPARs) attached to a SAN, which has one or more DS8000 systems, so that no tenant can issue
Copy Services commands.
• The z/OS Distributed Data Backup feature is disabled on all DS8000 systems in the environment
(local and remote sites).
• Thin provisioned volumes (ESE or TSE) are not used on any DS8000 systems in the environment
(local and remote sites).
• On zSeries systems there is only one tenant running in an LPAR, and the volume access is controlled
so that a CKD base volume or alias volume is only accessible by a single tenant’s LPAR or LPARs.
Restriction of hazardous substances (RoHS)
The DS8900F system meets RoHS requirements. It conforms to the following EC directives:
• Directive 2011/65/EU of the European Parliament and of the Council of 8 June 2011 on the
restriction of the use of certain hazardous substances in electrical and electronic equipment. It has
been demonstrated that the requirements specified in Article 4 have been met.
• EN 50581:2012 technical documentation for the assessment of electrical and electronic products
with respect to the restriction of hazardous substances.
Logical configuration
You can use either the DS8000 Storage Management GUI or the DS CLI to configure storage. Although the end result of storage configuration is similar, each interface has specific terminology, concepts, and procedures.
Note: LSS is synonymous with logical control unit (LCU) and subsystem identification (SSID).
Logical configuration with DS8000 Storage Management GUI
Before you configure your storage system, it is important to understand the storage concepts and sequence of system configuration.
Figure 2 on page 12 illustrates the concepts of configuration.
Figure 2. Logical configuration sequence
The following concepts are used in storage configuration.
Arrays
An array, also referred to as a managed array, is a group of storage devices that provides capacity for
a pool. An array generally consists of 8 drives that are managed as a Redundant Array of Independent
Disks (RAID).
Pools
A storage pool is a collection of storage that identies a set of storage resources. These resources
provide the capacity and management requirements for arrays and volumes that have the same
storage type, either fixed block (FB) or count key data (CKD).
Volumes
A volume is a fixed amount of storage on a storage device.
LSS
A logical subsystem (LSS) enables one or more host I/O interfaces to access a set of devices.
Hosts
A host is the computer system that interacts with the storage system. Hosts defined on the storage system are configured with a user-designated host type that enables the storage system to recognize
and interact with the host. Only hosts that are mapped to volumes can access those volumes.
Logical configuration of the storage system begins with managed arrays. When you create storage pools,
you assign the arrays to pools and then create volumes in the pools. FB volumes are connected through
host ports to an open systems host. CKD volumes require that logical subsystems (LSSs) be created as
well so that they can be accessed by an IBM Z host.
Pools must be created in pairs to balance the storage workload. Each pool in the pool pair is controlled by
a processor node (either Node 0 or Node 1). Balancing the workload helps to prevent one node from doing
most of the work and results in more efficient I/O processing, which can improve overall system
performance. Both pools in the pair must be formatted for the same storage type, either FB or CKD
storage. You can create multiple pool pairs to isolate workloads.
When you create a pair of pools, you can choose to automatically assign all available arrays to the pools,
or assign them manually afterward. If the arrays are assigned automatically, the system balances them
across both pools so that the workload is distributed evenly across both nodes. Automatic assignment
also ensures that spares and device adapter (DA) pairs are distributed equally between the pools.
If you are connecting to an IBM Z host, you must create a logical subsystem (LSS) before you can create
CKD volumes.
You can create a set of volumes that share characteristics, such as capacity and storage type, in a pool
pair. The system automatically balances the volumes between both pools. If the pools are managed by
Easy Tier, the capacity in the volumes is automatically distributed among the arrays. If the pools are not
managed by Easy Tier, you can choose to use the rotate capacity allocation method, which stripes
capacity across the arrays.
If the volumes are connecting to an IBM Z host, the next steps of the configuration process are completed
on the host.
If the volumes are connecting to an open systems host, map the volumes to the host, add host ports to
the host, and then map the ports to the I/O ports on the storage system.
FB volumes can only accept I/O from the host ports of hosts that are mapped to the volumes. Host ports
are zoned to communicate only with certain I/O ports on the storage system. Zoning is configured either
within the storage system by using I/O port masking, or on the switch. Zoning ensures that the workload
is spread properly over I/O ports and that certain workloads are isolated from one another, so that they
do not interfere with each other.
The workload enters the storage system through I/O ports, which are on the host adapters. The workload
is then fed into the processor nodes, where it can be cached for faster read/write access. If the workload
is not cached, it is stored on the arrays in the storage enclosures.
Logical configuration with DS CLI
Before you configure your storage system with the DS CLI, it is important to understand IBM terminology
for storage concepts and the storage hierarchy.
In the storage hierarchy, you begin with a physical disk. Logical groupings of eight disks form an array site.
Logical groupings of one array site form an array. After you define your array storage type as CKD or fixed block, you can create a rank. A rank is divided into a number of fixed-size extents. If you work with an
open-systems host, a large extent is 1 GiB, and a small extent is 16 MiB. If you work in an IBM Z
environment, a large extent is the size of an IBM 3390 Mod 1 disk drive (1113 cylinders), and a small
extent is 21 cylinders.
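For example, with these extent sizes, a 100 GiB open-systems volume occupies 100 large extents (1 GiB each) or 6,400 small extents (100 GiB = 102,400 MiB, and 102,400 / 16 = 6,400).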
After you create ranks, your physical storage can be considered virtualized. Virtualization dissociates your
physical storage configuration from your logical configuration, so that volume sizes are no longer
constrained by the physical size of your arrays.
The available space on each rank is divided into extents. The extents are the building blocks of the logical
volumes. An extent is striped across all disks of an array.
Extents of the same storage type are grouped to form an extent pool. Multiple extent pools can create
storage classes that provide greater flexibility in storage allocation through a combination of RAID types,
DDM size, DDM speed, and DDM technology. This configuration allows a differentiation of logical volumes
by assigning them to the appropriate extent pool for the needed characteristics. Different extent sizes for
the same device type (for example, count-key-data or fixed block) can be supported on the same storage
unit. The different extent types must be in different extent pools.
A logical volume is composed of one or more extents. A volume group specifies a set of logical volumes.
Identify different volume groups for different uses or functions (for example, SCSI target, remote mirror
and copy secondary volumes, FlashCopy targets, and Copy Services). Access to the set of logical volumes
that are identified by the volume group can be controlled. Volume groups map hosts to volumes. Figure 3 on page 15 shows a graphic representation of the logical configuration sequence.
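The following DS CLI sketch illustrates that sequence for fixed block storage. It is an illustration only: all IDs, names, and capacities are hypothetical, some commands require additional parameters (for example, a host type on mkhostconnect), and the exact syntax for each command is described in the IBM DS8000 Command-Line Interface User's Guide.

dscli> lsarraysite (list the array sites that are formed from groups of eight drives)
dscli> mkarray -raidtype 6 -arsite S1 (create a RAID 6 array from array site S1)
dscli> mkrank -array A0 -stgtype fb (create a fixed block rank from the array)
dscli> mkextpool -rankgrp 0 -stgtype fb fb_pool_0 (create an extent pool that is served by processor node 0)
dscli> chrank -extpool P0 R0 (assign the rank to the extent pool)
dscli> mkfbvol -extpool P0 -cap 100 1000-1003 (create four FB volumes; the capacity unit depends on the -type parameter)
dscli> mkvolgrp -type scsimask -volume 1000-1003 open_host_vg (create a volume group that contains the volumes)
dscli> mkhostconnect -wwname 10000000C9A1B2C3 -volgrp V0 host_1 (associate a host port WWPN with the volume group)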
When volumes are created, you must initialize logical tracks from the host before the host is allowed read
and write access to the logical tracks on the volumes. The Quick Initialization feature for open systems FB ESE volumes allows quicker access to logical volumes. The volumes include host volumes and source volumes that can be used in Copy Services relationships, such as FlashCopy or Remote Mirror and Copy relationships. This process dynamically initializes logical volumes when they are created or expanded, allowing them to be configured and placed online more quickly.
You can specify LUN ID numbers through the graphical user interface (GUI) for volumes in a map-type
volume group. You can create a new volume group, add volumes to an existing volume group, or add a
volume group to a new or existing host. Previously, gaps or holes in LUN ID numbers might result in a "map error" status. The Status field is eliminated from the volume groups main page in the GUI and the
volume groups accessed table on the Manage Host Connections page. You can also assign host
connection nicknames and host port nicknames. Host connection nicknames can be up to 28 characters,
which is expanded from the previous maximum of 12. Host port nicknames can be 32 characters, which
are expanded from the previous maximum of 16.
Figure 3. Logical configuration sequence
RAID implementation
RAID implementation improves data storage reliability and performance.
Redundant array of independent disks (RAID) is a method of configuring multiple drives in a storage
subsystem for high availability and high performance. The collection of two or more drives presents the
image of a single drive to the system. If a single device failure occurs, data can be read or regenerated
from the other drives in the array.
RAID implementation provides fault-tolerant data storage by storing the data in different places on
multiple drives. By placing data on multiple drives, I/O operations can overlap in a balanced way to
improve the basic reliability and performance of the attached storage devices.
Usable capacity for the storage system can be configured as RAID 5, RAID 6, or RAID 10. RAID 6 can offer
excellent performance for some applications, while RAID 10 can offer better performance for selected
applications, in particular, high random, write content applications in the open systems environment.
RAID 6 increases data protection by adding an extra layer of parity over the RAID 5 implementation.
RAID 6 is the recommended and default RAID type for all drives over 1 TB. RAID 6 and RAID 10 are the
only supported RAID types for 1.92 TB Flash Tier 2 and 3.84 TB Flash Tier 1 drives. RAID 6 is the only
supported RAID type for 7.68 TB and 15.36 TB Flash Tier 2 drives.
Note: RAID 5 is not supported for drives larger than 1 TB and requires a request for price quote (RPQ). For
information, contact your sales representative.
RAID 5 overview
RAID 5 is a method of spreading volume data across multiple drives.
RAID 5 increases performance by supporting concurrent accesses to the multiple drives within each
logical volume. Data protection is provided by parity, which is stored throughout the drives in the array. If
a drive fails, the data on that drive can be restored using all the other drives in the array along with the
parity bits that were created when the data was stored.
RAID 5 is not supported for drives larger than 1 TB and requires a request for price quote (RPQ). For
information, contact your sales representative.
Note: RAID 6 is the recommended and default RAID type for all drives over 1 TB. RAID 6 and RAID 10 are
the only supported RAID types for 1.92 TB Flash Tier 2 and 3.84 TB Flash Tier 1 drives. RAID 6 is the only
supported RAID type for 7.68 TB and 15.36 TB Flash Tier 2 drives.
RAID 6 overview
RAID 6 is a method of increasing the data protection of arrays with volume data spread across multiple
disk drives.
RAID 6 increases data protection by adding an extra layer of parity over the RAID 5 implementation. By
adding this protection, RAID 6 can restore data from an array with up to two failed drives. The calculation
and storage of extra parity slightly reduces the capacity and performance compared to a RAID 5 array.
Note: RAID 6 is the recommended and default RAID type for all drives over 1 TB. RAID 6 and RAID 10 are
the only supported RAID types for 1.92 TB Flash Tier 2 and 3.84 TB Flash Tier 1 drives. RAID 6 is the only
supported RAID type for 7.68 TB and 15.36 TB Flash Tier 2 drives.
RAID 10 overview
RAID 10 provides high availability by combining features of RAID 0 and RAID 1.
RAID 0 increases performance by striping volume data across multiple disk drives. RAID 1 provides disk
mirroring, which duplicates data between two disk drives. By combining the features of RAID 0 and RAID
1, RAID 10 provides a second optimization for fault tolerance.
RAID 10 implementation provides data mirroring from one disk drive to another disk drive. RAID 10
stripes data across half of the disk drives in the RAID 10 configuration. The other half of the array mirrors the first set of disk drives. Access to data is preserved if one disk in each mirrored pair remains available.
In some cases, RAID 10 offers faster data reads and writes than RAID 6 because it is not required to
manage parity. However, with half of the disk drives in the group used for data and the other half used to
mirror that data, RAID 10 arrays have less capacity than RAID 6 arrays.
Note: RAID 6 is the recommended and default RAID type for all drives over 1 TB. RAID 6 and RAID 10 are
the only supported RAID types for 1.92 TB Flash Tier 2 and 3.84 TB Flash Tier 1 drives. RAID 6 is the only
supported RAID type for 7.68 TB and 15.36 TB Flash Tier 2 drives.
Logical subsystems
To facilitate configuration of a storage system, volumes are partitioned into groups of volumes. Each
group is referred to as a logical subsystem (LSS).
As part of the storage configuration process, you can configure the maximum number of LSSs that you
plan to use. The storage system can contain up to 255 LSSs and each LSS can be connected to 16 other
LSSs using a logical path. An LSS is a group of up to 256 volumes that have the same storage type, either
count key data (CKD) for IBM Z hosts or fixed block (FB) for open systems hosts.
An LSS is uniquely identified within the storage system by an identifier that consists of two hex characters
(0-9 or uppercase A-F) with which the volumes are associated. A fully qualified LSS is designated by using the
storage system identifier and the LSS identifier, such as IBM.2107-921-12FA123/1E. The LSS identifiers
are important for Copy Services operations. For example, for FlashCopy operations, you specify the LSS
identifier when choosing source and target volumes because the volumes can span LSSs in a storage
system.
The storage system has a 64K volume address space that is partitioned into 255 LSSs, where each LSS
contains 256 logical volume numbers. The 255 LSS units are assigned to one of 16 address groups, where
each address group contains 16 LSSs, or 4K volume addresses.
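For example, volume number 1E07 identifies volume 0x07 in LSS 1E; because the first hexadecimal digit of
the LSS identifier is 1, LSS 1E and its volumes belong to address group 1.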
Storage system functions, including some that are associated with FB volumes, might have dependencies
on LSS partitions. For example:
• The LSS partitions and their associated volume numbers must identify volumes that are specified for
storage system Copy Services operations.
• To establish Remote Mirror and Copy pairs, a logical path must be established between the associated
LSS pair.
• FlashCopy pairs must reside within the same storage system.
If you increase storage system capacity, you can increase the number of LSSs that you have defined. This
modification to increase the maximum is a nonconcurrent action. If you might need capacity increases in
the future, leave the number of LSSs set to the maximum of 255.
Note: If you reduce the CKD LSS limit to zero for IBM Z hosts, the storage system does not process
Remote Mirror and Copy functions. The FB LSS limit must be no lower than eight to support Remote
Mirror and Copy functions for open-systems hosts.
Allocation methods
Allocation methods (also referred to as extent allocation methods) determine the means by which
provisioned capacity is allocated within a pool.
All extents of the ranks that are assigned to an extent pool are independently available for allocation to
logical volumes. The extents for a LUN or volume are logically ordered, but they do not have to come from
one rank and the extents do not have to be contiguous on a rank. This construction method of using fixed
extents to form a logical volume in the storage system allows flexibility in the management of the logical
volumes. You can delete volumes, resize volumes, and reuse the extents of those volumes to create other
volumes of different sizes. One logical volume can be deleted without affecting the other logical volumes
that are defined on the same extent pool.
Because the extents are cleaned after you delete a volume, it can take some time until these extents are
available for reallocation. The reformatting of the extents is a background process.
There are three allocation methods that are used by the storage system: rotate capacity (also referred to
as storage pool striping), rotate volumes, and managed.
Rotate capacity allocation method
The default allocation method is rotate capacity, which is also referred to as storage pool striping. The
rotate capacity allocation method is designed to provide the best performance by striping volume extents
across arrays in a pool. The storage system keeps a sequence of arrays. The first array in the list is
randomly picked at each power-on of the storage subsystem. The storage system tracks the array in
which the last allocation started. The allocation of a first extent for the next volume starts from the next
array in that sequence. The next extent for that volume is taken from the next rank in sequence, and so
on. The system rotates the extents across the arrays.
If you migrate a volume with a different allocation method to a pool that has the rotate capacity allocation
method, then the volume is reallocated. If you add arrays to a pool, the rotate capacity allocation method
reallocates the volumes by spreading them across both existing and new arrays.
You can configure and manage this allocation method by using the DS8000 Storage Management GUI and
DS CLI.
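For example, when you create volumes with the DS CLI, you can accept the default or request a specific
allocation method with the -eam parameter of the mkfbvol command; the extent pool, capacity, names, and
volume IDs shown here are placeholders:

dscli> mkfbvol -extpool P1 -cap 100 -name dbvol_1 -eam rotateexts 1000
dscli> mkfbvol -extpool P1 -cap 100 -name logvol_1 -eam rotatevols 1100

The rotateexts value corresponds to the rotate capacity (storage pool striping) method, and rotatevols
corresponds to the rotate volumes method.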
Rotate volumes allocation method
Volume extents can be allocated sequentially. In this case, all extents are taken from the same array until
there are enough extents for the requested volume size or the array is full, in which case the allocation
continues with the next array in the pool.
If more than one volume is created in one operation, the allocation for each volume starts in another
array. You might want to consider this allocation method when you prefer to manage performance
manually. The workload of one volume is allocated to one array. This method makes the identification of
performance bottlenecks easier; however, by putting all the volume data onto just one array, you might
introduce a bottleneck, depending on your actual workload.
Managed allocation method
When a volume is managed by Easy Tier, the allocation method of the volume is referred to as managed.
Easy Tier allocates the capacity in ways that might differ from both the rotate capacity and rotate volume
allocation methods.
Management interfaces
You can use various IBM storage management interfaces to manage your storage system.
These interfaces include DS8000 Storage Management GUI, DS Command-Line Interface (DS CLI), the DS
Open Application Programming Interface, DS8000 RESTful API, IBM Spectrum Control, and IBM Copy
Services Manager.
DS8000 Storage Management GUI
Use the DS8000 Storage Management GUI to configure and manage storage, and to monitor performance
and Copy Services functions.
DS8000 Storage Management GUI is a web-based GUI that is installed on the Hardware Management
Console (HMC). You can access the DS8000 Storage Management GUI from any network-attached
system by using a supported web browser. For a list of supported browsers, see “DS8000 Storage
Management GUI web browser support and configuration” on page 20.
You can access the DS8000 Storage Management GUI from a browser by using the following web
address, where HMC_IP is the IP address or host name of the HMC.
https://HMC_IP
If the DS8000 Storage Management GUI does not display as anticipated, clear the cache for your
browser, and try to log in again.
Notes:
• If the storage system is configured for NIST SP 800-131A security conformance, a version of Java that
is NIST SP 800-131A compliant must be installed on all systems that run the DS8000 Storage
Management GUI. For more information about security requirements, see information about configuring
your environment for NIST SP 800-131A compliance in the IBM DS8000 series online product
documentation ( http://www.ibm.com/support/knowledgecenter/ST5GLJ_8.1.0/
com.ibm.storage.ssic.help.doc/f2c_securitybp.html).
• User names and passwords are encrypted for HTTPS protocol. You cannot access the DS8000 Storage
Management GUI over the non-secure HTTP protocol (port 8451).
DS command-line interface
The IBM DS command-line interface (DS CLI) can be used to create, delete, modify, and view Copy
Services functions and the logical configuration of a storage system. These tasks can be performed either
interactively, in batch processes (operating system shell scripts), or in DS CLI script files. A DS CLI script
file is a text file that contains one or more DS CLI commands and can be issued as a single command. DS
CLI can be used to manage logical configuration, Copy Services configuration, and other functions for a
storage system, including managing security settings, querying point-in-time performance information or
status of physical resources, and exporting audit logs.
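For example, a DS CLI script file might contain a short list of query commands that are then run with a
single invocation of the dscli command; the HMC address, user ID, password, and file name in this sketch
are placeholders:

# listconfig.cli - example DS CLI script file
lssi
lsextpool
lsfbvol -extpool P1

dscli -hmc1 HMC_IP -user admin -passwd passw0rd -script listconfig.cli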
Note: Java™ 1.8 must be installed on systems that run the DS CLI.
The DS CLI provides a full-function set of commands to manage logical configurations and Copy Services
configurations. The DS CLI is available in the DS8000 Storage Management GUI. The DS CLI client can
also be installed on and is supported in many different environments, including the following platforms:
• AIX 6.1, 7.1, 7.2
• Linux: Red Hat Enterprise Linux [RHEL] 6 and 7
• Linux: SUSE Linux Enterprise Server [SLES] 11 and 12
• VMware ESX 5.5, 6 Console
• IBM i 7.1, 7.2
• Oracle Solaris 10 and 11
• Microsoft Windows Server 2008, 2012 and Windows 7, 8, 8.1, 10
Note: If the storage system is configured for NIST SP 800-131A security conformance, a version of Java
that is NIST SP 800-131A compliant must be installed on all systems that run DS CLI client. For more
information about security requirements, see documentation about configuring your environment for
NIST SP 800-131A compliance in IBM Knowledge Center (https://www.ibm.com/support/
knowledgecenter/ST5GLJ_8.5.0/com.ibm.storage.ssic.help.doc/f2c_securitybp_nist.html).
RESTful API
The RESTful API is an application on your storage system HMC for initiating simple storage operations
through the Web.
The RESTful (Representational State Transfer) API is a platform independent means by which to initiate
create, read, update, and delete operations in the storage system and supporting storage devices. These
operations are initiated with the HTTP commands: POST, GET, PUT, and DELETE.
The RESTful API is intended for development, testing, and debugging of the client management
infrastructures in your storage system. You can use the RESTful API with a CURL command or through
standard Web browsers. For instance, you can use the storage system with the RESTClient add-on.
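For example, you might use curl to request an authentication token and then query system resources. The
port, URI paths, and request body shown here are assumptions based on typical use of the RESTful API; see
the DS8000 RESTful API documentation for the exact resource URIs:

curl -k -X POST https://HMC_IP:8452/api/v1/tokens \
     -H "Content-Type: application/json" \
     -d '{"request":{"params":{"username":"admin","password":"passw0rd"}}}'

curl -k -X GET https://HMC_IP:8452/api/v1/systems \
     -H "X-Auth-Token: <token-from-previous-response>"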
IBM Spectrum Control
IBM Spectrum Control is an integrated software solution that can help you improve and centralize the
management of your storage environment through the integration of products. With IBM Spectrum
Control, it is possible to manage multiple DS8000 systems from a single point of control.
Note: IBM Spectrum Control is not required for the operation of a storage system. However, it is
recommended. IBM Spectrum Control can be ordered and installed as a software product on various
servers and operating systems. When you install IBM Spectrum Control, ensure that the selected version
supports the current system functions. Optionally, you can order a server on which IBM Spectrum Control
is preinstalled.
IBM Spectrum Control simplies storage management by providing the following benets:
• Centralizing the management of heterogeneous storage network resources with IBM storage
management software
• Providing greater synergy between storage management software and IBM storage devices
• Reducing the number of servers that are required to manage your software infrastructure
• Migrating from basic device management to storage management applications that provide higher-level
functions
For more information, see IBM Spectrum Control online product documentation in IBM Knowledge Center
(www.ibm.com/support/knowledgecenter).
IBM Copy Services Manager
IBM Copy Services Manager controls Copy Services in storage environments. Copy Services are features
that are used by storage systems, such as DS8000, to configure, manage, and monitor data-copy
functions.
IBM Copy Services Manager provides both a graphical interface and a command line that you can use for
configuring and managing Copy Services functions across storage units. Copy Services include the
point-in-time functions (IBM FlashCopy and Safeguarded Copy) and the remote mirror and copy functions
(Metro Mirror, Global Mirror, and Metro Global Mirror). Copy Services Manager can automate the
administration and configuration of these services, and monitor and manage copy sessions.
You can use Copy Services Manager to complete the following data replication tasks and help reduce the
downtime of critical applications:
• Plan for replication when you are provisioning storage
• Keep data on multiple related volumes consistent across storage systems for a planned or unplanned
outage
• Monitor and track replication operations
• Automate the mapping of source volumes to target volumes
Starting with DS8000 Version 8.1, Copy Services Manager also comes preinstalled on the Hardware
Management Console (HMC). Therefore, you can enable the Copy Services Manager software that is
already on the hardware system. Doing so results in less setup time and eliminates the need to maintain
a separate server for Copy Services functions.
You can also use Copy Services Manager to connect to an LDAP repository for remote authentication.
For more information, see the Copy Services Manager online product documentation at http://
www.ibm.com/support/knowledgecenter/SSESK4/csm_kcwelcome.html. The "What's new" topic
provides details on the features added for each version of Copy Services Manager that can be used by
DS8000, including HyperSwap for multi-target sessions, and incremental FlashCopy support.
DS8000 Storage Management GUI web browser support and configuration
To access the DS8000 Storage Management GUI, you must ensure that your web browser is supported
and has the appropriate settings enabled.
The DS8000 Storage Management GUI supports the following web browsers:
Table 11. Supported web browsers
DS8000 series version    Supported browsers
9.0                      Mozilla Firefox 68
                         Microsoft Internet Explorer 11
                         Microsoft Edge 44
                         Google Chrome 76
IBM supports higher versions of the browsers as long as the vendors do not remove or disable
functionality that the product relies upon. For browser levels higher than the versions that are certified
with the product, customer support accepts usage-related and defect-related service requests. As with
operating system and virtualization environments, if the support center cannot re-create the issue in our
lab, we might ask the client to re-create the problem on a certified browser version to determine
whether a product defect exists. Defects are not accepted for cosmetic differences between browsers or
browser versions that do not affect the functional behavior of the product. If a problem is identified in the
product, defects are accepted. If a problem is identified with the browser, IBM might investigate potential
solutions or workarounds that the client can implement until a permanent solution becomes available.
Enabling TLS 1.2 support
If the security requirements for your storage system require conformance with NIST SP 800-131A,
enable transport layer security (TLS) 1.2 on web browsers that use SSL/TLS to access the DS8000
Storage Management GUI. See your web browser documentation for instructions on enabling TLS 1.2. For
Internet Explorer, complete the following steps to enable TLS 1.2.
1. On the Tools menu, click Internet Options.
2. On the Advanced tab, under Settings, select Use TLS 1.2.
Note: Firefox, Release 24 and later, supports TLS 1.2. However, you must configure Firefox to enable TLS
1.2 support.
For more information about security requirements, see the IBM DS8000 series online product
documentation for security best practices.
Selecting browser security settings
You must select the appropriate web browser security settings to access the DS8000 Storage
Management GUI. In Internet Explorer, use the following steps.
1. On the Tools menu, click Internet Options.
2. On the Security tab, select Internet and click Custom level.
3. Scroll to Miscellaneous, and select Allow META REFRESH.
4. Scroll to Scripting, and select Active scripting.
Configuring Internet Explorer to access the DS8000 Storage Management GUI
If DS8000 Storage Management GUI is accessed through IBM Spectrum Control with Internet Explorer,
complete the following steps to properly configure the web browser.
1. Disable the Pop-up Blocker.
Note: If a message indicates that content is blocked because it is not signed by a valid security certificate,
click the Information Bar at the top and select Show blocked content.
2. Add the IP address of the DS8000 Hardware Management Console (HMC) to the Internet Explorer list
of trusted sites.
For more information, see your browser documentation.
Chapter 2. Hardware features
Use this information to assist you with planning, ordering, and managing your storage system.
The following table lists feature codes that are used to order hardware features for your system.
Table 12. Feature codes for hardware features
• 0200, Shipping weight reduction: Maximum shipping weight of any storage system base model or expansion model does not exceed 909 kg (2000 lb) each. Packaging adds 120 kg (265 lb).
• 0400, BSMI certification documents: Required when the storage system model is shipped to Taiwan.
• 0403, Non-encryption certification key: Required when the storage system model is shipped to China or Russia.
• 0937, zFlex Frame field merge: Indicates that the model 993 will be installed in an existing IBM Z model ZR1 rack.
• 0938, Rockhopper II field merge: Indicates that the model 993 will be installed in an existing IBM LinuxONE Rockhopper II model LR1 rack.
• 0939, Customer Rack field merge: Indicates that the model 993 will be installed in an existing standard 19-inch wide rack that conforms to EIA 310D specifications.
• 0990, On-site code load: Required to opt out of remote code load (feature code 0991). If selected, IBM modifies the relevant machine record accordingly.
• 0991, Remote code load (default): Indicates that the storage system code loads are done remotely. This is the default option between feature codes 0990 and 0991.
• 1014, Front and rear door lock and key kit.
• 1038, Single-phase power cord, 208 V, 30 A: NEMA L6-30P 2P+Gnd.
• 1039, Single-phase power cord, 250 V, 32 A: IEC 309 P+N+Gnd.
• 1040, Three-phase power cord, 250 V, 60 A: IEC 309 3P+Gnd (four-pin Delta).
• 1041, Three-phase power cord, 250 V, 32 A: IEC 309 3P+N+Gnd (five-pin Wye).
• 1042, Single-phase power cord, 250 V, 32 A: For use in Australia and New Zealand (not IEC 309).
• 1043, Single-phase power cord, 250 V, 30 A: For use in Korea.
• 1044, Single-phase power cord, 250 V, 32 A: IEC 309 P+N+Gnd (halogen free).
• 1765, 1U keyboard and display: Required for models 994 and 996. Optional for model 993. Not available with the model ZR1 installation (feature code 0937) or the model LR1 installation (feature code 0938).
• 1890, DS8000 Licensed Machine Code R9.0: Microcode bundle 89.x.xx.x for base models 993, 994, and 996.
• 1990, DS8000 Licensed Machine Code R9.0 indicator: Microcode bundle 89.x.xx.x for expansion model E96.
• 3353, Fibre Channel host-adapter pair: 4-port, 16 Gbps shortwave FCP and FICON host adapter PCIe.
• 3355, Fibre Channel host-adapter pair: 4-port, 32 Gbps longwave FCP and FICON host adapter PCIe.
• 3453, Fibre Channel host-adapter pair: 4-port, 16 Gbps longwave FCP and FICON host adapter PCIe.
• 3455, Fibre Channel host-adapter pair: 4-port, 32 Gbps shortwave FCP and FICON host adapter PCIe.
• 3500, zHyperLink I/O-adapter: Required for feature codes 1450, 1451, and 1452.
• 3602, Transparent cloud tiering adapter pair for 2U processor node (optional): 2-port 10 Gbps SFP+ optical/2-port 1 Gbps RJ-45 copper shortwave adapter pair for model 994.
• 3603, Transparent cloud tiering adapter pair for 4U processor node (optional): 2-port 10 Gbps SFP+ optical/2-port 1 Gbps RJ-45 copper shortwave adapter pair for model 996.
• 4341, 8-core POWER9 processor pair (8-core).
• 4342, 10-core POWER9 processor pair (10-core).
• 4343, Second 10-core POWER9 processor pair (20-core).
• 4450, 192 GB system memory (8-core).
• 4451, 512 GB system memory (8-core).
• 4452, 512 GB system memory (10-core).
• 4453, 1024 GB system memory (20-core).
• 4454, 2048 GB system memory (20-core).
Storage complexes
A storage complex is a set of storage units that are managed by management console units. Two
management console units are provided with a storage complex for redundancy.
Management console
The management console supports storage system hardware and firmware installation and maintenance
activities.
The management console is a dedicated processor unit that is located inside your storage system, and can
automatically monitor the state of your system, and notify you and IBM when service is required.
To provide continuous availability to the management-console functions, especially for storage
environments that use encryption, an additional management console is provided. For models 994 and
996, a 1U keyboard and display (feature code 1765) is required. The 1U keyboard and display is optional
for model 993.
Note: The 1U keyboard and display (feature code 1765) is not available with the model 993 ZR1
installation (feature code 0937) or LR1 installation (feature code 0938).
Hardware specifics
The storage system models offer a high degree of availability and performance through the use of
redundant components that can be replaced while the system is operating. You can use a storage system
model with a mix of different operating systems and clustered and nonclustered variants of the same
operating systems.
Contributors to the high degree of availability and reliability include the structure of the storage unit, the
host systems that are supported, and the memory and speed of the processors.
Storage system structure
The design of the storage system contributes to the high degree of availability. The primary components
that support high availability within the storage unit are the storage server, the processor node, and the
power control card.
Storage system
The storage unit contains a storage server and one or more pair of storage enclosures that are
packaged in one or more frames with associated power supplies, batteries, and cooling.
Storage server
The storage server consists of two processor nodes, two or more I/O enclosures, and a pair of power
control cards.
Processor node
The processor node controls and manages the storage server functions in the storage system. The
two processor nodes form a redundant pair such that if either processor node fails, the remaining
processor node controls and manages all storage server functions.
Power control card
A redundant pair of power control cards located in the management enclosure coordinates the power
management within the storage unit. The power control cards are attached to the service processors
in each processor node, the primary power supplies in each frame, and indirectly to the fan/sense
cards and storage enclosures in each frame.
Flash drives
The storage system provides you with a choice of drives.
The following drives are available:
• 2.5-inch Flash Tier 0 drives with FDE
– 800 GB
– 1.6 TB
– 3.2 TB
• 2.5-inch Flash Tier 1 drives with FDE
– 3.84 TB
• 2.5-inch Flash Tier 2 drives with FDE
– 1.92 TB
– 7.68 TB
– 15.36 TB
Note: Intermix of high performance drives (Flash Tier 0) with high capacity drives (Flash Tier 1 or Flash
Tier 2) is not supported within an enclosure pair.
Drive maintenance policy
A minimum of two spare drives are allocated in a device adapter loop.
Internal maintenance functions continuously monitor and report (by using the call home feature) to IBM
when the number of drives in a spare pool reaches a preset threshold. This design ensures continuous
availability of devices while protecting data and minimizing any service disruptions.
It is not recommended to replace a drive unless an event is generated indicating that service is needed.
Host attachment overview
The storage system provides various host attachments so that you can consolidate storage capacity and
workloads for open-systems hosts and IBM Z.
The storage system provides extensive connectivity using Fibre Channel adapters across a broad range of
server environments.
Host adapter intermix support
A maximum of four host adapters per I/O enclosure is supported, including 4-port 16 Gbps adapters and
4-port 32 Gbps adapters.
Models 993, 994, and 996
The following table shows the host adapter plug order.
Table 13. Plug order for 4-port HA slots for two and four I/O enclosures (slots C1 - C6)
For two I/O enclosures (all models):
• Top I/O enclosure 1: 3, 7, 1, 5
• Bottom I/O enclosure 3: 2, 8, 4, 6
For four I/O enclosures (model 996):
• Top I/O enclosure 1: 7, 15, 3, 11
• Bottom I/O enclosure 3: 5, 13, 1, 9
• Top I/O enclosure 2: 4, 12, 8, 16
• Bottom I/O enclosure 4: 2, 10, 6, 14
The following HA-type plug order is used during manufacturing when different types of HA cards are
installed.
1. 4-port 32 Gbps longwave host adapters
2. 4-port 32 Gbps shortwave host adapters
3. 4-port 16 Gbps longwave host adapters
4. 4-port 16 Gbps shortwave host adapters
Open-systems host attachment with Fibre Channel adapters
You can attach a storage system to an open-systems host with Fibre Channel adapters.
The storage system supports SAN speeds of up to 32 Gbps with the current 32 Gbps host adapters. The
storage system detects and operates at the greatest available link speed that is shared by both sides of
the system.
Fibre Channel technology transfers data between the sources and the users of the information. Fibre
Channel connections are established between Fibre Channel ports that reside in I/O devices, host
systems, and the network that interconnects them. The network consists of elements like switches,
bridges, and repeaters that are used to interconnect the Fibre Channel ports.
FICON attached IBM Z hosts overview
The storage system can be attached to FICON attached IBM Z host operating systems under specified
adapter configurations.
Each storage system Fibre Channel adapter has four ports. Each port has a unique worldwide port name
(WWPN). You can configure the port to operate with the FICON upper-layer protocol.
With Fibre Channel adapters that are configured for FICON, the storage system provides the
following configurations:
• Either fabric or point-to-point topologies
• A maximum of 32 ports on model 993, 64 ports on model 994, and 128 ports on model 996.
• A maximum of 509 logins per Fibre Channel port
• A maximum of 8,192 logins per storage system
• A maximum of 1,280 logical paths on each Fibre Channel port
• Access to all 255 control-unit images (65,280 CKD devices) over each FICON port
• A maximum of 512 logical paths per control unit image
Note: IBM z13® and IBM z14™ servers support 32,768 devices per FICON host channel, while IBM
zEnterprise® EC12 and IBM zEnterprise BC12 servers support 24,576 devices per FICON host channel.
Earlier IBM Z servers support 16,384 devices per FICON host channel. To fully access 65,280 devices, it
is necessary to connect multiple FICON host channels to the storage system. You can access the devices
through a Fibre Channel switch or FICON director to a single storage system FICON port.
The storage system supports current IBM Z host operating systems. For the most current information on
supported hosts, operating systems, adapters, and switches, go to the IBM System Storage Interoperation
Center (SSIC) website (www.ibm.com/systems/support/storage/config/ssic).
I/O load balancing
You can maximize the performance of an application by spreading the I/O load across processor nodes,
arrays, and device adapters in the storage system.
During an attempt to balance the load within the storage system, placement of application data is the
determining factor. The following resources are the most important to balance, roughly in order of
importance:
• Activity to the RAID drive groups. Use as many RAID drive groups as possible for the critical
applications. Most performance bottlenecks occur because a few drives are overloaded. Spreading an
application across multiple RAID drive groups ensures that as many drives as possible are available.
This is extremely important for open-system environments where cache-hit ratios are usually low.
• Activity to the nodes. When selecting RAID drive groups for a critical application, spread them across
separate nodes. Because each node has separate memory buses and cache memory, this maximizes
the use of those resources.
• Activity to the device adapters. When selecting RAID drive groups within a cluster for a critical
application, spread them across separate device adapters.
• Activity to the Fibre Channel ports. Use the IBM Multipath Subsystem Device Driver (SDD) or similar
software for other platforms to balance I/O activity across Fibre Channel ports.
Note: For information about SDD, see IBM Multipath Subsystem Device Driver User's Guide (http://
www-01.ibm.com/support/docview.wss?uid=ssg1S7000303). This document also describes the
product engineering tool, the ESSUTIL tool, which is supported in the pcmpath commands and the
datapath commands.
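For example, on hosts that use SDD, commands such as the following can be used to confirm that I/O
activity is spread across the available adapters and paths; the exact output depends on the host and
configuration:

datapath query adapter
datapath query device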
Storage consolidation
When you use a storage system, you can consolidate data and workloads from different types of
independent hosts into a single shared resource.
You can mix production and test servers in an open systems environment or mix open systems and IBM Z
hosts. In this type of environment, servers rarely, if ever, contend for the same resource.
Although sharing resources in the storage system has advantages for storage administration and resource
sharing, there are more implications for workload planning. The benet of sharing is that a larger resource
pool (for example, drives or cache) is available for critical applications. However, you must ensure that
uncontrolled or unpredictable applications do not interfere with critical work. This requires the same
workload planning that you use when you mix various types of work on a server.
If your workload is critical, consider isolating it from other workloads. To isolate the workloads, place the
data as follows:
• On separate RAID drive groups. Data for open systems or IBM Z hosts is automatically placed on
separate arrays, which reduces contention for drive use.
• On separate device adapters.
• In separate processor nodes, which isolate use of memory buses, microprocessors, and cache
resources. Before you decide, verify that the isolation of your data to a single node provides adequate
data access performance for your application.
Count key data
In count-key-data (CKD) disk data architecture, the data field stores the user data.
Because data records can be variable in length, in CKD they all have an associated count field that
indicates the user data record size. The key field enables a hardware search on a key. The commands
used in the CKD architecture for managing the data and the storage devices are called channel command
words.
Fixed block
In fixed block (FB) architecture, the data (the logical volumes) are mapped over fixed-size blocks or
sectors.
With an FB architecture, the location of any block can be calculated to retrieve that block. This
architecture uses tracks and cylinders. A physical disk contains multiple blocks per track, and a cylinder is
the group of tracks that exists under the disk heads at one point in time without performing a seek
operation.
T10 DIF support
American National Standards Institute (ANSI) T10 Data Integrity Field (DIF) standard is supported on
IBM Z for SCSI end-to-end data protection on fixed block (FB) LUN volumes. This support applies to the
IBM DS8900F unit (99x models). IBM Z support applies to FCP channels only.
IBM Z provides added end-to-end data protection between the operating system and the DS8900F unit.
This support adds protection information consisting of CRC (Cyclic Redundancy Checking), LBA (Logical
Block Address), and host application tags to each sector of FB data on a logical volume.
Data protection using the T10 Data Integrity Field (DIF) on FB volumes includes the following features:
• Ability to convert logical volume formats between standard and protected formats supported through
PPRC between standard and protected volumes
• Support for earlier versions of T10-protected volumes on the DS8900F with non T10 DIF-capable hosts
• Allows end-to-end checking at the application level of data stored on FB disks
• Additional metadata stored by the storage facility image (SFI) allows host adapter-level end-to-end
checking data to be stored on FB disks independently of whether the host uses the DIF format.
Notes:
• This feature requires changes in the I/O stack to take advantage of all the capabilities the protection
offers.
• T10 DIF volumes can be used by any type of Open host with the exception of iSeries, but active
protection is supported only for Linux on IBM Z or AIX on IBM Power Systems. The protection can only
be active if the host server has T10 DIF enabled.
• T10 DIF volumes can accept SCSI I/O of either T10 DIF or standard type, but if the FB volume type is
standard, then only standard SCSI I/O is accepted.
Logical volumes
A logical volume is the storage medium that is associated with a logical disk. It typically resides on two or
more hard disk drives.
For the storage unit, the logical volumes are defined at logical configuration time. For count-key-data
(CKD) servers, the logical volume size is defined by the device emulation mode and model. For fixed block
(FB) hosts, you can define each FB volume (LUN) with a minimum size of a single block (512 bytes) to a
maximum size of 2³² blocks or 16 TB.
A logical device that has nonremovable media has one and only one associated logical volume. A logical
volume is composed of one or more extents. Each extent is associated with a contiguous range of
addressable data units on the logical volume.
Allocation, deletion, and modication of volumes
Extent allocation methods (namely, rotate volumes and pool striping) determine the means by which
actions are completed on storage system volumes.
All extents of the ranks assigned to an extent pool are independently available for allocation to logical
volumes. The extents for a LUN or volume are logically ordered, but they do not have to come from one
rank and the extents do not have to be contiguous on a rank. This construction method of using fixed
extents to form a logical volume in the storage system allows flexibility in the management of the logical
volumes. You can delete volumes, resize volumes, and reuse the extents of those volumes to create other
volumes of different sizes. One logical volume can be deleted without affecting the other logical volumes
defined on the same extent pool.
Because the extents are cleaned (overwritten with zeros) after you delete a volume, it can take some time
until these extents are available for reallocation for volume-specific metadata. The reformatting of the
extents is a background process.
There are two extent allocation methods used by the storage system: rotate volumes and storage pool
striping (rotate extents).
Storage pool striping: extent rotation
The default storage allocation method is storage pool striping. The extents of a volume can be striped
across several ranks. The storage system keeps a sequence of ranks. The first rank in the list is randomly
picked at each power on of the storage subsystem. The storage system tracks the rank in which the last
allocation started. The allocation of a first extent for the next volume starts from the next rank in that
sequence. The next extent for that volume is taken from the next rank in sequence, and so on. The system
rotates the extents across the ranks.
If you migrate an existing non-striped volume to the same extent pool with a rotate extents allocation
method, then the volume is "reorganized." If you add more ranks to an existing extent pool, then
reorganizing the existing striped volumes spreads them across both existing and new ranks.
You can configure and manage storage pool striping by using the DS Storage Manager, DS CLI, and DS
Open API. The default of the extent allocation method (EAM) option that is allocated to a logical volume is
now rotate extents. The rotate extents option is designed to provide the best performance by striping
volume extents across ranks in an extent pool.
Managed EAM: Once a volume is managed by Easy Tier, the EAM of the volume is changed to managed
EAM, which can result in placement of the extents differing from the rotate volume and rotate extent
rules. The EAM only changes when a volume is manually migrated to a non-managed pool.
Rotate volumes allocation method
Extents can be allocated sequentially. In this case, all extents are taken from the same rank until there
are enough extents for the requested volume size or the rank is full, in which case the allocation
continues with the next rank in the extent pool.
If more than one volume is created in one operation, the allocation for each volume starts in another rank.
When allocating several volumes, rotate through the ranks. You might want to consider this allocation
method when you prefer to manage performance manually. The workload of one volume is going to one
rank. This method makes the identification of performance bottlenecks easier; however, by putting all the
volume data onto just one rank, you might introduce a bottleneck, depending on your actual workload.
LUN calculation
The storage system uses a provisioned capacity algorithm (calculation) to provide a logical unit number
(LUN).
In the storage system, usable storage capacities are expressed in powers of 10. Logical or effective
storage capacities (logical volumes, ranks, extent pools) and processor memory capacities are expressed
in powers of 2. Both of these conventions are used for logical volume effective storage capacities.
On open volumes with 512 byte blocks (including T10-protected volumes), you can specify an exact block
count to create a LUN. You can specify a standard LUN size (which is expressed as an exact number of
binary GiBs (2³⁰)) or you can specify an ESS volume size (which is expressed in decimal GiBs (10⁹)
accurate to 0.1 GB). The unit of storage allocation for fixed block open systems volumes is one extent.
The extent size for open volumes is either exactly 1 GiB or 16 MiB. Any logical volume that is not an
exact multiple of 1 GiB does not use all the capacity in the last extent that is allocated to the logical
volume. Supported block counts are from 1 to 4 194 304 blocks (2 binary TiB) in increments of one block.
Supported sizes are from 1 to 16 TiB in increments of 1 GiB. The supported ESS LUN sizes are limited to
the exact sizes that are specified from 0.1 to 982.2 GB (decimal) in increments of 0.1 GB and are rounded
up to the next larger 32 K byte boundary. The ESS LUN sizes do not result in standard LUN sizes.
Therefore, they can waste capacity. However, the unused capacity is less than one full extent. ESS LUN
sizes are typically used when volumes must be copied between the storage system and ESS.
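For example, with 1 GiB extents, a standard LUN of exactly 100 GiB consumes 100 extents with no wasted
capacity, while an ESS volume size of 8.6 GB (decimal, about 8.01 GiB) consumes 9 extents and leaves most
of the ninth extent unused.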
On open volumes with 520 byte blocks, you can select one of the supported LUN sizes that are used on
IBM i processors to create a LUN. The operating system uses 8 of the bytes in each block. This leaves 512
bytes per block for your data. Variable volume sizes are also supported.
Table 14 on page 33 shows the disk capacity for the protected and unprotected models. Logically
unprotecting a storage LUN allows the IBM i host to start system level mirror protection on the LUN. The
IBM i system level mirror protection allows normal system operations to continue running in the event of
a failure in an HBA, fabric, connection, or LUN on one of the LUNs in the mirror pair.
Note: On IBM i, logical volume sizes in the range 17.5 GB to 141.1 GB are supported as load source units.
Logical volumes smaller than 17.5 GB or larger than 141.1 GB cannot be used as load source units.
Table 14. Capacity and models of disk volumes for IBM i hosts running IBM i operating system
Size               Protected model    Unprotected model
8.5 GB             A01                A81
17.5 GB            A02                A82
35.1 GB            A05                A85
70.5 GB            A04                A84
141.1 GB           A06                A86
282.2 GB           A07                A87
1 GB to 2000 GB    099                050
On CKD volumes, you can specify an exact cylinder count or a standard volume size to create a LUN. The
standard volume size is expressed as an exact number of Mod 1 equivalents (which is 1113 cylinders).
The unit of storage allocation for CKD volumes is one CKD extent. The extent size for a CKD volume is
either exactly a Mod-1 equivalent (which is 1113 cylinders), or it is 21 cylinders when using the small-extents option. Any logical volume that is not an exact multiple of 1113 cylinders (1 extent) does not use
all the capacity in the last extent that is allocated to the logical volume. For CKD volumes that are created
with 3380 track formats, the number of cylinders (or extents) is limited to either 2226 (1 extent) or 3339
(2 extents). For CKD volumes that are created with 3390 track formats, you can specify the number of
cylinders in the range of 1 - 65520 (x'0001' - x'FFF0') in increments of one cylinder, for a standard (non-EAV) 3390. The allocation of an EAV volume is expressed in increments of 3390 mod1 capacities (1113
cylinders) and can be expressed as integral multiples of 1113 between 65,667 - 1,182,006 cylinders or
as the number of 3390 mod1 increments in the range of 59 - 1062.
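For example, a 3390 volume of 10,017 cylinders (9 x 1113 cylinders, a Mod 9 equivalent) might be created
with the DS CLI as follows; the extent pool and volume ID are placeholders, and the logical control unit for
LSS 02 is assumed to already exist:

dscli> mkckdvol -extpool P2 -cap 10017 -name zosvol_1 0200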
Extended address volumes for CKD
Count key data (CKD) volumes now support the additional capacity of 1 TB. The 1 TB capacity is an
increase in volume size from the previous 223 GB.
This increased provisioned capacity is referred to as extended address volumes (EAV) and is supported by
the 3390 Model A. Use a maximum size volume of up to 1,182,006 cylinders for the IBM z/OS. This
support is available to you for z/OS version 1.12, and later.
You can create a 1 TB IBM Z CKD volume. An IBM Z CKD volume is composed of one or more extents from
a CKD extent pool. CKD extents are 1113 cylinders in size. When you define an IBM Z CKD volume, you
must specify the number of cylinders that you want for the volume. The storage system and the z/OS have
limits for the CKD EAV sizes. You can dene CKD volumes with up to 1,182,006 cylinders, about 1 TB on
the DS8900F.
If the number of cylinders that you specify is not an exact multiple of 1113 cylinders, then some space in
the last allocated extent is wasted. For example, if you define 1114 or 3340 cylinders, 1112 cylinders are
wasted. For maximum storage efficiency, consider allocating volumes that are exact multiples of 1113
cylinders. In fact, multiples of 3339 cylinders should be considered for future compatibility. If you want to
use the maximum number of cylinders for a volume (that is 1,182,006 cylinders), you are not wasting
cylinders, because it is an exact multiple of 1113 (1,182,006 divided by 1113 is exactly 1062). This size
is also an even multiple (354) of 3339, a model 3 size.
Quick initialization
Quick initialization improves device initialization speed and allows a Copy Services relationship to be
established after a device is created.
Quick volume initialization for IBM Z environments is supported. This support helps users who frequently
delete volumes by reconfiguring capacity without waiting for initialization. Quick initialization initializes
the data logical tracks or blocks within a specified extent range on a logical volume with the appropriate
initialization pattern for the host.
Normal read and write access to the logical volume is allowed during the initialization process. Therefore,
the extent metadata must be allocated and initialized before the quick initialization function is started.
Depending on the operation, the quick initialization can be started for the entire logical volume or for an
extent range on the logical volume.
Chapter 3. Data management features
The storage system is designed with many management features that allow you to securely process and
access your data according to your business needs, 24 hours a day and 7 days a week.
This section contains information about the data management features in your storage system. Use the
information in this section to assist you in planning, ordering licenses, and in the management of your
storage system data management features.
Transparent cloud tiering
Transparent cloud tiering is a licensed function that enables volume data to be copied and transferred to
cloud storage. DS8000 transparent cloud tiering is a feature in conjunction with z/OS and DFSMShsm that
provides server-less movement of archive and backup data directly to an object storage solution.
Offloading the movement of the data from the host to the DS8000 unlocks DFSMShsm efficiencies and
saves z/OS CPU cycles.
DFSMShsm has been the leading z/OS data archive solution for over 30 years. Its architecture is designed
and optimized for tape as the medium to which the data is transferred and archived.
Due to this architectural design point, there are inherent inefficiencies that consume host CPU cycles,
including the following examples:
Movement of data through the host
All of the data must move from the disk through the host and out to the tape device.
Dual Data Movement
DSS must read the data from the disk and then pass the data from DSS to HSM, which then moves the
data from the host to the tape.
16K block sizes
HSM separates the data within z/OS into small 16K blocks.
Recycle
When a tape is full, HSM must continually read the valid data from that tape volume and write it to a
new tape.
HSM inventory
Reorgs, audits, and backups of the HSM inventory via the OCDS.
Transparent cloud tiering resolves these inefficiencies by moving the data directly from the DS8000 to the
cloud object storage. This process eliminates the movement of data through the host, dual data
movement, and the small 16K block size requirement. This process also eliminates recycle processing
and the OCDS.
Transparent cloud tiering translates into significant savings in CPU utilization within z/OS, specifically
when you are using both DFSMShsm and transparent cloud tiering.
Modern enterprises adopted cloud storage to overcome the massive amount of data growth. The
transparent cloud tiering system supports creating connections to cloud service providers to store data in
private or public cloud storage. With transparent cloud tiering, administrators can move older data to
cloud storage to free up capacity on the system. Point-in-time snapshots of data can be created on the
system and then copied and stored on the cloud storage.
An external cloud service provider manages the cloud storage, which helps to reduce storage costs for
the system. Before data can be copied to cloud storage, a connection to the cloud service provider must
be created from the system. A cloud account is an object on the system that represents a connection to a
cloud service provider by using a particular set of credentials. These credentials differ depending on the
type of cloud service provider that is being specified. Most cloud service providers require the host name
of the cloud service provider and an associated password, and some cloud service providers also require
certificates to authenticate users of the cloud storage.
Public clouds use certificates that are signed by well-known certificate authorities. Private cloud service
providers can use either a self-signed certificate or a certificate that is signed by a trusted certificate
authority. These credentials are defined on the cloud service provider and passed to the system through
the administrators of the cloud service provider. A cloud account defines whether the system can
successfully communicate and authenticate with the cloud service provider by using the account
credentials. If the system is authenticated, it can then access cloud storage to either copy data to the
cloud storage or restore data that is copied to cloud storage back to the system. The system supports one
cloud account to a single cloud service provider. Migration between providers is not supported.
Client-side encryption for transparent cloud tiering ensures that data is encrypted before it is transferred
to cloud storage. The data remains encrypted in cloud storage and is decrypted after it is transferred back
to the storage system. You can use client-side encryption for transparent cloud tiering to download and
decrypt data on any DS8000 storage system that uses the same set of key servers as the system that first
encrypted the data.
Notes:
• Client-side encryption for transparent cloud tiering requires IBM Security Key Lifecycle Manager
v3.0.0.2 or higher. For more information, see the IBM Security Key Lifecycle Manager online product
documentation (www.ibm.com/support/knowledgecenter/SSWPVP/).
Cloud object storage is inherently multi-tenant, which allows multiple users to store data on the device,
segregated from the other users. Each cloud service provider divides cloud storage into segments for
each client that uses the cloud storage. These objects store only data specific to that client. Within the
segment that is controlled by the user’s name, DFSMShsm and its inventory system controls the creation
and segregation of containers that it uses to store the client data objects.
The storage system supports the OpenStack Swift and Amazon S3 APIs. The storage system also
supports the IBM TS7700 as an object storage target and the following cloud service providers:
• Amazon S3
• IBM Cloud™ Object Storage
• OpenStack Swift Based Private Cloud
Dynamic volume expansion
Dynamic volume expansion is the capability to increase provisioned capacity up to a maximum size while
volumes are online to a host and not in a Copy Services relationship.
Dynamic volume expansion increases the capacity of open systems, IBM i, and IBM Z volumes, while the
volume remains connected to a host system. This capability simplies data growth by providing volume
expansion without taking volumes offline.
Some operating systems do not support a change in volume size. Therefore, a host action is required to
detect the change after the provisioned capacity is increased.
The following volume sizes are the maximum that are supported for each storage type.
• Open systems FB volumes: 16 TB
• IBM i variable size volumes
• IBM Z CKD volume types 3390 model 9 and custom: 65520 cylinders
• IBM Z CKD volume type 3390 model 3: 3339 cylinders
• IBM Z CKD volume types 3390 model A: 1,182,006 cylinders
Note: Volumes cannot be in Copy Services relationships (point-in-time copy, FlashCopy SE, Metro Mirror,
Global Mirror, Metro/Global Mirror, and z/OS Global Mirror) during expansion.
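For example, provisioned capacity can be increased with the DS CLI while the volume remains online; the
capacities and volume IDs shown here are placeholders:

dscli> chfbvol -cap 200 1000
dscli> chckdvol -cap 65520 0200

Depending on the operating system, a host action might then be required to detect the new size.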
Count key data and fixed block volume deletion prevention
By default, DS8000 attempts to prevent volumes that are online and in use from being deleted. The DS
CLI and DS Storage Manager provide an option to force the deletion of count key data (CKD) and fixed
block (FB) volumes that are in use.
For CKD volumes, in use means that the volumes are participating in a Copy Services relationship or are in
a path group. For FB volumes, in use means that the volumes are participating in a Copy Services
relationship or there is I/O access to the volume in the last five minutes. With Safeguarded Copy, in use
means that the volumes have data saved in the backup repository.
If you specify the -safe option when you delete an FB volume, the system determines whether the
volumes are assigned to non-default volume groups. If the volumes are assigned to a non-default (user-defined) volume group, the volumes are not deleted.
If you specify the -force option when you delete a volume, the storage system deletes volumes
regardless of whether the volumes are in use. However, an in use volume that has Safeguarded Copy
space cannot be deleted with the -force option.
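For example, with the DS CLI the safe and forced behaviors can be requested explicitly; the volume ID is a
placeholder:

dscli> rmfbvol -safe 1000
dscli> rmfbvol -force 1000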
Thin provisioning
Thin provisioning defines logical volume sizes that are larger than the usable capacity installed on the
system. The volume allocates capacity on an as-needed basis as a result of host-write actions.
The thin provisioning feature enables the creation of extent space efficient logical volumes. Extent space
efficient volumes are supported for FB and CKD volumes and are supported for all Copy Services
functionality, including FlashCopy targets where they provide a space efficient FlashCopy capability.
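For example, an extent space efficient volume might be created with the DS CLI by specifying the storage
allocation method; the extent pool, capacity, name, and volume ID are placeholders:

dscli> mkfbvol -extpool P1 -cap 500 -sam ese -name thinvol_1 1200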
Releasing space on CKD volumes that use thin provisioning
On an IBM Z host, the DFSMSdss SPACEREL utility can release space from thin provisioned CKD
volumes that are used by either Global Copy or Global Mirror.
For Global Copy, space is released on the primary and secondary copies. If the secondary copy is the
primary copy of another Global Copy relationship, space is also released on secondary copies of that
relationship.
For Global Mirror, space is released on the primary copy after a new consistency group is formed.
Space is released on the secondary copy after the next consistency group is formed and a FlashCopy
commit is performed. If the secondary copy is the primary copy of another Global Mirror relationship,
space is also released on secondary copies of that relationship.
Extent Space Efficient (ESE) capacity controls for thin provisioning
Use of thin provisioning can affect the amount of storage capacity that you choose to order. ESE capacity
controls allow you to allocate storage appropriately.
With the mixture of thin-provisioned (ESE) and fully-provisioned (non-ESE) volumes in an extent pool, a
method is needed to dedicate some of the extent-pool storage capacity for ESE user data usage, as well
as limit the ESE user data usage within the extent pool. You also need the ability to detect when the
available storage space within the extent pool for ESE volumes is running out.
Thin-provisioning capacity controls provide extent pool attributes to limit the maximum extent pool
storage available for ESE user data usage, and to guarantee a proportion of the extent pool storage to be
available for ESE user data usage.
An SNMP trap that is associated with the thin-provisioning capacity controls notifies you when the ESE
extent usage in the pool exceeds an ESE extent threshold set by you. You are also notified when the
extent pool is out of storage available for ESE user data usage.
Thin-provisioning capacity controls include the following attributes:
ESE Extent Threshold
The percentage that is compared to the actual percentage of storage capacity available for ESE
customer extent allocation when determining the extent pool ESE extent status.
ESE Extent Status
One of the three following values:
• 0: the percent of the available thin-provisioning capacity is greater than the ESE extent threshold
• 1: the percent of the available thin-provisioning capacity is greater than zero but less than or equal
to the ESE extent threshold
• 10: the percent of the available thin-provisioning capacity is zero
Note: When the size of the extent pool remains fixed or is only increased, the allocatable physical
capacity remains greater than or equal to the allocated physical capacity. However, a reduction in the size
of the extent pool can cause the allocatable physical capacity to become less than the allocated physical
capacity in some cases.
For example, if the user requests that one of the ranks in an extent pool be depopulated, the data on that
rank is moved to the remaining ranks in the pool, causing the rank to become unassigned and removed
from the pool. The user is advised to inspect the limits and threshold on the extent pool following any
changes to the size of the extent pool to ensure that the specified values are still consistent with the
user’s intentions.
IBM Easy Tier
Easy Tier is an optional feature that is provided at no cost. It can greatly increase the performance of your
system by ensuring frequently accessed data is put on faster storage. Its capabilities include manual
provisioned capacity rebalance, auto performance rebalancing in both homogeneous and hybrid pools,
hot spot management, rank depopulation, manual volume migration, and thin provisioning support (ESE
volumes only). Easy Tier determines the appropriate tier of storage based on data access requirements and then automatically and non-disruptively moves data, at the subvolume or sub-LUN level, to the appropriate tier in the storage system.
Use Easy Tier to dynamically move your data to the appropriate drive tier in your storage system with its
automatic performance monitoring algorithms. You can use this feature to increase the efficiency of your flash drives and the efficiency of all the tiers in your storage system.
You can use the features of Easy Tier between three tiers of storage within a DS8900F.
Easy Tier features help you to effectively manage your system health, storage performance, and storage
capacity automatically. Easy Tier uses system configuration and workload analysis with warm demotion
to achieve effective overall system health. Simultaneously, data promotion and auto-rebalancing address
performance while cold demotion works to address capacity.
Easy Tier data in memory persists in local storage or storage in the peer server, ensuring the Easy Tier
configurations are available at failover, cold start, or Easy Tier restart.
With Easy Tier Application, you can also assign logical volumes to a specic tier. This feature can be
useful when certain data is accessed infrequently, but needs to always be highly available.
Easy Tier Application is enhanced by two related functions:
• Easy Tier Application for IBM Z provides comprehensive data-placement management policy support
from application to storage.
• Easy Tier Application controls for workload learning and data migration provide granular pool-level and volume-level Easy Tier control, as well as volume-level tier restriction, where a volume can be excluded from the Nearline tier.
The Easy Tier Heat Map Transfer utility replicates Easy Tier primary storage workload learning results to
secondary storage sites, synchronizing performance characteristics across all storage systems. In the
event of data recovery, storage system performance is not sacrificed.
Easy Tier helps manage thin-provisioned volumes. If the initial allocation of new extents is set to the highest performance tier in the pool, then as thin-provisioned volumes grow, Easy Tier automatically detects whether the data that is placed in these new extents should remain in the higher performance tier or be demoted to a capacity tier.
An additional feature provides the capability for you to use Easy Tier manual processing for thin provisioning. Rank depopulation is supported on ranks that have ESE (extent space-efficient) volumes or auxiliary volumes allocated.
Use the capabilities of Easy Tier to support:
Drive classes
The following drive classes are available, in order from highest to lowest performance. A pool can
contain up to three drive classes.
Flash Tier 0 drives
The highest performance drives, which provide high I/O throughput and low latency.
Flash Tier 1 drives
The first tier of high-capacity drives.
Flash Tier 2 drives
The second tier of high-capacity drives.
Three tiers
Using three tiers (each representing a separate drive class) and efficient algorithms improves system
performance and cost effectiveness.
You can select from four drive classes to create up to three tiers. The drives within a tier must be
homogeneous.
The following table lists the possible tier assignments for the drive classes. The tiers are listed
according to the following values:
0
Hot data tier, which contains the most active data. This tier can also serve as the home tier for new
data allocations.
1
Mid-data tier, which can be combined with one or both of the other tiers and will contain data not
moved to either of these tiers. This is by default the home tier for new data allocations.
2
Cold data tier, which contains the least active data.
Table 15. Drive class combinations and tiers for systems with Flash Tier 0 drives as the highest performance drive class

                  Drive class combinations
Drive classes     Flash Tier 0    Flash Tier 0 +    Flash Tier 0 +    Flash Tier 0 +
                                  Flash Tier 1      Flash Tier 2      Flash Tier 1 +
                                                                      Flash Tier 2
Flash Tier 0      0               0                 0                 0
Flash Tier 1      -               1                 -                 1
Flash Tier 2      -               -                 1                 2
Table 16. Drive class combinations and tiers for systems with Flash Tier 1 drives as the highest performance drive class

                  Drive class combinations
Drive classes     Flash Tier 1    Flash Tier 1 + Flash Tier 2
Flash Tier 1      0               0
Flash Tier 2      -               1
Cold demotion
Cold data (or extents) stored on a higher-performance tier is demoted to a more appropriate tier. Easy
Tier is available with two-tier disk-drive pools and three-tier pools. Sequential bandwidth is moved to
the lower tier to increase the efficient use of your tiers.
Warm demotion
Active data that has larger bandwidth is demoted to the next lowest tier. Warm demotion is triggered
whenever the higher tier is over its bandwidth capacity. Selected warm extents are demoted to allow
the higher tier to operate at its optimal load. Warm demotes do not follow a predetermined schedule.
Warm promotion
Active data that has higher IOPS is promoted to the next highest tier. Warm promotion is triggered
whenever the lower tier is over its IOPS capacity. Selected warm extents are promoted to allow the
lower tier to operate at its optimal load. Warm promotes do not follow a predetermined schedule.
Manual volume or pool rebalance
Volume rebalancing relocates the smallest number of extents of a volume and restripes those extents
on all available ranks of the extent pool.
Auto-rebalancing
Automatically balances the workload of the same storage tier within both homogeneous and hybrid pools, based on usage, to improve system performance and resource use. Use the auto-rebalancing functions of Easy Tier to manage a combination of homogeneous and hybrid pools, including relocating hot spots on ranks. With homogeneous pools, systems with only one tier can use Easy Tier technology to optimize their RAID array usage.
Rank depopulations
Allows ranks that have extents (data) allocated to them to be unassigned from an extent pool by using
extent migration to move extents from the specied ranks to other ranks within the pool.
Easy Tier provides a performance monitoring capability, regardless of whether the Easy Tier feature is
activated. Easy Tier uses the monitoring process to determine what data to move and when to move it
when you use automatic mode. You can enable monitoring independently (with or without the Easy Tier
feature activated) for information about the behavior and benefits that can be expected if automatic mode
were enabled.
Data from the monitoring process is included in a summary report that you can download to your local
system.
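As a purely conceptual illustration of the promotion and demotion behaviors described above (not the actual Easy Tier algorithm, whose heuristics and thresholds are internal to the system), the following Python sketch classifies a single extent by using hypothetical activity measures:

    # Conceptual sketch only; names and thresholds are hypothetical.
    def plan_extent_action(tier: int, extent_iops: float, extent_mbps: float,
                           current_tier_over_iops_limit: bool,
                           current_tier_over_bandwidth_limit: bool,
                           cold_iops_limit: float = 1.0) -> str:
        """Return a placement action for one extent; tier 0 is the hot data tier."""
        if tier > 0 and current_tier_over_iops_limit and extent_iops > cold_iops_limit:
            return "warm promote"    # active small-block data moves up a tier
        if tier < 2 and current_tier_over_bandwidth_limit and extent_mbps > 0:
            return "warm demote"     # bandwidth-heavy data moves down a tier
        if tier < 2 and extent_iops <= cold_iops_limit:
            return "cold demote"     # rarely accessed data moves toward the cold tier
        return "stay"

    print(plan_extent_action(tier=1, extent_iops=250.0, extent_mbps=2.0,
                             current_tier_over_iops_limit=True,
                             current_tier_over_bandwidth_limit=False))   # warm promote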
VMware vStorage API for Array Integration support
The storage system provides support for the VMware vStorage API for Array Integration (VAAI).
The VAAI API offloads storage processing functions from the server to the storage system, reducing the
workload on the host server hardware for improved performance on both the network and host servers.
The following operations are supported:
Atomic test and set or VMware hardware-assisted locking
The hardware-assisted locking feature uses the VMware Compare and Write command for reading and writing the volume's metadata within a single operation. With the Compare and Write command, the storage system provides a faster mechanism, presented to the host as an atomic action, that does not require locking the entire volume. (A conceptual sketch of this compare-and-write behavior follows these descriptions.)
The Compare and Write command is supported on all open systems fixed block volumes, including
Metro Mirror and Global Mirror primary volumes and FlashCopy source and target volumes.
XCOPY or Full Copy
The XCOPY (or extended copy) command copies multiple files from one directory to another or across
a network.
Full Copy copies data from one storage array to another without writing to the VMware ESX Server
(VMware vStorage API).
The following restrictions apply to XCOPY:
• XCOPY is not supported on Extent Space Efficient (ESE) volumes
• XCOPY is not supported on volumes greater than 2 TB
• The target of an XCOPY cannot be a Metro Mirror or Global Mirror primary volume
• The Copy Services license is required
Block Zero (Write Same)
The SCSI Write Same command is supported on all volumes. This command efficiently writes each
block, faster than standard SCSI write commands, and is optimized for network bandwidth usage.
IBM vCenter plug-in for ESX 4.x
The IBM vCenter plug-in for ESX 4.x provides support for the VAAI interfaces on ESX 4.x.
VMware vCenter Site Recovery Manager 5.0
VMware vCenter Site Recovery Manager (SRM) provides methods to simplify and automate disaster
recovery processes. IBM Site Replication Adapter (SRA) communicates between SRM and the storage
replication interface. SRA support for SRM 5.0 includes the new features for planned migration,
reprotection, and failback. The supported Copy Services are Metro Mirror, Global Mirror, Metro-Global
Mirror, and FlashCopy.
The IBM Storage Management Console plug-in enables VMware administrators to manage their systems
from within the VMware management environment. This plug-in provides an integrated view of IBM storage for the VMware virtualized datastores that VMware administrators require. For information, see
the IBM Storage Management Console for VMware vCenter (http://www.ibm.com/support/
knowledgecenter/en/STAV45/hsg/hsg_vcplugin_kcwelcome_sonas.html) online documentation.
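The following Python sketch is a conceptual illustration of compare-and-write semantics only; it is not how the storage system implements the SCSI command. It shows why an atomic compare-and-write lets a host update a shared lock record without reserving the whole volume:

    import threading

    _atomic = threading.Lock()   # stands in for the storage system's internal atomicity

    def compare_and_write(volume: bytearray, offset: int, expected: bytes, new: bytes) -> bool:
        """Atomically replace 'expected' with 'new' at 'offset'; return True on success."""
        with _atomic:
            current = bytes(volume[offset:offset + len(expected)])
            if current != expected:
                return False                      # another host updated the record first
            volume[offset:offset + len(new)] = new
            return True

    vol = bytearray(512)
    print(compare_and_write(vol, 0, expected=bytes(8), new=b"LOCKED01"))   # True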
Performance for IBM Z
The storage system supports the following IBM performance enhancements for IBM Z environments.
• Parallel Access Volumes (PAVs)
• Multiple allegiance
• z/OS Distributed Data Backup
• z/HPF extended distance capability
• zHyperLink
Parallel Access Volumes
A PAV capability represents a significant performance improvement by the storage unit over traditional
I/O processing. With PAVs, your system can access a single volume from a single host with multiple
concurrent requests.
You must configure both your storage unit and operating system to use PAVs. You can use the logical configuration definition to define PAV-bases, PAV-aliases, and their relationship in the storage unit
hardware. This unit address relationship creates a single logical volume, allowing concurrent I/O
operations.
Static PAV associates the PAV-base address and its PAV aliases in a predefined and fixed method. That is,
the PAV-aliases of a PAV-base address remain unchanged. Dynamic PAV, on the other hand, dynamically
associates the PAV-base address and its PAV aliases. The device number types (PAV-alias or PAV-base)
must match the unit address types as defined in the storage unit hardware.
You can further enhance PAV by adding the IBM HyperPAV feature. IBM HyperPAV associates the
volumes with either an alias address or a specied base logical volume number. When a host system
requests IBM HyperPAV processing and the processing is enabled, aliases on the logical subsystem are
placed in an IBM HyperPAV alias access state on all logical paths with a specic path group ID. IBM
HyperPAV is only supported on FICON channel paths.
PAV can improve the performance of large volumes. You get better performance with one base and two
aliases on a 3390 Model 9 than from three 3390 Model 3 volumes with no PAV support. With one base, it
also reduces storage management costs that are associated with maintaining large numbers of volumes.
The alias provides an alternate path to the base device. For example, a 3380 or a 3390 with one alias has
only one device to write to, but can use two paths.
The storage unit supports concurrent or parallel data transfer operations to or from the same volume
from the same system or system image for IBM Z or S/390® hosts. PAV software support enables multiple
users and jobs to simultaneously access a logical volume. Read and write operations can run simultaneously against different domains. (The domain of an I/O operation is the specified extents to which the I/O operation applies.)
Multiple allegiance
With multiple allegiance, the storage unit can run concurrent, multiple requests from multiple hosts.
Traditionally, IBM storage subsystems allow only one channel program to be active to a disk volume at a
time. This means that, after the subsystem accepts an I/O request for a particular unit address, this unit
address appears "busy" to subsequent I/O requests. This single allegiance capability ensures that
additional requesting channel programs cannot alter data that is already being accessed.
By contrast, the storage unit is capable of multiple allegiance (or the concurrent execution of multiple
requests from multiple hosts). That is, the storage unit can queue and concurrently run multiple requests
for the same unit address, if no extent conflict occurs. A conflict refers to either the inclusion of a Reserve
request by a channel program or a Write request to an extent that is in use.
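The extent-conflict rule can be pictured with a small Python sketch. This is a conceptual illustration only, not the storage unit's microcode; the request fields are hypothetical:

    def extents_overlap(a_start: int, a_end: int, b_start: int, b_end: int) -> bool:
        """True if two extent ranges (inclusive) share any part of the volume."""
        return a_start <= b_end and b_start <= a_end

    def conflicts(new: dict, active: dict) -> bool:
        """A new request conflicts with an active one if either reserves the device,
        or if the extents overlap and either request writes to the shared extents."""
        if new["reserve"] or active["reserve"]:
            return True
        overlap = extents_overlap(new["start"], new["end"], active["start"], active["end"])
        return overlap and (new["write"] or active["write"])

    active = {"start": 0, "end": 99, "write": False, "reserve": False}
    new = {"start": 200, "end": 299, "write": True, "reserve": False}
    print(conflicts(new, active))   # False: the requests can run concurrently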
z/OS Distributed Data Backup
z/OS Distributed Data Backup (zDDB) allows hosts, which are attached through a FICON interface, to access data on fixed block (FB) volumes through a device address on FICON interfaces.
If the zDDB LIC feature key is installed and enabled, and a volume group type specifies FICON interfaces, this volume group has implicit access to all FB logical volumes that are configured, in addition to all CKD volumes that are specified in the volume group. In addition, this optional feature enables data backup of open systems from distributed server platforms through an IBM Z host. The feature helps you manage multiple data protection environments and consolidate them into one environment that is managed by IBM Z. For more information, see “z/OS Distributed Data Backup” on page 75.
z/HPF extended distance
z/HPF extended distance reduces the impact that is associated with supported commands on current
adapter hardware, improving FICON throughput on the I/O ports. The storage system also supports the
new zHPF I/O commands for multitrack I/O operations.
zHyperLink
zHyperLink is a short distance link technology that is designed for up to 10 times lower latency than zHPF.
It can speed up transaction processing and improve active log throughput. zHyperLink is intended to
complement FICON technology to accelerate I/O requests that are typically used for transaction
processing.
Copy Services
Copy Services functions can help you implement storage solutions to keep your business running 24
hours a day, 7 days a week. Copy Services include a set of disaster recovery, data migration, and data
duplication functions.
The storage system supports Copy Service functions that contribute to the protection of your data. These
functions are also supported on the IBM TotalStorage™ Enterprise Storage Server®.
Notes:
• If you are creating paths between an older release of the DS8000 (Release 5.1 or earlier), which
supports only 4-port host adapters, and a newer release of the DS8000 (Release 6.0 or later), which
supports 8-port host adapters, the paths connect only to the lower four ports on the newer storage
system.
• If you are creating paths from a model 993 4-port host adapter to a previous release DS8000 (Release 6.0 or later), which supports 8-port host adapters, you can only connect the lower four ports of the 8-port host adapter.
• The maximum number of FlashCopy relationships that are allowed on a volume is 65534. If that
number is exceeded, the FlashCopy operation fails.
• The size limit for volumes or extents in a Copy Service relationship is 2 TB.
• Thin provisioning functions in open-system environments are supported for the following Copy Services
functions:
– FlashCopy relationships
– Global Mirror relationships if the Global Copy A and B volumes are Extent Space Efficient (ESE)
volumes. The FlashCopy target volume (Volume C) in the Global Mirror relationship can be an ESE
volume or standard volume.
• PPRC supports any intermix of T10-protected or standard volumes. FlashCopy does not support
intermix.
• PPRC supports copying from standard volumes to ESE volumes, or ESE volumes to standard volumes,
to allow migration with PPRC failover when both source and target volumes are on a DS8000 version
8.2 or higher.
The following Copy Services functions are available as optional features:
• Point-in-time copy, which includes IBM FlashCopy.
The FlashCopy function allows you to make point-in-time, full volume copies of data so that the copies
are immediately available for read or write access. In IBM Z environments, you can also use the
FlashCopy function to perform data set level copies of your data.
• Remote mirror and copy, which includes the following functions:
– Metro Mirror
Metro Mirror provides real-time mirroring of logical volumes between two storage systems that can be
located up to 300 km from each other. It is a synchronous copy solution where write operations are
completed on both copies (local and remote site) before they are considered to be done.
– Global Copy
Global Copy is a nonsynchronous long-distance copy function where incremental updates are sent
from the local to the remote site on a periodic basis.
– Global Mirror
Global Mirror is a long-distance remote copy function across two sites by using asynchronous
technology. Global Mirror processing is designed to provide support for unlimited distance between
the local and remote sites, with the distance typically limited only by the capabilities of the network
and the channel extension technology.
– Metro/Global Mirror (a combination of Metro Mirror and Global Mirror)
Metro/Global Mirror is a three-site remote copy solution. It uses synchronous replication to mirror
data between a local site and an intermediate site, and asynchronous replication to mirror data from
an intermediate site to a remote site.
– Multiple Target PPRC
Multiple Target PPRC builds and extends the capabilities of Metro Mirror and Global Mirror. It allows
data to be mirrored from a single primary site to two secondary sites simultaneously. You can define
any of the sites as the primary site and then run Metro Mirror replication from the primary site to
either of the other sites individually or both sites simultaneously.
• Remote mirror and copy for IBM Z environments, which includes z/OS Global Mirror.
Note: When FlashCopy is used on FB (open) volumes, the source and the target volumes must have the
same protection type of either T10 DIF or standard.
The point-in-time and remote mirror and copy features are supported across various IBM server
environments such as IBM i, System p, and IBM Z , as well as servers from Oracle and Hewlett-Packard.
You can manage these functions through a command-line interface that is called the DS CLI. You can use
the DS8000 Storage Management GUI to set up and manage the following types of data-copy functions
from any point where network access is available:
Point-in-time copy (FlashCopy)
You can use the FlashCopy function to make point-in-time, full volume copies of data, with the copies
immediately available for read or write access. In IBM Z environments, you can also use the FlashCopy
function to perform data set level copies of your data. You can use the copy with standard backup tools
that are available in your environment to create backup copies on tape.
FlashCopy is an optional function.
The FlashCopy function creates a copy of a source volume on the target volume. This copy is called a
point-in-time copy. When you initiate a FlashCopy operation, a FlashCopy relationship is created between
a source volume and target volume. A FlashCopy relationship is a mapping of the FlashCopy source
volume and a FlashCopy target volume. This mapping allows a point-in-time copy of that source volume
to be copied to the associated target volume. The FlashCopy relationship exists between the volume pair
in either case:
• From the time that you initiate a FlashCopy operation until the storage system copies all data from the
source volume to the target volume.
• Until you explicitly delete the FlashCopy relationship if it was created as a persistent FlashCopy
relationship.
One of the main benefits of the FlashCopy function is that the point-in-time copy is immediately available
for creating a backup of production data. The target volume is available for read and write processing so it
can be used for testing or backup purposes. Data is physically copied from the source volume to the
target volume by using a background process. (A FlashCopy operation without a background copy is also possible, which allows only data that is modified on the source to be copied to the target volume.) The amount of
time that it takes to complete the background copy depends on the following criteria:
• The amount of data to be copied
• The number of background copy processes that are occurring
• The other activities that are occurring on the storage systems
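The following Python sketch is a conceptual model of the point-in-time behavior just described (reads of uncopied target tracks come from the source image, writes to the source trigger copy-before-write, and a background process copies the rest). It is illustrative only and does not represent the DS8000 implementation:

    class PointInTimeCopy:
        """Toy model of a FlashCopy-style relationship with a per-track bitmap."""
        def __init__(self, source: list, target: list):
            self.source, self.target = source, target
            self.copied = [False] * len(source)

        def read_target(self, track: int):
            # Tracks not yet copied are still satisfied from the point-in-time source image.
            return self.target[track] if self.copied[track] else self.source[track]

        def write_source(self, track: int, data):
            # Copy-before-write preserves the point-in-time image for the target.
            if not self.copied[track]:
                self.target[track] = self.source[track]
                self.copied[track] = True
            self.source[track] = data

        def background_copy_step(self):
            # One step of the background process; returns None when the copy is complete.
            for track, done in enumerate(self.copied):
                if not done:
                    self.target[track] = self.source[track]
                    self.copied[track] = True
                    return track
            return None

    pitc = PointInTimeCopy(source=["a", "b", "c"], target=[None, None, None])
    pitc.write_source(0, "a2")
    print(pitc.read_target(0), pitc.read_target(1))   # a b -- the point-in-time view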
The FlashCopy function supports the following copy options:
Consistency groups
Creates a consistent point-in-time copy of multiple volumes, with negligible host impact. You can
enable FlashCopy consistency groups from the DS CLI.
Change recording
Activates the change recording function on the volume pair that is participating in a FlashCopy
relationship. This function enables a subsequent refresh to the target volume.
Establish FlashCopy on existing Metro Mirror source
Establish a FlashCopy relationship in which the target volume is also the source volume of an existing remote mirror and copy relationship. This allows you to create full or incremental point-in-time copies at a local site and then use remote mirroring commands to copy the data to the remote site.
Fast reverse
Reverses the FlashCopy relationship without waiting for the background copy of the previous FlashCopy to finish. This option applies to the Global Mirror mode.
Inhibit writes to target
Ensures that write operations are inhibited on the target volume until a refresh FlashCopy operation is
complete.
Multiple Incremental FlashCopy
Allows a source volume to establish incremental flash copies to a maximum of 12 targets.
Multiple Relationship FlashCopy
Allows a source volume to have multiple (up to 12) target volumes at the same time.
Persistent FlashCopy
Allows the FlashCopy relationship to remain even after the FlashCopy operation completes. You must
explicitly delete the relationship.
Refresh target volume
Refresh a FlashCopy relationship, without recopying all tracks from the source volume to the target
volume.
Resynchronizing FlashCopy volume pairs
Update an initial point-in-time copy of a source volume without having to recopy your entire volume.
Reverse restore
Reverses the FlashCopy relationship and copies data from the target volume to the source volume.
Reset SCSI reservation on target volume
If there is a SCSI reservation on the target volume, the reservation is released when the FlashCopy
relationship is established. If this option is not specified and a SCSI reservation exists on the target
volume, the FlashCopy operation fails.
Remote Pair FlashCopy
Figure 4 on page 46 illustrates how Remote Pair FlashCopy works. If Remote Pair FlashCopy is used to copy data from Local A to Local B, an equivalent operation is also performed from Remote A to Remote B. FlashCopy can be performed as described for a Full Volume FlashCopy, Incremental FlashCopy, and Dataset Level FlashCopy.
The Remote Pair FlashCopy function prevents the Metro Mirror relationship from changing states and the resulting momentary period where Remote A is out of synchronization with Remote B. This feature provides a solution for data replication, data migration, remote copy, and disaster recovery tasks.
Without Remote Pair FlashCopy, when you established a FlashCopy relationship from Local A to Local B by using a Metro Mirror primary volume as the target of that FlashCopy relationship, the corresponding Metro Mirror volume pair went from "full duplex" state to "duplex pending" state while the FlashCopy data was being transferred to Local B. The time that it took to complete the copy of the FlashCopy data, until all Metro Mirror volumes were synchronous again, depended on the amount of data transferred. During this time, Local B would be inconsistent if a disaster were to occur.
Note: Previously, if you created a FlashCopy relationship with the Preserve Mirror, Required option by using a Metro Mirror primary volume as the target of that FlashCopy relationship, and the status of the Metro Mirror volume pair was not "full duplex", the FlashCopy relationship failed. That restriction is now removed. The Remote Pair FlashCopy relationship completes successfully with the Preserve Mirror, Required option, even if the status of the Metro Mirror volume pair is suspended or duplex pending.
Figure 4. Remote Pair FlashCopy
Note: The storage system supports Incremental FlashCopy and Metro Global Mirror Incremental Resync
on the same volume.
Safeguarded Copy
The Safeguarded Copy feature creates safeguarded backups that are not accessible by the host system
and protects these backups from corruption that can occur in the production environment. You can define
a Safeguarded Copy schedule to create multiple backups on a regular basis, such as hourly or daily. You
can also restore a backup to the source volume or to a different volume. A backup contains the same
metadata as the safeguarded source volume.
Safeguarded Copy can create backups more frequently and with more capacity than FlashCopy volumes. Creating safeguarded backups also impacts performance less than the multiple target volumes that are created by FlashCopy.
With backups that are kept outside of the production environment, you can restore your environment to a specified point in time. You can also extract and restore specific data from the backup or use the backup to diagnose production issues.
You cannot delete a safeguarded source volume before the safeguarded backups are deleted. The
maximum size of a backup is 16 TB.
Copy Services Manager (available on the Hardware Management Console) is required to facilitate the use
and management of Safeguarded Copy functions.
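As a simple illustration of the kind of schedule and retention arithmetic involved (the values are hypothetical; actual Safeguarded Copy schedules are defined through Copy Services Manager, not with this code):

    from datetime import datetime, timedelta

    def backup_times(start: datetime, interval: timedelta, count: int) -> list:
        """Planned backup creation times for a regular schedule."""
        return [start + interval * i for i in range(count)]

    def expired(backups: list, now: datetime, retention: timedelta) -> list:
        """Backups older than the retention window, eligible for expiration."""
        return [b for b in backups if now - b > retention]

    hourly = backup_times(datetime(2020, 1, 1, 0, 0), timedelta(hours=1), count=30)
    print(len(expired(hourly, now=datetime(2020, 1, 2, 6, 0),
                      retention=timedelta(hours=24))))   # 6 backups past retention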
Remote mirror and copy
The remote mirror and copy feature is a flexible data mirroring technology that allows replication between
a source volume and a target volume on one or two disk storage systems. You can also issue remote
mirror and copy operations to a group of source volumes on one logical subsystem (LSS) and a group of
target volumes on another LSS. (An LSS is a logical grouping of up to 256 logical volumes for which the
volumes must have the same disk format, either count key data or fixed block.)
Remote mirror and copy is an optional feature that provides data backup and disaster recovery.
Note: You must use Fibre Channel host adapters with remote mirror and copy functions. To see a current
list of environments, configurations, networks, and products that support remote mirror and copy functions, click Interoperability Matrix at the following location: IBM System Storage Interoperation Center (SSIC) website (www.ibm.com/systems/support/storage/config/ssic).
The remote mirror and copy feature provides synchronous (Metro Mirror) and asynchronous (Global Copy)
data mirroring. The main difference is that the Global Copy feature can operate at long distances, even
continental distances, with minimal impact on applications. Distance is limited only by the capabilities of the network and channel extender technology. The maximum supported distance for Metro Mirror is 300 km. With Metro Mirror, application write performance depends on the available bandwidth. Global Copy enables better use of the available bandwidth capacity so that you can protect more of your data.
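The difference between the two write paths can be sketched conceptually in Python (this illustrates the acknowledgment behavior described above; it is not DS8000 microcode):

    def metro_mirror_write(local: dict, remote: dict, track: int, data) -> None:
        """Synchronous: the host write completes only after both copies are updated,
        so application latency grows with the distance to the remote site."""
        local[track] = data
        remote[track] = data
        # host I/O is acknowledged here

    class GlobalCopyVolume:
        """Asynchronous: the host write completes locally; changed tracks are sent later."""
        def __init__(self, local: dict, remote: dict):
            self.local, self.remote = local, remote
            self.out_of_sync = set()

        def write(self, track: int, data) -> None:
            self.local[track] = data
            self.out_of_sync.add(track)   # host I/O is acknowledged here

        def drain(self) -> None:
            # Periodic, incremental transfer of updated tracks to the remote site;
            # ordering is not guaranteed, so the remote copy is "fuzzy" until synchronized.
            while self.out_of_sync:
                track = self.out_of_sync.pop()
                self.remote[track] = self.local[track]

    local, remote = {}, {}
    metro_mirror_write(local, remote, 5, "data")
    gc = GlobalCopyVolume({}, {})
    gc.write(5, "data")
    gc.drain()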
The enhancement to Global Copy is Global Mirror, which uses Global Copy and the benets of FlashCopy
to form consistency groups. (A consistency group is a set of volumes that contain consistent and current
data to provide a true data backup at a remote site.) Global Mirror uses a master storage system (along
with optional subordinate storage systems) to internally, without external automation software, manage
data consistency across volumes by using consistency groups.
Consistency groups can also be created by using the freeze and run functions of Metro Mirror. The freeze
and run functions, when used with external automation software, provide data consistency for multiple
Metro Mirror volume pairs.
The following sections describe the remote mirror and copy functions.
Synchronous mirroring (Metro Mirror)
Provides real-time mirroring of logical volumes (a source and a target) between two storage systems
that can be located up to 300 km from each other. With Metro Mirror copying, the source and target
volumes can be on the same storage system or on separate storage systems. You can locate the
storage system at another site, some distance away.
Metro Mirror is a synchronous copy feature where write operations are completed on both copies
(local and remote site) before they are considered to be complete. Synchronous mirroring means that
a storage server constantly updates a secondary copy of a volume to match changes that are made to
a source volume.
The advantage of synchronous mirroring is that there is minimal host impact for performing the copy.
The disadvantage is that since the copy operation is synchronous, there can be an impact to
application performance because the application I/O operation is not acknowledged as complete until
the write to the target volume is also complete. The longer the distance between primary and
secondary storage systems, the greater this impact to application I/O, and therefore, application
performance.
Asynchronous mirroring (Global Copy)
Copies data nonsynchronously and over longer distances than is possible with the Metro Mirror
feature. When operating in Global Copy mode, the source volume sends a periodic, incremental copy
of updated tracks to the target volume instead of a constant stream of updates. This function causes
less impact to application writes for source volumes and less demand for bandwidth resources. It
allows for a more flexible use of the available bandwidth.
The updates are tracked and periodically copied to the target volumes. As a consequence, there is no
guarantee that data is transferred in the same sequence that was applied to the source volume.
To get a consistent copy of your data at your remote site, periodically switch from Global Copy to
Metro Mirror mode, then either stop the application I/O or freeze data to the source volumes by using
a manual process with freeze and run commands. The freeze and run functions can be used with
external automation software such as Geographically Dispersed Parallel Sysplex (GDPS®), which is
available for IBM Z environments, to ensure data consistency to multiple Metro Mirror volume pairs in
a specified logical subsystem.
Common options for Metro Mirror/Global Mirror and Global Copy include the following modes:
Suspend and resume
If you schedule a planned outage to perform maintenance at your remote site, you can suspend
Metro Mirror/Global Mirror or Global Copy processing on specific volume pairs for the duration of the outage. During this time, data is no longer copied to the target volumes. Because the
primary storage system tracks all changed data on the source volume, you can resume operations
later to synchronize the data between the volumes.
Copy out-of-synchronous data
You can specify that only data updated on the source volume while the volume pair was
suspended is copied to its associated target volume.
Copy an entire volume or not copy the volume
You can copy an entire source volume to its associated target volume to guarantee that the source
and target volume contain the same data. When you establish volume pairs and choose not to
copy a volume, a relationship is established between the volumes but no data is sent from the
source volume to the target volume. In this case, it is assumed that the volumes contain the same
data and are consistent, so copying the entire volume is not necessary or required. Only new
updates are copied from the source to target volumes.
Global Mirror
Provides a long-distance remote copy across two sites by using asynchronous technology. Global
Mirror processing is most often associated with disaster recovery or disaster recovery testing.
However, it can also be used for everyday processing and data migration.
Global Mirror integrates both the Global Copy and FlashCopy functions.
The Global Mirror function mirrors data between volume pairs of two storage systems over greater
distances without affecting overall performance. It also provides application-consistent data at a
recovery (or remote) site if a disaster occurs at the local site. By creating a set of remote volumes every few
seconds, the data at the remote site is maintained to be a point-in-time consistent copy of the data at
the local site.
Global Mirror operations periodically start point-in-time FlashCopy operations at the recovery site, at
regular intervals, without disrupting the I/O to the source volume, thus giving a continuous, near up-to-date data backup. By grouping many volumes into a session that is managed by the master storage
system, you can copy multiple volumes to the recovery site simultaneously maintaining point-in-time
consistency across those volumes. (A session contains a group of source volumes that are mirrored
asynchronously to provide a consistent copy of data at the remote site. Sessions are associated with
Global Mirror relationships and are defined with an identifier [session ID] that is unique across the
enterprise. The ID identies the group of volumes in a session that are related and that can participate
in the Global Mirror consistency group.)
Global Mirror supports up to 32 Global Mirror sessions per storage facility image. Previously, only one
session was supported per storage facility image.
You can use multiple Global Mirror sessions to fail over only data assigned to one host or application
instead of forcing you to fail over all data if one host or application fails. This process provides
increased flexibility to control the scope of a failover operation and to assign different options and
attributes to each session.
The DS CLI and DS Storage Manager display information about the sessions, including the copy state
of the sessions.
Practice copying and consistency groups
To get a consistent copy of your data, you can pause Global Mirror on a consistency group boundary.
Use the pause command with the secondary storage option. (For more information, see the DS CLI
Commands reference.) After verifying that Global Mirror is paused on a consistency boundary (state is
Paused with Consistency), the secondary storage system and the FlashCopy target storage system or
device are consistent. You can then issue either a FlashCopy or Global Copy command to make a
practice copy on another storage system or device. You can immediately resume Global Mirror, without the need to wait for the practice copy operation to finish. Global Mirror then starts forming consistency groups again. The entire pause and resume operation generally takes just a few seconds. (A conceptual sketch of this pause, copy, and resume sequence follows these function descriptions.)
Metro/Global Mirror
Provides a three-site, long-distance disaster recovery replication that combines Metro Mirror with
Global Mirror replication for both IBM Z and open systems data. Metro/Global Mirror uses
synchronous replication to mirror data between a local site and an intermediate site, and
asynchronous replication to mirror data from an intermediate site to a remote site.
In a three-site Metro/Global Mirror, if an outage occurs, a backup site is maintained regardless of
which one of the sites is lost. Suppose that an outage occurs at the local site, Global Mirror continues
to mirror updates between the intermediate and remote sites, maintaining the recovery capability at
the remote site. If an outage occurs at the intermediate site, data at the local storage system is not
affected. If an outage occurs at the remote site, data at the local and intermediate sites is not
affected. Applications continue to run normally in either case.
With the incremental resynchronization function enabled on a Metro/Global Mirror configuration, if the
intermediate site is lost, the local and remote sites can be connected, and only a subset of changed
data is copied between the volumes at the two sites. This process reduces the amount of data
needing to be copied from the local site to the remote site and the time it takes to do the copy.
Multiple Target PPRC
Provides an enhancement to disaster recovery solutions by allowing data to be mirrored from a single
primary site to two secondary sites simultaneously. The function builds on and extends Metro Mirror
and Global Mirror capabilities. Various interfaces and operating systems support the function.
Disaster recovery scenarios depend on support from controlling software such as Geographically
Dispersed Parallel Sysplex (GDPS) and IBM Copy Services Manager.
z/OS Global Mirror
During workload peaks that might temporarily overload the bandwidth of the Global Mirror configuration, the enhanced z/OS Global Mirror function initiates a Global Mirror suspension that preserves primary site application performance. If you are installing new high-performance z/OS Global Mirror primary storage subsystems, this function provides improved capacity and application performance during heavy write activity. This enhancement can also allow Global Mirror to be configured to tolerate longer periods of communication loss with the primary storage subsystems. This enables the Global Mirror to
stay active despite transient channel path recovery events. In addition, this enhancement can provide
fail-safe protection against application system impact that is related to unexpected data mover
system events.
The z/OS Global Mirror function is an optional function.
z/OS Metro/Global Mirror Incremental Resync
z/OS Metro/Global Mirror Incremental Resync is an enhancement for z/OS Metro/Global Mirror. z/OS
Metro/Global Mirror Incremental Resync can eliminate the need for a full copy after a HyperSwap® situation in 3-site z/OS Metro/Global Mirror configurations. The storage system supports z/OS Metro/
Global Mirror that is a 3-site mirroring solution that uses IBM System Storage Metro Mirror and z/OS
Global Mirror (XRC). The z/OS Metro/Global Mirror Incremental Resync capability is intended to
enhance this solution by enabling resynchronization of data between sites by using only the changed
data from the Metro Mirror target to the z/OS Global Mirror target after a HyperSwap operation.
If an unplanned failover occurs, you can use the z/OS Soft Fence function to prevent any system from
accessing data from an old primary PPRC site. For more information, see the GDPS/PPRC Installation
and Customization Guide, or the GDPS/PPRC HyperSwap Manager Installation and Customization
Guide.
z/OS Global Mirror Multiple Reader (enhanced readers)
z/OS Global Mirror Multiple Reader provides multiple Storage Device Manager readers that allow improved throughput for remote mirroring configurations in IBM Z environments. z/OS Global Mirror Multiple Reader helps maintain constant data consistency between mirrored sites and promotes efficient recovery. This function is supported on the storage system running in an IBM Z environment with version 1.7 or later at no additional charge.
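The practice-copy sequence mentioned under "Practice copying and consistency groups" can be outlined as follows. This is a hypothetical sketch of the ordering only; the real steps are DS CLI commands and internal Global Mirror states:

    def take_practice_copy(pause, get_state, flashcopy_to_practice, resume) -> None:
        """Pause on a consistency boundary, take the practice copy, then resume
        without waiting for the practice copy to finish."""
        pause(secondary_storage=True)
        while get_state() != "Paused with Consistency":
            pass                          # verify that the pause landed on a boundary
        flashcopy_to_practice()           # consistent practice copy at the remote site
        resume()                          # consistency groups start forming again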
Interoperability with existing and previous generations of the DS8000 series
All of the remote mirroring solutions that are documented in the sections above use Fibre Channel as the
communications link between the primary and secondary storage systems. The Fibre Channel ports that
are used for remote mirror and copy can be configured as either a dedicated remote mirror link or as a shared port between remote mirroring and Fibre Channel Protocol (FCP) data traffic.
The remote mirror and copy solutions are optional capabilities and are compatible with previous
generations of DS8000 series. They are available as follows:
• Metro Mirror indicator feature numbers 75xx and 0744 and corresponding DS8000 series function
authorization (2396-LFA MM feature numbers 75xx)
• Global Mirror indicator feature numbers 75xx and 0746 and corresponding DS8000 series function
authorization (2396-LFA GM feature numbers 75xx).
Global Copy is a non-synchronous long-distance copy option for data migration and backup.
Disaster recovery through Copy Services
Through Copy Services functions, you can prepare for a disaster by backing up, copying, and mirroring
your data at local and remote sites.
Having a disaster recovery plan can ensure that critical data is recoverable at the time of a disaster.
Because most disasters are unplanned, your disaster recovery plan must provide a way to recover your
applications quickly and, more importantly, to access your data. Consistent data to the same point in time across all storage units is vital before you can recover your data at a backup (normally your remote) site.
Most users use a combination of remote mirror and copy and point-in-time copy (FlashCopy) features to
form a comprehensive enterprise solution for disaster recovery. In the event of a planned event or
unplanned disaster, you can use failover and failback modes as part of your recovery solution. Failover
and failback modes can reduce the synchronization time of remote mirror and copy volumes after you
switch between local (or production) and intermediate (or remote) sites during an outage. Although
failover transmits no data, it changes the status of a device, and the status of the secondary volume
changes to a suspended primary volume. The device that initiates the failback command determines the
direction of the transmitted data.
Recovery procedures that include failover and failback modes use remote mirror and copy functions, such
as Metro Mirror, Global Copy, Global Mirror, Metro/Global Mirror, Multiple Target PPRC, and FlashCopy.
Note: See the IBM DS8000 Command-Line Interface User's Guide for specific disaster recovery tasks.
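A conceptual model of why failover and failback shorten resynchronization is sketched below. It is illustrative bookkeeping only, not DS8000 behavior or commands:

    class MirroredVolume:
        """Toy model: failover transmits no data; failback copies only tracked changes."""
        def __init__(self):
            self.local, self.remote = {}, {}
            self.changed_at_remote = set()
            self.failed_over = False

        def failover(self):
            # The secondary becomes a suspended primary and starts recording changes.
            self.failed_over = True

        def write_during_outage(self, track, data):
            self.remote[track] = data
            self.changed_at_remote.add(track)

        def failback(self):
            # Only tracks changed while failed over are copied back to the other site,
            # which is why resynchronization time is reduced.
            for track in self.changed_at_remote:
                self.local[track] = self.remote[track]
            self.changed_at_remote.clear()
            self.failed_over = False

    vol = MirroredVolume()
    vol.failover()
    vol.write_during_outage(7, "new data")
    vol.failback()
    print(vol.local)   # {7: 'new data'}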
Data consistency can be achieved through the following methods:
Manually using external software (without Global Mirror)
You can use Metro Mirror, Global Copy, and FlashCopy functions to create a consistent and restartable
copy at your recovery site. These functions require a manual and periodic suspend operation at the
local site. For instance, you can enter the freeze and run commands with external automated
software. Then, you can initiate a FlashCopy function to make a consistent copy of the target volume
for backup or recovery purposes. Automation software is not provided with the storage system; it
must be supplied by the user.
Note: The freeze operation occurs at the same point-in-time across all links and all storage
systems.
Automatically (with Global Mirror and FlashCopy)
You can automatically create a consistent and restartable copy at your intermediate or remote site
with minimal or no interruption of applications. This automated process is available for two-site
Global Mirror or three-site Metro/Global Mirror configurations. Global Mirror operations automate the process of continually forming consistency groups. This process combines Global Copy and FlashCopy operations to provide consistent data at the remote site. A master storage unit (along with subordinate storage units) internally manages data consistency through consistency groups within a Global Mirror configuration. Consistency groups can be created many times per hour to increase the
currency of data that is captured in the consistency groups at the remote site.
Note: A consistency group is a collection of session-grouped volumes across multiple storage
systems. Consistency groups are managed together in a session during the creation of consistent
copies of data. The formation of these consistency groups is coordinated by the master storage unit,
which sends commands over remote mirror and copy links to its subordinate storage units.
If a disaster occurs at a local site with a two-site or three-site configuration, you can continue production
on the remote (or intermediate) site. The consistent point-in-time data from the remote site
consistency group enables recovery at the local site when it becomes operational.
Resource groups for Copy Services scope limiting
Resource groups are used to define a collection of resources and associate a set of policies relative to how the resources are configured and managed. You can define a network user account so that it has authority to manage a specific set of resource groups.
Copy Services scope limiting overview
Copy services scope limiting is the ability to specify policy-based limitations on Copy Services requests.
With the combination of policy-based limitations and other inherent volume-addressing limitations, you
can control which volumes can be in a Copy Services relationship, which network users or host LPARs
issue Copy Services requests on which resources, and other Copy Services operations.
Use these capabilities to separate and protect volumes in a Copy Services relationship from each other.
This can assist you with multitenancy support by assigning specific resources to specific tenants, limiting
Copy Services relationships so that they exist only between resources within each tenant's scope of
resources, and limiting a tenant's Copy Services operators to an "operator only" role.
When managing a single-tenant installation, the partitioning capability of resource groups can be used to
isolate various subsets of an environment as if they were separate tenants. For example, to separate
mainframes from distributed system servers, Windows from UNIX, or accounting departments from
telemarketing.
Using resource groups to limit Copy Service operations
Figure 5 on page 51 illustrates one possible implementation of an environment that uses resource groups to limit Copy Services operations. Two tenants (Client A and Client B) operate concurrently on shared hosts and storage systems.
Each tenant has its own assigned LPARs on these hosts and its own assigned volumes on the storage
systems. For example, a user cannot copy a Client A volume to a Client B volume.
Resource groups are congured to ensure that one tenant cannot cause any Copy Services relationships
to be initiated between its volumes and the volumes of another tenant. These controls must be set by an
administrator as part of the configuration of the user accounts or access settings for the storage system.
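The following Python sketch shows, in simplified form, the kind of policy check this enforces. It is a hypothetical illustration; real resource group policies are configured on the storage system, not written as code:

    def copy_allowed(user_scopes: set, source_rg: str, target_rg: str) -> bool:
        """A Copy Services relationship is permitted only when both volumes are in the
        same tenant's resource group and the requesting user manages that group."""
        same_tenant = source_rg == target_rg
        user_manages = source_rg in user_scopes and target_rg in user_scopes
        return same_tenant and user_manages

    print(copy_allowed({"client_A"}, "client_A", "client_A"))   # True
    print(copy_allowed({"client_A"}, "client_A", "client_B"))   # False: crosses tenants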
Figure 5. Implementation of multiple-client volume administration
Resource groups functions provide additional policy-based limitations for users of the DS8000 storage systems, which, in conjunction with the inherent volume addressing limitations, support secure partitioning of Copy Services resources between user-defined partitions. An administrator uses resource groups functions to specify the appropriate limitations.
Note: User and administrator roles for resource groups are the same user and administrator roles used
for accessing your DS8000 storage system. For example, those roles include storage administrator, Copy
Services operator, and physical operator.
The process of planning and designing the use of resource groups for Copy Services scope limiting can be
complex. For more information on the rules and policies that must be considered in implementing
resource groups, see topics about resource groups. For specic DS CLI commands used to implement
resource groups, see the IBM DS8000 Command-Line Interface User's Guide.
Comparison of Copy Services features
The Copy Services features aid in planning for disaster recovery.
Table 17 on page 52 provides a brief summary of the characteristics of the Copy Services features that
are available for the storage system.
Table 17. Comparison of features

Multiple Target PPRC
Description: Synchronous and asynchronous replication.
Advantages: Mirrors data from a single primary site to two secondary sites simultaneously.
Considerations: Disaster recovery scenarios depend on support from controlling software such as Geographically Dispersed Parallel Sysplex (GDPS) and IBM Copy Services Manager.

Metro/Global Mirror
Description: Three-site, long distance disaster recovery replication.
Advantages: A backup site is maintained regardless of which one of the sites is lost.
Considerations: Recovery point objective (RPO) might grow if bandwidth capability is exceeded.

Metro Mirror
Description: Synchronous data copy at a distance.
Advantages: No data loss, rapid recovery time for distances up to 300 km.
Considerations: Slight performance impact.

Global Copy
Description: Continuous copy without data consistency.
Advantages: Nearly unlimited distance, suitable for data migration, only limited by network and channel extenders capabilities.
Considerations: Copy is normally fuzzy but can be made consistent through synchronization.

Global Mirror
Description: Asynchronous copy.
Advantages: Nearly unlimited distance, scalable, and low RPO. The RPO is the time needed to recover from a disaster; that is, the total system downtime.
Considerations: RPO might grow when link bandwidth capability is exceeded.

z/OS Global Mirror
Description: Asynchronous copy controlled by IBM Z host software.
Advantages: Nearly unlimited distance, highly scalable, and very low RPO.
Considerations: Additional host server hardware and software is required. The RPO might grow if bandwidth capability is exceeded or host performance might be impacted.
Securing data
You can secure data with the encryption features that are supported by the storage system. The DS8900F
systems use AES-256 encryption.
Encryption technology has a number of considerations that are critical to understand to maintain the
security and accessibility of encrypted data. For example, encryption must be enabled by feature code
and configured to protect data in your environment. Encryption also requires access to at least two
external key servers.
It is important to understand how to manage IBM encrypted storage and comply with IBM encryption
requirements. Failure to follow these requirements might cause a permanent encryption deadlock, which
might result in the permanent loss of all key-server-managed encrypted data at all of your installations.
The storage system automatically tests access to the encryption keys every 8 hours and access to the key
servers every 5 minutes. You can verify access to key servers manually, initiate key retrieval, and monitor
the status of attempts to access the key server.
Chapter 4. Physical configuration
Physical conguration planning is your responsibility. Your technical support representative can help you
to plan for the physical conguration and to select features.
This section includes the following information:
• Explanations for available features that can be added to the physical conguration of your system
model
• Feature codes to use when you order each feature
• Configuration rules and guidelines
Configuration controls
Indicator features control the physical configuration of the storage system.
These indicator features are for administrative use only. The indicator features ensure that each storage system (the base frame plus any expansion frames) has a valid configuration. There is no charge for these features.
Your storage system can include the following indicators:
Expansion-frame position indicators
Expansion-frame position indicators flag models that are attached to expansion frames. They also flag
the position of each expansion frame within the storage system. For example, a position 1 indicator flags the expansion frame as the first expansion frame within the storage system.
Administrative indicators
If applicable, models also include the following indicators:
• IBM / Openwave alliance
• IBM / EPIC attachment
• IBM systems, including System p and IBM Z
• Lenovo System x and BladeCenter
• IBM storage systems, including IBM System Storage ProtecTIER®, IBM Storwize® V7000, and IBM
System Storage N series
• IBM SAN Volume Controller
• Linux
• VMware VAAI indicator
• Storage Appliance
Determining physical configuration features
You must consider several guidelines for determining and then ordering the features that you require to customize your storage system. Determine the feature codes for the optional features you select and use those feature codes to complete your configuration.
Procedure
1. Calculate your overall storage needs, including the licensed functions.
The Copy Services and z-Synergy Services licensed functions are based on usage requirements.
2. Determine the models of which your storage system is to be comprised.
3. For each model, determine the storage features that you need.
a) Select the drive set feature codes and determine the amount of each feature code that you must
order for each model.
b) Select the storage enclosure feature codes and determine the amount that you must order to
enclose the drive sets that you are ordering.
c) Select the disk cable feature codes and determine the amount that you need of each.
4. Determine the I/O adapter features that you need for your storage system.
a) Select the flash RAID and host adapters feature codes to order, and choose a model to contain the
adapters.
b) For each model chosen to contain adapters, determine the number of each I/O enclosure feature
codes that you must order.
c) Select the cables that you require to support the adapters.
5. Based on the disk storage and adapters, determine the appropriate processor memory feature code
that is needed.
6. Decide which power features that you must order to support each model.
7. Review the other features and determine which feature codes to order.
Storage features
You must select the storage features that you want on your storage system.
The storage features are separated into the following categories:
• Drive-set features and storage-enclosure features
• Enclosure filler features
• Device adapter features
Storage enclosures and drives
DS8900F supports various storage enclosures and drive options.
Feature codes for drive sets
Use these feature codes to order sets of encryption capable flash drives.
Table 18. Feature codes for flash-drive sets for High Performance Flash Enclosures Gen2

Feature code   Disk size   Drive type                    Drives    Drive speed   Encryption      RAID support
                                                         per set   in RPM        capable drive
                                                                   (K=1000)
1611           800 GB      2.5-in. Flash Tier 0 drives   16        N/A           Yes             5, 6, 10
1612           1.6 TB      2.5-in. Flash Tier 0 drives   16        N/A           Yes             6, 10
1613           3.2 TB      2.5-in. Flash Tier 0 drives   16        N/A           Yes             6, 10
1622           1.92 TB     2.5-in. Flash Tier 2 drives   16        N/A           Yes             6, 10
1623           3.84 TB     2.5-in. Flash Tier 1 drives   16        N/A           Yes             6, 10
1624           7.68 TB     2.5-in. Flash Tier 2 drives   16        N/A           Yes             6
1625           15.36 TB    2.5-in. Flash Tier 2 drives   16        N/A           Yes             6

See the notes that follow the table for RAID restrictions and model support.
Note:
1. RAID 5 is not supported for drives larger than 1 TB, and requires a request for price quote (RPQ). For
information, contact your sales representative.
2. RAID 6 is the default RAID type for all drives larger than 1 TB, and it is the only supported RAID type for
7.68 TB drives and 15.36 TB drives.
3. Within a High Performance Flash Enclosure Gen2 pair, no intermix of high performance drives (Flash Tier 0)
with high capacity drives (Flash Tier 1 or Flash Tier 2) is supported.
4. 1.92 TB drive sets are supported only on models 993 and 994.
Feature codes for storage enclosures
Use these feature codes to identify the type of drive enclosures for your storage system.
Table 19. Feature codes for storage enclosures

Feature code   Description
1605           High Performance Flash Enclosure Gen2 pair
Storage-enclosure fillers
Storage-enclosure fillers fill empty drive slots in the storage enclosures. The fillers ensure sufficient airflow across populated storage.
For High Performance Flash Enclosures Gen2, one filler feature provides a set of 16 fillers.
Feature codes for storage enclosure fillers
Use these feature codes to order filler sets for High Performance Flash Enclosures Gen2.

Table 20. Feature codes for storage enclosure fillers

Feature code   Description
1699           Filler set for 2.5-in. High Performance Flash Enclosures Gen2; includes 16 fillers
Configuration rules for storage features
Use the following general configuration rules and ordering information to help you order storage features.
High Performance Flash Enclosures Gen2
Follow these configuration rules when you order storage features for storage systems with High
Performance Flash Enclosures Gen2.
Flash drive sets
The High Performance Flash Enclosure Gen2 pair requires a minimum of one 16-drive flash-drive set.
Storage enclosure fillers
For the High Performance Flash Enclosures Gen2, one filler feature provides a set of 16 fillers. If only
one flash-drive set is ordered, then two storage enclosure fillers are needed to fill the remaining 32
slots in the High Performance Flash Enclosures Gen2 pair. If two drive sets are ordered (32 drives),
one filler set is required to fill the remaining 16 slots. Each drive slot in a High Performance Flash
Enclosures Gen2 must have either a flash drive or a filler.
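The filler arithmetic above can be expressed as a short calculation. The following minimal Python sketch assumes 48 drive slots per High Performance Flash Enclosure Gen2 pair (consistent with one 16-drive set leaving 32 empty slots, as stated above) and 16 fillers per filler set (feature code 1699); the function name is illustrative only.

```python
def filler_sets_needed(drive_sets: int, slots_per_enclosure_pair: int = 48) -> int:
    """Filler sets needed for one High Performance Flash Enclosure Gen2 pair.

    Assumes 48 drive slots per enclosure pair and 16 fillers per filler set.
    """
    empty_slots = slots_per_enclosure_pair - drive_sets * 16
    return empty_slots // 16

print(filler_sets_needed(1))  # 2 filler sets cover the remaining 32 slots
print(filler_sets_needed(2))  # 1 filler set covers the remaining 16 slots
print(filler_sets_needed(3))  # 0 - the enclosure pair is fully populated
```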
Physical and effective capacity
Use the following information to calculate the physical and effective capacity of a storage system.
To calculate the total physical capacity of a storage system, multiply each drive-set feature by its total
physical capacity and sum the values. For the standard drive enclosures, a full drive-set feature consists
of 16 identical disk drives with the same drive type, capacity, and speed. For High Performance Flash
Enclosures Gen2, there are 16 identical flash drives.
The logical configuration of your storage affects the effective capacity of the drive set.
Specifically, effective capacities vary depending on the following configurations:
RAID type and spares
Drives in the DS8000 must be configured as RAID 5, RAID 6, or RAID 10 arrays before they can be
used, and then spare drives are assigned. RAID 10 can offer better performance for selected
applications, in particular for applications with a high proportion of random writes in the open systems environment.
RAID 6 increases data protection by adding an extra layer of parity over the RAID 5 implementation.
Data format
Arrays are logically configured and formatted as fixed block (FB) or count key data (CKD) ranks. Data
that is accessed by open systems hosts or Linux on IBM Z that support Fibre Channel protocol must
be logically configured as FB. Data that is accessed by IBM Z hosts with z/OS or z/VM must be
configured as CKD. Each RAID array is divided into equal-sized segments that are known as extents.
The storage administrator has the choice to create extent pools of different extent sizes. The
supported extent sizes for FB volumes are 1 GB or 16 MB; for CKD volumes, they are one 3390 Mod 1
(1113 cylinders) or 21 cylinders. An extent pool cannot have a mix of different extent sizes.
On prior models of the DS8000 series, a fixed area on each rank was assigned to be used for volume
metadata, which reduced the amount of space available for use by volumes. In the DS8900F family, there
is no fixed area for volume metadata, and this capacity is added to the space available for use. The
metadata is allocated in the storage pool when volumes are created and is referred to as the pool
overhead.
The amount of space that can be allocated by volumes is variable and depends on both the number of
volumes and the logical capacity of these volumes. If thin provisioning is used, then the metadata is
allocated for the entire volume when the volume is created, and not when extents are used, so overprovisioned environments have more metadata.
Metadata is allocated in units that are called metadata extents, which are 16 MB for FB data and 21
cylinders for CKD data. There are 64 metadata extents in each user extent for FB and 53 for CKD. The
metadata space usage is as follows:
• Each volume takes one metadata extent.
• Every ten extents of the volume (or part thereof) take one metadata extent.
For example, both a 3390-3 and a 3390-9 volume each take two metadata extents and a 128 GB FB
volume takes 14 metadata extents.
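As a quick check of the per-volume rule, the example values above can be reproduced with a short sketch. This is a minimal illustration (the function name is not from this guide); the extent counts in the examples assume 1 GB FB extents and 1113-cylinder CKD extents.

```python
import math

def volume_metadata_extents(volume_extents: int) -> int:
    """Metadata extents for one volume: one for the volume itself, plus one
    for every ten user extents (or part thereof)."""
    return 1 + math.ceil(volume_extents / 10)

print(volume_metadata_extents(3))    # 3390-3 (3 CKD extents) -> 2
print(volume_metadata_extents(9))    # 3390-9 (9 CKD extents) -> 2
print(volume_metadata_extents(128))  # 128 GB FB volume with 1 GB extents -> 14
```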
Note: In a multiple tier pool volume, metadata is allocated on the upper tiers to provide maximum
performance. A pool with 10% Flash/SSD or greater would have all of the volume metadata on this tier.
A simple way of estimating the maximum space that might be used by volume metadata is to use the
following calculations:
FB Pool Overhead = (#volumes*2 + total volume extents / 10)/64 - rounded up
to the nearest integer
CKD Pool Overhead = (#volumes*2 + total volume extents / 10)/53 - rounded up
to the nearest integer
These calculations overestimate the space that is used by metadata by a small amount, but the precise
details of each volume do not need to be known.
Examples:
• For an FB storage pool with 6,190 extents in which you expect to use thin provisioning and allocate up
to 12,380 extents (2:1 overprovisioning) on 100 volumes, you would have a pool overhead of 23 extents
-> (100*2+12380/10)/64=22.46.
• For a CKD storage pool with 6,190 extents in which you expect to allocate all the space on 700 volumes,
then you would have a pool overhead of 39 extents -> (700*2+6190/10)/53=38.09.
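For planning scripts, the two formulas and the examples above can be reproduced directly. The following minimal Python sketch (the function name is illustrative) mirrors the calculation: 64 metadata extents per user extent for FB, 53 for CKD, rounded up.

```python
import math

def pool_overhead_extents(num_volumes: int, total_volume_extents: int,
                          data_format: str = "FB") -> int:
    """Estimate the maximum pool overhead (volume metadata) in extents."""
    divisor = 64 if data_format == "FB" else 53
    return math.ceil((num_volumes * 2 + total_volume_extents / 10) / divisor)

print(pool_overhead_extents(100, 12380, "FB"))   # 23 extents
print(pool_overhead_extents(700, 6190, "CKD"))   # 39 extents
```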
RAID capacities
Use the following information to calculate the raw capacity and usable capacity for High Performance
Flash Enclosures Gen2.
RAID 6 is the recommended and default RAID type for all drives over 1 TB. RAID 6 and RAID 10 are the
only supported RAID types for 1.92 TB Flash Tier 2 and 3.84 TB Flash Tier 1 drives. RAID 6 is the only
supported RAID type for 7.68 TB and 15.36 TB Flash Tier 2 drives. RAID 5 is not supported for drives
larger than 1 TB, and requires a request for price quote (RPQ). For information, contact your sales
representative.
Table 21. RAID capacities for High Performance Flash Enclosures Gen2
Effective capacities of one rank are given in number of extents.
Flash drive disk size | Physical capacity of flash drive set | Rank type | RAID-10 3+3 | RAID-10 4+4 | RAID-5 6+P | RAID-5 7+P | RAID-6 5+P+Q | RAID-6 6+P+Q
800 GB | 12.8 TB | FB Lg Ext | 2133 | 2855 | 4300 | 5023 | 3578 | 4300
800 GB | 12.8 TB | FB Sm Ext | 136542 | 182781 | 275254 | 321475 | 229015 | 275239
800 GB | 12.8 TB | CKD Lg Ext | 2392 | 3203 | 4823 | 5633 | 4013 | 4823
800 GB | 12.8 TB | CKD Sm Ext | 126821 | 169768 | 255651 | 298601 | 212705 | 255655
1.6 TB | 25.6 TB | FB Lg Ext | 4301 | 5746 | n/a | n/a | 7191 | 8636
1.6 TB | 25.6 TB | FB Sm Ext | 275284 | 367771 | n/a | n/a | 460243 | 552727
1.6 TB | 25.6 TB | CKD Lg Ext | 4824 | 6445 | n/a | n/a | 8065 | 9686
1.6 TB | 25.6 TB | CKD Sm Ext | 255684 | 341586 | n/a | n/a | 427475 | 513372
1.92 TB | 15.4 TB | FB Lg Ext | 5168 | 6902 | n/a | n/a | 8636 | 10370
1.92 TB | 15.4 TB | FB Sm Ext | 330783 | 441769 | n/a | n/a | 552748 | 663727
1.92 TB | 15.4 TB | CKD Lg Ext | 5796 | 7741 | n/a | n/a | 9686 | 11631
1.92 TB | 15.4 TB | CKD Sm Ext | 307231 | 410315 | n/a | n/a | 513392 | 616474
3.2 TB | 51.2 TB | FB Lg Ext | 8637 | 11527 | n/a | n/a | 14417 | 17307
3.2 TB | 51.2 TB | FB Sm Ext | 552771 | 737753 | n/a | n/a | 922733 | 1107703
3.2 TB | 51.2 TB | CKD Lg Ext | 9687 | 12928 | n/a | n/a | 16170 | 19412
3.2 TB | 51.2 TB | CKD Sm Ext | 513414 | 685225 | n/a | n/a | 857029 | 1028843
3.84 TB | 61.4 TB | FB Lg Ext | 10371 | 13839 | n/a | n/a | 17308 | 20776
3.84 TB | 61.4 TB | FB Sm Ext | 663766 | 885747 | n/a | n/a | 1107725 | 1329703
3.84 TB | 61.4 TB | CKD Lg Ext | 11632 | 15522 | n/a | n/a | 19412 | 23302
3.84 TB | 61.4 TB | CKD Sm Ext | 616506 | 822682 | n/a | n/a | 1028848 | 1235028
7.68 TB | 123 TB | FB Lg Ext | n/a | n/a | n/a | n/a | 34650 | 41587
7.68 TB | 123 TB | FB Sm Ext | n/a | n/a | n/a | n/a | 2217663 | 2661631
7.68 TB | 123 TB | CKD Lg Ext | n/a | n/a | n/a | n/a | 38863 | 46643
7.68 TB | 123 TB | CKD Sm Ext | n/a | n/a | n/a | n/a | 2059760 | 2472118
15.36 TB | 246 TB | FB Lg Ext | n/a | n/a | n/a | n/a | 68980 | 82782
15.36 TB | 246 TB | FB Sm Ext | n/a | n/a | n/a | n/a | 4414735 | 5298103
15.36 TB | 246 TB | CKD Lg Ext | n/a | n/a | n/a | n/a | 77365 | 92846
15.36 TB | 246 TB | CKD Sm Ext | n/a | n/a | n/a | n/a | 4100392 | 4920882
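To translate the extent counts in Table 21 into approximate usable capacity, multiply by the extent size. The sketch below is a minimal illustration (the helper name is not from this guide) that assumes the FB extent sizes stated earlier in this chapter; CKD extents would be converted with 1113 or 21 cylinders instead.

```python
def fb_rank_capacity_gb(extents: int, extent_size: str = "Lg") -> float:
    """Rough usable capacity of one FB rank, from a Table 21 extent count.

    Assumes large FB extents of 1 GB and small FB extents of 16 MB
    (1 GB = 1024 MB is assumed for this rough planning estimate).
    """
    return extents * (1.0 if extent_size == "Lg" else 16 / 1024)

# 800 GB drives, RAID-6 (6+P+Q) rank from Table 21:
print(fb_rank_capacity_gb(4300))                 # 4300.0 GB (large extents)
print(round(fb_rank_capacity_gb(275239, "Sm")))  # ~4301 GB (small extents)
```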
I/O adapter features
You must select the I/O adapter features that you want for your storage system.
The I/O adapter features are separated into the following categories:
• I/O enclosures
• Host adapters
• Host adapters Fibre Channel cables
• zHyperLink adapter
• zHyperLink cables
• Transparent cloud tiering adapters
• Flash RAID adapters
I/O enclosures
I/O enclosures are required for your storage system configuration.
The I/O enclosures hold the I/O adapters and provide connectivity between the I/O adapters and the
storage processors. I/O enclosures are ordered and installed in pairs.
The I/O adapters in the I/O enclosures can be either device or host adapters. Each I/O enclosure pair can
support up to four device adapters (two pairs), and up to eight host adapters (not to exceed 32 host
adapter ports).
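The per-pair limits described above can be checked with a small validation sketch. The helper below is illustrative only and simply encodes the three limits stated in this section.

```python
def io_enclosure_pair_fits(device_adapters: int, host_adapters: int,
                           host_adapter_ports: int) -> bool:
    """Check a proposed adapter mix against the limits for one I/O enclosure
    pair: up to four device adapters, up to eight host adapters, and no more
    than 32 host adapter ports."""
    return (device_adapters <= 4 and host_adapters <= 8
            and host_adapter_ports <= 32)

# Example: two device adapter pairs plus eight 4-port host adapters.
print(io_enclosure_pair_fits(device_adapters=4, host_adapters=8,
                             host_adapter_ports=32))  # True
```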
Feature codes for I/O enclosures
Use this feature code to order I/O enclosures for your storage system.
The I/O enclosure feature includes two I/O enclosures. This feature supports up to two device adapter
pairs, up to four host adapters with eight ports, and up to eight host adapters with four ports.
Table 22. Feature codes for I/O enclosures
Feature code | Description
1303 | I/O enclosure pair for PCIe group 3
Feature codes for I/O cables
I/O cables connect device and host adapters in an I/O enclosure pair to the processor. Use these feature
codes to order an I/O cable if your storage system is a DS8950F Agility Class that includes the model E96
expansion frame.
Table 23. Feature codes for PCIe cables
Feature code | Cable group | Description | Models
1340 | PCIe3 cable set for adjacent expansion frame | For an adjacent expansion frame, one per I/O enclosure pair is required | E96
1341 | 20 m (65.6 ft) PCIe3 cable set for remote expansion frame | For a remote expansion frame, one per I/O enclosure pair is required | E96
Fibre Channel (SCSI-FCP and FICON) host adapters and cables
You can order Fibre Channel host adapters for your storage-system configuration.
The Fibre Channel host adapters enable the storage system to attach to Fibre Channel (SCSI-FCP) and
FICON servers, and SAN fabric components. They are also used for remote mirror and copy control paths
between DS8000 series storage systems. Fibre Channel host adapters are installed in an I/O enclosure.
Adapters are either 4-port 16 Gbps or 4-port 32 Gbps.
Supported protocols include the following types:
• SCSI-FCP upper layer protocol (ULP) on point-to-point or fabric topologies.
• FICON ULP on point-to-point and fabric topologies.
Notes:
1. SCSI-FCP and FICON are supported simultaneously on the same adapter, but not on the same port.
2. For highest availability, ensure that you add adapters in pairs.
A Fibre Channel cable is required to attach each Fibre Channel adapter port to a server or fabric
component port. The Fibre Channel cables can be 50 or 9 micron, OM3 or higher fiber grade, single-mode
or multimode cables.
Feature codes for Fibre Channel host adapters
Use these feature codes to order Fibre Channel host adapters for your storage system.
A maximum of eight Fibre Channel host adapters can be ordered with a DS8910F Rack Mounted model
993.
A maximum of 16 Fibre Channel host adapters can be ordered with a DS8910F model 994, DS8950F
model 996, or DS8950F model E96.
Table 24. Feature codes for Fibre Channel host adapters
Feature code | Description | Receptacle type
3353 | 4-port, 16 Gbps shortwave FCP and FICON host adapter, PCIe | LC
3355 | 4-port, 32 Gbps longwave FCP and FICON host adapter, PCIe | LC
3453 | 4-port, 16 Gbps longwave FCP and FICON host adapter, PCIe | LC
3455 | 4-port, 32 Gbps shortwave FCP and FICON host adapter, PCIe | LC
Feature codes for Fibre Channel cables
Use these feature codes to order Fibre Channel cables to connect Fibre Channel host adapters to your
storage system. Take note of the distance capabilities for cable types.
Feature codes for overhead cable management (top-exit bracket)
Use this feature code to order cable management for overhead cabling (top exit bracket) for your model
994, 996, or E96.
Note: In addition to the top-exit bracket, one ladder (feature code 1101) must also be purchased for a
site where the top-exit bracket for fiber cable feature is used. The ladder is used to ensure safe access
when your storage system is serviced with a top-exit bracket feature installed.
Table 26. Feature codes for the overhead cable (top-exit bracket)
Feature code | Description
1401 | Top-exit bracket for fiber cable
zHyperLink adapters and cables
You can order zHyperLink adapters for your storage system configuration.
zHyperLink connections with IBM Z hosts provide low latency for random reads and writes.
Note: The z-synergy Services license is required for zHyperLink.
Feature codes for zHyperLink I/O adapters
Use these feature codes to order zHyperLink I/O adapters.
Each zHyperLink connection requires a zHyperLink I/O adapter to connect the zHyperLink cable to the
storage system. Each zHyperLink I/O adapter card has one port, but you must order them in sets of two.
Table 27. Feature codes for zHyperLink I/O adapters
Feature code | Description | Models
3500 | 1-port zHyperLink I/O adapter card | All models
Feature codes for zHyperLink cables
Use these feature codes to order cables to connect zHyperLink I/O adapters to the storage system. Take
note of the distance capabilities for cable types.
Available zHyperLink cable lengths include 3 m (9.8 ft), 40 m (131 ft), and 150 m (492 ft). The cables are
multimode with MTP connectors; an option is available for a model 993 installed in an existing model LR1
or ZR1 rack.
Feature codes for Transparent cloud tiering adapters
Use these feature codes to order adapter pairs to enhance Transparent cloud tiering connectivity for your
storage system.
Transparent cloud tiering connectivity can be enhanced with 10 Gbps adapter pairs to improve bandwidth
for a native cloud storage tier in IBM Z environments.
Table 29. Feature codes for Transparent cloud tiering adapter pairs
The available features include copper and shortwave adapter pairs for the 4U processor node.
Feature codes for flash RAID adapters
Use these feature codes to order flash RAID adapters.
You must order a flash RAID adapter pair for each High Performance Flash Enclosure Gen2 pair.
Table 30. Feature codes for flash RAID adapters
Feature code | Description | Models
1604 | Flash RAID adapter pair | Models 993, 994, 996, and E96
Processor node features
These features specify the number and type of core processors in the processor node. All base frames
(model 993, 994, and 996) contain two processor enclosures (POWER9 servers) that contain the
processors and memory that drive all functions in the storage system.
Feature codes for processor licenses
Use these processor-license feature codes to plan for and order processor memory for your storage
system. You can order only one processor license per system.
Table 31. Feature codes for processor licenses
Feature code | Description | Corequisite feature code for memory
4341 | 8-core POWER9 processor | 4450 or 4451
4342 | 10-core POWER9 processor | 4452
4343 | Second 10-core POWER9 processor feature for 20-core configuration | 4453 or 4454
Processor memory features
These features specify the amount of memory that you need depending on the processors in the storage
system.
Feature codes for system memory
Use these feature codes to order system memory for your storage system.
Note: Memory is not the same as cache. The amount of cache is less than the amount of available
memory. See the DS8000 Storage Management GUI.
Table 32. Feature codes for system memory
Feature code | Description | Corequisite feature code for processor license
4450 | 192 GB system memory | 4341 (8-core)
4451 | 512 GB system memory | 4341 (8-core)
4452 | 512 GB system memory | 4342 (10-core)
4453 | 1024 GB system memory | 4342 and 4343 (20-core)
4454 | 2048 GB system memory | 4342 and 4343 (20-core)
Power features
You must specify the power features to include on your storage system.
The power features are separated into the following categories:
• Power cords
• Input voltage
Power cords
A pair of power cords (also known as power cables) is required for each base or expansion frame.
The DS8000 series has redundant primary power supplies. For redundancy, ensure that each power cord
to the frame is supplied from an independent power source.
Feature codes for power cords
Use these feature codes to order power cords for DS8900F base or expansion racks. Each feature code
includes two power cords. Ensure that you meet the requirements for each power cord and connector
type that you order.
Important: A minimum of one safety-approved ladder (feature code 1101) must be available at each
installation site when the top exit bracket (feature code 1401) is specified for overhead cabling and when
the maximum height of the overhead power source is 10 ft from the ground level. This ladder is a
requirement for storage-system installation and service.
Attention: If input voltage for the country uses a wye circuit, use the appropriate main power
cables for EMEA (Europe, Middle East, and Africa) and
Asia/Pacific. If input voltage for the country uses a delta circuit, use the appropriate main power
cables for United States, Canada, and Latin America. For more information about electric currents
for various countries, see the International Trade Administration website (http://trade.gov/
publications/abstracts/electric-current-abroad-2002.asp).
Note: The IEC 60309 standard commercial/industrial pin and sleeve power connectors are often
abbreviated "IEC '309" or simply "309 wall plug".
Ensure that you are familiar with the configuration rules and feature codes before you order power
features.
When you order power cord features, the following rules apply.
• You must order a minimum of one power cord feature for each frame. Each feature code represents a
pair of power cords (two cords).
• You must select the power cord that is appropriate to the input voltage and geographic region where the
storage system is located.
Other conguration features
Features are available for shipping and setting up the storage system.
You can select shipping and setup options for the storage system. The following list identifies optional
feature codes that you can specify to customize or to receive your storage system.
• BSMI certificate (Taiwan)
• Shipping weight reduction option
BSMI certificate (Taiwan)
The BSMI certificate for Taiwan option provides the required Bureau of Standards, Metrology, and
Inspection (BSMI) ISO 9001 certification documents for storage system shipments to Taiwan.
If the storage system that you order is shipped to Taiwan, you must order this option for each model that
is shipped.
Feature code for BSMI certification documents (Taiwan)
Use this feature code to order the Bureau of Standards, Metrology, and Inspection (BSMI) certification
documents that are required when the storage system is shipped to Taiwan.
Table 34. Feature code for the BSMI certification documents (Taiwan)
Feature code | Description
0400 | BSMI certification documents
Shipping weight reduction
Order the shipping weight reduction option to receive delivery of a storage system in multiple shipments.
If your site has delivery weight constraints, IBM offers a shipping weight reduction option that ensures
the maximum shipping weight of the initial frame shipment does not exceed 909 kg (2000 lb). The frame
weight is reduced by removing selected components, which are shipped separately.
The IBM technical service representative installs the components that were shipped separately during
the storage system installation. This feature increases storage system installation time, so order it only if
it is required.
Feature code for shipping weight reduction
Use this feature code to order the shipping-weight reduction option for your storage system.
This feature ensures that the maximum shipping weight of the base rack or expansion rack does not
exceed 909 kg (2000 lb) each. Packaging adds 120 kg (265 lb).
Table 35. Feature code for shipping weight reduction
Feature code | Description | Models
0200 | Shipping weight reduction | All
Chapter 5. Licensed functions
Licensed functions are the operating system and functions of the storage system. Required features and
optional features are included.
IBM authorization for licensed functions is purchased as 533x or 904x machine function authorizations.
However, the licensed functions are storage models. For example, the Base Function license is listed as a
533x or 904x model FF8. The 533x or 904x machine function authorization features are for billing
purposes only.
The following licensed functions are available:
Base Function
The Base Function license is required for each storage system.
z-synergy Services
The z-synergy Services include z/OS licensed features that are supported on the storage system.
Copy Services
Copy Services features help you implement storage solutions to keep your business running 24 hours
a day, 7 days a week by providing data duplication, data migration, and disaster recovery functions.
Copy Services Manager on Hardware Management Console
The Copy Services Manager on Hardware Management Console (CSM on HMC) license enables IBM
Copy Services Manager to run on the Hardware Management Console, which eliminates the need to
maintain a separate server for Copy Services functions.
Licensed function indicators
Each licensed function indicator feature that you order on a base frame enables that function at the
system level.
After you receive and apply the feature activation codes for the licensed function indicators, the licensed
functions are enabled for you to use. The licensed function indicators are also used for maintenance
billing purposes.
Note: Retrieving feature activation codes is part of managing and activating your licenses. Before you can
logically configure your storage system, you must first manage and activate your licenses.
Each licensed function indicator requires a corequisite 904x function authorization. Function
authorization establishes the extent for the licensed function before the feature activation code is
provided. Each function authorization applies only to the specic storage system (by serial number) for
which it was acquired. The function authorization cannot be transferred to another storage system (with a
different serial number).
License scope
Licensed functions are activated and enforced within a dened license scope.
License scope refers to the following types of storage and types of servers with which the function can be
used:
Fixed block (FB)
The function can be used only with data from Fibre Channel attached servers. The Base Function,
Copy Services, and Copy Services Manager on the Hardware Management Console licensed functions
are available within this scope.
Count key data (CKD)
The function can be used only with data from FICON attached servers. The Copy Services, Copy
Services Manager on the Hardware Management Console, and z-synergy Services licensed functions
are available within this scope.
Both FB and CKD (ALL)
The function can be used with data from all attached servers. The Base Function, Copy Services, and
Copy Services Manager on the Hardware Management Console licensed functions are available within
this scope.
Some licensed functions have multiple license scope options, while other functions have only a single
license scope.
You do not specify the license scope when you order function authorization feature numbers. Feature
numbers establish only the extent of the authorization (in terms of usable capacity), regardless of the
storage type. However, if a licensed function has multiple license scope options, you must select a license
scope when you initially retrieve the feature activation codes for your storage system. This activity is
performed by using the IBM Data storage feature activation (DSFA) website (www.ibm.com/storage/
dsfa) .
Note: Retrieving feature activation codes is part of managing and activating your licenses. Before you can
logically configure your storage system, you must first manage and activate your licenses.
When you use the DSFA website to change the license scope after a licensed function is activated, a new
feature activation code is generated. When you install the new feature activation code into the storage
system, the function is activated and enforced by using the newly selected license scope. The increase in
the license scope (changing FB or CKD to ALL) is a nondisruptive activity. A reduction of the license scope
(changing ALL to FB or CKD) is a disruptive activity, which takes effect at the next restart.
Ordering licensed functions
After you decide which licensed functions to use with your storage system, you are ready to order the
functions.
About this task
Licensed functions are purchased as function authorization features.
To order licensed functions, use the following general steps:
Procedure
1. Required. Order the Base Function license to support the total usable capacity of your storage system.
2. Optional. Order the z-synergy Services license to support the usable capacity of all arrays that are
formatted as CKD.
3. Optional. Order the Copy Services license to support the total usable capacity of all volumes that are
involved in one or more copy services functions.
Note: The Copy Services license is based on the usable capacity of volumes and not on physical
capacity. If overprovisioning is used on the DS8900F with a significant amount of Copy Services
functionality, then the Copy Services license needs only to be equal to the total array usable capacity
(even if the logical provisioned capacity of volumes in Copy Services is greater). For example, if the
total provisioned usable capacity of a DS8900F is 100 TB but there are 200 TB of thin-provisioned
volumes in Metro Mirror, then only a 100 TB Copy Services license is needed.
4. Optional. Order the Copy Services Manager on the Hardware Management Console license that
supports the total usable capacity of all volumes that are involved in one or more copy services
functions.
Rules for ordering licensed functions
A Base Function license is required for every base frame. All other licensed functions are optional and
must have a capacity that is equal to or less than the Base Function license.
For all licensed functions, you can combine feature codes to order the exact capacity that you need. For
example, if you require 160 TB of Base Function license capacity, order 10 of feature code 8151 (10 TB
each up to 100 TB capacity) and 4 of feature code 8152 (15 TB each, for an extra 60 TB).
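As an illustration of combining feature codes, the 160 TB example above can be computed as follows. This is a minimal sketch that assumes the two increments described in this section (10 TB features covering the first 100 TB, then 15 TB features for capacity beyond that); it is not an ordering tool, and the feature code numbers vary by licensed function.

```python
import math

def license_feature_counts(required_tb: float) -> dict:
    """Split a required license capacity into 10 TB and 15 TB feature counts.

    Assumes 10 TB features cover the first 100 TB and 15 TB features cover
    capacity beyond 100 TB (for example, 8151 and 8152 for Base Function).
    """
    count_10tb = math.ceil(min(required_tb, 100) / 10)
    count_15tb = math.ceil(max(required_tb - 100, 0) / 15)
    return {"10 TB features": count_10tb, "15 TB features": count_15tb}

print(license_feature_counts(160))  # {'10 TB features': 10, '15 TB features': 4}
```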
When you calculate usable capacity for the Copy Services license, use the number of extents of each
volume that is involved in a Copy Services relationship and multiply by the size of each extent.
When you calculate physical capacity, consider the capacity across the entire storage system, including
the base frame and any expansion frames. To calculate the physical capacity, use the following table to
determine the total size of each regular drive feature in your storage system, and then add all the values.
Table 36. Total physical capacity for drive-set features
Drive size | Total physical capacity | Drives per feature
800 GB flash drives | 12.8 TB | 16
1.6 TB flash drives | 25.6 TB | 16
1.92 TB flash drives | 30.7 TB | 16
3.2 TB flash drives | 51.2 TB | 16
3.84 TB flash drives | 61.4 TB | 16
7.68 TB flash drives | 122.9 TB | 16
15.36 TB flash drives | 245.8 TB | 16
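Using the values in Table 36, the physical capacity calculation can be scripted. The following sketch simply multiplies each ordered drive-set feature by its capacity and sums the results; the drive-set mix in the example is hypothetical.

```python
# Total physical capacity per drive-set feature (TB), from Table 36.
DRIVE_SET_CAPACITY_TB = {
    "800 GB": 12.8, "1.6 TB": 25.6, "1.92 TB": 30.7, "3.2 TB": 51.2,
    "3.84 TB": 61.4, "7.68 TB": 122.9, "15.36 TB": 245.8,
}

def total_physical_capacity_tb(drive_set_counts: dict) -> float:
    """Sum the physical capacity of all ordered drive-set features."""
    return sum(DRIVE_SET_CAPACITY_TB[size] * count
               for size, count in drive_set_counts.items())

# Hypothetical order: two 3.84 TB drive sets and one 7.68 TB drive set.
print(total_physical_capacity_tb({"3.84 TB": 2, "7.68 TB": 1}))  # 245.7 TB
```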
Rules for removing a licensed function
The initial enablement of any optional DS8000 licensed function is a concurrent activity (assuming that
the appropriate level of microcode is installed on the machine for the specific function). The removal of a
DS8000 licensed function is a nondisruptive activity but takes effect at the next machine IML.
If you have a licensed function and no longer want to use it, you can deactivate the license in one of the
following ways:
• Order an inactive or disabled license and replace the active license activation key with the new inactive
license activation key at the IBM Data storage feature activation (DSFA) website (www.ibm.com/
storage/dsfa).
• Go to the DSFA website and change the assigned value from the current number of terabytes (TB) to 0
TB. This value, in effect, makes the feature inactive. If this change is made, you can go back to DSFA and
reactivate the feature, up to the previously purchased level, without having to repurchase the feature.
Regardless of which method is used, the deactivation of a licensed function is a nondisruptive activity, but
takes effect at the next machine IML.
Note: Although you do not need to specify how the licenses are to be applied when you order them, you
must allocate the licenses to the storage image when you obtain your license keys on the IBM Data
storage feature activation (DSFA) website (www.ibm.com/storage/dsfa).
Base Function license
The Base Function license provides essential functions for your storage system. A Base Function license
is required for each storage system.
The Base Function license is available for the following license scopes: FB and ALL (both FB and CKD).
The Base Function license includes the following features:
• Encryption Authorization
• Easy Tier
• Operating Environment License (OEL)
• Thin Provisioning
The Base Function license feature codes are ordered in increments up to a specific capacity. For example,
if you require 160 TB of capacity, order 10 of feature code 8151 (10 TB each up to 100 TB capacity) and 4
of feature code 8152 (15 TB each, for an extra 60 TB).
The Base Function license includes the following feature codes.
Table 37. Base Function license feature codes
Feature code | Feature code for licensed function indicator
The Base Function license authorizes you to use the model configuration at a specific capacity level. The
Base Function license must cover the full physical capacity of your storage system, which includes the
physical capacity of any expansion frames within the storage system. The license capacity must cover
both open systems data (fixed block data) and IBM Z data (count key data). All other licensed functions
must have a capacity that is equal to or less than the Base Function license.
Note: Your storage system cannot be logically configured until you activate the Base Function license. On
activation, drives can be logically configured up to the extent of the Base Function license authorization
level.
As you add more drives to your storage system, you must increase the Base Function license
authorization level for the storage system by purchasing more license features. Otherwise, you cannot
logically configure the additional drives for use.
Encryption Authorization
The Encryption Authorization feature provides data encryption by using IBM Full Disk Encryption (FDE)
and key managers, such as IBM Security Key Lifecycle Manager.
The Encryption Authorization feature secures data at rest and offers a simple, cost-effective solution for
securely erasing any disk drive that is being retired or re-purposed (cryptographic erasure). The storage
system uses disks that have FDE encryption hardware and can perform symmetric encryption and
decryption of data at full disk speed with no impact on performance.
IBM Easy Tier
Support for IBM Easy Tier is available with the IBM Easy Tier feature.
The Easy Tier feature enables the following modes:
• Easy Tier: automatic mode
• Easy Tier: manual mode
The feature enables the following functions for the storage type:
• Easy Tier application
• Easy Tier heat map transfer
• The capability to migrate volumes for logical volumes
• The reconfigure extent pool function of the extent pool
• The dynamic extent relocation with an Easy Tier managed extent pool
Operating environment license
The operating environment model and features establish the extent (capacity) that is authorized to use
the product operating environment.
To determine the operating environment license support function, see “Machine types overview” on page
2.
Thin provisioning
Thin provisioning defines logical volume sizes that are larger than the usable capacity installed on the
system. The volume allocates capacity on an as-needed basis as a result of host-write actions.
The thin provisioning feature enables the creation of extent space efficient logical volumes. Extent space
efficient volumes are supported for FB and CKD volumes and are supported for all Copy Services
functionality, including FlashCopy targets, where they provide a space efficient FlashCopy capability.
z-synergy Services license
The z-synergy Services license includes z/OS® features that are supported on the storage system.
The z-synergy Services license is available for the following license scope: CKD.
The z-synergy Services license includes the following features:
• High Performance FICON for z Systems
• HyperPAV
• Parallel Access Volumes (PAV)
• Transparent cloud tiering
• z/OS Distributed Data Backup
• zHyperLink
• IBM Fibre Channel Endpoint Security
The z-synergy Services license also includes the ability to attach FICON channels.
The z-synergy Services license feature codes are ordered in increments up to a specific capacity. For
example, if you require 160 TB of capacity, order 10 of feature code 8351 (10 TB each up to 100 TB
capacity), and 4 of feature code 8352 (15 TB each, for an extra 60 TB).
The z-synergy Services license includes the feature codes listed in the following table.
A z-synergy Services license is required for only the usable capacity that is configured as count key
(CKD) arrays for use with IBM Z host systems.
Note: If z/OS Distributed Data Backup is being used on a system with no CKD arrays, a 10 TB z-synergy
Services license must be ordered to enable the FICON attachment functionality.
High Performance FICON for z Systems
High Performance FICON for z Systems (zHPF) is an enhancement to the IBM FICON architecture to
offload I/O management processing from the z Systems channel subsystem to the DS8900F Host Adapter
and controller.
zHPF is an optional feature of z Systems server and of the DS8900F. Recent enhancements to zHPF
include Extended Distance Facility zHPF List Pre-fetch support for IBM DB2® and utility operations, and
zHPF support for sequential access methods. All of DB2 I/O is now zHPF-capable.
IBM HyperPAV
IBM HyperPAV associates the volumes with either an alias address or a specified base logical volume
number. When a host system requests IBM HyperPAV processing and the processing is enabled, aliases
on the logical subsystem are placed in an IBM HyperPAV alias access state on all logical paths with a
given path group ID.
Parallel Access Volumes
The parallel access volumes (PAV) features establish the extent of IBM authorization for the use of the
parallel access volumes function.
Parallel Access Volumes (PAVs), also referred to as aliases, provide your system with access to volumes
in parallel when you use an IBM Z host.
A PAV capability represents a significant performance improvement by the storage unit over traditional
I/O processing. With PAVs, your system can access a single volume from a single host with multiple
concurrent requests.
Transparent cloud tiering
Transparent cloud tiering provides a native cloud storage tier for IBM Z environments. Transparent cloud
tiering moves data directly from the storage system to cloud object storage, without sending data through
the host.
Transparent cloud tiering provides cloud object storage (public, private, or on-premises) as a secure,
reliable, transparent storage tier that is natively integrated with the storage system. Transparent cloud
tiering on the storage system is fully integrated with DFSMShsm, which reduces CPU utilization on the
host when you are migrating and recalling data in cloud storage. You can use the IBM Z host to manage
transparent cloud tiering and attach metadata to cloud objects.
The storage system supports the OpenStack Swift and Amazon S3 APIs. The storage system also
supports the IBM TS7700 as an object storage target and the following cloud service providers:
• Amazon S3
• IBM Cloud Object Storage
• OpenStack Swift Based Private Cloud
z/OS Distributed Data Backup
z/OS Distributed Data Backup (zDDB) is a licensed feature on the base frame that allows hosts, which are
attached through a FICON interface, to access data on fixed block (FB) volumes through a device address
on FICON interfaces.
If zDDB is installed and enabled and a volume group type specifies FICON interfaces, this volume
group has implicit access to all FB logical volumes that are configured in addition to all CKD volumes
specified in the volume group. Then, with appropriate software, a z/OS host can complete backup and
restore functions for FB logical volumes that are configured on a storage system image for open systems
hosts.
zHyperLink
zHyperLink is a short distance link technology that is designed for up to 10 times lower latency than zHPF.
zHyperLink can speed up transaction processing and improve active log throughput.
zHyperLink is intended to complement FICON technology to accelerate I/O requests that are typically
used for transaction processing.
IBM Fibre Channel Endpoint Security
Use IBM Fibre Channel Endpoint Security to establish authenticated communication and encryption of
data in flight for Fibre Channel connections between an IBM z15 host and the storage system. The
connections are secured by Fibre Channel security protocols and key server authentication that uses
communication certificates. If both the host and storage system use a connection with Fibre Channel
ports that support encryption, the connection will transmit encrypted data between the ports.
Copy Services license
Copy Services features help you implement storage solutions to keep your business running 24 hours a
day, 7 days a week by providing data duplication, data migration, and disaster recovery functions. The
Copy Services license is based on usable capacity of the volumes involved in Copy Services functionality.
The Copy Services license is available for the following license scopes: FB and ALL (both FB and CKD).
The Copy Services license includes the features that are described in the sections that follow.
The Copy Services license feature codes are ordered in increments up to a specific capacity. For example,
if you require 160 TB of capacity, order 10 of feature code 8251 (10 TB each up to 100 TB capacity), and
4 of feature code 8252 (15 TB each, for an extra 60 TB).
The Copy Services license includes the following feature codes.
Table 39. Copy Services license feature codes
Feature code | Feature code for licensed function indicator
The following ordering rules apply when you order the Copy Services license:
• The Copy Services license should be ordered based on the total usable capacity of all volumes involved
in one or more Copy Services relationships.
• The licensed authorization must be equal to or less than the total usable capacity allocated to the
volumes that participate in Copy Services operations.
• You must purchase features for both the source (primary) and target (secondary) storage system.
Remote mirror and copy functions
The Copy Services license establishes the extent of IBM authorization for the use of the remote mirror
and copy functions on your storage system.
The following functions are included:
• Metro Mirror
• Global Mirror
• Global Copy
• Metro/Global Mirror
• Multiple Target PPRC
FlashCopy function (point-in-time copy)
FlashCopy creates a copy of a source volume on the target volume. This copy is called a point-in-time
copy.
When you initiate a FlashCopy operation, a FlashCopy relationship is created between a source volume
and target volume. A FlashCopy relationship is a "mapping" of the FlashCopy source volume and a
FlashCopy target volume. This mapping allows a point-in-time copy of that source volume to be copied to
the associated target volume. The FlashCopy relationship exists between this volume pair from the time
that you initiate a FlashCopy operation until the storage unit copies all data from the source volume to the
target volume or you delete the FlashCopy relationship, if it is a persistent FlashCopy.
Safeguarded Copy
The Safeguarded Copy feature, available with the Copy Services license, creates backups of data that you
can restore to the source volume or a different volume.
The Safeguarded Copy feature creates safeguarded backups that are not accessible by the host system
and protects these backups from corruption that can occur in the production environment. You can define
a Safeguarded Copy schedule to create multiple backups on a regular basis, such as hourly or daily. You
can also restore a backup to the source volume or to a different volume. A backup contains the same
metadata as the safeguarded source volume.
Safeguarded Copy can create backups with more frequency and capacity in comparison to FlashCopy
volumes. The creation of safeguarded backups also impacts performance less than the multiple target
volumes that are created by FlashCopy.
With backups that are outside of the production environment, you can use the backups to restore your
environment back to a specified point in time. You can also extract and restore specific data from the
backup or use the backup to diagnose production issues.
You cannot delete a safeguarded source volume before the safeguarded backups are deleted. The
maximum size of a backup is 16 TB.
z/OS Global Mirror
z/OS Global Mirror (previously known as Extended Remote Copy or XRC) provides a long-distance remote
copy solution across two sites for open systems and IBM Z data with asynchronous technology.
z/OS Metro/Global Mirror Incremental Resync
z/OS Metro/Global Mirror Incremental Resync (RMZ) is an enhancement for z/OS Global Mirror. z/OS
Metro/Global Mirror Incremental Resync can eliminate the need for a full copy after a HyperSwap
situation in 3-site z/OS Global Mirror configurations.
The storage system supports z/OS Global Mirror that is a 3-site mirroring solution that uses IBM System
Storage Metro Mirror and z/OS Global Mirror (XRC). The z/OS Metro/Global Mirror Incremental Resync
capability is intended to enhance this solution by enabling resynchronization of data between sites by
using only the changed data from the Metro Mirror target to the z/OS Global Mirror target after a
HyperSwap operation.
Copy Services Manager on the Hardware Management Console license
IBM Copy Services Manager facilitates the use and management of Copy Services functions such as the
remote mirror and copy functions (Metro Mirror and Global Mirror) and the point-in-time function
(FlashCopy). IBM Copy Services Manager is available on the Hardware Management Console (HMC),
which eliminates the need to maintain a separate server for Copy Services functions.
The Copy Services Manager on Hardware Management Console (CSM on HMC) license is available for the
following license scopes: FB and ALL (both FB and CKD).
The CSM on HMC license includes the following feature codes.
Feature code | Feature code for licensed function indicator
8451 | CSM on HMC - active
Chapter 6. Delivery and installation requirements
You must ensure that you properly plan for the delivery and installation of your storage system.
This information provides the following planning information for the delivery and installation of your
storage system:
• Planning for delivery of your storage system
• Planning the physical installation site
• Planning for power requirements
• Planning for network and communication requirements
For more information about the equipment and documents that IBM includes with storage system
shipments, see Appendix C, “IBM equipment and documents ,” on page 127.
Delivery requirements
Before you receive your storage system shipment, ensure that the final installation site meets all delivery
requirements.
Attention: Customers must prepare their environments to accept the storage system based on
this planning information, with assistance from an IBM Advanced Technical Services (ATS)
representative or a technical service representative. The final installation site within the computer
room must be prepared before the equipment is delivered. If the site cannot be prepared before
the delivery time, customers must make arrangements to have the professional movers return to
finish the transportation later. Only professional movers can transport the equipment. The
technical service representative can minimally reposition the frame at the installation site, as
needed to complete required service actions. Customers are also responsible for using
professional movers in the case of equipment relocation or disposal.
Acclimation
Server and storage equipment (racks and frames) must be gradually acclimated to the surrounding
environment to prevent condensation.
When server and storage equipment (racks and frames) is shipped in a climate where the outside
temperature is below the dew point of the destination (indoor location), there is a possibility that water
condensation can form on the cooler inside and outside surfaces of the equipment when the equipment is
brought indoors.
Sufficient time must be allowed for the shipped equipment to gradually reach thermal equilibrium with
the indoor environment before you remove the shipping bag and energize the equipment. Follow these
guidelines to properly acclimate your equipment:
• Leave the system in the shipping bag. If the installation or staging environment allows it, leave the
product in the full package to minimize condensation on or within the equipment.
• Allow the packaged product to acclimate for 24 hours.¹ If there are visible signs of condensation (either
external or internal to the product) after 24 hours, acclimate the system without the shipping bag for an
additional 12 - 24 hours or until no visible condensation remains.
• Acclimate the product away from perforated tiles or other direct sources of forced air convection to
minimize excessive condensation on or within the equipment.
¹ Unless otherwise stated by product-specific installation instructions.
Note: Condensation is a normal occurrence, especially when you ship equipment in cold-weather
climates. All IBM® products are tested and verified to withstand condensation that is produced under
these circumstances. When sufcient time is provided to allow the hardware to gradually acclimate to the
indoor environment, there should be no issues with long-term reliability of the product.
You must ensure that your loading dock and receiving area can support the weight and dimensions of the
packaged storage system shipments.
You receive at least two, and up to three, shipping containers for each model that you order. You always
receive the following items:
• For model 993, a container with the storage system modules, or for models 994, 996, and E96, a
container with the storage system frame. In the People's Republic of China (including Hong Kong S.A.R.
of China), India, and Brazil, this container has a plywood front door on the package, with a corrugated
paperboard outer wrap. In all other countries, this container is a pallet that is covered by a corrugated
fiberboard (cardboard) cover.
• A container with the remaining components, such as power cords, CDs, and other ordered features or
peripheral devices for your storage system.
Table 41 on page 80 shows the final packaged dimensions and maximum packaged weight of the
storage system frame shipments.
To calculate the weight of your total shipment, add the weight of each frame container and the weight of
one ship group container for each frame.
Table 41. Packaged dimensions and weight for storage system frames (all countries)
Container | Packaged dimensions | Maximum packaged weight
DS8910F Rack Mounted model 993 | Height 1.49 m (58.7 in.), Width 1.05 m (41.3 in.), Depth 1.30 m (51.2 in.) | 295 kg (650 lb)
DS8910 base frame model 994 | Height 2.22 m (87.7 in.), Width 1 m (39.4 in.), Depth 1.50 m (59.1 in.) | 762 kg (1680 lb)
DS8950 base frame model 996 | Height 2.22 m (87.7 in.), Width 1 m (39.4 in.), Depth 1.50 m (59.1 in.) | 793 kg (1748 lb)
DS8950 expansion frame model E96 | Height 2.22 m (87.7 in.), Width 1 m (39.4 in.), Depth 1.50 m (59.1 in.) | 603 kg (1330 lb)
Receiving delivery
The shipping carrier is responsible for delivering and unloading the storage system as close to its final
destination as possible. You must ensure that your loading ramp and your receiving area can
accommodate your storage system shipment.
Before you begin
Ensure you read the following caution when you position the rack (model 994, 996, or E96). If you are
relocating the frame, ease it out of its current position and pull out the outriggers for the remaining major
part of the relocation. Roll the rack on its castors until you get close to its intended location. Keep the
supplemental outriggers in position as shown in the following image. When the rack is near the final
location, you can recede the outriggers back into the recessed position, flush with the outsides of the
rack. The outriggers are only intended to help move the rack and are not intended to support the rack in
its final location. To prevent unintended movement and ensure stability of the rack, you can put down the
leveler jacks.
About this task
Use the following steps to ensure that your receiving area and loading ramp can safely accommodate the
delivery of your storage system:
Procedure
1. Find out the packaged weight and dimensions of the shipping containers in your shipment.
2. Ensure that your loading dock, receiving area, and elevators can safely support the packaged weight
and dimensions of the shipping containers.
Note: You can order a weight-reduced shipment when a configured storage system exceeds the
weight capability of the receiving area at your site.
3. To compensate for the weight of the storage system shipment, ensure that the loading ramp at your
site does not exceed an angle of 10°. (See Figure 6 on page 82.)
Figure 6. Maximum tilt for a packed frame is 10°
Installation site requirements
You must ensure that the location where you plan to install your storage system meets all requirements.
Planning the model 993 rack configuration
Ensure that the rack where you plan to install your storage system meets the rack requirements.
About this task
When you are planning the rack for your DS8910F Rack Mounted storage system (model 993), you must
answer the following questions that relate to rack specifications and available space:
• Where are you installing the storage system? The model 993 is a rack mountable system. There are
three different rack scenarios:
– An existing IBM Z model ZR1
– An existing IBM LinuxONE Rockhopper II model LR1
– Other standard 19-inch wide rack that conforms to EIA 310D specifications:
- 19-inch EIA rails
- Minimum rail depth of 700 mm
- Maximum rail depth of 780 mm
• Does the rack in which you are installing the model 993 have adequate space to accommodate the
components? The rack must have a minimum of 15U contiguous space to mount the modules.
– The standard 19-inch wide rack installation (feature code 0939) supports an optional second High
Performance Flash Enclosure Gen2 pair (feature code 1605). If you plan to order the second High
Performance Flash Enclosure Gen2 pair, you must have an additional 4U contiguous space in your
standard 19-inch wide rack. The optional second High Performance Flash Enclosure Gen2 pair is not
available with the model ZR1 installation (feature code 0937) or the model LR1 installation (feature
code 0938).
– The standard 19-inch wide rack installation (feature code 0939) supports an optional 1U keyboard
and display (feature code 1765). If you plan to order the 1U keyboard and display, you must have an
additional 1U contiguous space in your standard 19-inch wide rack. For accessibility, the keyboard
and display must be mounted at a height of 15 - 46 inches. If you add the keyboard and display,
ensure that you provide adequate space to accommodate them. The optional 1U keyboard and
display are not available with the model ZR1 installation (feature code 0937) or the model LR1
installation (feature code 0938).
Planning for floor and space requirements
Ensure that the location where you plan to install your storage system meets space and floor
requirements. Decide whether your storage system is to be installed on a raised or nonraised floor.
About this task
When you are planning the location of your storage system, you must answer the following questions that
relate to floor types, floor loads, and space:
• What type of floor does the installation site have? The storage system can be installed on a raised or
nonraised floor.
• If the installation site has a raised floor, does the floor require preparation (such as cutting out tiles) to
accommodate cable entry into the system?
• Does the floor of the installation site meet floor-load requirements?
• Can the installation site accommodate the amount of space that is required by the storage system, and
does the space meet the following criteria?
– Weight distribution area that is needed to meet floor load requirements
– Service clearance requirements
• Does the installation site require overhead cable management for host fiber and power cables?
Procedure
Use the following steps to ensure that your planned installation site meets space and floor load
requirements:
1. Identify the base frame and expansion frames that are included in your storage system.
2. Decide whether to install the storage system on a raised or nonraised floor.
a) If the location has a raised floor, plan where the floor tiles must be cut to accommodate the cables.
b) If the location has a nonraised floor, resolve any safety problems, and any special equipment
considerations, caused by the location of cable exits and routing.
3. Determine whether the floor of the installation site meets the floor load requirements for your storage
system.
4. Calculate the amount of space to be used by your storage system.
a) Identify the total amount of space that is needed for your storage system by using the dimensions
of the frames and the weight distribution areas that are calculated in step “3” on page 83.
b) Ensure that the area around each frame and each storage system meets the service clearance
requirements.
Note: Any expansion frames in the storage system must be attached to the base frame on the right
side as you face the front of the storage system.
Installing on raised or nonraised floors
You can install your storage system on a raised or nonraised floor. Raised floors can provide better
cooling than nonraised floors.
Raised floor considerations
Installing your storage system on a raised floor provides the following benets:
• Improves operational efficiency and allows greater flexibility in the arrangement of equipment.
• Increases air circulation for better cooling.
• Protects the interconnecting cables and power receptacles.
• Prevents tripping hazards because cables can be routed underneath the raised floor.
When you install a raised floor, consider the following factors:
• The raised floor must be constructed of fire-resistant or noncombustible material.
• The raised-floor height must be at least 30.5 cm (12 in.). Clearance must be adequate to accommodate
interconnecting cables, Fibre Channel cable raceways, power distribution, and any piping that is present
under the floor. Floors with greater raised-floor heights allow for better equipment cooling.
• Fully configured, two-frame storage systems can weigh in excess of 2844 kg (6270 lbs). You must
ensure that the raised floor on which the storage system is to be installed is able to support this weight.
Contact the floor-tile manufacturer and a structural engineer to verify that the raised floor is safe to
support the concentrated loads equal to one third of the total weight of one frame. Under certain
circumstances such as relocation, it is possible that the concentrated loads can be as high as one half of
the total weight of one frame per caster. When you install two adjacent frames, it is possible that two
casters induce a total load as high as one third of the total weight of two adjacent frames.
• Depending on the type of floor tile, more supports (pedestals) might be necessary to maintain the
structural integrity of an uncut panel or to restore the integrity of a floor tile that is cut for cable entry or
air supply. Contact the floor-tile manufacturer and a structural engineer to ensure that the floor tiles
and pedestals can sustain the concentrated loads.
• Pedestals must be firmly attached to the structural (concrete) floor by using an adhesive.
• Seal raised-floor cable openings to prevent the escape of chilled air that is not used to directly cool
the equipment.
• Use noncombustible protective molding to eliminate sharp edges on all floor cutouts, to prevent
damage to cables and hoses, and to prevent casters from rolling into the floor cutout.
• Avoid the exposure of metal or highly conductive material to the walking surface when a metallic raised
floor structure is used. Such exposure is considered an electrical safety hazard.
• Concrete subfloors require treatment to prevent the release of dust.
• The use of a protective covering (such as plywood, tempered masonite, or plyron) is required to prevent
damage to floor tiles, carpeting, and tiles while equipment is being moved to or is relocated within the
installation site. When the equipment is moved, the dynamic load on the casters is greater than when
the equipment is stationary.
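As a rough illustration of the caster-load guidance above, the following Python sketch works through the three concentrated-load cases that are described. It assumes, for illustration only, a two-frame system of 2844 kg split evenly between frames; confirm the actual frame weights for your configuration and have a structural engineer validate the results.

    # Illustrative sketch of the concentrated caster loads described above.
    # Assumes a two-frame system of 2844 kg split evenly between frames;
    # substitute the actual frame weights for your configuration.

    TWO_FRAME_WEIGHT_KG = 2844.0
    ONE_FRAME_WEIGHT_KG = TWO_FRAME_WEIGHT_KG / 2    # illustrative assumption only

    # Normal operation: a caster can carry up to one third of one frame's weight.
    normal_caster_load_kg = ONE_FRAME_WEIGHT_KG / 3

    # Relocation: a single caster can momentarily carry up to one half of one
    # frame's weight.
    relocation_caster_load_kg = ONE_FRAME_WEIGHT_KG / 2

    # Two adjacent frames: two adjacent casters can together induce a load of up
    # to one third of the combined weight of both frames.
    adjacent_pair_load_kg = (2 * ONE_FRAME_WEIGHT_KG) / 3

    print(f"Normal per-caster load:     {normal_caster_load_kg:.0f} kg")
    print(f"Relocation per-caster load: {relocation_caster_load_kg:.0f} kg")
    print(f"Adjacent two-caster load:   {adjacent_pair_load_kg:.0f} kg")

Share figures such as these with the floor-tile manufacturer and the structural engineer when you verify the raised floor.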
Nonraised floor considerations
For environments with nonraised floors, an optional overhead cabling feature is available.
Follow the special considerations and installation guidelines as described in the topics about overhead
cable management.
When you install a storage system on a nonraised floor, consider the following factors:
• The use of a protective covering (such as plywood, tempered masonite, or plyron) is required to prevent
damage to floor and carpeting while equipment is being moved to or is relocated within the installation
site.
• Concrete floors require treatment to prevent the release of dust.
Overhead cable management (top-exit bracket)
Overhead cable management (top-exit bracket) is an optional feature that includes a top-exit bracket for
managing your Fibre cables. This feature is an alternative to the standard, floor-cable exit.
Using overhead cabling provides many of the cooling and safety benets that are provided by raised
flooring in a nonraised floor environment. Unlike raised-floor cabling, the installation planning, cable
length, and the storage-system location in relation to the cable entry point are critical to the successful
installation of the top-exit bracket.
Figure 7 on page 85 illustrates the location of the cabling for the top-exit bracket for fiber cable feature.
When you order the overhead-cable management feature, the feature includes clamping hardware and
internal cable routing brackets for rack 1 or rack 2. The following notes provide more information about
the color-coded cable routing and components in Figure 7 on page 85.
1 Customer Fibre Channel host cables. The Fibre Channel host cables, which are shown in red, are
routed from the top of the rack down to I/O enclosure host adapters.
2 Network Ethernet cable, power sequence cables, and customer analog phone line (if used). The
network Ethernet cable, in blue, is routed from the top of the rack to the rear rack connector. The rack
connector has an internal cable to the management console. The power sequence cables and private
network Ethernet cables (one gray and one black) for a partner storage system are also located here.
3 Mainline power cords. Two top-exit mainline power cords for each rack, which are shown in green,
are routed here.
Notes:
• A technical service representative tests the power sources. The customer is required to provide power
outlets (for connecting power cords) within the specified distance.
• Fibre Channel host cables are internally routed and connected by either the customer or by a technical
service representative.
• All remaining cables are internally routed and connected by a technical service representative.
Figure 7. Top exit feature installed (cable routing and top exit locations)
Feature codes for overhead cable management (top-exit bracket)
Use this feature code to order cable management for overhead cabling (top-exit bracket) for your model
994, 996, or E96.
Note: In addition to the top-exit bracket, one ladder (feature code 1101) must also be purchased for a
site where the top-exit bracket for fiber cable feature is used. The ladder is used to ensure safe access
when your storage system is serviced with a top-exit bracket feature installed.
Table 42. Feature codes for the overhead cable (top-exit bracket)

  Feature Code    Description
  1401            Top-exit bracket for fiber cable
Overhead cabling installation and safety requirements
Ensure that installation and safety requirements are met before your storage system is installed.
If the cables are too long, there is not enough room inside the rack to handle the extra length, and
excess cable might interfere with the service process, preventing concurrent repair. Consider the
following specifications and limitations before you order this feature (a placement-check sketch follows
this list):
• In contrast to the raised-floor power cords, which have a length from the tailgate to the connector of
about 4.9 m (16 ft), the length of the top-exit power cords is only 1.8 m (6 ft) from the top of the
storage system.
• Product safety requirements restrict the servicing of your overhead equipment to a maximum of 3 m (10
ft) from the floor. Therefore, your power source must not exceed 3 m (10 ft) from the floor and must be
within 1.5 m (5 ft) of the top of the power cord exit gate. Servicing any overhead equipment higher than
3 m (10 ft) requires a special bid contract. Contact your technical service representatives for more
information on special bids.
• To meet safety regulations in servicing your overhead equipment, you must purchase a minimum of one
feature code 1101 for your top-exit bracket feature per site. This feature code provides a safety-approved
5-foot ladder, which enables technical service representatives to perform power safety checks and other
service activities on the top of your storage system. Without this approved ladder, technical service
representatives are not able to install or service a storage system with the top-cable exit features.
• To assist you with the top-exit host cable routing, feature code 1400 provides a cable channel bracket
that mounts directly below the topside of the tailgate and its opening. Cables can be easily slid into the
slots on its channels. The cable bracket directs the cables behind the rack ID card and towards the rear,
where the cables drop vertically into a second channel, which mounts on the left-side wall (when
viewing the storage system from the rear). There are openings in the vertical channel for cables to exit
toward the I/O enclosures.
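The height and length limits above lend themselves to a quick pre-order check. The following Python sketch is illustrative only; the 3 m, 1.5 m, and 1.8 m values come from the requirements above, while the outlet and exit-gate heights are placeholder measurements that you must replace with values from your site survey.

    # Hypothetical pre-order check for top-exit power cabling. The 3 m (10 ft),
    # 1.5 m (5 ft), and 1.8 m (6 ft) limits come from the requirements above;
    # the outlet and exit-gate heights are placeholders for your site survey.

    MAX_SERVICEABLE_HEIGHT_M = 3.0    # power source must not exceed 3 m from the floor
    MAX_DISTANCE_FROM_GATE_M = 1.5    # power source within 1.5 m of the top of the exit gate
    TOP_EXIT_CORD_LENGTH_M = 1.8      # top-exit power cord length from the top of the system

    def top_exit_power_ok(outlet_height_m: float, exit_gate_height_m: float) -> bool:
        """Check the overhead power-outlet placement against the stated limits."""
        within_service_height = outlet_height_m <= MAX_SERVICEABLE_HEIGHT_M
        distance_to_gate = abs(exit_gate_height_m - outlet_height_m)
        within_gate_reach = distance_to_gate <= MAX_DISTANCE_FROM_GATE_M
        within_cord_length = distance_to_gate <= TOP_EXIT_CORD_LENGTH_M
        return within_service_height and within_gate_reach and within_cord_length

    # Example with placeholder measurements: outlet at 2.8 m, exit gate at 2.2 m.
    print(top_exit_power_ok(outlet_height_m=2.8, exit_gate_height_m=2.2))

If a check such as this fails for your planned outlet placement, contact your technical service representative about a special bid, as described above.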
Accommodating cables
You must ensure that the location and dimensions of the cable cutouts for each frame in the storage
system can be accommodated by the installation location. An overhead-cable management option
(top-exit bracket) is available for DS8900F for environments that have special planning and safety
requirements.
Use the following steps to ensure that you prepare for cabling for each storage system:
1. Based on your planned storage system layout, ensure that you can accommodate the locations of the
cables that exit each frame. See the following figure for the cable cutouts for the DS8900F.