EMC VMAX 100K, VMAX 200K, VMAX 400K Product Guide

EMC® VMAX3™ Family Product Guide
VMAX 100K, VMAX 200K, VMAX 400K with HYPERMAX OS
Copyright © 2014-2017 EMC Corporation All rights reserved.
Published May 2017
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS-IS." DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners.
Published in the USA.
EMC Corporation, Hopkinton, Massachusetts 01748-9103. 1-508-435-1000. In North America: 1-866-464-7381. www.EMC.com

CONTENTS

Figures 7
Tables 9
Preface 11
Revision history...........................................................................................18
Chapter 1 VMAX3 with HYPERMAX OS 21
Introduction to VMAX3 with HYPERMAX OS............................................. 22
VMAX3 Family 100K, 200K, 400K arrays.................................................... 23
VMAX3 Family specifications.........................................................24
HYPERMAX OS..........................................................................................34
What's new in HYPERMAX OS 5977 Q2 2017................................34
HYPERMAX OS emulations........................................................... 35
Container applications .................................................................. 36
Data protection and integrity.........................................................39
Chapter 2 Management Interfaces 47
Management interface versions..................................................................48
Unisphere for VMAX...................................................................................48
Workload Planner.......................................................................... 49
FAST Array Advisor....................................................................... 49
Unisphere 360............................................................................................ 49
Solutions Enabler........................................................................................49
Mainframe Enablers................................................................................... 50
Geographically Dispersed Disaster Restart (GDDR)....................................51
SMI-S Provider........................................................................................... 51
VASA Provider............................................................................................ 51
eNAS management interface .....................................................................52
ViPR suite...................................................................................................52
ViPR Controller..............................................................................52
ViPR Storage Resource Management............................................52
vStorage APIs for Array Integration........................................................... 53
SRDF Adapter for VMware® vCenter™ Site Recovery Manager.................54
SRDF/Cluster Enabler ............................................................................... 54
EMC Product Suite for z/TPF....................................................................54
SRDF/TimeFinder Manager for IBM i......................................................... 55
AppSync.....................................................................................................55
Chapter 3 Open systems features 57
HYPERMAX OS support for open systems.................................................58
Backup and restore to external arrays........................................................ 59
Data movement............................................................................. 59
Typical site topology......................................................................60
ProtectPoint solution components................................................. 61
ProtectPoint and traditional backup.............................................. 62
Basic backup workflow.................................................................. 63
Basic restore workflow.................................................................. 65
VMware Virtual Volumes............................................................................ 69
VVol components...........................................................................69
VVol scalability...............................................................................70
VVol workflow................................................................................70
Chapter 4 Mainframe Features 73
HYPERMAX OS support for mainframe...................................................... 74
IBM z Systems functionality support.......................................................... 74
IBM 2107 support....................................................................................... 75
Logical control unit capabilities...................................................................75
Disk drive emulations.................................................................................. 76
Cascading configurations........................................................................... 76
Chapter 5 Provisioning 77
Thin provisioning.........................................................................................78
Thin devices (TDEVs).................................................................... 78
Thin CKD....................................................................................... 79
Thin device oversubscription......................................................... 79
Open Systems-specific provisioning.............................................. 79
Mainframe-specific provisioning.....................................................81
Chapter 6 Storage Tiering 83
Fully Automated Storage Tiering................................................................ 84
Pre-configuration for FAST........................................................... 85
FAST allocation by storage resource pool...................................... 87
Service Levels............................................................................................ 88
FAST/SRDF coordination...........................................................................89
FAST/TimeFinder management..................................................................90
External provisioning with FAST.X............................................................. 90
Chapter 7 Native local replication with TimeFinder 91
About TimeFinder....................................................................................... 92
Interoperability with legacy TimeFinder products.......................... 92
Targetless snapshots.....................................................................96
Secure snaps................................................................................. 96
Provision multiple environments from a linked target.................... 96
Cascading snapshots..................................................................... 97
Accessing point-in-time copies......................................................98
Mainframe SnapVX and zDP.......................................................................98
Chapter 8 Remote replication solutions 101
Native remote replication with SRDF........................................................ 102
SRDF 2-site solutions...................................................................103
SRDF multi-site solutions.............................................................105
Concurrent SRDF solutions.......................................................... 107
Cascaded SRDF solutions............................................................ 108
SRDF/Star solutions.................................................................... 109
Interfamily compatibility................................................................114
SRDF device pairs......................................................................... 117
SRDF device states.......................................................................119
Dynamic device personalities........................................................122
SRDF modes of operation............................................................ 122
SRDF groups................................................................................ 124
Director boards, links, and ports...................................................125
SRDF consistency........................................................................ 126
SRDF data compression............................................................... 126
SRDF write operations................................................................. 127
SRDF/A cache management........................................................ 132
SRDF read operations.................................................................. 135
SRDF recovery operations............................................................137
Migration using SRDF/Data Mobility............................................140
SRDF/Metro ............................................................................................ 145
SRDF/Metro integration with FAST............................................. 147
SRDF/Metro life cycle..................................................................147
SRDF/Metro resiliency.................................................................149
Witness failure scenarios..............................................................153
Deactivate SRDF/Metro.............................................................. 154
SRDF/Metro restrictions............................................................. 155
RecoverPoint............................................................................................ 156
Remote replication using eNAS.................................................................156
Chapter 9 Blended local and remote replication 159
SRDF and TimeFinder............................................................................... 160
R1 and R2 devices in TimeFinder operations.................................160
SRDF/AR..................................................................................... 160
SRDF/AR 2-site solutions............................................................. 161
SRDF/AR 3-site solutions............................................................. 161
TimeFinder and SRDF/A.............................................................. 162
TimeFinder and SRDF/S.............................................................. 163
SRDF and EMC FAST coordination.............................................. 163
Chapter 10 Data Migration 165
Overview...................................................................................................166
Data migration solutions for open systems environments..........................166
Non-Disruptive Migration overview..............................................166
About Open Replicator................................................................. 170
PowerPath Migration Enabler.......................................................172
Data migration using SRDF/Data Mobility.................................... 172
Data migration solutions for mainframe environments...............................176
Volume migration using z/OS Migrator.........................................177
Dataset migration using z/OS Migrator........................................178
Appendix A Mainframe Error Reporting 179
Error reporting to the mainframe host...................................................... 180
SIM severity reporting.............................................................................. 180
Environmental errors.................................................................... 181
Operator messages...................................................................... 184
Appendix B Licensing 187
eLicensing................................................................................................. 188
Capacity measurements............................................................... 189
Open systems licenses.............................................................................. 190
License pack................................................................................ 190
License suites...............................................................................190
Individual licenses.........................................................................196
Ecosystem licenses...................................................................... 197
Mainframe licenses................................................................................... 198
License packs...............................................................................198
Individual license.......................................................................... 199

FIGURES

1	D@RE architecture, embedded.................................................................... 41
2	D@RE architecture, external....................................................................... 41
3	ProtectPoint data movement....................................................................... 60
4	Typical RecoverPoint backup/recovery topology.........................................61
5	Basic backup workflow................................................................................ 64
6	Object-level restoration workflow............................................................... 66
7	Full-application rollback restoration workflow.............................................67
8	Full database recovery to production devices..............................................68
9	Auto-provisioning groups............................................................................. 81
10	FAST data movement.................................................................................. 84
11	FAST components....................................................................................... 87
12	Service Level compliance............................................................................ 88
13	Local replication interoperability, FBA devices.............................................94
14	Local replication interoperability, CKD devices............................................ 95
15	SnapVX targetless snapshots...................................................................... 97
16	SnapVX cascaded snapshots....................................................................... 98
17	zDP operation..............................................................................................99
18	Concurrent SRDF topology........................................................................ 108
19	Cascaded SRDF topology...........................................................................109
20	Concurrent SRDF/Star...............................................................................110
21	Concurrent SRDF/Star with R22 devices....................................................111
22	Cascaded SRDF/Star................................................................................. 112
23	R22 devices in cascaded SRDF/Star.......................................................... 112
24	Four-site SRDF........................................................................................... 114
25	R1 and R2 devices...................................................................................... 117
26	R11 device in concurrent SRDF................................................................... 118
27	R21 device in cascaded SRDF..................................................................... 119
28	R22 devices in cascaded and concurrent SRDF/Star..................................119
29	Host interface view and SRDF view of states.............................................120
30	Write I/O flow: simple synchronous SRDF..................................................127
31	SRDF/A SSC cycle switching – multi-cycle mode......................................129
32	SRDF/A SSC cycle switching – legacy mode.............................................130
33	SRDF/A MSC cycle switching – multi-cycle mode..................................... 131
34	Write commands to R21 devices.................................................................132
35	Planned failover: before personality swap.................................................. 137
36	Planned failover: after personality swap.....................................................138
37	Failover to Site B, Site A and production host unavailable.......................... 138
38	Migrating data and removing the original secondary array (R2)................. 142
39	Migrating data and replacing the original primary array (R1)...................... 143
40	Migrating data and replacing the original primary (R1) and secondary (R2) arrays....144
41	SRDF/Metro.............................................................................................. 146
42	SRDF/Metro life cycle............................................................................... 148
43	SRDF/Metro Array Witness and groups..................................................... 151
44	SRDF/Metro vWitness vApp and connections........................................... 152
45	SRDF/Metro Witness single failure scenarios............................................ 153
46	SRDF/Metro Witness multiple failure scenarios......................................... 154
47	SRDF/AR 2-site solution............................................................................ 161
48	SRDF/AR 3-site solution............................................................................162
49	Non-Disruptive Migration zoning................................................................167
50	Open Replicator hot (or live) pull................................................................ 171
51	Open Replicator cold (or point-in-time) pull............................................... 172
52	Migrating data and removing the original secondary array (R2)................. 173
53	Migrating data and replacing the original primary array (R1)...................... 174
54	Migrating data and replacing the original primary (R1) and secondary (R2) arrays....175
55	z/OS volume migration...............................................................................177
56	z/OS Migrator dataset migration............................................................... 178
57	z/OS IEA480E acute alert error message format (call home failure)..........184
58	z/OS IEA480E service alert error message format (Disk Adapter failure)... 184
59	z/OS IEA480E service alert error message format (SRDF Group lost/SIM presented against unrelated resource)....184
60	z/OS IEA480E service alert error message format (mirror-2 resynchronization)......185
61	z/OS IEA480E service alert error message format (mirror-1 resynchronization)...... 185
62	eLicensing process..................................................................................... 188

TABLES

1	Typographical conventions used in this content...........................................16
2	Revision history............................................................................................18
3	Engine specifications................................................................................... 24
4	Cache specifications....................................................................................24
5	Vault specifications..................................................................................... 24
6	Front end I/O modules.................................................................................25
7	eNAS I/O modules.......................................................................................25
8	eNAS Software Data Movers.......................................................................25
9	Capacity, drives...........................................................................................25
10	Drive specifications..................................................................................... 26
11	System configuration types......................................................................... 26
12	Disk Array Enclosures.................................................................................. 27
13	Cabinet configurations................................................................................ 27
14	Dispersion specifications............................................................................. 27
15	Preconfiguration.......................................................................................... 27
16	Host support................................................................................................27
17	Hardware compression support option (SRDF)........................................... 28
18	VMAX3 Family connectivity.........................................................................29
19	2.5" disk drives............................................................................................30
20	3.5" disk drives............................................................................................30
21	Power consumption and heat dissipation..................................................... 31
22	Physical specifications................................................................................. 31
23	Power options..............................................................................................32
24	Input power requirements - single-phase, North American, International, Australian.... 32
25	Input power requirements - three-phase, North American, International, Australian..... 32
26	Minimum distance from RF emitting devices................................................33
27	HYPERMAX OS emulations..........................................................................35
28	eManagement resource requirements..........................................................36
29	eNAS configurations by array...................................................................... 38
30	Unisphere tasks............................................................................................48
31	ProtectPoint connections............................................................................. 61
32	VVol architecture component management capability................................. 69
33	VVol-specific scalability...............................................................................70
34	Logical control unit maximum values............................................................75
35	Maximum LPARs per port.............................................................................76
36	RAID options................................................................................................85
37	Service Level compliance legend..................................................................88
38	Service Levels..............................................................................................89
39	SRDF 2-site solutions................................................................................. 103
40	SRDF multi-site solutions........................................................................... 105
41	SRDF features by hardware platform/operating environment..................... 115
42	R1 device accessibility................................................................................. 121
43	R2 device accessibility................................................................................122
44	Limitations of the migration-only mode...................................................... 144
45	Limitations of the migration-only mode...................................................... 176
46	SIM severity alerts...................................................................................... 181
47	Environmental errors reported as SIM messages........................................ 181
48	VMAX3 product title capacity types........................................................... 189
49	VMAX3 license suites for open systems environment................................. 191
50	Individual licenses for open systems environment.......................................196
51	Individual licenses for open systems environment.......................................197
52	License suites for mainframe environment..................................................198

Preface

Note
As part of an effort to improve its product lines, EMC periodically releases revisions of its software and hardware. Therefore, some functions described in this document might not be supported by all versions of the software or hardware currently in use. The product release notes provide the most up-to-date information on product features.
Contact your EMC representative if a product does not function properly or does not function as described in this document.
This document was accurate at publication time. New versions of this document might be released on EMC Online Support (https://support.emc.com). Check to ensure that you are using the latest version of this document.
Purpose
This document outlines the offerings supported on EMC® VMAX3™ Family (100K, 200K and 400K) arrays running HYPERMAX OS 5977.
Audience
This document is intended for use by customers and EMC representatives.
Related documentation
The following documentation portfolios contain documents related to the hardware platform and manuals needed to manage your software and storage system configuration. Also listed are documents for external components which interact with your VMAX3 Family array.
EMC VMAX3 Family Site Planning Guide for VMAX 100K, VMAX 200K, VMAX 400K with HYPERMAX OS
Provides planning information regarding the purchase and installation of a VMAX3 Family 100K, 200K, or 400K array.
EMC VMAX Best Practices Guide for AC Power Connections
Describes the best practices to assure fault-tolerant power to a VMAX3 Family array or VMAX All Flash array.
EMC VMAX Power-down/Power-up Procedure
Describes how to power-down and power-up a VMAX3 Family array or VMAX All Flash array.
EMC VMAX Securing Kit Installation Guide
Describes how to install the securing kit on a VMAX3 Family array or VMAX All Flash array.
E-Lab™ Interoperability Navigator (ELN)
Provides a web-based interoperability and solution search portal. You can find the ELN at https://elabnavigator.EMC.com.
SRDF Interfamily Connectivity Information
Defines the versions of HYPERMAX OS and Enginuity that can make up valid SRDF replication and SRDF/Metro configurations, and can participate in Non-Disruptive Migration (NDM).
EMC Unisphere for VMAX Release Notes
Describes new features and any known limitations for Unisphere for VMAX.
EMC Unisphere for VMAX Installation Guide
Provides installation instructions for Unisphere for VMAX.
EMC Unisphere for VMAX Online Help
Describes the Unisphere for VMAX concepts and functions.
EMC Unisphere for VMAX Performance Viewer Online Help
Describes the Unisphere for VMAX Performance Viewer concepts and functions.
EMC Unisphere for VMAX Performance Viewer Installation Guide
Provides installation instructions for Unisphere for VMAX Performance Viewer.
EMC Unisphere for VMAX REST API Concepts and Programmer's Guide
Describes the Unisphere for VMAX REST API concepts and functions.
EMC Unisphere for VMAX Database Storage Analyzer Online Help
Describes the Unisphere for VMAX Database Storage Analyzer concepts and functions.
EMC Unisphere 360 for VMAX Release Notes
Describes new features and any known limitations for Unisphere 360 for VMAX.
EMC Unisphere 360 for VMAX Installation Guide
Provides installation instructions for Unisphere 360 for VMAX.
EMC Unisphere 360 for VMAX Online Help
Describes the Unisphere 360 for VMAX concepts and functions.
EMC Solutions Enabler, VSS Provider, and SMI-S Provider Release Notes
Describes new features and any known limitations.
EMC Solutions Enabler Installation and Configuration Guide
Provides host-specific installation instructions.
EMC Solutions Enabler CLI Reference Guide
Documents the SYMCLI commands, daemons, error codes and option file parameters provided with the Solutions Enabler man pages.
EMC Solutions Enabler Array Controls and Management for HYPERMAX OS CLI User Guide
Describes how to configure array control, management, and migration operations using SYMCLI commands for arrays running HYPERMAX OS.
EMC Solutions Enabler Array Controls and Management CLI User Guide
Describes how to configure array control, management, and migration operations using SYMCLI commands.
EMC Solutions Enabler SRDF Family CLI User Guide
Describes how to configure and manage SRDF environments using SYMCLI commands.
SRDF Interfamily Connectivity Information
Defines the versions of HYPERMAX OS and Enginuity that can make up valid SRDF replication and SRDF/Metro configurations, and can participate in Non-Disruptive Migration (NDM).
EMC Solutions Enabler TimeFinder SnapVX for HYPERMAX OS CLI User Guide
Describes how to configure and manage TimeFinder SnapVX environments using SYMCLI commands.
EMC Solutions Enabler SRM CLI User Guide
Provides Storage Resource Management (SRM) information related to various data objects and data handling facilities.
EMC SRDF/Metro vWitness Configuration Guide
Describes how to install, configure and manage SRDF/Metro using vWitness.
VMAX Management Software Events and Alerts Guide
Documents the SYMAPI daemon messages, asynchronous errors and message events, and SYMCLI return codes.
EMC ProtectPoint Implementation Guide
Describes how to implement ProtectPoint.
EMC ProtectPoint Solutions Guide
Provides ProtectPoint information related to various data objects and data handling facilities.
EMC ProtectPoint File System Agent Command Reference
Documents the commands, error codes, and options.
EMC ProtectPoint Release Notes
Describes new features and any known limitations.
EMC Mainframe Enablers Installation and Customization Guide
Describes how to install and configure Mainframe Enablers software.
EMC Mainframe Enablers Release Notes
Describes new features and any known limitations.
EMC Mainframe Enablers Message Guide
Describes the status, warning, and error messages generated by Mainframe Enablers software.
EMC Mainframe Enablers ResourcePak Base for z/OS Product Guide
Describes how to configure VMAX system control and management using the EMC Symmetrix Control Facility (EMCSCF).
EMC Mainframe Enablers AutoSwap for z/OS Product Guide
Describes how to use AutoSwap to perform automatic workload swaps between VMAX systems when the software detects a planned or unplanned outage.
EMC Mainframe Enablers Consistency Groups for z/OS Product Guide
Describes how to use Consistency Groups for z/OS (ConGroup) to ensure the consistency of data remotely copied by SRDF in the event of a rolling disaster.
EMC Mainframe Enablers SRDF Host Component for z/OS Product Guide
Describes how to use SRDF Host Component to control and monitor remote data replication processes.
EMC Mainframe Enablers TimeFinder SnapVX and zDP Product Guide
Describes how to use TimeFinder SnapVX and zDP to create and manage space-efficient targetless snaps.
EMC Mainframe Enablers TimeFinder/Clone Mainframe Snap Facility Product Guide
Describes how to use TimeFinder/Clone, TimeFinder/Snap, and TimeFinder/CG to control and monitor local data replication processes.
EMC Mainframe Enablers TimeFinder/Mirror for z/OS Product Guide
Describes how to use TimeFinder/Mirror to create Business Continuance Volumes (BCVs) which can then be established, split, re-established and restored from the source logical volumes for backup, restore, decision support, or application testing.
EMC Mainframe Enablers TimeFinder Utility for z/OS Product Guide
Describes how to use the TimeFinder Utility to condition volumes and devices.
EMC GDDR for SRDF/S with ConGroup Product Guide
Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate business recovery following both planned outages and disaster situations.
EMC GDDR for SRDF/S with AutoSwap Product Guide
Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate business recovery following both planned outages and disaster situations.
EMC GDDR for SRDF/Star Product Guide
Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate business recovery following both planned outages and disaster situations.
EMC GDDR for SRDF/Star with AutoSwap Product Guide
Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate business recovery following both planned outages and disaster situations.
EMC GDDR for SRDF/SQAR with AutoSwap Product Guide
Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate business recovery following both planned outages and disaster situations.
EMC GDDR for SRDF/A Product Guide
Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate business recovery following both planned outages and disaster situations.
EMC GDDR Message Guide
Describes the status, warning, and error messages generated by GDDR.
EMC GDDR Release Notes
Describes new features and any known limitations.
EMC z/OS Migrator Product Guide
Describes how to use z/OS Migrator to perform volume mirror and migrator functions as well as logical migration functions.
EMC z/OS Migrator Message Guide
Describes the status, warning, and error messages generated by z/OS Migrator.
EMC z/OS Migrator Release Notes
Describes new features and any known limitations.
EMC ResourcePak for z/TPF Product Guide
Describes how to configure VMAX system control and management in the z/TPF operating environment.
EMC SRDF Controls for z/TPF Product Guide
Describes how to perform remote replication operations in the z/TPF operating environment.
EMC TimeFinder Controls for z/TPF Product Guide
Describes how to perform local replication operations in the z/TPF operating environment.
EMC z/TPF Suite Release Notes
Describes new features and any known limitations.
Special notice conventions used in this document
EMC uses the following conventions for special notices:

DANGER
Indicates a hazardous situation which, if not avoided, will result in death or serious injury.

WARNING
Indicates a hazardous situation which, if not avoided, could result in death or serious injury.

CAUTION
Indicates a hazardous situation which, if not avoided, could result in minor or moderate injury.

NOTICE
Addresses practices not related to personal injury.
Note
Presents information that is important, but not hazard-related.
Typographical conventions
EMC uses the following type style conventions in this document:
Table 1 Typographical conventions used in this content

Bold              Used for names of interface elements, such as names of windows, dialog boxes, buttons, fields, tab names, key names, and menu paths (what the user specifically selects or clicks)
Italic            Used for full titles of publications referenced in text
Monospace         Used for:
                  - System code
                  - System output, such as an error message or script
                  - Pathnames, filenames, prompts, and syntax
                  - Commands and options
Monospace italic  Used for variables
Monospace bold    Used for user input
[ ]               Square brackets enclose optional values
|                 Vertical bar indicates alternate selections - the bar means "or"
{ }               Braces enclose content that the user must specify, such as x or y or z
...               Ellipses indicate nonessential information omitted from the example
Where to get help
EMC support, product, and licensing information can be obtained as follows:
Product information
EMC technical support, documentation, release notes, software updates, or information about EMC products can be obtained on the https://support.emc.com site (registration required).
Technical support
To open a service request through the https://support.emc.com site, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.
Additional support options
- Support by Product — EMC offers consolidated, product-specific information on the Web at: https://support.EMC.com/products. The Support by Product web pages offer quick links to Documentation, White Papers, Advisories (such as frequently used Knowledgebase articles), and Downloads, as well as more dynamic content, such as presentations, discussion, relevant Customer Support Forum entries, and a link to EMC Live Chat.
- EMC Live Chat — Open a Chat or instant message session with an EMC Support Engineer.
eLicensing support
To activate your entitlements and obtain your VMAX license files, visit the Service Center on https://support.EMC.com, as directed on your License Authorization Code (LAC) letter emailed to you.
- For help with missing or incorrect entitlements after activation (that is, expected functionality remains unavailable because it is not licensed), contact your EMC Account Representative or Authorized Reseller.
- For help with any errors applying license files through Solutions Enabler, contact the EMC Customer Support Center.
- If you are missing a LAC letter, or require further instructions on activating your licenses through the Online Support site, contact EMC's worldwide Licensing team at licensing@emc.com or call:
  - North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.
  - EMEA: +353 (0) 21 4879862 and follow the voice prompts.
Your comments
Your suggestions help us improve the accuracy, organization, and overall quality of the documentation. Send your comments and feedback to:
VMAXContentFeedback@emc.com

Revision history

The following table lists the revision history of this document.
Table 2 Revision history

Revision  Description and/or change                                          Operating system

6.5       New content:                                                       HYPERMAX OS 5977 Q2 2017 SR
          - RecoverPoint on page 156
          - Secure snaps on page 96
          - Data at Rest Encryption on page 39

6.4       Revised content:                                                   HYPERMAX OS 5977.952.892
          - SRDF/Metro array witness overview

6.3       New content:                                                       HYPERMAX OS 5977.952.892
          - Virtual Witness (vWitness) on page 151
          - Non-Disruptive Migration

6.2       Added zDP support (Mainframe SnapVX and zDP on page 98)            HYPERMAX OS 5977.945.890

6.1       Updated Licensing appendix.                                        HYPERMAX OS 5977.811.784

6.0       New content:                                                       HYPERMAX OS 5977.810.784
          - HYPERMAX OS support for mainframe on page 74
          - VMware Virtual Volumes on page 69
          - Unisphere 360 on page 49

5.4       Updated the VMAX3 Family power consumption and heat dissipation    HYPERMAX OS 5977.691.684
          table (see VMAX3 Family specifications on page 24):
          - For a 200K, dual-engine system: Max heat dissipation changed
            from 30,975 to 28,912 Btu/Hr.
          - Added note to Power and heat dissipation topic.

5.3       Changed Data Encryption Key PKCS#12 to PKCS#5.                     HYPERMAX OS 5977.691.684

5.2       Revised content: Number of CPUs required to support eManagement.   HYPERMAX OS 5977.691.684

5.1       Revised content:                                                   HYPERMAX OS 5977.691.684
          - In SRDF/Metro, changed terminology from quorum to Witness.

5.0       New content:                                                       HYPERMAX OS 5977.691.684
          - New feature for FAST.X
          - SRDF/Metro on page 145
          Revised content:
          - VMAX3 Family specifications on page 24
          Removed content:
          - Legacy TimeFinder write operation details
          - EMC XtremSW Cache

4.0       New content: External provisioning with FAST.X on page 90 (a)      HYPERMAX OS 5977.596.583 plus
                                                                             Q2 Service Pack (ePack)

3.0       New content:                                                       HYPERMAX OS 5977.596.583
          - Data at Rest Encryption on page 39
          - Data erasure on page 43
          - Cascaded SRDF solutions on page 108
          - SRDF/Star solutions on page 109

2.0       New content: Embedded NAS (eNAS).                                  HYPERMAX OS 5977.497.471

1.0       First release of the VMAX 100K, 200K, and 400K arrays with EMC     HYPERMAX OS 5977.250.189
          HYPERMAX OS 5977.

a. FAST.X requires Solutions Enabler/Unisphere for VMAX version 8.0.3.
CHAPTER 1

VMAX3 with HYPERMAX OS

This chapter summarizes VMAX3 Family specifications and describes the features of HYPERMAX OS. Topics include:
- Introduction to VMAX3 with HYPERMAX OS.....................................................22
- VMAX3 Family 100K, 200K, 400K arrays............................................................23
- HYPERMAX OS................................................................................................. 34

Introduction to VMAX3 with HYPERMAX OS

The EMC VMAX3 Family storage arrays deliver a tier-1, scale-out, multi-controller architecture with unmatched consolidation and efficiency for the enterprise. The VMAX3 Family includes three models:
- VMAX 100K - 2 to 4 controllers, 48 cores, 2TB cache, 1440 2.5" drives, 64 ports, 1.1 PBu
- VMAX 200K - 2 to 8 controllers, 128 cores, 8TB cache, 2880 2.5" drives, 128 ports, 2.3 PBu
- VMAX 400K - 2 to 16 controllers, 384 cores, 16TB cache, 5760 2.5" drives, 256 ports, 4.3 PBu
VMAX3 arrays provide unprecedented performance and scale, and a radically new architecture for enterprise storage that separates software data services from the underlying platform. The combination of VMAX3 hardware and software provides:
- Open system and mainframe connectivity
- Dramatic increase in floor tile density by consolidating high-capacity disk enclosures for both 2.5" and 3.5" drives and engines in the same system bay
- Support for either hybrid or all flash configurations
- Unified block and file support through Embedded NAS (eNAS), eliminating the need for separate physical NAS hardware
- Data at Rest Encryption for those applications that demand the highest level of security
- Service Level (SL) provisioning with FAST.X for external arrays (XtremIO, CloudArray, and other supported 3rd-party storage)
- FICON, iSCSI, Fibre Channel, and FCoE front-end protocols
- Simplified management at scale through Service Levels, reducing time to provision by up to 95%, to less than 30 seconds
- Extended tiering to the cloud with EMC CloudArray integration for extreme scalability and up to 40% lower storage costs
HYPERMAX OS is an industry-leading open storage and hypervisor converged operating system. HYPERMAX OS combines industry-leading high availability, I/O management, quality of service, data integrity validation, storage tiering, and data security with an open application platform.
HYPERMAX OS features the first real-time, non-disruptive storage hypervisor that manages and protects embedded services by extending VMAX3 high availability to services that traditionally would have run external to the array. It also provides direct access to hardware resources to maximize performance.

VMAX3 Family 100K, 200K, 400K arrays

VMAX3 arrays range in size from a single engine up to two (100K), four (200K), or eight (400K) engines. Engines (each consisting of two controllers) and high-capacity disk enclosures (for both 2.5" and 3.5" drives) are consolidated in the same system bay, providing a dramatic increase in floor tile density.
VMAX3 arrays come fully pre-configured from the factory, significantly reducing time to first I/O at installation.
VMAX3 array features include:
- Hybrid (mix of traditional/regular hard drives and solid state/flash drives) or all flash configurations
- System bay dispersion of up to 82 feet (25 meters) from the first system bay
- Each system bay can house either one or two engines and up to six high-density disk array enclosures (DAEs) per engine:
  - Single-engine configurations: up to 720 6 Gb/s SAS 2.5" drives, 360 3.5" drives, or a mix of both drive types
  - Dual-engine configurations: up to 480 6 Gb/s SAS 2.5" drives, 240 3.5" drives, or a mix of both drive types
- Third-party racking (optional)

VMAX3 Family specifications

The following tables list specifications for each model in the VMAX3 Family.
Table 3 Engine specifications

Feature                                  VMAX 100K                  VMAX 200K                  VMAX 400K
Number of engines supported              1 to 2                     1 to 4                     1 to 8
Engine enclosure                         4U                         4U                         4U
CPU                                      Intel Xeon E5-2620-v2      Intel Xeon E5-2650-v2      Intel Xeon E5-2697-v2
                                         2.1 GHz, 6 core            2.6 GHz, 8 core            2.7 GHz, 12 core
Dynamic Virtual Matrix BW                700GB/s                    700GB/s                    1400GB/s
# Cores per CPU/per engine/per system    6/24/48                    8/32/128                   12/48/384
Dynamic Virtual Matrix Interconnect      InfiniBand Dual Redundant Fabric: 56Gbps per port (all models)

Table 4 Cache specifications

Feature                     VMAX 100K                  VMAX 200K                  VMAX 400K
Cache-System Min (raw)      512GB                      512GB                      512GB
Cache-System Max (raw)      2TBr (with 1024GB engine)  8TBr (with 2048GB engine)  16TBr (with 2048GB engine)
Cache-per engine options    512GB, 1024GB              512GB, 1024GB, 2048GB      512GB, 1024GB, 2048GB
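A quick consistency check of the core counts in Table 3. The CPUs-per-engine figure below is derived arithmetic, not a value stated in the table:

\text{CPUs per engine} = \frac{24\ \text{cores/engine}}{6\ \text{cores/CPU}} = 4

\text{VMAX 100K max cores} = 24 \times 2\ \text{engines} = 48, \qquad \text{VMAX 200K} = 32 \times 4 = 128, \qquad \text{VMAX 400K} = 48 \times 8 = 384

These totals match the per-system figures in Table 3 and the model summary in the chapter introduction.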
Table 5 Vault specifications

Feature               VMAX 100K                            VMAX 200K                            VMAX 400K
Vault strategy        Vault to Flash                       Vault to Flash                       Vault to Flash
Vault implementation  2 to 4 Flash I/O modules per engine  2 to 8 Flash I/O modules per engine  2 to 8 Flash I/O modules per engine
Table 6 Front end I/O modules

Feature                            VMAX 100K  VMAX 200K  VMAX 400K
Max front-end I/O modules/engine   8          8          8
Front-end I/O modules and          FC: 4 x 8Gb/s (FC, SRDF)              (same for all models)
protocols supported                FC: 4 x 16Gb/s (FC, SRDF)
                                   FICON: 4 x 16Gb/s (FICON)
                                   FCoE: 4 x 10GbE (FCoE)
                                   iSCSI: 4 x 10GbE (iSCSI)
                                   GbE: 2/2 Opt/Cu (SRDF)
                                   10GbE: 2 x 10GbE (SRDF)

Table 7 eNAS I/O modules

Feature                                    VMAX 100K                                  VMAX 200K                                  VMAX 400K
Max eNAS I/O modules/Software Data Mover   2 (min of 1 Ethernet I/O module required)  3 (min of 1 Ethernet I/O module required)  3 (min of 1 Ethernet I/O module required)
eNAS I/O modules supported                 GbE: 4 x 1GbE Cu                           (same for all models)
                                           10GbE: 2 x 10GbE Cu
                                           10GbE: 2 x 10GbE Opt
                                           FC: 4 x 8Gb/s (NDMP back-up; max. 1 FC NDMP/Software Data Mover)
Table 8 eNAS Software Data Movers

Feature                                     VMAX 100K                 VMAX 200K                 VMAX 400K
Max Software Data Movers                    2 (1 Active + 1 Standby)  4 (3 Active + 1 Standby)  8 (7 Active + 1 Standby)
Max NAS capacity/array (Terabytes usable)   256                       1536                      3584
Table 9 Capacity, drives

Feature                      VMAX 100K    VMAX 200K    VMAX 400K
Max capacity per array       .54PBu       2.31PBu      4.35PBu
Max drives per system        1440         2880         5760
Max drives per system bay    720          720          720
Min spares per system        1            1            1
Min drive count (1 engine)   4 + 1 spare  4 + 1 spare  4 + 1 spare
Table 10 Drive specifications

Feature                     VMAX 100K, VMAX 200K, VMAX 400K (identical for all models)

3.5" SAS drives
10K RPM SAS                 300GB (a), 600GB (a), 1.2TB 10K RPM (a)
15K RPM SAS                 300GB 15K RPM (a)
7.2K RPM SAS                2TB 7.2K RPM (a), 4TB 7.2K RPM (a)
Flash SAS                   200GB (a,b), 400GB (a,b), 800GB (a,b), 1.6TB (a,b) Flash

2.5" SAS drives
10K RPM SAS                 300GB (c), 600GB (c), 1.2TB 10K RPM (c)
15K RPM SAS                 300GB 15K RPM (a)
Flash SAS                   200GB (a,b), 400GB (a,b), 800GB (a,b), 1.6TB (a,b) Flash
Flash SAS                   960GB (c,b), 1.92TB (c,b) Flash

BE interface                6Gbps SAS
RAID options (all drives)   RAID 1, RAID 5 (3+1), RAID 5 (7+1), RAID 6 (6+2), RAID 6 (14+2)

a. Capacity points and drive formats available for upgrades.
b. Mixing of 200GB, 400GB, 800GB, or 1.6TB Flash capacities with 960GB or 1.92TB Flash capacities on the same array is not currently supported.
c. Capacity points and drive formats available on new systems and upgrades.
Table 11 System configuration types

Feature                      VMAX 100K            VMAX 200K            VMAX 400K
All 2.5" DAE configurations  2 bays, 1440 drives  4 bays, 2880 drives  8 bays, 5760 drives
All 3.5" DAE configurations  2 bays, 720 drives   4 bays, 1440 drives  8 bays, 2880 drives
Mixed configurations         2 bays, 1320 drives  4 bays, 2640 drives  8 bays, 5280 drives
Table 12 Disk Array Enclosures
Feature VMAX 100K VMAX 200K VMAX 400K
120 x 2.5" drive DAE Yes Yes Yes
60 x 3.5" drive DAE Yes Yes Yes
Table 13 Cabinet configurations

Feature                               VMAX 100K  VMAX 200K  VMAX 400K
Standard 19" bays                     Yes        Yes        Yes
Single bay system configuration       Yes        Yes        Yes
Dual-engine system bay configuration  Yes        Yes        Yes
Third party rack mount option         Yes        Yes        Yes
Table 14 Dispersion specifications

System bay dispersion:
  VMAX 100K: Up to 82 feet (25m) between System Bay 1 and System Bay 2
  VMAX 200K: Up to 82 feet (25m) between System Bay 1 and any other System Bay
  VMAX 400K: Up to 82 feet (25m) between System Bay 1 and any other System Bay

Table 15 Preconfiguration

Feature                       VMAX 100K  VMAX 200K  VMAX 400K
100% Thin Provisioned         Yes        Yes        Yes
Preconfigured at the factory  Yes        Yes        Yes
Table 16 Host support

Feature                                  VMAX 100K  VMAX 200K  VMAX 400K
Open systems                             Yes        Yes        Yes
Mainframe (CKD 3380 and 3390 emulation)  Yes        Yes        Yes
IBM i Series support (D910 only)         Yes        Yes        Yes
Table 17 Hardware compression support option (SRDF)
Feature VMAX 100K VMAX 200K VMAX 400K
GbE / 10 GbE Yes Yes Yes
8Gb/s FC Yes Yes Yes
16Gb/s FC Yes Yes Yes
Table 18 VMAX3 Family connectivity
I/O protocols VMAX 100K VMAX 200K VMAX 400K
8 Gb/s FC Host/SRDF ports
Maximum/engine 32 32 32
Maximum/array 64 128 256
16 Gb/s FC Host/SRDF ports
Maximum/engine 32 32 32
Maximum/array 64 128 256
16 Gb/s FICON ports
Maximum/engine 32 32 32
Maximum/array 64 128 256
10 GbE iSCSI ports
Maximum/engine 32 32 32
Maximum/array 64 128 256
10 GbE FCoE ports
Maximum/engine 32 32 32
Maximum/array 64 128 256
10 GbE SRDF ports
Maximum/engine 16 16 16
Maximum/array 32 64 128
GbE SRDF ports
Maximum/engine 32 32 32
Maximum/array 64 128 256
Embedded NAS ports
GbE Ports
Maximum ports/Software Data Mover 8 12 12
Maximum ports/array 16 48 96
10 GbE (Cu or Optical) ports
Maximum ports/Software Data Mover 4 6 6
Maximum ports/array 8 24 48
8 Gb/s FC NDMP back-up ports
Maximum ports/Software Data Mover 1 1 1
Maximum ports/array 2 4 8
Disk drive support
The VMAX 100K, 200K, and 400K support the latest 6Gb/s dual-ported native SAS drives. All drive families (Enterprise Flash, 10K, 15K, and 7.2K RPM) support two independent I/O channels with automatic failover and fault isolation. Mixed drive capacities and speeds are allowed, depending on the configuration. All capacities are based on 1 GB = 1,000,000,000 bytes. Actual usable capacity may vary depending upon configuration.
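As a worked example of the decimal capacity convention, using the 300 GB 15K RPM drive figures from Table 19 below (the overhead percentage is derived here, not stated in the table):

1\ \text{GB} = 10^{9}\ \text{bytes}, \qquad \text{raw capacity} = 292.6\ \text{GB} = 2.926 \times 10^{11}\ \text{bytes}

\frac{\text{open systems formatted capacity}}{\text{raw capacity}} = \frac{288.02\ \text{GB}}{292.6\ \text{GB}} \approx 0.984

That is, formatting overhead consumes roughly 1.6% of the raw capacity on that drive.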
Table 19 2.5" disk drives
Platform Support VMAX 100K, 200K, 400K
Nominal capacity (GB) 200
a ,c
Speed (RPM) Flash Flash Flash Flash Flash Flash 15K 10K 10K 10K
400
a ,c
800
a ,c
960
b,c
1600
a ,c
1920
b ,c
300
a
300
b
600
b
1200
b
Average seek time
N/A N/A N/A N/A N/A N/A 2.8/3.33.7/4.2 3.7/4.2 3.7/4.2
(read/write ms)
Raw capacity (GB) 200 400 800 960 1600 1920 292.6 292.6 585.4 1200.2
Open systems
196.86 393.72 787.44 939.38 1574.88 1880.08 288.02 288.02 576.05 1181.16 formatted capacity (GB)
Mainframe formatted
191.53 393.64 787.27 939.29 1574.55 1879.75 287.86 287.86 575.72 1180.91 capacity (GB)
a.
Capacity points and drive formats available for upgrades.
b.
Capacity points and drive formats available on new systems and upgrades.
c.
Mixing of 200GB, 400GB, 800GB, or 1.6TB Flash capacities with 960GB, or 1.92TB Flash capacities on the same array is not currently supported.
Table 20 3.5" disk drives
Platform Support VMAX 100K, 200K, 400K
Nominal capacity (GB) 200
a,b
Speed (RPM) Flash Flash Flash Flash 15K 10K 10K 10K 7.2K 7.2K
Average seek time
N/A N/A N/A N/A 2.8/3.33.7/4.2 3.7/4.2 3.7/4.2 8.2/9.28.2/9.
(read/write ms)
400
a,b
800
a,b
1600
a,b
300
a
300
a
600
a
1200a2000a4000
a
2
Raw capacity (GB) 200 400 800 1600 292.6 292.6 585.4 1200.2 1912.1 4000
Open systems
196.86 393.72 787.44 1574.88 288.02 288.02 576.05 1181.16 1968.6 3938.5 formatted capacity (GB)
Mainframe formatted capacity (GB)
a.
Capacity points and drive formats available for upgrades.
b.
Mixing of 200GB, 400GB, 800GB, or 1.6TB Flash capacities with 960GB, or 1.92TB Flash capacities on the same array is not currently supported.
30 Product Guide VMAX 100K, VMAX 200K, VMAX 400K with HYPERMAX OS
196.53 393.64 787.27 1574.55 287.86 287.86 575.72 1180.91 1968.183938.1
0
Page 31
Table 21 Power consumption and heat dissipation
Maximum power and heat dissipation at <26°C and >35°C (a)
Values are maximum total power consumption <26°C / >35°C (kVA) and maximum heat dissipation <26°C / >35°C (Btu/Hr).

System bay 1, single engine
  VMAX 100K: 8.27 / 10.8 kVA; 28,201 / 36,828 Btu/Hr
  VMAX 200K: 8.37 / 10.9 kVA; 28,542 / 37,169 Btu/Hr
  VMAX 400K: 8.57 / 11.1 kVA; 29,224 / 37,851 Btu/Hr
System bay 2, single engine (b)
  VMAX 100K: 8.13 / 10.4 kVA; 27,723 / 35,464 Btu/Hr
  VMAX 200K: 8.33 / 10.6 kVA; 28,405 / 36,146 Btu/Hr
  VMAX 400K: 8.43 / 10.7 kVA; 28,746 / 36,487 Btu/Hr
System bay 1, dual engine
  VMAX 100K: 6.44 / 8.8 kVA; 21,960 / 30,008 Btu/Hr
  VMAX 200K: 6.74 / 9.1 kVA; 22,983 / 31,031 Btu/Hr
  VMAX 400K: 7.04 / 9.4 kVA; 24,006 / 32,054 Btu/Hr
System bay 2, dual engine (b)
  VMAX 100K: N/A
  VMAX 200K: 6.7 / 8.8 kVA; 22,847 / 30,008 Btu/Hr
  VMAX 400K: 6.9 / 9 kVA; 23,529 / 30,690 Btu/Hr

a. Power values and heat dissipations shown at >35°C reflect the higher power levels associated with both the battery recharge cycle and the initiation of high ambient temperature adaptive cooling algorithms. Values at <26°C reflect more steady-state maximum values during normal operation.
b. Power values shown for system bay 2 apply to all subsequent system bays, where applicable.

Table 22 Physical specifications
Bay configuration (a)        Height (in/cm) (b)   Width (in/cm) (c)   Depth (in/cm) (d)   Weight (max lbs/kg)
System bay, single-engine    75/190               24/61               47/119              2065/937
System bay, dual-engine      75/190               24/61               47/119              1860/844
a. Clearance for service/airflow is 42 in (106.7 cm) at the front and 30 in (76.2 cm) at the rear.
b. An additional 18 in (45.7 cm) is recommended for ceiling/top clearance.
c. Measurement includes a .25 in (0.6 cm) gap between bays.
d. Includes front and rear doors.
Input Power Requirements
Table 23 Power options
Feature VMAX 100K VMAX 200K VMAX 400K
Power Single or Three Phase Delta or Wye Single or Three Phase Delta or Wye Single or Three Phase Delta or Wye
Table 24 Input power requirements - single-phase, North American, International, Australian
Specification North American 3-wire connection (2 L & 1 G) (a) International and Australian 3-wire connection (1 L & 1 N & 1 G) (a)
Input nominal voltage 200–240 VAC ± 10% L- L nom 220–240 VAC ± 10% L- N nom
Frequency 50–60 Hz 50–60 Hz
Circuit breakers 30 A 32 A
Power zones Two Two
Minimum power requirements at customer site:
• Three 30 A, single-phase drops per zone.
• Two power zones require 6 drops, each drop rated for 30 A.
• PDU A and PDU B require three separate single-phase 30 A drops for each PDU.
a. L = line or phase, N = neutral, G = ground
Table 25 Input power requirements - three-phase, North American, International, Australian
Specification North American 4-wire connection (3 L & 1 G) (a) International 5-wire connection (3 L & 1 N & 1 G) (a)
Input voltage (b) 200–240 VAC ± 10% L-L nom 220–240 VAC ± 10% L-N nom
Frequency 50–60 Hz 50–60 Hz
Circuit breakers 50 A 32 A
Power zones Two Two
Minimum power requirements at customer site:
• North American: Two 50 A, three-phase drops per bay; PDU A and PDU B each require a separate three-phase Delta 50 A drop.
• International: Two 32 A, three-phase drops per bay.
a. L = line or phase, N = neutral, G = ground
b. An imbalance of AC input currents may exist on the three-phase power source feeding the array, depending on the configuration. The customer's electrician must be alerted to this possible condition to balance the phase-by-phase loading conditions within the customer's data center.
Radio frequency interference specifications
Electromagnetic fields, which include radio frequencies, can interfere with the operation of electronic equipment. EMC Corporation products have been certified to withstand radio frequency interference (RFI) in accordance with standard EN61000-4-3. In data centers that employ intentional radiators, such as cell phone repeaters, the maximum ambient RF field strength should not exceed 3 volts/meter.
Table 26 Minimum distance from RF emitting devices
Repeater power level (a)   Recommended minimum distance
1 Watt     9.84 ft (3 m)
2 Watt     13.12 ft (4 m)
5 Watt     19.69 ft (6 m)
7 Watt     22.97 ft (7 m)
10 Watt    26.25 ft (8 m)
12 Watt    29.53 ft (9 m)
15 Watt    32.81 ft (10 m)
a. Effective Radiated Power (ERP)

HYPERMAX OS

This section highlights the features of HYPERMAX OS.

What's new in HYPERMAX OS 5977 Q2 2017

This section describes new functionality and features provided by HYPERMAX OS 5977 Q2 2017 for VMAX 100K, 200K, and 400K arrays.
RecoverPoint
HYPERMAX OS 5977 Q2 2017 SR introduces support for RecoverPoint on VMAX storage arrays. RecoverPoint is a comprehensive data protection solution designed to provide production data integrity at local and remote sites. RecoverPoint also provides the ability to recover data from any point in time using journaling technology.
RecoverPoint on page 156 provides more information.
Secure snaps
Secure snaps are an enhancement to the current snapshot technology. Secure snaps prevent administrators or other high-level users from intentionally or unintentionally deleting snapshot data. Secure snaps are also immune to automatic failure resulting from running out of Storage Resource Pool (SRP) or Replication Data Pointer (RDP) space on the array.
Secure snaps on page 96 provides more information.
Data at Rest Encryption
Data at Rest Encryption (D@RE) now supports the OASIS Key Management Interoperability Protocol (KMIP) and can integrate with external servers that also support this protocol. This release has been validated to interoperate with the following KMIP-based key managers:
• Gemalto SafeNet KeySecure
• IBM Security Key Lifecycle Manager
Data at Rest Encryption on page 39 provides more information.

HYPERMAX OS emulations

HYPERMAX OS provides emulations (executables) that perform specific data service and control functions in the HYPERMAX environment. The following table lists the available emulations.
Table 27 HYPERMAX OS emulations

Area: Back-end
  DS: Back-end connection in the array that communicates with the drives. DS is also known as an internal drive controller. Protocol: SAS. Speed: 6 Gb/s.
  DX: Back-end connections that are not used to connect to hosts. Used by ProtectPoint, Cloud Array, XtremIO, and other arrays. ProtectPoint leverages FAST.X to link Data Domain to the array. DX ports must be configured for the FC protocol. Protocol: FC. Speed: 16 or 8 Gb/s (a).

Area: Management
  IM: Separates infrastructure tasks and emulations. By separating these tasks, emulations can focus on I/O-specific work only, while IM manages and executes common infrastructure tasks, such as environmental monitoring, Field Replacement Unit (FRU) monitoring, and vaulting. Protocol/Speed: N/A.
  ED: Middle layer used to separate front-end and back-end I/O processing. It acts as a translation layer between the front-end, which is what the host knows about, and the back-end, which is the layer that reads, writes, and communicates with physical storage in the array. Protocol/Speed: N/A.

Area: Host connectivity
  FA - Fibre Channel, SE - iSCSI, FE - FCoE, EF - FICON (b): Front-end emulation that:
  • Receives data from the host (network) and commits it to the array
  • Sends data from the array to the host/network
  Speed: FC - 16 or 8 Gb/s (a); SE and FE - 10 Gb/s; EF - 16 Gb/s.

Area: Remote replication
  RF - Fibre Channel, RE - GbE: Interconnects arrays for Symmetrix Remote Data Facility (SRDF). Speed: RF - 8 or 16 Gb/s FC SRDF; RE - 1 GbE SRDF; RE - 10 GbE SRDF.

a. The 8 Gb/s module auto-negotiates to 2/4/8 Gb/s and the 16 Gb/s module auto-negotiates to 16/8/4 Gb/s using optical SFP and OM2/OM3/OM4 cabling.
b. Only on VMAX 450F, 850F, and 950F arrays.

Container applications

HYPERMAX OS provides an open application platform for running data services. HYPERMAX OS includes a light-weight hypervisor that enables multiple operating environments to run as virtual machines on the storage array.
Application containers are virtual machines that provide embedded applications on the storage array. Each container virtualizes hardware resources required by the embedded application, including:
• Hardware needed to run the software and embedded application (processor, memory, PCI devices, power management)
• VM ports, to which LUNs are provisioned
• Access to necessary drives (boot, root, swap, persist, shared)
Embedded Management
The eManagement container application embeds management software (Solutions Enabler, SMI-S, Unisphere for VMAX) on the storage array, enabling you to manage the array without requiring a dedicated management host.
With eManagement, you can manage a single storage array and any SRDF attached arrays. To manage multiple storage arrays with a single control pane, use the traditional host-based management interfaces, Unisphere for VMAX and Solutions Enabler. To this end, eManagement allows you to link-and-launch a host-based instance of Unisphere for VMAX.
eManagement is typically pre-configured and enabled at the EMC factory, thereby eliminating the need for you to install and configure the application. However, starting with HYPERMAX OS 5977.945.890, eManagement can be added to VMAX arrays in the field. Contact your EMC representative for more information.
Embedded applications require system memory. The following table lists the amount of memory unavailable to other data services.
Table 28 eManagement resource requirements
VMAX3 model CPUs Memory Devices supported
VMAX3 100K 4 12 GB 64K
VMAX3 200K 4 16 GB 128K
VMAX3 400K 4 20 GB 256K
Virtual Machine ports
Virtual machine (VM) ports are associated with virtual machines to avoid contention with physical connectivity. VM ports are addressed as ports 32-63 per director FA emulation.
LUNs are provisioned on VM ports using the same methods as provisioning physical ports.
A VM port can be mapped to one and only one VM.
A VM can be mapped to more than one port.
Embedded Network Attached Storage
Embedded Network Attached Storage (eNAS) is fully integrated into the VMAX3 array. eNAS provides flexible and secure multi-protocol file sharing (NFS 2.0, 3.0, 4.0/4.1, CIFS/SMB 3.0) and multiple file server identities (CIFS and NFS servers). eNAS enables:
• File server consolidation/multi-tenancy
• Built-in asynchronous file level remote replication (File Replicator)
• Built-in Network Data Management Protocol (NDMP)
• VDM Synchronous replication with SRDF/S and optional automatic failover manager File Auto Recovery (FAR) with optional File Auto Recover Manager (FARM)
• FAST.X in external provisioning mode
• Anti-virus
eNAS provides file data services that enable customers to:
• Consolidate block and file storage in one infrastructure
• Eliminate the gateway hardware, reducing complexity and costs
• Simplify management
Consolidated block and file storage reduces costs and complexity while increasing business agility. Customers can leverage rich data services across block and file storage including FAST, service level provisioning, dynamic Host I/O Limits, and Data at Rest Encryption.
eNAS solutions and implementation
The eNAS solution runs on standard array hardware and is typically pre-configured at the factory. In this scenario, EMC provides a one-time setup of the Control Station and Data Movers, containers, control devices, and required masking views as part of the factory eNAS pre-configuration. Additional front-end I/O modules are required to implement eNAS. However, starting with HYPERMAX OS 5977.945.890, eNAS can be added to VMAX arrays in the field. Contact your EMC representative for more information.
eNAS uses the HYPERMAX OS hypervisor to create virtual instances of NAS Data Movers and Control Stations on VMAX3 controllers. Control Stations and Data Movers are distributed within the VMAX3 based upon the number of engines and their associated mirrored pair.
By default, VMAX3 arrays are configured with:
• Two Control Station virtual machines
• Data Mover virtual machines. The number of Data Movers varies by array size:
  - VMAX 100K = Two (default and maximum)
  - VMAX 200K = Two (default), or four (maximum)
  - VMAX 400K = Two (default), four, six, or eight (maximum)
All configurations include one standby Data Mover.
eNAS configurations
The storage capacity required for arrays supporting eNAS is the same (~ 680 GB).
The following table lists eNAS configurations and front-end I/O modules.
Table 29 eNAS configurations by array
Component Description VMAX 100K VMAX 200K VMAX 400K
Data Mover virtual machine (a)
  Maximum number 2 4 8
  Maximum capacity/DM 256 TB 512 TB 512 TB
  Logical cores 8 12/24 16/32/48/64
  Memory (GB) 12 48/96 48/96/144/192
Control Station virtual machines (two)
  Logical cores 2 2 2
  Memory (GB) 8 8 8
NAS capacity/array
  Maximum 256 TB 1.5 PB 3.5 PB
Front-end I/O modules (b)(c) 4 12 24
a. Data Movers are added in pairs and must support the same configuration.
b. One I/O module per eNAS instance per standard block configuration.
c. Backup to tape is optional and is not counted toward the one I/O module requirement.
Replication using eNAS
The following replication methods are available for eNAS file systems:
• Asynchronous file system level replication using VNX Replicator for File. Refer to Using VNX Replicator 8.x.
• Synchronous replication with SRDF/S using File Auto Recovery (FAR) with the optional File Auto Recover Manager (FARM).
• Checkpoint (point-in-time, logical images of a production file system) creation and management using VNX SnapSure. Refer to Using VNX SnapSure 8.x.
eNAS replication is available as part of the Remote Replication Suite and Local Replication Suite.
Note: SRDF/A, SRDF/Metro, and TimeFinder are not available with eNAS.
eNAS management interface
eNAS block and file storage is managed using the Unisphere for VMAX File Dashboard. Link and launch enables you to run the block and file management GUI within the same session.
The configuration wizard helps you create storage groups (automatically provisioned to the Data Movers) quickly and easily. Creating a storage group creates a storage pool in Unisphere for VNX that can be used for file level provisioning tasks.

Data protection and integrity

HYPERMAX OS provides a suite of integrity checks, RAID options, and vaulting capabilities to ensure data integrity and to protect data in the event of a system failure or power outage.
Data at Rest Encryption
Securing sensitive data is one of the greatest challenges faced by many enterprises. Increasing regulatory and legislative demands and the constantly changing threat landscape have brought data security to the forefront of IT issues. Several of the most important data security threats are related to protection of the storage environment. Drive loss and theft are primary risk factors. EMC® Data at Rest Encryption (D@RE) protects data confidentiality by adding back-end encryption to the entire array.
D@RE provides hardware-based, on-array, back-end encryption for VMAX arrays by using SAS I/O modules that incorporate AES-XTS inline data encryption. These modules encrypt and decrypt data as it is being written to or read from disk, thus protecting your information from unauthorized access even when disk drives are removed from the array.
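For readers who want a concrete picture of AES-XTS, the following minimal sketch in Python (using the third-party cryptography package) encrypts and decrypts a single 512-byte sector, with the sector number acting as the tweak. The sector size, tweak derivation, and key handling shown here are illustrative assumptions only; the array performs this work in hardware on the SAS I/O modules and manages keys as described below.

    # Conceptual illustration of AES-XTS sector encryption (not the D@RE hardware path).
    # Requires the third-party "cryptography" package: pip install cryptography
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    SECTOR_SIZE = 512                      # assumed data-unit size for this sketch
    key = os.urandom(64)                   # AES-256-XTS uses a double-length (512-bit) key

    def xts_encrypt_sector(key: bytes, sector_number: int, plaintext: bytes) -> bytes:
        """Encrypt one sector; the sector number acts as the XTS tweak."""
        tweak = sector_number.to_bytes(16, "little")
        encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
        return encryptor.update(plaintext) + encryptor.finalize()

    def xts_decrypt_sector(key: bytes, sector_number: int, ciphertext: bytes) -> bytes:
        tweak = sector_number.to_bytes(16, "little")
        decryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
        return decryptor.update(ciphertext) + decryptor.finalize()

    data = os.urandom(SECTOR_SIZE)
    assert xts_decrypt_sector(key, 7, xts_encrypt_sector(key, 7, data)) == data

Because the tweak varies per sector, identical plaintext stored in different sectors produces different ciphertext, which is why drives encrypted this way reveal nothing useful when removed from the array.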
D@RE supports either an internal embedded key manager, or an external, enterprise-grade key manager accessible through the Key Management Interoperability Protocol (KMIP). The following external key managers are supported:
• SafeNet KeySecure by Gemalto
• IBM Security Key Lifecycle Manager
For supported external key manager and HYPERMAX OS versions, refer to the EMC E-Lab Interoperability Matrix (https://www.emc.com/products/interoperability/elab.htm).
When D@RE is enabled, all configured drives are encrypted, including data drives, spares, and drives with no provisioned volumes. Vault data is encrypted on Flash I/O modules.
D@RE enables:
• Secure replacement for failed drives that cannot be erased. For some types of drive failures, data erasure is not possible. Without D@RE, if the failed drive is repaired, data on the drive may be at risk. With D@RE, simply delete the applicable keys, and the data on the failed drive is unreadable.
• Protection against stolen drives. When a drive is removed from the array, the key stays behind, making data on the drive unreadable.
• Faster drive sparing. The drive replacement script destroys the keys associated with the removed drive, quickly making all data on that drive unreadable.
• Secure array retirement. Simply delete all copies of keys on the array, and all remaining data is unreadable.
D@RE is compatible with all array features and all supported drive types or volume emulations. Encryption is a powerful tool for enforcing your security policies. D@RE delivers encryption without degrading performance or disrupting your existing applications and infrastructure.
Enabling D@RE
D@RE is a licensed feature, and is pre-configured and installed at the factory. The process to upgrade an existing array to use D@RE is disruptive and requires re-installing the array, and may involve a full data backup and restore. Before you upgrade, you must plan how to manage any data already on the array. EMC Professional Services offers services to help you upgrade to D@RE.
D@RE components
Embedded D@RE (Figure 1 on page 41) uses the following components, all of which reside on the primary Management Module Control Station (MMCS):
• RSA Embedded Data Protection Manager (eDPM)— Embedded key management platform, which provides onboard encryption key management functions, such as secure key generation, storage, distribution, and audit.
• RSA BSAFE® cryptographic libraries— Provide security functionality for the RSA eDPM Server (embedded key management) and the EMC KTP client (external key management).
• Common Security Toolkit (CST) Lockbox— Hardware- and software-specific encrypted repository that securely stores passwords and other sensitive key manager configuration information. The lockbox binds to a specific MMCS.
External D@RE (Figure 2 on page 41) uses the same components as embedded D@RE, and adds the following:
• EMC Key Trust Platform (KTP)— Also known as the KMIP Client, this component resides on the MMCS and communicates via the OASIS Key Management Interoperability Protocol (KMIP) with external key managers to manage encryption keys.
• External Key Manager— Provides centralized encryption key management capabilities such as secure key generation, storage, distribution, and audit, and enables Federal Information Processing Standard (FIPS) 140-2 level 3 validation with a Hardware Security Module (HSM).
• Cluster/Replication Group— Multiple external key managers sharing configuration settings and encryption keys. Configuration and key lifecycle changes made to one node are replicated to all members within the same cluster or replication group.
Figure 1 D@RE architecture, embedded
Figure 2 D@RE architecture, external
External Key Managers
D@RE's external, enterprise-grade key management is provided by Gemalto SafeNet KeySecure and IBM Security Key Lifecycle Manager. Keys are generated and distributed using the best practices as defined by industry standards (NIST 800-57 and ISO 11770). With D@RE, there is no need to replicate keys across volume snapshots or remote sites. D@RE external key managers can be used with either a FIPS 140-2 level 3 validated HSM, in the case of Gemalto SafeNet KeySecure, or FIPS 140-2 level 1 validated software, in the case of IBM Security Key Lifecycle Manager.
Encryption keys must be both highly available when they are needed, and tightly secured. Keys, and the information required to use keys (during decryption), must be preserved for the lifetime of the data. This is critical for encrypted data that is kept for many years.
Encryption keys must be accessible. Key accessibility is vital in high-availability environments. D@RE caches the keys locally so that connection to the Key Manager is required only for operations such as the initial installation of the array, replacement of a drive, or drive upgrades.
Key lifecycle events (generation and destruction) are recorded in the VMAX Audit Log.
Key protection
The local keystore file is encrypted with a 256-bit AES key derived from a randomly generated password, and stored in the Common Security Toolkit (CST) Lockbox, which leverages RSA's BSAFE technology. The Lockbox is protected using MMCS-specific stable system values (SSVs) of the primary MMCS. These are the same SSVs that protect Secure Service Credentials (SSC).
Compromising the MMCS’s drive or copying Lockbox/keystore files off the array causes the SSV tests to fail. Compromising the entire MMCS only gives an attacker access if they also successfully compromise SSC.
There are no backdoor keys or passwords to bypass D@RE security.
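As a rough software analogue of the scheme described above (a keystore encrypted with a 256-bit AES key derived from a randomly generated password held in a lockbox), the sketch below uses PBKDF2 and AES-GCM from the Python cryptography package. The iteration count, salt handling, and keystore format are assumptions for illustration; the binding of the real Lockbox to MMCS stable system values is not modeled.

    # Sketch of deriving a 256-bit keystore-encryption key from a random password.
    # Parameters (PBKDF2 iterations, AES-GCM for the keystore blob) are illustrative only.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    password = os.urandom(32)              # randomly generated, then held in a protected lockbox
    salt = os.urandom(16)

    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    keystore_key = kdf.derive(password)    # 256-bit AES key for the keystore file

    keystore_plaintext = b'{"drive-0001": "...wrapped DEK...", "drive-0002": "..."}'
    nonce = os.urandom(12)
    keystore_blob = AESGCM(keystore_key).encrypt(nonce, keystore_plaintext, None)

    # Decryption succeeds only with the same password and salt (i.e., with lockbox access).
    assert AESGCM(keystore_key).decrypt(nonce, keystore_blob, None) == keystore_plaintext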
Key operations
D@RE provides a separate, unique Data Encryption Key (DEK) for each drive in the array, including spare drives. The following operations ensure that D@RE uses the correct key for a given drive (a conceptual sketch follows this list):
• DEKs stored in the VMAX array include a unique key tag and key metadata; this information is included with the key material when the DEK is wrapped (encrypted) for use in the array.
• During encryption I/O, the expected key tag associated with the drive is supplied separately from the wrapped key.
• During key unwrap, the encryption hardware checks that the key unwrapped properly and that it matches the supplied key tag.
• Information in a reserved system LBA (Physical Information Block, or PHIB) verifies the key used to encrypt the drive and ensures the drive is in the correct location.
• During initialization, the hardware performs self-tests to ensure that the encryption/decryption logic is intact. The self-test prevents silent data corruption due to encryption hardware failures.
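The sketch below is a loose software analogue of the wrap-and-verify flow in the list above, using AES key wrap (RFC 3394) from the Python cryptography package together with an HMAC-based key tag. The tag format, the drive identifier, and the pairing of tag with wrapped key are assumptions made for illustration; on the array these checks run in the encryption hardware.

    # Conceptual analogue of wrapping a per-drive DEK and verifying its key tag on unwrap.
    import os
    import hmac
    import hashlib
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    kek = os.urandom(32)                   # key-encryption key held by the key manager
    dek = os.urandom(32)                   # data encryption key for one drive

    def key_tag(key: bytes, drive_id: str) -> bytes:
        """Illustrative key tag: an HMAC binding the key material to a drive identity."""
        return hmac.new(key, drive_id.encode(), hashlib.sha256).digest()[:8]

    drive_id = "engine1-dae2-slot17"       # hypothetical drive identifier
    wrapped_dek = aes_key_wrap(kek, dek)   # DEK stored only in wrapped form
    expected_tag = key_tag(dek, drive_id)  # tag kept with the wrapped-key metadata

    # At I/O time: unwrap, then confirm the unwrapped key matches the expected tag.
    candidate = aes_key_unwrap(kek, wrapped_dek)
    if not hmac.compare_digest(key_tag(candidate, drive_id), expected_tag):
        raise ValueError("key tag mismatch: wrong key for this drive")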
Audit logs
The audit log records major activities on the VMAX3 array, including:
• Host-initiated actions
• Physical component changes
• Actions on the MMCS
• D@RE key management events
• Attempts blocked by security controls (Access Controls)
The Audit Log is secure and tamper-proof. Event contents cannot be altered. Users with Auditor access can view, but not modify, the log.
Data erasure
EMC Data Erasure uses specialized software to erase information on arrays. Data erasure mitigates the risk of information dissemination, and helps secure information at the end of the information lifecycle. Data erasure:
• Protects data from unauthorized access
• Ensures secure data migration by making data on the source array unreadable
• Supports compliance with internal policies and regulatory requirements
Data Erasure overwrites data at the lowest application-addressable level on the drives. The number of overwrites is configurable from 3x (the default) to 7x with a combination of random patterns on the selected arrays.
Overwrite is supported on both SAS and Flash drives. An optional certification service is available to provide a certificate of erasure. Drives that fail erasure are delivered to customers for final disposition.
For individual Flash drives, Secure Erase operations erase all physical flash areas on the drive which may contain user data.
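The multi-pass overwrite idea (a configurable number of passes, each writing random data) can be sketched as follows for a single file-backed device image. The chunk size, pattern choice, and file-level I/O are assumptions for illustration; the actual service operates on whole drives at the lowest addressable level and produces certified results.

    # Illustrative multi-pass overwrite of a block-device image (3 to 7 passes of random data).
    import os

    def erase(path: str, passes: int = 3, chunk: int = 1024 * 1024) -> None:
        if not 3 <= passes <= 7:
            raise ValueError("pass count is configurable from 3x (default) to 7x")
        size = os.path.getsize(path)
        with open(path, "r+b") as dev:
            for _ in range(passes):
                dev.seek(0)
                remaining = size
                while remaining > 0:
                    n = min(chunk, remaining)
                    dev.write(os.urandom(n))   # random pattern for this pass
                    remaining -= n
                dev.flush()
                os.fsync(dev.fileno())         # force the pass to media before the next one

    # erase("/tmp/device.img", passes=7)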
EMC offers the following data erasure services:
• EMC Data Erasure for Full Arrays — Overwrites data on all drives in the system when replacing, retiring, or re-purposing an array.
• EMC Data Erasure/Single Drives — Overwrites data on individual SAS and Flash drives.
• EMC Disk Retention — Enables organizations that must retain all media to retain failed drives.
• EMC Assessment Service for Storage Security — Assesses your information protection policies and suggests a comprehensive security strategy.
All erasure services are performed on-site in the security of the customer’s data center and include a Data Erasure Certificate and report of erasure results.
Block CRC error checks
HYPERMAX OS provides:
• Industry-standard T10 Data Integrity Field (DIF) block cyclic redundancy code (CRC) for track formats. For open systems, this enables host-generated DIF CRCs to be stored with user data by the arrays and used for end-to-end data integrity validation.
• Additional protections for address/control fault modes for increased levels of protection against faults. These protections are defined in user-definable blocks supported by the T10 standard.
• Address and write status information in the extra bytes in the application tag and reference tag portion of the block CRC.
Data integrity checks
HYPERMAX OS validates the integrity of data at every possible point during the lifetime of the data. From the point at which data enters an array, the data is continuously protected by error detection metadata. This protection metadata is checked by hardware and software mechanisms any time data is moved within the array subsystem, allowing the array to provide true end-to-end integrity checking and protection against hardware or software faults.
The protection metadata is appended to the data stream, and contains information describing the expected data location as well as CRC representation of the actual data contents. The expected values to be found in protection metadata are stored persistently in an area separate from the data stream. The protection metadata is used to validate the logical correctness of data being moved within the array any time the data transitions between protocol chips, internal buffers, internal data fabric endpoints, system cache, and system drives.
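To make the T10 DIF guard value concrete, the sketch below computes a 16-bit CRC over a 512-byte block using the polynomial commonly documented for T10 DIF (0x8BB7) and pairs it with application and reference tag fields. The bit ordering, tag values, and data layout are assumptions for illustration, not a normative statement of the standard or of the HYPERMAX OS implementation.

    # CRC-16 guard computation in the style of T10 DIF (polynomial 0x8BB7, no reflection).
    def crc16_t10_dif(data: bytes) -> int:
        crc = 0x0000
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        return crc

    # A protection-information tuple for one 512-byte block: guard CRC plus the
    # application and reference tags that the text above describes.
    block = bytes(512)
    pi = {
        "guard": crc16_t10_dif(block),
        "app_tag": 0x0000,        # application tag bytes (usage is implementation-defined)
        "ref_tag": 0x00000001,    # reference tag, typically tied to the block address
    }
    print(pi)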
Drive monitoring and correction
HYPERMAX OS monitors medium defects by both examining the result of each disk data transfer and proactively scanning the entire disk during idle time. If a block on the disk is determined to be bad, the director:
1. Rebuilds the data in the physical storage, if necessary.
2. Remaps the defect block to another area on the drive set aside for this purpose.
3. Rewrites the data from physical storage back to the remapped block on the drive.
4. Rewrites the data in physical storage, if necessary.
The director maps around any bad block(s) detected, thereby avoiding defects in the media. The director also keeps track of each bad block detected on a drive. If the number of bad blocks exceeds a predefined threshold, the VMAX array invokes a sparing operation to replace the defective drive and then automatically alerts EMC Customer Support to arrange for corrective action, if necessary. With the deferred service sparing model, immediate action is often not required.
Physical memory error correction and error verification
HYPERMAX OS corrects single-bit errors and reports an error code once the single-bit errors reach a predefined threshold. In the unlikely event that physical memory replacement is required, the array notifies EMC support, and a replacement is ordered.
Drive sparing and direct member sparing
When HYPERMAX OS 5977 detects a drive is about to fail or has failed, a direct member sparing (DMS) process is initiated. Direct member sparing looks for available spares within the same engine that are of the same block size, capacity and speed, with the best available spare always used.
With direct member sparing, the invoked spare is added as another member of the RAID group. During a drive rebuild, the option to directly copy the data from the failing drive to the invoked spare drive is supported. The failing drive is removed only when the copy process is finished. Direct member sparing is automatically initiated upon detection of drive-error conditions.
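The spare-selection rule described above (a spare within the same engine that matches the failing drive's block size, capacity, and speed, with the best available spare chosen first) can be expressed as a simple filter-and-sort. The Drive fields and the interpretation of "best available" as least-worn are assumptions made for this sketch only.

    # Illustrative spare selection for direct member sparing (fields and ordering are assumed).
    from dataclasses import dataclass
    from typing import Optional, List

    @dataclass
    class Drive:
        drive_id: str
        engine: int
        block_size: int      # bytes
        capacity_gb: float
        speed: str           # "Flash", "15K", "10K", "7.2K"
        wear_pct: float = 0.0

    def select_spare(failing: Drive, spares: List[Drive]) -> Optional[Drive]:
        candidates = [
            s for s in spares
            if s.engine == failing.engine
            and s.block_size == failing.block_size
            and s.capacity_gb == failing.capacity_gb
            and s.speed == failing.speed
        ]
        # "Best available" is assumed here to mean the least-worn candidate.
        return min(candidates, key=lambda s: s.wear_pct, default=None)

    failing = Drive("dae2-slot17", engine=1, block_size=512, capacity_gb=1200.2, speed="10K")
    spares = [Drive("dae3-slot04", 1, 512, 1200.2, "10K", wear_pct=2.5),
              Drive("dae1-slot11", 2, 512, 1200.2, "10K", wear_pct=0.1)]
    print(select_spare(failing, spares))   # picks the same-engine candidate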
Direct member sparing provides the following benefits:
• The array can copy the data from the failing RAID member (if available), removing the need to read the data from all of the members and perform a full rebuild. Copying to the new RAID member is less CPU intensive.
• If a failure occurs in another member, the array can still recover the data automatically from the failing member (if available).
• More than one spare for a RAID group is supported at the same time.
Vault to flash
VMAX3 arrays initiate a vault operation if the system is powered down, transitions offline, or if environmental conditions occur, such as the loss of a data center due to an air conditioning failure.
Each array comes with Standby Power Supply (SPS) modules. If you lose power, the array uses the SPS power to write the system mirrored cache onto flash storage. Vaulted images are fully redundant; the contents of the system mirrored cache are saved twice to independent flash storage.
The vault operation
When a vault operation is initiated:
• During the save part of the vault operation, the VMAX3 array stops all I/O. When the system mirrored cache reaches a consistent state, directors write the contents to the vault devices, saving two copies of the data. The array then completes the power down, or, if power down is not required, remains in the offline state.
• During the restore part of the operation, the array startup program initializes the hardware and the environmental system, and restores the system mirrored cache contents from the saved data (while checking data integrity).
The system resumes normal operation when the SPSes are sufficiently recharged to support another vault. If any condition is not safe, the system does not resume operation and notifies Customer Support for diagnosis and repair. This allows Customer Support to communicate with the array and restore normal system operations.
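The save and restore behavior described above (two independent copies of the system mirrored cache, integrity-checked on restore) can be outlined as follows. The checksum scheme and in-memory representation are illustrative assumptions, not the HYPERMAX OS implementation.

    # Conceptual outline of a vault save and restore (two copies, integrity-checked restore).
    import hashlib
    from typing import Dict, List

    def save_vault(mirrored_cache: bytes, vault_devices: List[Dict]) -> str:
        """Write the consistent cache image twice to independent flash vault devices."""
        digest = hashlib.sha256(mirrored_cache).hexdigest()
        for device in vault_devices[:2]:              # two fully redundant vaulted images
            device["image"] = mirrored_cache
            device["digest"] = digest
        return digest

    def restore_vault(vault_devices: List[Dict], expected_digest: str) -> bytes:
        """Return the first saved image that passes the integrity check."""
        for device in vault_devices:
            image = device.get("image")
            if image and hashlib.sha256(image).hexdigest() == expected_digest:
                return image
        raise RuntimeError("no vault copy passed integrity checking")

    devices = [{"name": "flash-A"}, {"name": "flash-B"}]
    digest = save_vault(b"\x00" * 1024, devices)      # stand-in for the system mirrored cache
    assert restore_vault(devices, digest) == b"\x00" * 1024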
Vault configuration considerations
The following configuration considerations apply:
• To support vault to flash, the VMAX3 arrays require the following number of flash I/O modules:
  - VMAX 100K: two to four per engine
  - VMAX 200K and 400K: two to eight per engine
• The size of the flash module is determined by the amount of system cache and metadata required for the configuration. For the number of supported flash I/O modules, refer to Table 5 on page 24.
• The vault space is for internal use only and cannot be used for any other purpose when the system is online.
• The total capacity of all vault flash partitions is sufficient to keep two logical copies of the persistent portion of the system mirrored cache.
CHAPTER 2

Management Interfaces

This chapter provides an overview of interfaces to manage arrays. Topics include:
• Management interface versions......................................................................... 48
• Unisphere for VMAX.......................................................................................... 48
• Unisphere 360....................................................................................................49
• Solutions Enabler............................................................................................... 49
• Mainframe Enablers...........................................................................................50
• Geographically Dispersed Disaster Restart (GDDR)........................................... 51
• SMI-S Provider...................................................................................................51
• VASA Provider....................................................................................................51
• eNAS management interface ............................................................................ 52
• ViPR suite.......................................................................................................... 52
• vStorage APIs for Array Integration...................................................................53
• SRDF Adapter for VMware® vCenter™ Site Recovery Manager........................ 54
• SRDF/Cluster Enabler .......................................................................................54
• EMC Product Suite for z/TPF........................................................................... 54
• SRDF/TimeFinder Manager for IBM i.................................................................55
• AppSync............................................................................................................ 55

Management interface versions

The following management software supports HYPERMAX OS 5977 Q2 2017 SR:
• Unisphere for VMAX V8.4
• Solutions Enabler V8.4
• Mainframe Enablers V8.1
• GDDR V5.0
• Migrator V8.0
• SMI-S V8.4
• SRDF/CE V4.2.1
• SRA V6.3
• VASA Provider V8.4

Unisphere for VMAX

EMC Unisphere for VMAX is a web-based application that allows you to quickly and easily provision, manage, and monitor arrays.
Unisphere allows you to perform the following tasks:
Table 30 Unisphere tasks
Section           Allows you to:
Home              Perform viewing and management functions such as array usage, alert settings, authentication options, system preferences, user authorizations, and link and launch client registrations.
Storage           View and manage storage groups and storage tiers.
Hosts             View and manage initiators, masking views, initiator groups, array host aliases, and port groups.
Data Protection   View and manage local replication, monitor and manage replication pools, create and view device groups, and monitor and manage migration sessions.
Performance       Monitor and manage array dashboards, perform trend analysis for future capacity planning, and analyze data.
Databases         Troubleshoot database and storage issues, and launch Database Storage Analyzer.
System            View and display dashboards, active jobs, alerts, array attributes, and licenses.
Support           View online help for Unisphere tasks.
Unisphere for VMAX is also available as a Representational State Transfer (REST) API. This robust API allows you to access performance and configuration information, and to provision storage arrays. It can be used in any of the programming environments that support standard REST clients, such as web browsers and programming platforms that can issue HTTP requests.

Workload Planner

Workload Planner displays performance metrics for applications. Use Workload Planner to model the impact of migrating a workload from one storage system to another.
Use Workload Planner to:
• Model proposed new workloads.
• Assess the impact of moving one or more workloads off of a given array running HYPERMAX OS.
• Determine current and future resource shortfalls that require action to maintain the requested workloads.

FAST Array Advisor

The FAST Array Advisor wizard guides you through the steps to determine the impact on performance of migrating a workload from one array to another.
If the wizard determines that the target array can absorb the added workload, it automatically creates all the auto-provisioning groups required to duplicate the source workload on the target array.

Unisphere 360

Unisphere 360 is an on-premise management solution that provides a single window across arrays running HYPERMAX OS at a single site. It allows you to:
• Add a Unisphere server to Unisphere 360 to allow for data collection and reporting of Unisphere management storage system data.
• View the system health, capacity, alerts, and capacity trends for your data center.
• View all storage systems from all enrolled Unisphere instances in one place.
• View details on performance and capacity.
• Link and launch to Unisphere instances running v8.2 or higher.
• Manage Unisphere 360 users and configure authentication and authorization rules.
• View details of visible storage arrays, including current and target storage.

Solutions Enabler

Solutions Enabler provides a comprehensive command line interface (SYMCLI) to manage your storage environment.
SYMCLI commands are invoked from the host, either interactively on the command line, or using scripts.
SYMCLI is built on functions that use system calls to generate low-level I/O SCSI commands. Configuration and status information is maintained in a host database file, reducing the number of inquiries from the host to the arrays.
Use SYMCLI to:
• Configure array software (for example, TimeFinder, SRDF, Open Replicator)
• Monitor device configuration and status
• Perform control operations on devices and data objects
Solutions Enabler is also available as a Representational State Transfer (REST) API. This robust API allows you to access performance and configuration information, and to provision storage arrays. It can be used in any of the programming environments that support standard REST clients, such as web browsers and programming platforms that can issue HTTP requests.
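Because the REST API is reachable from any standard HTTP client, a script can query it with a few lines of Python. The host name, port, credentials, certificate path, and resource paths below are placeholders and assumptions for illustration; consult the REST API documentation that ships with Unisphere for VMAX and Solutions Enabler for the exact resources and versions.

    # Sketch of calling the REST API with a standard HTTP client (third-party "requests").
    # The host, credentials, and resource paths are placeholders, not documented values.
    import requests

    UNISPHERE = "https://unisphere.example.com:8443"        # hypothetical management host
    AUTH = ("rest_user", "rest_password")                   # hypothetical credentials

    session = requests.Session()
    session.auth = AUTH
    session.verify = "/path/to/unisphere-ca.pem"            # validate the server certificate

    # Example: read version information, then list arrays (paths assumed for illustration).
    version = session.get(f"{UNISPHERE}/univmax/restapi/system/version", timeout=30)
    version.raise_for_status()
    print(version.json())

    arrays = session.get(f"{UNISPHERE}/univmax/restapi/system/symmetrix", timeout=30)
    arrays.raise_for_status()
    print(arrays.json())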

Mainframe Enablers

The EMC Mainframe Enablers are a suite of software components that allow you to monitor and manage arrays running HYPERMAX OS. The following components are distributed and installed as a single package:
• ResourcePak Base for z/OS: Enables communication between mainframe-based applications (provided by EMC or independent software vendors) and arrays.
• SRDF Host Component for z/OS: Monitors and controls SRDF processes through commands executed from a host. SRDF maintains a real-time copy of data at the logical volume level in multiple arrays located in physically separate sites.
• EMC Consistency Groups for z/OS: Ensures the consistency of data remotely copied by the SRDF feature in the event of a rolling disaster.
• AutoSwap for z/OS: Handles automatic workload swaps between arrays when an unplanned outage or problem is detected.
• TimeFinder SnapVX: With Mainframe Enablers V8.0 and higher, SnapVX creates point-in-time copies directly in the Storage Resource Pool (SRP) of the source device, eliminating the concepts of target devices and source/target pairing. SnapVX point-in-time copies are accessible to the host via a link mechanism that presents the copy on another device. TimeFinder SnapVX and HYPERMAX OS support backward compatibility to traditional TimeFinder products, including TimeFinder/Clone, TimeFinder VP Snap, and TimeFinder/Mirror.
• Data Protector for z Systems (zDP™): With Mainframe Enablers V8.0 and higher, zDP is deployed on top of SnapVX. zDP provides a granular level of application recovery from unintended changes to data. zDP achieves this by providing automated, consistent point-in-time copies of data from which an application-level recovery can be conducted.
• TimeFinder/Clone Mainframe Snap Facility: Produces point-in-time copies of full volumes or of individual datasets. TimeFinder/Clone operations involve full volumes or datasets where the amount of data at the source is the same as the amount of data at the target. TimeFinder VP Snap leverages clone technology to create space-efficient snaps for thin devices.
• TimeFinder/Mirror for z/OS: Allows the creation of Business Continuance Volumes (BCVs) and provides the ability to ESTABLISH, SPLIT, RE-ESTABLISH, and RESTORE from the source logical volumes.
• TimeFinder Utility: Conditions SPLIT BCVs by relabeling volumes and (optionally) renaming and recataloging datasets. This allows BCVs to be mounted and used.

Geographically Dispersed Disaster Restart (GDDR)

GDDR automates business recovery following both planned outages and disaster situations, including the total loss of a data center. Leveraging the VMAX architecture and the foundation of SRDF and TimeFinder replication families, GDDR eliminates any single point of failure for disaster restart plans in mainframe environments. GDDR intelligence automatically adjusts disaster restart plans based on triggered events.
GDDR does not provide replication and recovery services itself, but rather monitors and automates the services provided by other EMC products, as well as third-party products, required for continuous operations or business restart. GDDR facilitates business continuity by generating scripts that can be run on demand; for example, restart business applications following a major data center incident, or resume replication to provide ongoing data protection following unplanned link outages.
Scripts are customized when invoked by an expert system that tailors the steps based on the configuration and the event that GDDR is managing. Through automatic event detection and end-to-end automation of managed technologies, GDDR removes human error from the recovery process and allows it to complete in the shortest time possible.
The GDDR expert system is also invoked to automatically generate planned procedures, such as moving compute operations from one data center to another. This is the gold standard for high-availability compute operations: the ability to move from scheduled DR test weekend activities to regularly scheduled data center swaps without disrupting application workloads.

SMI-S Provider

VASA Provider

EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management. This initiative has developed a standard management interface that resulted in a comprehensive specification (SMI-Specification, or SMI-S).
SMI-S defines the open storage management interface, to enable the interoperability of storage management technologies from multiple vendors. These technologies are used to monitor and control storage resources in multivendor or SAN topologies.
Solutions Enabler components required for SMI-S Provider operations are included as part of the SMI-S Provider installation.
The VASA Provider enables VMAX management software to inform vCenter of how VMFS storage, including VVols, is configured and protected. These capabilities are defined by EMC and include characteristics such as disk type, thin or thick provisioning, storage tiering, and remote replication status. This allows vSphere administrators to make quick, intelligent, and informed decisions about virtual machine placement. VASA offers the ability for vSphere administrators to complement their use of plugins and other tools to track how VMAX devices hosting VMFS volumes are configured to meet performance and availability needs.

eNAS management interface

eNAS block and file storage is managed using the Unisphere for VMAX File Dashboard. Link and launch enables you to run the block and file management GUI within the same session.
The configuration wizard helps you create storage groups (automatically provisioned to the Data Movers) quickly and easily. Creating a storage group creates a storage pool in Unisphere for VNX that can be used for file level provisioning tasks.

ViPR suite

The EMC ViPR® Suite delivers storage automation and management insights across multi-vendor storage. It helps to improve efficiency and optimize storage resources, while meeting service levels. The ViPR Suite provides self-service access to speed service delivery, reducing dependencies on IT, and providing an easy to use cloud experience.

ViPR Controller

ViPR Controller provides a single control plane for heterogeneous storage systems. ViPR makes a multi-vendor storage environment look like one virtual array.
ViPR uses software adapters that connect to the underlying arrays. ViPR exposes the APIs so any vendor, partner, or customer can build new adapters to add new arrays. This creates an extensible “plug and play” storage environment that can automatically connect to, discover and map arrays, hosts, and SAN fabrics.
ViPR enables the software-defined data center by helping users:
• Automate storage for multi-vendor block and file storage environments (control plane, or ViPR Controller)
• Manage and analyze data objects (ViPR Object and HDFS Services) to create a unified pool of data across file shares and commodity servers
• Create scalable, dynamic, commodity-based block storage (ViPR Block Service)
• Manage multiple data centers in different locations with single sign-on data access from any data center
• Protect against data center failures using active-active functionality to replicate data between geographically dispersed data centers
• Integrate with VMware and Microsoft compute stacks
• Migrate non-ViPR volumes into the ViPR environment (ViPR Migration Services Host Migration Utility)
For ViPR Controller requirements, refer to the EMC ViPR Controller Support Matrix on the EMC Online Support website.

ViPR Storage Resource Management

EMC ViPR SRM provides comprehensive monitoring, reporting, and analysis for heterogeneous block, file, and virtualized storage environments.
Use ViPR SRM to:
• Visualize applications to storage dependencies
• Monitor and analyze configurations and capacity growth
• Optimize your environment to improve return on investment
Virtualization enables businesses of all sizes to simplify management, control costs, and guarantee uptime. However, virtualized environments also add layers of complexity to the IT infrastructure that reduce visibility and can complicate the management of storage resources. ViPR SRM addresses these layers by providing visibility into the physical and virtual relationships to ensure consistent service levels.
As you build out your cloud infrastructure, ViPR SRM helps you ensure storage service levels while optimizing IT resources — both key attributes of successful cloud deployments.
ViPR SRM is designed for use in heterogeneous environments containing multi-vendor networks, hosts, and storage devices. The information it collects and the functionality it manages can reside on technologically disparate devices in geographically diverse locations. ViPR SRM moves a step beyond storage management and provides a platform for cross-domain correlation of device information and resource topology, and enables a broader view of your storage environment and enterprise data center.
ViPR SRM provides a dashboard view of the storage capacity at an enterprise level through Watch4net. The Watch4net dashboard view displays information to support decisions regarding storage capacity.
The Watch4net dashboard consolidates data from multiple ProSphere instances spread across multiple locations. It gives you a quick overview of the overall capacity status in your environment, raw capacity usage, usable capacity, used capacity by purpose, usable capacity by pools, and service levels.
The EMC ViPR SRM Product Documentation Index provides links to related ViPR documentation.

vStorage APIs for Array Integration

VMware vStorage APIs for Array Integration (VAAI) optimize server performance by offloading virtual machine operations to arrays running HYPERMAX OS.
The storage array performs the select storage tasks, freeing host resources for application processing and other tasks.
In VMware environments, storage arrays support the following VAAI components:
• Full Copy (Hardware Accelerated Copy): Faster virtual machine deployments, clones, snapshots, and VMware Storage vMotion® operations by offloading replication to the storage array.
• Block Zero (Hardware Accelerated Zeroing): Initializes file system block and virtual drive space more rapidly.
• Hardware-Assisted Locking (Atomic Test and Set): Enables more efficient metadata updates and assists virtual desktop deployments.
• UNMAP: Enables more efficient space usage for virtual machines by reclaiming unused space on datastores and returning it to the thin provisioning pool from which it was originally drawn.
• VMware vSphere Storage APIs for Storage Awareness (VASA).
VAAI is native in HYPERMAX OS and does not require additional software, unless eNAS is also implemented. If eNAS is implemented on the array, support for VAAI requires the VAAI plug-in for NAS. The plug-in is downloadable from EMC support.

SRDF Adapter for VMware® vCenter™ Site Recovery Manager

EMC SRDF Adapter is a Storage Replication Adapter (SRA) that extends the disaster restart management functionality of VMware vCenter Site Recovery Manager 5.x to arrays running HYPERMAX OS.
SRA allows Site Recovery Manager to automate storage-based disaster restart operations on storage arrays in an SRDF configuration.

SRDF/Cluster Enabler

Cluster Enabler (CE) for Microsoft Failover Clusters is a software extension of failover clusters functionality. Cluster Enabler allows Windows Server 2008 (including R2), and Windows Server 2012 (including R2) Standard and Datacenter editions running Microsoft Failover Clusters to operate across multiple connected storage arrays in geographically distributed clusters.
SRDF/Cluster Enabler (SRDF/CE) is a software plug-in module to EMC Cluster Enabler for Microsoft Failover Clusters software. The Cluster Enabler plug-in architecture consists of a CE base module component and separately available plug-in modules, which provide your chosen storage replication technology.
SRDF/CE supports:
• Synchronous mode on page 123
• Asynchronous mode on page 123
• Concurrent SRDF solutions on page 107
• Cascaded SRDF solutions on page 108

EMC Product Suite for z/TPF

The EMC Product Suite for z/TPF is a suite of components that monitor and manage arrays running HYPERMAX OS from a z/TPF host. z/TPF is an IBM mainframe operating system characterized by high-volume transaction rates with significant communications content. The following software components are distributed separately and can be installed individually or in any combination:
• SRDF Controls for z/TPF: Monitors and controls SRDF processes with functional entries entered at the z/TPF Prime CRAS (computer room agent set).
• TimeFinder Controls for z/TPF: Provides a business continuance solution consisting of TimeFinder SnapVX, TimeFinder/Clone, and TimeFinder/Mirror.
• ResourcePak for z/TPF: Provides VMAX configuration and statistical reporting and extended features for SRDF Controls for z/TPF and TimeFinder Controls for z/TPF.

SRDF/TimeFinder Manager for IBM i

EMC SRDF/TimeFinder Manager for IBM i is a set of host-based utilities that provides an IBM i interface to EMC's SRDF and TimeFinder.
This feature allows you to configure and control SRDF or TimeFinder operations on arrays attached to IBM i hosts, including:
• SRDF:
  - Configure, establish, and split SRDF devices, including:
    – SRDF/A
    – SRDF/S
    – Concurrent SRDF/A
    – Concurrent SRDF/S
• TimeFinder:
  - Configure, establish, and split TimeFinder BCV devices.
  - Create point-in-time copies of full volumes or individual data sets.
  - Create point-in-time snapshots of images.
• FAST

Extended features
EMC SRDF/TimeFinder Manager for IBM i extended features provides support for the IBM independent ASP (IASP) functionality.
IASPs are sets of switchable or private auxiliary disk pools (up to 223) that can be brought online/offline on an IBM i host without affecting the rest of the system.
When combined with SRDF/TimeFinder Manager for IBM i, IASPs let you control SRDF or TimeFinder operations on arrays attached to IBM i hosts, including:
• Display and assign TimeFinder SnapVX devices.
• Execute SRDF or TimeFinder commands to establish and split SRDF or TimeFinder devices.
• Present one or more target devices containing an IASP image to another host for business continuance (BC) processes.
Extended features control operations can be accessed:
• From the SRDF/TimeFinder Manager menu-driven interface.
• From the command line using SRDF/TimeFinder Manager commands and associated IBM i commands.

AppSync

EMC AppSync offers a simple, SLA-driven, self-service approach for protecting, restoring, and cloning critical Microsoft and Oracle applications and VMware environments. After defining service plans, application owners can protect, restore, and clone production data quickly with item-level granularity by using the underlying EMC replication technologies. AppSync also provides an application protection monitoring service that generates alerts when the SLAs are not met.
AppSync supports the following applications and storage arrays:
• Applications — Oracle, Microsoft SQL Server, Microsoft Exchange, and VMware VMFS and NFS datastores and File systems.
• Replication Technologies — SRDF, SnapVX, VNX Advanced Snapshots, VNXe Unified Snapshot, RecoverPoint, XtremIO Snapshot, and ViPR Snapshot.
CHAPTER 3

Open systems features

This chapter describes open systems-specific functionality provided with VMAX3 arrays.
• HYPERMAX OS support for open systems........................................................ 58
• Backup and restore to external arrays................................................................59
• VMware Virtual Volumes....................................................................................69

HYPERMAX OS support for open systems

HYPERMAX OS supports FBA device emulations for open systems and D910 for IBM i.
Any logical device manager software installed on a host can be used with the storage devices.
HYPERMAX OS increases scalability limits from previous generations of arrays, including:
• Maximum device size is 64 TB
• Maximum host addressable devices is 64,000/array
• Maximum storage groups, port groups, and masking views is 64,000/array
• Maximum devices addressable through each port is 4,000
HYPERMAX OS does not support meta devices, thus it is much more difficult to reach this limit.
For more information on provisioning storage in an open systems environment, refer to Open Systems-specific provisioning on page 79.
For the most recent information, consult the EMC Support Matrix in the E-Lab Interoperability Navigator at http://elabnavigator.emc.com.

Backup and restore to external arrays

EMC ProtectPoint integrates primary storage on storage arrays running HYPERMAX OS and protection storage for backups on an EMC Data Domain system.
ProtectPoint provides block movement of the data on application source LUNs to encapsulated Data Domain LUNs for incremental backups.
Application administrators can use the ProtectPoint workflow to protect database applications and associated application data.
The ProtectPoint solution uses Data Domain and HYPERMAX OS features to provide protection:
On the Data Domain system:
• vdisk services
• FastCopy
On the storage array:
• FAST.X (tiered storage)
• SnapVX
The combination of ProtectPoint and the storage array-to-Data Domain workflow enables the Application Administrator to:
• Back up and protect data
• Retain and replicate copies
• Restore data
• Recover applications

Data movement

The following image shows the data movement in a typical ProtectPoint solution. Data moves from the Application/Recovery (AR) Host to the primary array, and then to the Data Domain system.
Figure 3 ProtectPoint data movement (Application/Recovery host, primary storage with SnapVX source and backup devices, and Data Domain vdisk and static-image)

Typical site topology

The Storage administrator configures the underlying storage resources on the primary storage array and the Data Domain system. With this storage configuration information, the Application administrator triggers the workflow to protect the application.
Before triggering the workflow, the Application administrator must put the application in hot back-up mode. This ensures that an application-consistent snapshot is preserved on the Data Domain system.
Application administrators can select a specific backup when restoring data, and make that backup available on a selected set of primary storage devices.
Operations to restore the data and make the recovery or restore devices available to the recovery host must be performed manually on the primary storage through EMC Solutions Enabler. The ProtectPoint workflow provides a copy of the data, but not any application intelligence.
The ProtectPoint solution requires both IP network (LAN or WAN) and Fibre Channel (FC) Storage Area Network (SAN) connectivity.
The following image shows a typical primary site topology.
Figure 4 Typical ProtectPoint backup/recovery topology (production host and recovery host attached to the primary storage array; production, restore, and encapsulated backup and recovery devices on the array; Data Domain vdisks providing the encapsulated storage)

ProtectPoint solution components

This section describes the connections, hosts, and devices in a typical ProtectPoint solution.
The following table lists requirements for connecting components in the ProtectPoint solution.
Table 31 ProtectPoint connections

Connected Components                                                    Connection Type
Primary Application Host to primary VMAX array                          FC SAN
Primary Application Host to primary Data Domain system                  IP LAN
Primary Recovery Host to primary VMAX array                             FC SAN
Primary Recovery Host to primary Data Domain system                     IP LAN
Primary VMAX array to primary Data Domain system                        FC SAN
Secondary Recovery Host to secondary VMAX array (optional)              FC SAN
Secondary Recovery Host to secondary Data Domain system (optional)      IP LAN
Secondary VMAX array to secondary Data Domain system (optional)         FC SAN
Primary Application Host to secondary Data Domain system (optional)     IP WAN
Primary Data Domain system to secondary Data Domain system (optional)   IP WAN
The following list describes the hosts and devices in a ProtectPoint solution:
Production Host
The host running the production database application. The production host sees only the production VMAX3 devices.
Recovery Host
The host available for database recovery operations. The recovery host can include direct access to:
- A backup on the recovery devices (vDisk devices encapsulated through FAST.X), or
- A backup copy of the database on the restore devices (native VMAX3 devices).
Production Devices
Host devices available to the production host where the database instance resides. Production devices are the source devices for the TimeFinder/SnapVX operations that copy the production data to the backup devices for transfer to the Data Domain.
Restore Devices
Native VMAX3 devices used when a full LUN-level copy of a backup to a new set of devices is desired. Restore devices are masked to the recovery host.
Backup Devices
Targets of the TimeFinder/SnapVX snapshots from the production devices. Backup devices are VMAX3 thin devices created when the Data Domain vDisk backup LUNs are encapsulated.
Recovery Devices
VMAX3 devices created when the Data Domain vDisk recovery LUNs are encapsulated. Recovery devices are presented to the recovery host when the Application administrator performs an object-level restore of specific database objects.

ProtectPoint and traditional backup

The ProtectPoint workflow can provide data protection in situations where more traditional approaches cannot successfully meet the business requirements. This is often due to small or non-existent backup windows, demanding recovery time objective (RTO) or recovery point objective (RPO) requirements, or a combination of both.
Unlike traditional backup and recovery, ProtectPoint does not rely on a separate process to discover the backup data and additional actions to move that data to backup storage. Instead of using dedicated hardware and network resources, ProtectPoint uses existing application and storage capabilities to create point-in-time copies of large data sets. The copies are transported across a storage area network (SAN) to Data Domain systems to protect the copies while providing deduplication to maximize storage efficiency.
ProtectPoint minimizes the time required to protect large data sets, and allows backups to fit into the smallest of backup windows to meet demanding RTO or RPO requirements.

Basic backup workflow

In the basic backup workflow, data is transferred from the primary storage array to the Data Domain system. ProtectPoint manages the data flow. The actual movement of the data is done by SnapVX.
The ProtectPoint solution enables the Application Administrator to take the snapshot on the primary storage array with minimal disruption to the application.
The Application Administrator must ensure that the application is in an appropriate state before initiating the backup operation. This ensures that the copy or backup is application-consistent.
In a typical operation:
- The Application Administrator uses ProtectPoint to create a snapshot.
- ProtectPoint moves the data to the Data Domain system.
- The primary storage array keeps track of the data that has changed since the last update to the Data Domain system, and only copies the changed data.
- Once all the data captured in the snapshot has been sent to the Data Domain system, the Application Administrator can create a static-image of the data that reflects the application-consistent copy initially created on the primary storage array.
This static-image and its metadata are managed separately from the snapshot on the primary storage array, and can be used as the source for additional copies of the backup. Static-images that are complete with metadata are called backup images. ProtectPoint creates one backup image for every protected LUN. Backup images can be combined into backup sets that represent an entire application point-in-time backup.
The following image illustrates the basic backup workflow.
Figure 5 Basic backup workflow (production host, primary storage array with production, backup, restore, and recovery devices, and Data Domain vdisks providing the encapsulated storage)
1. On the Application Host, the Application Administrator puts the database in hot backup mode.
2. On the primary storage array, ProtectPoint creates a snapshot of the storage device. The application can be taken out of hot backup mode when this step is complete.
3. The primary storage array analyzes the data and uses FAST.X to copy the changed data to an encapsulated Data Domain storage device.
4. The Data Domain creates and stores a backup image of the snapshot.
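As an illustration of steps 1 and 2 for an Oracle database, a hedged sketch of bracketing the ProtectPoint-driven snapshot with hot backup mode; the sqlplus connection method shown is an assumption, and other databases use their own quiesce mechanisms.

# Put the Oracle database into hot backup mode (illustrative; requires SYSDBA access)
sqlplus -s / as sysdba <<'SQL'
ALTER DATABASE BEGIN BACKUP;
SQL

# ... the ProtectPoint workflow creates the SnapVX snapshot of the production devices here ...

# Take the database out of hot backup mode once the snapshot has been activated
sqlplus -s / as sysdba <<'SQL'
ALTER DATABASE END BACKUP;
SQL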

Basic restore workflow

There are two types of restoration:

Object-level restoration
One or more database objects are restored from a snapshot.

Full-application rollback restoration
The application is restored to a previous point-in-time. There are two types of recovery operations:
- A restore to the production database devices seen by the production host.
- A restore to the restore devices, which can be made available to the recovery host.

For either type of restoration, the Application Administrator selects the backup image to restore from the Data Domain system.

Object-level restoration
For object-level restoration, the Application Administrator:
- Selects the backup image on the Data Domain system.
- Performs a restore of a database image to the recovery devices.

The Storage Administrator masks the recovery devices to the AR Host for an object-level restore.
The following image shows the object-level restoration workflow.
Figure 6 Object-level restoration workflow (production host, recovery host, primary storage array devices, and Data Domain vdisks)
1. The Data Domain system writes the backup image to the encapsulated storage device, making it available on the primary storage array.
2. The Application Administrator mounts the encapsulated storage device to the recovery host, and uses OS- and application-specific tools and commands to restore specific objects.
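A hedged illustration of step 2 on a Linux recovery host; the device name, mount point, and file paths are illustrative, and the actual commands depend on the operating system, multipathing software, volume manager, and application.

# Rescan for the newly presented recovery device (rescan-scsi-bus.sh ships with sg3_utils)
rescan-scsi-bus.sh
# Mount the recovery device (the PowerPath pseudo-device name shown is illustrative)
mount /dev/emcpowera1 /mnt/pp_recovery
# Copy back only the database objects that need restoring, then unmount
cp /mnt/pp_recovery/oradata/users01.dbf /u02/oradata/PROD/
umount /mnt/pp_recovery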
Full-application rollback restoration
For a full-application rollback restoration, after selecting the backup image on the Data Domain system, the Storage Administrator performs a restore to the primary storage restore or production devices, depending on which devices need a restore of the full database image from the chosen point in time. Unlike object-level restoration, full-application rollback restoration requires manual SnapVX operations to complete the restore process. To make the backup image available on the primary storage array, the Storage Administrator must create a snapshot between the encapsulated Data Domain recovery devices and the restore/production devices, and then initiate the link copy operation.
The following image shows the full application rollback restoration workflow.
Figure 7 Full-application rollback restoration workflow (production host, recovery host, primary storage array devices, and Data Domain vdisks)
1. The Data Domain system writes the backup image to the encapsulated storage device, making it available on the primary storage array.
2. The Application Administrator creates a SnapVX snapshot of the encapsulated storage device and performs a link copy to the primary storage device, overwriting the existing data on the primary storage.
3. The restored data is presented to the Application Host.
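A hedged Solutions Enabler sketch of the manual SnapVX operations described in steps 1 and 2; the storage group names are illustrative, and option names such as -lnsg should be verified against the Solutions Enabler documentation for your release.

# Snapshot the encapsulated Data Domain recovery devices
symsnapvx -sid 001 -sg Recovery_SG -name pp_rollback establish
# Link-copy the snapshot onto the restore (or production) devices, overwriting their contents
symsnapvx -sid 001 -sg Recovery_SG -lnsg Restore_SG -name pp_rollback link -copy
# Monitor the copy with symsnapvx list; unlink once all tracks have been copied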
The following image shows a full database recovery to production devices workflow. The workflow is the same as a full-application rollback restoration, with the difference being the link copy targets.
Figure 8 Full database recovery to production devices (production host, primary storage array devices, and Data Domain vdisks)

VMware Virtual Volumes

Storage arrays running HYPERMAX OS support VMware Virtual Volumes (VVols). VVols are a new storage object developed by VMware to simplify management and provisioning in virtualized environments. With VVols, the management process moves from the LUN (data store) level to the virtual machine (VM) level. This level of granularity allows VMware and cloud administrators to assign specific storage attributes to each VM, according to its performance and storage requirements.

VVol components

To support management capabilities of VVols, the storage/vCenter environment requires the following:
- EMC VMAX VASA Provider – The VASA Provider (VP) is a software plug-in that uses a set of out-of-band management APIs (VASA version 2.0). The VASA Provider exports storage array capabilities and presents them to vSphere through the VASA APIs. VVols are managed by way of vSphere through the VASA Provider APIs (create/delete) and not with the Unisphere for VMAX user interface or Solutions Enabler CLI. After VVols are set up on the array, Unisphere and Solutions Enabler only support VVol monitoring and reporting.
- Storage Containers (SC) – Storage containers are chunks of physical storage used to logically group VVols. SCs are based on the grouping of Virtual Machine Disks (VMDKs) into specific Service Levels. SC capacity is limited only by hardware capacity. At least one SC per storage system is required, but multiple SCs per array are allowed. SCs are created and managed on the array by the Storage Administrator. Unisphere and Solutions Enabler CLI support management of SCs.
- Protocol Endpoints (PE) – Protocol endpoints are the access points from the hosts to the array. PEs are compliant with FC and replace the use of LUNs and mount points. VVols are "bound" to a PE, and the bind and unbind operations are managed through the VP APIs, not with the Solutions Enabler CLI. Existing multi-path policies and NFS topology requirements can be applied to the PE. PEs are created and managed on the array by the Storage Administrator. Unisphere and Solutions Enabler CLI support management of PEs.
Table 32 VVol architecture component management capability

Functionality                                               Component
VVol device management (create, delete)                     VASA Provider APIs / Solutions Enabler APIs
VVol bind management (bind, unbind)                         VASA Provider APIs / Solutions Enabler APIs
Protocol Endpoint device management (create, delete)        Unisphere/Solutions Enabler CLI
Protocol Endpoint-VVol reporting (list, show)               Unisphere/Solutions Enabler CLI
Storage Container management (create, delete, modify)       Unisphere/Solutions Enabler CLI
Storage container reporting (list, show)                    Unisphere/Solutions Enabler CLI

VVol scalability

The following details the VVol scalability limits:
Table 33 VVol-specific scalability

Requirement                                      Value
Number of VVols/Array                            64,000
Number of Snapshots/Virtual Machine (a)          12
Number of Storage Containers/Array               16
Number of Protocol Endpoints/Array               1/ESXi Host
Maximum number of Protocol Endpoints/Array       1,024
Number of arrays supported/VP                    1
Number of vCenters/VP                            2
Maximum device size                              16 TB

a. VVol Snapshots can only be managed through vSphere. They cannot be created through Unisphere or Solutions Enabler.

VVol workflow

Before you begin
Install and configure the following EMC applications:
- Unisphere for VMAX V8.2 or higher
- Solutions Enabler CLI V8.2 or higher
- VASA Provider V8.2 or higher
For instructions on installing Unisphere and Solutions Enabler, refer to their respective installation guides. For instructions on installing the VASA Provider, refer to the EMC VMAX VASA Provider Release Notes.
The steps required to create a VVol-based virtual machine are broken up by role:
Procedure
1. The VMAX Storage Administrator uses either Unisphere for VMAX or Solutions Enabler to create and present the storage to the VMware environment:
a. Create one or more storage containers on the storage array. This step defines how much storage and from which Service Level the VMware user can provision.
b. Create Protocol Endpoints and provision them to the ESXi hosts.
2. The VMware Administrator uses the vSphere Web Client to deploy the VM on the storage array:
a. Add the VASA Provider to the vCenter. This allows vCenter to communicate with the storage array.
b. Create a VVol datastore from the storage container.
c. Create the VM Storage policies.
d. Create the VM in the VVol datastore, selecting one of the VM storage policies.
CHAPTER 4

Mainframe Features

This chapter describes mainframe-specific functionality provided with VMAX arrays.
- HYPERMAX OS support for mainframe..............................................................74
- IBM z Systems functionality support..................................................................74
- IBM 2107 support...............................................................................................75
- Logical control unit capabilities.......................................................................... 75
- Disk drive emulations..........................................................................................76
- Cascading configurations...................................................................................76

HYPERMAX OS support for mainframe

VMAX 100K, 200K, 400K arrays with HYPERMAX OS support both mainframe-only and mixed mainframe/open systems environments.
VMAX arrays provide the following mainframe support for CKD:
- Support for 64, 128, 256 FICON single and multi mode ports, respectively
- Support for CKD 3380/3390 and FBA devices
- Mainframe (FICON) and OS FC/iSCSI/FCoE connectivity
- High capacity flash drives
- 16 Gb/s FICON host connectivity
- Support for Forward Error Correction, Query Host Access, and FICON Dynamic Routing
- T10 DIF protection for CKD data along the data path (in cache and on disk) to improve performance for multi-record operations
- D@RE external key managers:
  - Gemalto SafeNet KeySecure
  - IBM Security Key Lifecycle Manager
Data at Rest Encryption on page 39 provides more information.

IBM z Systems functionality support

VMAX arrays support the latest IBM z Systems enhancements, ensuring that the VMAX can handle the most demanding mainframe environments. VMAX arrays support:
- zHPF, including support for single track, multi track, List Prefetch, bi-directional transfers, QSAM/BSAM access, and Format Writes
- zHyperWrite
- Non-Disruptive State Save (NDSS)
- Compatible Native Flash (Flash Copy)
- Concurrent Copy
- Multi-subsystem Imaging
- Parallel Access Volumes
- Dynamic Channel Management (DCM)
- Dynamic Parallel Access Volumes/Multiple Allegiance (PAV/MA)
- Peer-to-Peer Remote Copy (PPRC) SoftFence
- Extended Address Volumes (EAV)
- Persistent IU Pacing (Extended Distance FICON)
- HyperPAV
- PDS Search Assist
- Modified Indirect Data Address Word (MIDAW)
- Multiple Allegiance (MA)
- Sequential Data Striping
- Multi-Path Lock Facility
- HyperSwap

Note: VMAX can participate in a z/OS Global Mirror (XRC) configuration only as a secondary.

IBM 2107 support

When VMAX arrays emulate an IBM 2107, they externally represent the array serial number as an alphanumeric number in order to be compatible with IBM command output. Internally, VMAX arrays retain a numeric serial number for IBM 2107 emulations. HYPERMAX OS handles correlation between the alphanumeric and numeric serial numbers.

Logical control unit capabilities

The following table lists logical control unit (LCU) maximum values:
Table 34 Logical control unit maximum values

Capability                                                        Maximum value
LCUs per director slice (or port)                                 255 (within the range of 00 to FE)
LCUs per VMAX split (a)                                           255
Splits per VMAX array                                             16 (0 to 15)
Devices per VMAX split                                            65,280
LCUs per VMAX array                                               512
Devices per LCU                                                   256
Logical paths per port                                            2,048
Logical paths per LCU per port (see Table 35 on page 76)          128
VMAX system host address per VMAX array (base and alias)          64K
I/O host connections per VMAX engine                              32

a. A VMAX split is a logical partition of the VMAX system, identified by unique devices, SSIDs, and host serial number. The maximum VMAX system host address per array is inclusive of all splits.
The following table lists the maximum LPARs per port based on the number of LCUs with active paths:
Table 35 Maximum LPARs per port

LCUs with active paths per port    Maximum volumes supported per port    VMAX maximum LPARs per port
16                                 4K                                    128
32                                 8K                                    64
64                                 16K                                   32
128                                32K                                   16
255                                64K                                   8

Disk drive emulations

When VMAX arrays are configured to mainframe hosts, the data recording format is Extended CKD (ECKD). The supported CKD emulations are 3380 and 3390.

Cascading configurations

Cascading configurations greatly enhance FICON connectivity between local and remote sites by using switch-to-switch extensions of the CPU to the FICON network. These cascaded switches communicate over long distances using a small number of high-speed lines called interswitch links (ISLs). A maximum of two switches may be connected together within a path between the CPU and the VMAX array.
Use of the same switch vendors is required for a cascaded configuration. To support cascading, each switch vendor requires specific models, hardware features, software features, configuration settings, and restrictions. Specific IBM CPU models, operating system release levels, host hardware, and HYPERMAX levels are also required.
For the most up-to-date information about switch support, consult the EMC Support Matrix (ESM), available through E-Lab™ Interoperability Navigator (ELN) at http://elabnavigator.emc.com.
CHAPTER 5

Provisioning

This chapter provides an overview of storage provisioning. Topics include:
- Thin provisioning................................................................................................ 78

Thin provisioning

VMAX3 arrays are pre-configured at the factory with thin provisioning pools ready for use. Thin provisioning improves capacity utilization and simplifies storage management. Thin provisioning enables storage to be allocated and accessed on demand from a pool of storage that services one or many applications. LUNs can be “grown” over time as space is added to the data pool with no impact to the host or application. Data is widely striped across physical storage (drives) to deliver better performance than standard provisioning.
DATA devices (TDATs) are provisioned/pre-configured/created, while the host-addressable storage devices (TDEVs) are created by either the customer or customer support, depending on the environment.
Thin provisioning increases capacity utilization and simplifies storage management by:
- Enabling more storage to be presented to a host than is physically consumed
- Allocating storage only as needed from a shared thin provisioning pool
- Making data layout easier through automated wide striping
- Reducing the steps required to accommodate growth
Thin provisioning allows you to:
- Create host-addressable thin devices (TDEVs) using Unisphere for VMAX or Solutions Enabler
- Add the TDEVs to a storage group
- Run application workloads on the storage groups
When hosts write to TDEVs, the physical storage is automatically allocated from the default Storage Resource Pool.
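A minimal Solutions Enabler sketch of this workflow, assuming SYMCLI access to array 001; the device count, size, device ID, and storage group name are illustrative, and exact option syntax may vary by Solutions Enabler release.

# Create four 100 GB host-addressable thin devices (TDEVs)
symconfigure -sid 001 -cmd "create dev count=4, size=100 GB, emulation=FBA, config=TDEV;" commit
# Create a storage group and add the new devices to it (repeat for each device ID reported)
symsg -sid 001 create MyApp_SG
symsg -sid 001 -sg MyApp_SG add dev 001A0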

Thin devices (TDEVs)

VMAX3 arrays support only thin devices.
Thin devices (TDEVs) have no storage allocated until the first write is issued to the device. Instead, the array allocates only a minimum allotment of physical storage from the pool, and maps that storage to a region of the thin device including the area targeted by the write.
These initial minimum allocations are performed in small units called thin device extents. The device extent for a thin device is 1 track (128 KB).
When a read is performed on a device, the data being read is retrieved from the appropriate data device to which the thin device extent is allocated. Reading an area of a thin device that has not been mapped does not trigger allocation operations. Reading an unmapped block returns a block in which each byte is equal to zero.
When more storage is required to service existing or future thin devices, data devices can be added to existing thin storage groups.

Thin CKD

If you are using HYPERMAX 5977 or higher, initialize and label thin devices using the ICKDSF INIT utility.

Thin device oversubscription

A thin device can be presented for host use before mapping all of the reported capacity of the device.
The sum of the reported capacities of the thin devices using a given pool can exceed the available storage capacity of the pool. Thin devices whose capacity exceeds that of their associated pool are "oversubscribed".
Over-subscription allows presenting larger than needed devices to hosts and applications without having the physical drives to fully allocate the space represented by the thin devices. For example, a Storage Resource Pool with 100 TB of usable capacity that backs thin devices reporting a total of 150 TB is oversubscribed by 50%.

Open Systems-specific provisioning

HYPERMAX host I/O limits for open systems
On open systems, you can define host I/O limits and associate a limit with a storage group. The I/O limit definitions contain the operating parameters of the input/output per second and/or bandwidth limitations.
When an I/O limit is associated with a storage group, the limit is equally divided among all the directors in the masking view associated with the storage group. All devices in that storage group share that limit.
When applications are configured, you can associate the limits with storage groups that contain a list of devices. A single storage group can only be associated with one limit and a device can only be in one storage group that has limits associated.
Up to 4096 host I/O limits can be defined.
Consider the following when using host I/O limits:
- Cascaded host I/O limits controlling parent and child storage groups limits in a cascaded storage group configuration.
- Offline and failed director redistribution of quota that supports all available quota to be available instead of losing quota allocations from offline and failed directors.
- Dynamic host I/O limits support for dynamic redistribution of steady state unused director quota.
Auto-provisioning groups on open systems
You can auto-provision groups on open systems to reduce complexity, execution time, labor cost, and the risk of error.
Auto-provisioning groups enables users to group initiators, front-end ports, and devices together, and to build masking views that associate the devices with the ports and initiators.
When a masking view is created, the necessary mapping and masking operations are performed automatically to provision storage.
After a masking view exists, any changes to its grouping of initiators, ports, or storage devices automatically propagate throughout the view, automatically updating the mapping and masking as required.
Auto-provisioning group components
The components of an auto-provisioning group are as follows:
Initiator group
A logical grouping of Fibre Channel initiators. An initiator group is either a parent group, which can contain other initiator groups, or a child group, which contains initiators. Mixing initiators and child initiator group names in the same group is not supported.
Port group
A logical grouping of Fibre Channel front-end director ports. The maximum ports in a port group is 32.
Storage group
A logical grouping of thin devices. LUN addresses are assigned to the devices within the storage group when the view is created, whether the group is cascaded or standalone.
Cascaded storage group
A parent storage group comprised of multiple storage groups (parent storage group members) that contain child storage groups comprised of devices. By assigning child storage groups to the parent storage group members and applying the masking view to the parent storage group, the masking view inherits all devices in the corresponding child storage groups.
Masking view
An association between one initiator group, one port group, and one storage group. When a masking view is created, if the group within the view is a parent, the contents of the children are used. For example, the initiators from the children initiator groups and the devices from the children storage groups. Depending on the server and application requirements, each server or group of servers may have one or more masking views that associate a set of thin devices to an application, server, or cluster of servers.
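A hedged sketch of building these components with Solutions Enabler; the group names, WWN, device range, and director port are illustrative, and syntax should be confirmed against the symaccess documentation for your release.

# Group the host initiators
symaccess -sid 001 create -name MyApp_IG -type initiator -wwn 10000000c9aabbcc
# Group the front-end director ports
symaccess -sid 001 create -name MyApp_PG -type port -dirport 1D:4
# Group the thin devices
symaccess -sid 001 create -name MyApp_SG -type storage devs 001A0:001A3
# Create the masking view; mapping and masking are performed automatically
symaccess -sid 001 create view -name MyApp_MV -sg MyApp_SG -pg MyApp_PG -ig MyApp_IG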
Figure 9 Auto-provisioning groups (a masking view associating an initiator group of host HBAs, a port group of front-end ports, and a storage group of devices)

Mainframe-specific provisioning

In Mainframe Enablers, the Thin Pool Capacity (THN) Monitor periodically examines the consumed capacity of data pools. It automatically checks user-defined space consumption thresholds and triggers an automated response tailored to the site requirements. You can specify multiple thresholds of space consumption. When the percentage of space consumption reaches the specified range, the appropriate action is taken.
CHAPTER 6

Storage Tiering

This chapter provides an overview of Fully Automated Storage Tiering™. Topics include:
- Fully Automated Storage Tiering........................................................................84
- Service Levels....................................................................................................88
- FAST/SRDF coordination.................................................................................. 89
- FAST/TimeFinder management......................................................................... 90
- External provisioning with FAST.X.....................................................................90

Fully Automated Storage Tiering

EMC Fully Automated Storage Tiering (FAST) provides automated management of VMAX3 array disk resources on behalf of thin devices. FAST automatically creates data pools according to each individual disk technology, capacity and RAID type.
FAST moves the most active parts of your workloads (hot data) to high-performance flash drives and the least-frequently accessed storage (cold data) to lower-cost drives. FAST:
- Leverages the best performance and cost characteristics of each different drive type
- Reduces acquisition, power, cooling, and footprint costs by delivering higher performance using fewer drives
- Factors in RAID protections to ensure write heavy workloads go to RAID 1 and read heavy workloads go to RAID 6
- Delivers variable performance levels using Service Levels
The Service Level is set on the storage group to configure the performance expectations for the thin devices on the group. FAST monitors the storage group's performance relative to the Service Level and automatically provisions the appropriate disk resources to maintain a consistent performance level.
FAST is entirely automated and requires no user intervention.
The following image shows how FAST moves hot data to high-performance drives, and cold data to lower-cost drives.
Figure 10 FAST data movement

Pre-configuration for FAST

VMAX3 arrays are custom-built and pre-configured with array-based software applications, including a factory pre-configuration for FAST that includes:
- Data devices (TDAT) — an internal device that provides physical storage used by thin devices.
- Data pool — a collection of data devices of identical emulation and protection type, all of which reside on disks of the same technology type and speed. The disks in a data pool are from the same disk group.
- Disk group — a collection of physical drives within the array that share the same performance characteristics, which are determined by rotational speed (10K and 7.2K), technology (SAS, flash SAS), and capacity. RAID protection options are configured at the disk group level. EMC strongly recommends that you use one or more of the RAID data protection schemes for all data devices.

Table 36 RAID options

RAID 1
Provides the following: The highest level of performance for all mission-critical and business-critical applications. Maintains a duplicate copy of a device on two drives. If a drive in the mirrored pair fails, the array automatically uses the mirrored partner without interruption of data availability.
Configuration considerations:
- Withstands failure of a single drive within the mirrored pair.
- A drive rebuild is a simple copy from the remaining drive to the replaced drive. (a)
- The number of required drives is twice the amount required to store data (usable storage capacity of a mirrored system is 50%).

RAID 5
Provides the following: Distributed parity and striped data across all drives in the RAID group. Options include:
- RAID 5 (3 + 1) — Consists of four drives with parity and data striped across each device.
- RAID 5 (7 + 1) — Consists of eight drives with data and parity striped across each device.
Configuration considerations:
- RAID 5 (3 + 1) provides 75% data storage capacity.
- RAID 5 (7 + 1) provides 87.5% data storage capacity.
- Withstands failure of a single drive within the RAID 5 group.

RAID 6
Provides the following: Striped drives with double distributed parity (horizontal and diagonal). The highest level of availability. Options include:
- RAID 6 (6 + 2) — Consists of eight drives with dual parity and data striped across each device.
- RAID 6 (14 + 2) — Consists of 16 drives with dual parity and data striped across each device.
Configuration considerations:
- RAID 6 (6 + 2) provides 75% data storage capacity.
- RAID 6 (14 + 2) provides 87.5% data storage capacity.
- Withstands failure of two drives within the RAID 6 group.

a. When the drive is (non-disruptively) replaced by a sparing operation, the array re-establishes the mirrored pair and automatically re-synchronizes the data with the drive. The sparing operation is available on all RAID types. The array can read from either mirror drive.

Note: VMAX3 arrays do not support RAID 10. Designed for 100% virtually provisioned storage environments, the VMAX3 array features virtually provisioned (thin) volumes widely striped across RAID 1 disk pairs to provide both I/O concurrency and the RAID 1 protection level. The benefits are equal or superior to those provided by RAID 10 or striped meta volumes.
- FAST Storage Resource Pools — one (default) FAST Storage Resource Pool is pre-configured on the array. This process is automatic and requires no setup. Depending on the storage environment, SRPs can consist of either FBA or CKD storage pools, or a mixture of both in mixed environments. FAST uses the same algorithms to maintain the specified SL, regardless of the environment. However, it can only move data between pools of the same type. FBA and CKD data will not be mixed in the same storage pool, but can be in the same disk group. You cannot modify FAST Storage Resource Pools, but you can list and display their configuration.
You can generate reports detailing the demand storage groups or Service Levels are placing on the Storage Resource Pools.
The following image shows FAST components that are pre-configured at the factory. Once installed, thin devices are created and added to the storage group.
Figure 11 FAST components (Service Levels marked with an asterisk are not supported on storage groups containing CKD volumes)

FAST allocation by storage resource pool

FAST manages the allocation of new data within the Storage Resource Pool by automatically selecting a Storage Resource Pool based on available disk technology, capacity and RAID type.
If a storage group has a Service Level (SL), FAST automatically changes the ranking of the Storage Resource Pools used for initial allocation. If the preferred drive technology is not available, allocation reverts to the default behavior and uses any available Storage Resource Pool for allocation.
FAST enforces SL compliance within the Storage Resource Pool by restricting the available technology allocations. For example, the Platinum SL cannot have allocations on 7K RPM disks within the Storage Resource Pool. This allows FAST to be more reactive to SL changes and ensure critical workloads are isolated from lower performance disks.
Figure 12 Service Level compliance (matrix of Service Levels — Diamond, Platinum, Gold, Silver, Bronze, and Optimized — against drive technologies: Flash, 15K, 10K, and 7K. Service Levels marked with an asterisk are not supported on storage groups containing CKD volumes.)

Table 37 Service Level compliance legend

Green    Data is allowed on this disk drive technology
Red      Data is not allowed on this disk drive technology
Alloc    Preference for initial allocation. Data can spill over to other pools to avoid failing allocation, even if the disk drive technology is Red.

Service Levels

The performance requirements of application workloads vary widely within a single system. Workloads also fluctuate, sometimes within a very short period of time. Traditionally, storage administrators manage their storage by mapping available resources (flash or disk) to the workloads as best they can. Provisioning storage takes multiple steps and careful calculations to meet the performance requirements for a workload or application.
Service Levels dramatically simplify this time-consuming and inexact process. Service Levels are pre-configured service definitions applied to VMAX3 storage groups, each designed to address the performance needs of a specific workload. VMAX3 arrays are delivered with different Service Levels ranging from Bronze (mostly 7.2K drives) to Diamond (mostly flash drives). The Service Level automatically monitors and adapts to the workload to meet the storage group's response time target.
When an administrator provisions an application's storage, the array (using FAST and Workload Planner) models its ability to deliver the requested performance, and reports:
- The expected performance range, in response time
- Whether the array can deliver the requested service level
- Where to move the workload if the array cannot meet the requested service level
If the workload type (OLTP, DSS, replication) is known, the storage administrator can optionally provide that information with a single mouse click. This additional information allows FAST to refine response expectations even more accurately.
To provision storage, administrators simply select device sizes and the appropriate Service Level.
Service Levels are applied to storage groups. If sub-storage groups are configured, Service Levels can apply to all LUNs/volumes in the storage group, or to a subset of LUNs (as long as the subset is in a storage group). The database of an application can use a different Service Level than the logs for that application.
Setting an explicit Service Level on a storage group is optional. If no Service Level is specified, LUNs/volumes in the storage group use the default: Optimized. Optimized manages the data to make the most efficient use of the disk resources.
The pre-configured Service Levels cannot be modified.
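A hedged example of applying a Service Level when creating a storage group with Solutions Enabler; the group name and SRP are illustrative, and the -slo and -srp options should be verified for your Solutions Enabler release.

# Create a storage group whose devices are managed to the Gold Service Level
symsg -sid 001 create OraLogs_SG -slo Gold -srp SRP_1
# Omitting -slo leaves the storage group on the default Optimized Service Level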
Table 38 on page 89 lists the VMAX3 Service Levels:
Table 38 Service Levels

Service Level         Performance type      Use case
Diamond               Ultra high            HPC, latency sensitive
Platinum (a)          Very high             Mission critical, high rate OLTP
Gold (a)              High                  Very heavy I/O, Database logs, data sets
Silver (a)            Price/Performance     Database data sets, virtual applications
Bronze                Cost optimized        Backup, archive, file
Optimized (default)   No Service Level is defined. The most active data is placed on the highest performing storage. The least active data is placed on the most cost-effective storage.

a. Not supported on storage groups containing CKD volumes.

FAST/SRDF coordination

Symmetrix Remote Data Facility (SRDF) replicates data between 2, 3 or 4 arrays located in the same room, on the same campus, or thousands of kilometers apart. The read workload is only on the production side of the link, and the remote side may not be ready to meet the Service Level of a storage group in the event of a failover. FAST/SRDF coordination considers the entire workload profile at the production site, and provides the remote array all the information it needs in the event of a failover. This allows the remote site to better meet the Service Level of the storage group. SRDF and EMC FAST coordination on page 163 provides more information.

FAST/TimeFinder management

TimeFinder is a local replication solution that creates point-in-time copies of data (snapshots). FAST automatically manages snapshots in order to meet the performance objectives of the Storage Resource Pool and the Service Level. If a snapshot is created, but never accessed, it is moved to a lower performance drive. Snapshots that experience frequent high read workloads are promoted to flash to meet the Service Level. About TimeFinder on page 92 provides more information.

External provisioning with FAST.X

FAST.X allows qualified storage platforms to be used as physical storage space for VMAX3 arrays configured with FBA volumes. This allows enterprises to continue to leverage VMAX3 availability and reliability along with proven local and remote replication features while still using existing EMC or third-party storage. These features include VMAX3 Service Level Provisioning, which gives VMAX3 and FAST.X unparalleled ease-of-use along with proven and robust VMAX3 software and HYPERMAX OS features such as SRDF and SnapVX.
Benefits
FAST.X provides the following benefits:
- Simplifies management of virtualized multi-vendor or EMC storage by allowing features such as replication to be managed solely through the VMAX3 array.
- Allows data mobility and migration between heterogeneous storage arrays and between heterogeneous arrays and VMAX3.
- Offers Virtual Provisioning benefits to external arrays.
- Allows VMAX3 enterprise replication technologies, such as SRDF and SnapVX, to be used to replicate storage that exists on an external array.
- Extends the value of existing arrays by allowing them to be used as an additional storage tier.
- Dynamically determines a Service Level Expectation (SLE) for external arrays to align with a Service Level (SL).
Software and HYPERMAX OS version requirements
FAST.X requires the following host and array software versions:
- HYPERMAX OS 5977.691.684 or higher
- Solutions Enabler V8.1 or higher
Supported external array platforms
For details on the supported external arrays, refer to the FAST.X Simple Support Matrix on the E-Lab Interoperability Navigator page:
https://elabnavigator.emc.com
CHAPTER 7

Native local replication with TimeFinder

This chapter describes local replication features. Topics include:
- About TimeFinder...............................................................................................92
- Mainframe SnapVX and zDP.............................................................................. 98

About TimeFinder

EMC TimeFinder delivers point-in-time copies of volumes that can be used for backups, decision support, data warehouse refreshes, or any other process that requires parallel access to production data.
Previous VMAX families offered multiple TimeFinder products, each with their own characteristics and use cases. These traditional products required a target volume to retain snapshot or clone data.
Starting with HYPERMAX OS, TimeFinder introduced TimeFinder SnapVX which provides the best aspects of the traditional TimeFinder offerings, combined with increased scalability and ease-of-use.
TimeFinder SnapVX dramatically decreases the impact of snapshots and clones:
- For snapshots, this is done by using redirect on write technology (ROW).
- For clones, this is done by storing changed tracks (deltas) directly in the Storage Resource Pool of the source device - sharing tracks between snapshot versions and also with the source device, where possible.
There is no need to specify a target device and source/target pairs. SnapVX supports up to 256 snapshots per volume. Users can assign names to individual snapshots and assign an automatic expiration date to each one.
With SnapVX, a snapshot can be accessed by linking it to a host accessible volume (known as a target volume). Target volumes are standard VMAX3 TDEVs. Up to 1024 target volumes can be linked to the snapshots of the source volumes. The 1024 links can all be to the same snapshot of the source volume, or they can be multiple target volumes linked to multiple snapshots from the same source volume.

Note: A target volume may be linked only to one snapshot at a time.

Snapshots can be cascaded from linked targets, and targets can be linked to snapshots of linked targets. There is no limit to the number of levels of cascading, and the cascade can be broken.
SnapVX links to targets in the following modes:
- Nocopy Mode (default): SnapVX does not copy data to the linked target volume but still makes the point-in-time image accessible through pointers to the snapshot. The point-in-time image will not be available after the target is unlinked because some target data may no longer be associated with the point-in-time image.
- Copy Mode: SnapVX copies all relevant tracks from the snapshot's point-in-time image to the linked target volume to create a complete copy of the point-in-time image that will remain available after the target is unlinked.
If an application needs to find a particular point-in-time copy among a large set of snapshots, SnapVX enables you to link and relink until the correct snapshot is located.

Interoperability with legacy TimeFinder products

TimeFinder SnapVX and HYPERMAX OS provide backward compatibility to legacy replication products by emulating legacy TimeFinder and IBM FlashCopy replication products. You can run your legacy replication scripts/jobs on VMAX3 arrays running TimeFinder SnapVX and HYPERMAX OS without altering them.
TimeFinder SnapVX emulates the following legacy replication products:
FBA devices           Mainframe (CKD devices)
TimeFinder/Clone      TimeFinder/Clone
TimeFinder/Mirror     TimeFinder/Mirror
TimeFinder VP Snap    TimeFinder Snap
                      EMC Dataset Snap
                      IBM FlashCopy (Full Volume and Extent Level)
Interoperability between TimeFinder SnapVX and legacy TimeFinder and IBM FlashCopy products depends on:
- The device role in the local replication session. A CKD or FBA device can be the source or the target in a local replication session. Different rules apply to ensure data integrity when concurrent local replication sessions run on the same device.
- The management software (Solutions Enabler/Unisphere for VMAX or Mainframe Enablers) used to control local replication.
  - Solutions Enabler and Unisphere for VMAX do not support interoperability between SnapVX and other local replication sessions on FBA or CKD devices. Figure 13 on page 94 provides detailed local replication interoperability support for FBA devices by using open systems management software (Solutions Enabler, Unisphere for VMAX).
  - Mainframe Enablers (MFE) support interoperability between SnapVX and other local replication sessions. Figure 14 on page 95 provides detailed interoperability information for CKD devices managed by using Mainframe Enablers.
Figure 13 Local replication interoperability, FBA devices

Figure 14 Local replication interoperability, CKD devices

Targetless snapshots

TimeFinder SnapVX management interfaces enable you to take a snapshot of an entire VMAX3 Storage Group with a single command. With this in mind, VMAX3 supports up to 16K storage groups, which is enough even in the most demanding environment for one per application. The storage group construct already exists in the majority of cases, as storage groups are created for masking views. TimeFinder SnapVX can use this existing structure, reducing the administration required to maintain the application and its replication environment.
Creation of SnapVX snapshots does not require you to preconfigure any additional volumes, which reduces the cache footprint of SnapVX snapshots and simplifies implementation. Snapshot creation and automatic termination can easily be scripted.
In the following example, a snapshot is created with a 2 day retention. This command can be scheduled to run as part of a script to create multiple versions of the snapshot, each one sharing tracks where possible with each other and the source devices. Use a cron job or scheduler to run the snapshot script on a schedule to create up to 256 snapshots of the source volumes; enough for a snapshot every 15 minutes with 2 days of retention:
symsnapvx -sid 001 -sg StorageGroup1 -name sg1_snap establish -ttl -delta 2
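For example, a crontab entry (the wrapper script path is illustrative) that runs a script containing the establish command every 15 minutes:

# /etc/crontab entry; snap_sg1.sh wraps the symsnapvx establish command shown above
*/15 * * * *  root  /usr/local/bin/snap_sg1.sh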

If a restore operation is required, any of the snapshots created by the example above can be specified.
When the storage group transitions to a restored state, the restore session can be terminated. The snapshot data is preserved during the restore process and can be used again should the snapshot data be required for a future restore.

Secure snaps

Introduced with HYPERMAX OS 5977 Q2 2017 SR, secure snaps is an enhancement to the current snapshot technology. Secure snaps prevent administrators or other high-level users from intentionally or unintentionally deleting snapshot data. In addition, secure snaps are also immune to automatic failure resulting from running out of Storage Resource Pool (SRP) or Replication Data Pointer (RDP) space on the array.
When creating a secure snapshot, you assign it an expiration date/time either as a delta from the current date or as an absolute date. Once the expiration date passes, and if the snapshot has no links, HYPERMAX OS automatically deletes the snapshot. Prior to its expiration, Administrators can only extend the expiration date - they cannot shorten the date or delete the snapshot. If a secure snapshot expires, and it has a volume linked to it, or an active restore session, the snapshot is not deleted; however, it is no longer considered secure.
Secure snapshots may only be terminated after they expire or by customer-authorized EMC support. Refer to Knowledgebase article 498316 for additional information.
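A hedged sketch of creating a secure snapshot with a 30-day retention; the -secure option is an assumption here, so confirm the exact syntax in the TimeFinder SnapVX documentation for your Solutions Enabler release.

# Create a secure snapshot that cannot be deleted before it expires in 30 days
symsnapvx -sid 001 -sg StorageGroup1 -name sg1_secure_snap establish -secure -delta 30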

Provision multiple environments from a linked target

Use SnapVX to provision multiple test, development environments using linked snapshots. To access a point-in-time copy, create a link from the snapshot data to a host mapped target device.
Each linked storage group can access the same snapshot, or each can access a different snapshot version in either no copy or copy mode. Changes to the linked volumes do not affect the snapshot data. To roll back a test development environment to the original snapshot image, perform a relink operation.
Figure 15 SnapVX targetless snapshots
Note: Target volumes must be unmounted before issuing the relink command to ensure that the host operating system does not cache any filesystem data. If accessing through VPLEX, ensure that you follow the procedure outlined in the technical note EMC VPLEX: LEVERAGING ARRAY BASED AND NATIVE COPY TECHNOLOGIES, available on support.emc.com. Once the relink is complete, volumes can be remounted.
Snapshot data is unchanged by the linked targets, so the snapshots can also be used to restore production data.
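A hedged sketch of linking a snapshot to a target storage group and later relinking to roll the targets back to the snapshot image; the target storage group name is illustrative, and option names such as -lnsg should be verified against your Solutions Enabler documentation.

# Present the snapshot to a test/dev host by linking it to a target storage group
symsnapvx -sid 001 -sg StorageGroup1 -lnsg TestDev_SG -name sg1_snap link
# After testing (and unmounting the target volumes), roll the targets back to the snapshot image
symsnapvx -sid 001 -sg StorageGroup1 -lnsg TestDev_SG -name sg1_snap relink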

Cascading snapshots

Presenting sensitive data to test or development environments often requires that sensitive data be obfuscated before it is presented to any test or development hosts. Use cascaded snapshots to support obfuscation, as shown in the following image.
Figure 16 SnapVX cascaded snapshots
Note: If no change to the data is required before presenting it to the test or development environments, there is no need to create a cascaded relationship.

Accessing point-in-time copies

To access a point-in time-copy, you must create a link from the snapshot data to a host mapped target device. The links may be created in Copy mode for a permanent copy on the target device, or in NoCopy mode for temporary use. Copy mode links create full-volume, full-copy clones of the data by copying it to the target device’s Storage Resource Pool. NoCopy mode links are space-saving snapshots that only consume space for the changed data that is stored in the source device’s Storage Resource Pool.
HYPERMAX OS supports up to 1,024 linked targets per source device.
When a target is first linked, all of the tracks are undefined. This means that the target does not know where in the Storage Resource Pool the track is located, and host access to the target must be derived from the SnapVX metadata. A background process eventually defines the tracks and updates the thin device to point directly to the track location in the source device’s Storage Resource Pool.
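A hedged sketch of a Copy mode link that builds a permanent, full-volume copy on the target devices; the names are illustrative and the list options shown are assumptions, so verify them against your Solutions Enabler documentation.

# Link the snapshot in Copy mode to create a full-volume clone on the target storage group
symsnapvx -sid 001 -sg StorageGroup1 -lnsg Clone_SG -name sg1_snap link -copy
# Monitor the background define/copy until it completes
symsnapvx -sid 001 -sg StorageGroup1 list -linked -detail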

Mainframe SnapVX and zDP

Data Protector for z Systems (zDP) is a mainframe software solution that is deployed on top of SnapVX on VMAX3 arrays. zDP delivers the capability to recover from logical data corruption with minimal data loss. zDP achieves this by providing multiple, frequent, consistent point-in-time copies of data in an automated fashion from which an application level recovery can be conducted, or the environment restored to a point prior to the logical corruption.
By providing easy access to multiple different point-in-time copies of data (with a granularity of minutes), precise remediation of logical data corruption can be performed using application-based recovery procedure. zDP results in minimal data loss compared to the previous method of restoring data from daily or weekly backups.
As shown in Figure 17 on page 99, zDP enables you to create and manage multiple point-in-time snapshots of volumes. A snapshot is a pointer-based, point-in-time image of a single volume. These point-in-time copies are created using the SnapVX feature of HYPERMAX OS. SnapVX is a space-efficient method for making volume level snapshots of thin devices and consuming additional storage capacity only when updates are made to the source volume. There is no need to copy each snapshot to a target volume as SnapVX separates the capturing of a point-in-time copy from its usage. Capturing a point-in-time copy does not require a target volume. Using a point­in-time copy from a host requires linking the snapshot to a target volume. You can make multiple snapshots (up to 256) of each source volume.
Figure 17 zDP operation
These snapshots share allocations to the same track image whenever possible while ensuring they each continue to represent a unique point-in-time image of the source volume. Despite the space efficiency achieved through shared allocation to unchanged data, additional capacity is required to preserve the pre-update images of changed tracks captured by each point-in-time snapshot.
zDP implementation is a two-stage process — the planning phase and the implementation phase.
- The planning phase is done in conjunction with your EMC representative who has access to tools that can help size the capacity needed for zDP if you are currently a VMAX3 user.
- The implementation phase utilizes the following methods for z/OS:
  - A batch interface that allows you to submit jobs to define and manage zDP.
  - A zDP run-time environment that executes under SCF to create snapsets.
For details on zDP usage, refer to the TimeFinder SnapVX and zDP Product Guide. For details on zDP usage in z/TPF, refer to the TimeFinder Controls for z/TPF Product Guide.