

Implementing the IBM Storwize V5000 Gen2 (including the Storwize V5010, V5020, and V5030) with IBM Spectrum Virtualize V8.1
Jon Tate, Dharmesh Kamdar, Hartmut Lonzer, Gustavo Tinelli Martins
Redbooks
International Technical Support Organization
Implementing the IBM Storwize V5000 Gen2 with IBM Spectrum Virtualize V8.1
March 2018
SG24-8162-03
Note: Before using this information and the product it supports, read the information in “Notices” on page xiii.
Fourth Edition (March 2018)
This edition applies to the IBM Storwize V5000 Gen2 and software V8.1.0. Note that since this book was produced, several panels might have changed.
© Copyright International Business Machines Corporation 2018. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Authors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xv
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Summary of changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
March 2018, Fourth Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Chapter 1. Overview of the IBM Storwize V5000 Gen2 system. . . . . . . . . . . . . . . . . . . . 1
1.1 IBM Storwize V5000 Gen2 overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 IBM Storwize V5000 Gen2 terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 IBM Storwize V5000 Gen2 models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.1 IBM Storage Utility Offerings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4 IBM Storwize V5000 Gen1 and Gen2 compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5 IBM Storwize V5000 Gen2 hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5.1 Control enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.2 Storwize V5010 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.3 Storwize V5020 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5.4 Storwize V5030 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.5.5 Expansion enclosure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.5.6 Host interface cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.5.7 Disk drive types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6 IBM Storwize V5000 Gen2 terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.6.1 Hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.6.2 Node canister . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.6.3 I/O groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.6.4 Clustered system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.6.5 RAID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.6.6 Managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.6.7 Quorum disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.6.8 Storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.6.9 Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.6.10 iSCSI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.6.11 Serial-attached SCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.6.12 Fibre Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.7 IBM Storwize V5000 Gen2 features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.7.1 Mirrored volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.7.2 Thin Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.7.3 Real-time Compression. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.7.4 Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.7.5 Storage Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.7.6 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1.7.7 Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.7.8 IP replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
1.7.9 External virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.7.10 Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.8 Problem management and support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.8.1 IBM Support assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.8.2 Event notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.8.3 SNMP traps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.8.4 Syslog messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.8.5 Call Home email . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.9 More information resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
1.9.1 Useful IBM Storwize V5000 Gen2 websites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Chapter 2. Initial configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.1 Hardware installation planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.1.1 Procedure to install the SAS cables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.2 SAN configuration planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.3 FC direct-attach planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.4 SAS direct-attach planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.5 LAN configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.5.1 Management IP address considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.5.2 Service IP address considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.6 Host configuration planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.7 Miscellaneous configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.8 System management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.8.1 Graphical user interface (GUI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.8.2 Command-line interface (CLI). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.9 First-time setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.10 Initial configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.10.1 Adding enclosures after the initial configuration . . . . . . . . . . . . . . . . . . . . . . . . . 69
2.10.2 Service Assistant Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Chapter 3. Graphical user interface overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.1 Overview of IBM Spectrum Virtualize management software . . . . . . . . . . . . . . . . . . . . 80
3.1.1 Access to the storage management software . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.1.2 System pane layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.1.3 Navigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.1.4 Multiple selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.1.5 Status indicators area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.2 Overview pane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.3 Monitoring menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.3.1 System overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.3.2 System details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.3.3 Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.3.4 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.3.5 Background Task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
3.4 Pools menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
3.4.1 Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3.4.2 Child pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
3.4.3 Volumes by pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.4.4 Internal storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
3.4.5 External storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
3.4.6 MDisks by pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
3.4.7 System migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
3.5 Volumes menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
3.5.1 All volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
3.5.2 Volumes by pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
3.5.3 Volumes by host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
3.6 Hosts menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.6.1 Hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
3.6.2 Host Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
3.6.3 Ports by host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.6.4 Host mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
3.6.5 Volumes by host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
3.7 Copy services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
3.7.1 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
3.7.2 Consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
3.7.3 FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
3.7.4 Remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
3.7.5 Partnerships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
3.8 Access menu. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
3.8.1 Users. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
3.8.2 Audit Log option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
3.9 Settings menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
3.9.1 Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
3.9.2 Network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
3.9.3 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
3.9.4 System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
3.9.5 Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
3.9.6 GUI preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Chapter 4. Storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
4.1 Working with internal drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
4.1.1 Internal Storage window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
4.1.2 Actions on internal drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
4.2 Working with storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
4.2.1 Creating storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
4.2.2 Actions on storage pools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
4.2.3 Child storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
4.3 Working with managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
4.3.1 Assigning managed disks to storage pools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
4.3.2 RAID configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
4.3.3 Distributed RAID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
4.3.4 RAID configuration presets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
4.3.5 Actions on arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
4.3.6 Actions on external MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
4.3.7 More actions on MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
4.4 Working with external storage controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Chapter 5. Host configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
5.1 Host attachment overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
5.2 Planning for direct-attached hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.2.1 Fibre Channel direct attachment to host systems . . . . . . . . . . . . . . . . . . . . . . . . 201
5.2.2 FC direct attachment between nodes in a Storwize V5000 system . . . . . . . . . . 201
5.3 Preparing the host operating system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.3.1 Windows 2008 R2 and 2012 R2: Preparing for Fibre Channel attachment . . . . 202
5.3.2 Windows 2008 R2 and Windows 2012 R2: Preparing for iSCSI attachment . . . 207
5.3.3 Windows 2012 R2: Preparing for SAS attachment . . . . . . . . . . . . . . . . . . . . . . . 214
5.3.4 VMware ESXi: Preparing for Fibre Channel attachment. . . . . . . . . . . . . . . . . . . 215
5.3.5 VMware ESXi: Preparing for iSCSI attachment . . . . . . . . . . . . . . . . . . . . . . . . . 218
5.3.6 VMware ESXi: Preparing for SAS attachment . . . . . . . . . . . . . . . . . . . . . . . . . . 227
5.4 N_Port ID Virtualization (NPIV) support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5.4.1 NPIV Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
5.4.2 Enabling NPIV on a new system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
5.4.3 Enabling NPIV on an existing system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
5.5 Creating hosts by using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
5.5.1 Creating Fibre Channel hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
5.5.2 Configuring the IBM Storwize V5000 for FC connectivity . . . . . . . . . . . . . . . . . . 246
5.5.3 Creating iSCSI hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
5.5.4 Configuring the IBM Storwize V5000 for iSCSI host connectivity . . . . . . . . . . . . 251
5.5.5 Creating SAS hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
5.6 Host Clusters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
5.6.1 Creating a host cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
5.6.2 Adding a member to a host cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
5.6.3 Listing a host cluster member . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
5.6.4 Assigning a volume to a host cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
5.6.5 Removing volume mappings from a host cluster . . . . . . . . . . . . . . . . . . . . . . . . 270
5.6.6 Removing a host cluster member . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
5.6.7 Removing a host cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
5.6.8 I/O throttling for hosts and host clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
5.7 Proactive Host Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Chapter 6. Volume configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
6.1 Introduction to volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
6.1.1 Image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
6.1.2 Managed mode volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
6.1.3 Cache mode for volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
6.1.4 Mirrored volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
6.1.5 Thin-provisioned volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
6.1.6 Compressed volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
6.1.7 Volumes for various topologies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
6.2 Create Volumes menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
6.3 Creating volumes using Volume Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
6.3.1 Creating Basic volumes using Volume Creation . . . . . . . . . . . . . . . . . . . . . . . . . 303
6.3.2 Creating Mirrored volumes using Volume Creation . . . . . . . . . . . . . . . . . . . . . . 305
6.4 Mapping a volume to the host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
6.5 Creating Custom volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
6.5.1 Creating a custom thin-provisioned volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
6.5.2 Creating Custom Compressed volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
6.5.3 Custom Mirrored Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
6.6 HyperSwap and the mkvolume command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
6.6.1 Volume manipulation commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
6.7 Mapping volumes to a host after volume creation . . . . . . . . . . . . . . . . . . . . . . . . . . 326
6.7.1 Mapping newly created volumes to the host using the wizard . . . . . . . . . . . . . . 327
6.8 Migrating a volume to another storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
6.9 Migrating volumes using the volume copy feature . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
6.10 I/O throttling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
6.10.1 Defining a throttle on a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
6.10.2 Removing a throttle from a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Chapter 7. Storage migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
7.1 Storage migration wizard overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
7.2 Interoperation and compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
7.3 Storage migration wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
7.3.1 External virtualization capability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
7.3.2 Model and adapter card considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
7.3.3 Overview of the storage migration wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
7.3.4 Storage migration wizard tasks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
Chapter 8. Advanced host and volume administration . . . . . . . . . . . . . . . . . . . . . . . . 373
8.1 Advanced host administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
8.1.1 Modifying volume mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
8.1.2 Unmapping volumes from a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
8.1.3 Renaming a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
8.1.4 Removing a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
8.1.5 Host properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
8.2 Adding and deleting host ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
8.2.1 Adding host port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
8.2.2 Deleting a host port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
8.3 Advanced volume administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
8.3.1 Advanced volume functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
8.3.2 Mapping a volume to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
8.3.3 Unmapping volumes from all hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
8.3.4 Viewing which host is mapped to a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
8.3.5 Renaming a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
8.3.6 Shrinking a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
8.3.7 Expanding a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
8.3.8 Migrating a volume to another storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
8.3.9 Exporting to an image mode volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
8.3.10 Deleting a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
8.3.11 Duplicating a volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
8.3.12 Adding a volume copy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
8.4 Volume properties and volume copy properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
8.5 Advanced volume copy functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
8.5.1 Volume copy: Make Primary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
8.5.2 Splitting into a new volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
8.5.3 Validate Volume Copies option. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
8.5.4 Delete volume copy option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
8.5.5 Migrating volumes by using the volume copy features . . . . . . . . . . . . . . . . . . . . 427
8.6 Volumes by storage pool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
8.7 Volumes by host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Chapter 9. Advanced features for storage efficiency . . . . . . . . . . . . . . . . . . . . . . . . . 431
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
9.2 Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
9.2.1 Easy Tier overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
9.2.2 Tiered storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
9.2.3 Easy Tier process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
9.2.4 I/O Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
9.2.5 Data Placement Advisor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
9.2.6 Data Migration Planner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
9.2.7 Data Migrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
9.2.8 Easy Tier accelerated mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
9.2.9 Easy Tier operating modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
9.2.10 Easy Tier status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
9.2.11 Storage Pool Balancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
9.2.12 Easy Tier rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
9.2.13 Creating multi-tiered pools: Enabling Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . 442
9.2.14 Downloading Easy Tier I/O measurements. . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
9.2.15 Easy Tier I/O Measurement through the command-line interface. . . . . . . . . . . 455
9.2.16 IBM Storage Tier Advisor Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
9.2.17 Processing heat log files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
9.3 Thin provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
9.3.1 Configuring a thin provisioned volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
9.3.2 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
9.3.3 Limitations of virtual capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
9.4 Real-time Compression Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
9.4.1 Common use cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
9.4.2 Real-time Compression concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
9.4.3 Random Access Compression Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
9.4.4 Random Access Compression Engine in IBM Spectrum Virtualize stack. . . . . . 473
9.4.5 Data write flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
9.4.6 Data read flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
9.4.7 Compression of existing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
9.4.8 Configuring compressed volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
9.4.9 Comprestimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
Chapter 10. Copy services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
10.1 IBM FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
10.1.1 Business requirements for FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
10.1.2 Backup improvements with FlashCopy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
10.1.3 Restore with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
10.1.4 Moving and migrating data with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
10.1.5 Application testing with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
10.1.6 Host and application considerations to ensure FlashCopy integrity . . . . . . . . . 484
10.1.7 FlashCopy attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
10.1.8 Reverse FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
10.1.9 IBM Spectrum Protect Snapshot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
10.2 FlashCopy functional overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
10.3 Implementing FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
10.3.1 FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
10.3.2 Multiple Target FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
10.3.3 Consistency Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
10.3.4 FlashCopy indirection layer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
10.3.5 Grains and the FlashCopy bitmap. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
10.3.6 Interaction and dependency between multiple target FlashCopy mappings. . . 494
10.3.7 Summary of the FlashCopy indirection layer algorithm. . . . . . . . . . . . . . . . . . . 495
10.3.8 Interaction with the cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
10.3.9 FlashCopy and image mode volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
10.3.10 FlashCopy mapping events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
10.3.11 FlashCopy mapping states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
10.3.12 Thin provisioned FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
10.3.13 Background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
10.3.14 Serialization of I/O by FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
10.3.15 Event handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
10.3.16 Asynchronous notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
10.3.17 Interoperation with Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . . . . 507
10.3.18 FlashCopy presets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
10.4 Managing FlashCopy by using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
10.4.1 Creating a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
10.4.2 Single-click snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
10.4.3 Single-click clone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
10.4.4 Single-click backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
10.4.5 Creating a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
10.4.6 Creating FlashCopy mappings in a Consistency Group . . . . . . . . . . . . . . . . . . 529
10.4.7 Showing related volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
10.4.8 Moving a FlashCopy mapping to a Consistency Group . . . . . . . . . . . . . . . . . . 535
10.4.9 Removing a FlashCopy mapping from a Consistency Group . . . . . . . . . . . . . . 536
10.4.10 Modifying a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
10.4.11 Renaming FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
10.4.12 Renaming Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
10.4.13 Deleting FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
10.4.14 Deleting FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
10.4.15 Starting FlashCopy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
10.4.16 Stopping FlashCopy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
10.5 Volume mirroring and migration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
10.6 Native IP replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
10.6.1 Native IP replication technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
10.6.2 IBM Storwize System Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
10.6.3 IP partnership limitations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
10.6.4 VLAN support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
10.6.5 IP partnership and terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
10.6.6 States of IP partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
10.6.7 Remote copy groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
10.7 Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
10.7.1 Multiple IBM Storwize V5000 system mirroring. . . . . . . . . . . . . . . . . . . . . . . . . 554
10.7.2 Importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
10.7.3 Remote copy intercluster communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
10.7.4 Metro Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
10.7.5 Synchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
10.7.6 Metro Mirror features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
10.7.7 Metro Mirror attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
10.7.8 Practical use of Metro Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
10.7.9 Global Mirror Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
10.7.10 Asynchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
10.7.11 Global Mirror features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
10.7.12 Using Change Volumes with Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
10.7.13 Distribution of work among nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
10.7.14 Background copy performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
10.7.15 Thin-provisioned background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
10.7.16 Methods of synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
10.7.17 Practical use of Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
10.7.18 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror . . . . . . . . 572
10.7.19 Remote Copy configuration limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
10.7.20 Remote Copy states and events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
10.8 Consistency protection for Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . 580
10.9 Remote Copy commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
10.9.1 Remote Copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
10.9.2 Listing available system partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
10.9.3 Changing the system parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
10.9.4 System partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
10.9.5 Creating a Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . 586
10.9.6 Creating a Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 586
10.9.7 Changing Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 586
10.9.8 Changing Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . 587
10.9.9 Starting Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . 587
10.9.10 Stopping Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 587
10.9.11 Starting Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . . 588
10.9.12 Stopping Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . . 588
10.9.13 Deleting Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 588
10.9.14 Deleting Metro Mirror/Global Mirror consistency group. . . . . . . . . . . . . . . . . . 589
10.9.15 Reversing Metro Mirror/Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 589
10.9.16 Reversing Metro Mirror/Global Mirror consistency group . . . . . . . . . . . . . . . . 589
10.10 Managing Remote Copy using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
10.10.1 Creating Fibre Channel partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
10.10.2 Creating stand-alone remote copy relationships. . . . . . . . . . . . . . . . . . . . . . . 592
10.10.3 Creating Consistency Group. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
10.10.4 Renaming Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
10.10.5 Renaming remote copy relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
10.10.6 Moving stand-alone remote copy relationship to Consistency Group. . . . . . . 609
10.10.7 Removing remote copy relationship from Consistency Group . . . . . . . . . . . . 610
10.10.8 Starting remote copy relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
10.10.9 Starting remote copy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
10.10.10 Switching copy direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
10.10.11 Switching the copy direction for a Consistency Group . . . . . . . . . . . . . . . . . 613
10.10.12 Stopping a remote copy relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
10.10.13 Stopping Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
10.10.14 Deleting stand-alone remote copy relationships . . . . . . . . . . . . . . . . . . . . . . 617
10.10.15 Deleting Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
10.11 Troubleshooting remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
10.11.1 1920 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
10.11.2 1720 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
10.12 HyperSwap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
10.12.1 Introduction to HyperSwap volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
10.12.2 Failure scenarios. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
10.12.3 Current HyperSwap limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
Chapter 11. External storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
11.1 Planning for external storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
11.1.1 License for external storage virtualization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
11.1.2 SAN configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
11.1.3 External storage configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
11.1.4 Guidelines for virtualizing external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
11.2 Working with external storage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
11.2.1 Adding external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
11.2.2 Importing image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
11.2.3 Managing external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
11.2.4 Removing external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
Chapter 12. RAS, monitoring, and troubleshooting. . . . . . . . . . . . . . . . . . . . . . . . . . . 653
12.1 Reliability, availability, and serviceability features. . . . . . . . . . . . . . . . . . . . . . . . . . . 654
12.2 System components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
12.2.1 Enclosure midplane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
12.2.2 Node canisters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
12.2.3 Expansion canisters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
12.2.4 Disk subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
12.2.5 Power supply units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
12.3 Configuration backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
12.3.1 Generating a manual configuration backup by using the CLI . . . . . . . . . . . . . . 672
12.3.2 Downloading a configuration backup by using the GUI . . . . . . . . . . . . . . . . . . 673
12.4 System update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
12.4.1 Updating node canister software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
12.4.2 Updating the drive firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
12.5 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 693
12.5.1 Email notifications and Call Home . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
12.6 Audit log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
12.7 Event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
12.7.1 Managing the event log. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
12.7.2 Alert handling and recommended actions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
12.8 Support Assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
12.8.1 Configuring Support Assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
12.8.2 Setting up Support Assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
12.8.3 Disabling Support Assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
12.9 Collecting support information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
12.9.1 Collecting support information by using the GUI. . . . . . . . . . . . . . . . . . . . . . . . 715
12.9.2 Automatic upload of Support Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
12.9.3 Manual upload of Support Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720
12.9.4 Collecting support information by using the SAT . . . . . . . . . . . . . . . . . . . . . . . 724
12.10 Powering off the system and shutting down the infrastructure . . . . . . . . . . . . . . . . 726
12.10.1 Powering off . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 726
12.10.2 Shutting down and starting up the infrastructure. . . . . . . . . . . . . . . . . . . . . . . 730
Chapter 13. Encryption. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
13.1 Planning for encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
13.2 Defining encryption of data at-rest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
13.2.1 Encryption methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
13.2.2 Encryption keys. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736
13.2.3 Encryption licenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
13.3 Activating encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
13.3.1 Obtaining an encryption license . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
13.3.2 Starting the activation process during initial system setup . . . . . . . . . . . . . . . 738
13.3.3 Starting the activation process on a running system . . . . . . . . . . . . . . . . . . . . 740
13.3.4 Activating the license automatically . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
13.3.5 Activating the license manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744
13.4 Enabling encryption. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 745
13.4.1 Starting the Enable Encryption wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746
13.4.2 Enabling encryption using USB flash drives . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
13.4.3 Enabling encryption using key servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
13.4.4 Enabling encryption using both providers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 758
13.5 Configuring additional providers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 765
13.5.1 Adding SKLM as a second provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 765
13.5.2 Adding USB flash drives as a second provider . . . . . . . . . . . . . . . . . . . . . . . . . 767
13.6 Migrating between providers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
13.6.1 Migration from USB flash drive provider to encryption key server . . . . . . . . . . 769
13.6.2 Migration from encryption key server to USB flash drive provider . . . . . . . . . . 769
13.7 Recovering from a provider loss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 770
13.8 Using encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 770
13.8.1 Encrypted pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
13.8.2 Encrypted child pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772
13.8.3 Encrypted arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
13.8.4 Encrypted MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 774
13.8.5 Encrypted volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
13.8.6 Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
13.9 Rekeying an encryption-enabled system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
13.9.1 Rekeying using a key server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 779
13.9.2 Rekeying using USB flash drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
13.10 Migrating between key providers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 783
13.11 Disabling encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
Appendix A. CLI setup and SAN Boot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
Command-line interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
Basic setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
SAN Boot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
Enabling SAN Boot for Windows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
Enabling SAN Boot for VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
Windows SAN Boot migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
Appendix B. Terminology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 803
Commonly encountered terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 804
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
IBM Storwize V5000 publications and support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821

Notices

This information was developed for products and services offered in the US. This material might be available from IBM in other languages. However, you may be required to own a copy of the product or product version in that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to actual people or business enterprises is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs.

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at “Copyright and trademark information” at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks or registered trademarks of International Business Machines Corporation, and might also be trademarks or registered trademarks in other countries.
AIX®, DB2®, DS8000®, Easy Tier®, FlashCopy®, Global Technology Services®, HyperSwap®, IBM®, IBM FlashSystem®, IBM SmartCloud®, IBM Spectrum™, IBM Spectrum Control™, IBM Spectrum Protect™, IBM Spectrum Scale™, IBM Spectrum Virtualize™, PowerHA®, Real-time Compression™, Redbooks®, Redbooks (logo)®, Storwize®, System Storage®, Tivoli®
The following terms are trademarks of other companies:
SoftLayer, and The Planet are trademarks or registered trademarks of SoftLayer, Inc., an IBM Company.
Celeron, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.

Preface

Organizations of all sizes face the challenge of managing massive volumes of increasingly valuable data. But storing this data can be costly, and extracting value from the data is becoming more difficult. IT organizations have limited resources but must stay responsive to dynamic environments and act quickly to consolidate, simplify, and optimize their IT infrastructures. The IBM® Storwize® V5000 Gen2 system provides a smarter solution that is affordable, easy to use, and self-optimizing, which enables organizations to overcome these storage challenges.
The Storwize V5000 Gen2 delivers efficient, entry-level configurations that are designed to meet the needs of small and midsize businesses. Designed to provide organizations with the ability to consolidate and share data at an affordable price, the Storwize V5000 Gen2 offers advanced software capabilities that are found in more expensive systems.
This IBM Redbooks® publication is intended for pre-sales and post-sales technical support professionals and storage administrators.
It applies to the Storwize V5030, V5020, and V5010, and to IBM Spectrum Virtualize™ V8.1.

Authors

This book was produced by a team of specialists from around the world working at the International Technical Support Organization, San Jose Center.
Jon Tate is a Project Manager for IBM System Storage® SAN Solutions at the International Technical Support Organization (ITSO), San Jose Center. Before Jon joined the ITSO in 1999, he worked in the IBM Technical Support Center, providing Level 2 support for IBM storage products. Jon has 32 years of experience in storage software and management, services, and support. He is both an IBM Certified IT Specialist and an IBM SAN Certified Specialist. He is also the UK Chairman of the Storage Networking Industry Association.
Dharmesh Kamdar has been working in IBM Systems group for over 15 years as a Senior Software Engineer. He works in the Open Systems Lab (OSL), where he focuses on interoperability testing of a range of IBM storage products with various vendor products, including operating systems, clustering solutions, virtualization platforms, volume managers, and file systems.
Hartmut Lonzer is an OEM Alliance Manager for IBM Storage. Before this position, he was a Client Technical Specialist for IBM Germany. He works in the IBM Germany headquarters in Ehningen. His main focus is on the IBM SAN Volume Controller, IBM Storwize Family, and IBM VersaStack. His experience with the IBM SAN Volume Controller and Storwize products goes back to the beginning of these products. Hartmut has been with IBM in various technical roles for 40 years.
Gustavo Tinelli Martins is a Storage Technical Leader who works for IBM Global Technology Services® in Brazil. He is also an IBM Certified IT Specialist and a member of the IBM IT Specialist Advisory Board, which evaluates candidates who want to acquire the title of IBM IT Specialist. Gustavo has eight years of professional experience: two years in the Customer Service Center and six years in the Storage Service Line. Gustavo is certified in multiple IBM storage technologies and also in other vendors' storage products.
Thanks to the following people for their contributions to this project:
• James Whitaker
• Catarina Castro
• Martyn Spink
• Djihed Afifi
• Vanessa Howling
IBM Manchester Lab
Thanks to the following authors of the previous edition of this book:
• Catarina Castro
• Uwe Dubberke
• Justin Heather
• Andrew Hickey
• Imran Imtiaz
• Nancy Kinney
• Hartmut Lonzer
• Adam Lyon-Jones
• Saiprasad Prabhakar Parkar
• Edward Seager
• Lee Sirett
• Chris Tapsell
• Paulo Tomiyoshi Takeda
• Dieter Utesch
• Thomas Vogel
• Mikhail Zakharov

Now you can become a published author, too

Here’s an opportunity to spotlight your skills, grow your career, and become a published author—all at the same time. Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us.
We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:
• Use the online Contact us review Redbooks form:
  ibm.com/redbooks
• Send your comments in an email:
  redbooks@us.ibm.com
• Mail your comments:
  IBM Corporation, International Technical Support Organization
  Dept. HYTD Mail Station P099
  2455 South Road
  Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks

• Find us on Facebook:
  http://www.facebook.com/IBMRedbooks
• Follow us on Twitter:
  http://twitter.com/ibmredbooks
• Look for us on LinkedIn:
  http://www.linkedin.com/groups?home=&gid=2130806
• Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter:
  https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
• Stay current on recent Redbooks publications with RSS Feeds:
  http://www.redbooks.ibm.com/rss.html

Summary of changes

This section describes the technical changes made in this edition of the book and in previous editions. This edition might also include minor corrections and editorial changes that are not identified.
Summary of Changes for SG24-8162-03 for Implementing the IBM Storwize V5000 Gen2 with IBM Spectrum Virtualize V8.1 as created or updated on March 20, 2018.

March 2018, Fourth Edition

This revision includes the following substantial new and changed information.
New information
• New GUI
• Encryption
• Storage migration
Changed information
• Screen captures for the new GUI
• Encryption
Chapter 1. Overview of the IBM Storwize V5000 Gen2 system
This chapter provides an overview of the IBM Storwize V5000 Gen2 architecture and includes a brief explanation of storage virtualization.
Specifically, this chapter provides information about the following topics:
• Overview
• Terminology
• Models
• IBM Storwize V5000 Gen1 and Gen2 compatibility
• Hardware
• Terms
• Features
• Problem management and support
• More information resources

1.1 IBM Storwize V5000 Gen2 overview

The IBM Storwize V5000 Gen2 solution is a modular entry-level and midrange storage solution. The IBM Storwize V5000 Gen2 includes the capability to virtualize its own internal Redundant Array of Independent Disks (RAID) storage and existing external storage area network (SAN)-attached storage (on the Storwize V5030 only).
The three IBM Storwize V5000 Gen2 models (Storwize V5010, Storwize V5020, and Storwize V5030) offer a range of performance scalability and functional capabilities. Table 1-1 shows a summary of the features of these models.
Table 1-1 IBM Storwize V5000 Gen2 models

Feature                           Storwize V5010   Storwize V5020   Storwize V5030
CPU cores                         2                2                6
Cache                             16 GB            Up to 32 GB      Up to 64 GB
Supported expansion enclosures    10               10               20
External storage virtualization   No               No               Yes
Compression                       No               No               Yes
Encryption                        No               Yes              Yes
For a more detailed comparison, see Table 1-3 on page 6.
IBM Storwize V5000 Gen2 features the following benefits:
• Enterprise technology available to entry and midrange storage
• Expert administrators are not required
• Easy client setup and service
• Simple integration into the server environment
• Ability to grow the system incrementally as storage capacity and performance needs change
The IBM Storwize V5000 Gen2 addresses the block storage requirements of small and midsize organizations. The IBM Storwize V5000 Gen2 consists of one 2U control enclosure and, optionally, up to ten 2U expansion enclosures on the Storwize V5010 and Storwize V5020 systems, and up to twenty 2U expansion enclosures on the Storwize V5030 systems. The enclosures are connected by serial-attached SCSI (SAS) cables and make up one system that is called an I/O group.
With the Storwize V5030 systems, two I/O groups can be connected to form a cluster, providing a maximum of two control enclosures and 40 expansion enclosures. With the High Density expansion drawers, you are able to attach up to 16 expansion enclosures to a cluster.
The control and expansion enclosures are available in the following form factors, and they can be intermixed within an I/O group:
• 12 x 3.5-inch (8.89-centimeter) drives in a 2U unit
• 24 x 2.5-inch (6.35-centimeter) drives in a 2U unit
• 92 x 2.5-inch drives in carriers, or 3.5-inch drives, in a 5U unit
Two canisters are in each enclosure. Control enclosures contain two node canisters, and expansion enclosures contain two expansion canisters.
The IBM Storwize V5000 Gen2 supports up to 1,520 2.5-inch drives or 3.5-inch drives, or a combination of both drive form factors, for the internal storage in a two-I/O-group Storwize V5030 cluster.
SAS, Nearline (NL)-SAS, and flash drive types are supported.
The IBM Storwize V5000 Gen2 is designed to accommodate the most common storage network technologies to enable easy implementation and management. It can be attached to hosts through a Fibre Channel (FC) SAN fabric, an Internet Small Computer System Interface (iSCSI) infrastructure, or SAS. Hosts can be attached directly or through a network.
Important: For more information about supported environments, configurations, and restrictions, see the IBM System Storage Interoperation Center (SSIC):
https://ibm.biz/BdxQhe
For more information, see this website:
http://www.ibm.com/support/knowledgecenter/STHGUJ/
The IBM Storwize V5000 Gen2 is a virtualized storage solution that groups its internal drives into RAID arrays, which are called managed disks (MDisks). MDisks can also be created on the Storwize V5030 systems by importing logical unit numbers (LUNs) from external FC SAN-attached storage. These MDisks are then grouped into storage pools. Volumes are created from these storage pools and provisioned out to hosts.
Storage pools are normally created with MDisks of the same drive type and drive capacity. Volumes can be moved non-disruptively between storage pools with differing performance characteristics. For example, a volume can be moved from a storage pool that is made up of NL-SAS drives to a storage pool that is made up of SAS drives to improve performance.
The IBM Storwize V5000 Gen2 system also provides several configuration options to simplify the implementation process. It also provides configuration presets and automated wizards that are called Directed Maintenance Procedures (DMP) to help resolve any events that might occur.

Included with an IBM Storwize V5000 Gen2 system is a simple and easy-to-use graphical user interface (GUI) to enable storage to be deployed quickly and efficiently. The GUI runs on any supported browser. The management GUI contains a series of preestablished configuration options that are called presets, which use commonly used settings to quickly configure objects on the system. Presets are available for creating volumes and IBM FlashCopy® mappings and for setting up a RAID configuration.

You can also use the command-line interface (CLI) to set up or control the system.

1.2 IBM Storwize V5000 Gen2 terminology

The IBM Storwize V5000 Gen2 system uses terminology that is consistent with the entire IBM Storwize family and the IBM SAN Volume Controller. The terms are defined in Table 1-2. More terms can be found in Appendix B, “Terminology” on page 819.
Table 1-2 IBM Storwize V5000 Gen2 terminology

Battery: Each control enclosure node canister in an IBM Storwize V5000 Gen2 contains a battery.
Chain: Each control enclosure has either one or two chains, which are used to connect expansion enclosures and provide redundant connections to the inside drives.
Clone: A copy of a volume on a server at a particular point in time. The contents of the copy can be customized and the contents of the original volume are preserved.
Control enclosure: A hardware unit that includes a chassis, node canisters, drives, and power sources.
Data migration: IBM Storwize V5000 Gen2 can migrate data from existing external storage to its internal volumes.
Distributed RAID (DRAID): No dedicated spare drives are in an array. The spare capacity is distributed across the array, which allows faster rebuild of a failed disk.
Drive: IBM Storwize V5000 Gen2 supports a range of hard disk drives (HDDs) and flash drives.
Event: An occurrence that is significant to a task or system. Events can include the completion or failure of an operation, a user action, or the change in the state of a process.
Expansion canister: A hardware unit that includes the SAS interface hardware that enables the control enclosure hardware to use the drives of the expansion enclosure. Each expansion enclosure has two expansion canisters.
Expansion enclosure: A hardware unit that includes expansion canisters, drives, and power supply units.
External storage: MDisks that are SCSI logical units (LUs) that are presented by storage systems that are attached to and managed by the clustered system.
Fibre Channel port: Fibre Channel ports are connections for the hosts to get access to the IBM Storwize V5000 Gen2.
Host mapping: The process of controlling which hosts can access specific volumes within an IBM Storwize V5000 Gen2.
Internal storage: Array MDisks and drives that are held in enclosures that are part of the IBM Storwize V5000 Gen2.
iSCSI (Internet Small Computer System Interface): An Internet Protocol (IP)-based storage networking standard for linking data storage facilities.
Managed disk (MDisk): A component of a storage pool that is managed by a clustered system. An MDisk is part of a RAID array of internal storage or a SCSI LU for external storage. An MDisk is not visible to a host system on the SAN.
Node canister: A hardware unit that includes the node hardware, fabric, and service interfaces, SAS expansion ports, and battery. Each control enclosure contains two node canisters.
PHY: A single SAS lane. Four PHYs are in each SAS cable.
Power supply unit: Each enclosure has two power supply units (PSUs).
Quorum disk: A disk that contains a reserved area that is used exclusively for cluster management. The quorum disk is accessed when it is necessary to determine which half of the cluster continues to read and write data.
Serial-attached SCSI (SAS) ports: SAS ports are connections for expansion enclosures and direct attachment of hosts to access the IBM Storwize V5000 Gen2.
Snapshot: An image backup type that consists of a point-in-time view of a volume.
Storage pool: An amount of storage capacity that provides the capacity requirements for a volume.
Strand: The SAS connectivity of a set of drives within multiple enclosures. The enclosures can be control enclosures or expansion enclosures.
Thin provisioning or thin provisioned: The ability to define a storage unit (full system, storage pool, or volume) with a logical capacity size that is larger than the physical capacity that is assigned to that storage unit.
Traditional RAID (TRAID): Traditional RAID uses the standard RAID levels.
Volume: A discrete unit of storage on disk, tape, or other data recording medium that supports a form of identifier and parameter list, such as a volume label or input/output control.
Worldwide port name (WWPN): Each Fibre Channel port and SAS port is identified by its physical port number and worldwide port name (WWPN).

1.3 IBM Storwize V5000 Gen2 models

The IBM Storwize V5000 Gen2 platform consists of different models. Each model type supports a different set of features, as shown in Table 1-3.
Table 1-3 IBM Storwize V5000 feature comparison

Feature                   V5000 Gen1           V5010                  V5020                  V5030
Cache                     16 GB                16 GB                  16 GB or 32 GB         32 GB or 64 GB
CPU                       4-core Ivy Bridge    2-core Broadwell-DE    2-core Broadwell-DE    6-core Broadwell-DE
                          Xeon CPU, 2 GHz      Celeron CPU, 1.2 GHz   Xeon CPU, 2.2 GHz,     Xeon CPU, 1.9 GHz,
                                                                      Hyper-threading        Hyper-threading
Compression               None                 None                   None                   Licensed (with 64 GB cache only)
DRAID                     Yes                  Yes                    Yes                    Yes
SAS HW Encryption         None                 None                   Licensed               Licensed
External Virtualization   Licensed             Data Migration Only    Data Migration Only    Licensed
IBM Easy Tier®            Licensed             Licensed               Licensed               Licensed
FlashCopy                 Licensed             Licensed               Licensed               Licensed
HyperSwap                 Yes                  No                     No                     Yes
Remote Copy               Licensed             Licensed               Licensed               Licensed
Thin Provisioning         Yes                  Yes                    Yes                    Yes
Traditional RAID          Yes                  Yes                    Yes                    Yes
Volume Mirroring          Yes                  Yes                    Yes                    Yes
VMware Virtual Volumes (VVols)   Yes           Yes                    Yes                    Yes
More information: For more information about the features, benefits, and specifications of IBM Storwize V5000 Gen2 models, see the following website:
https://ibm.biz/BdrRjb
The information in this book is accurate at the time of writing. However, as the IBM Storwize V5000 Gen2 matures, expect to see new features and enhanced specifications.
The IBM Storwize V5000 Gen2 models are described in Table 1-4. All control enclosures have two node canisters. F models are expansion enclosures.
Table 1-4 IBM Storwize V5000 Gen2 models

Model      Description                                                     Cache            Drive slots

One-year warranty
2077-112   IBM Storwize V5010 large form factor (LFF) Control Enclosure    16 GB            12 x 3.5-inch
2077-124   IBM Storwize V5010 small form factor (SFF) Control Enclosure    16 GB            24 x 2.5-inch
2077-212   IBM Storwize V5020 LFF Control Enclosure                        16 GB or 32 GB   12 x 3.5-inch
2077-224   IBM Storwize V5020 SFF Control Enclosure                        16 GB or 32 GB   24 x 2.5-inch
2077-312   IBM Storwize V5030 LFF Control Enclosure                        32 GB or 64 GB   12 x 3.5-inch
2077-324   IBM Storwize V5030 SFF Control Enclosure                        32 GB or 64 GB   24 x 2.5-inch
2077-AF3   IBM Storwize V5030F All-Flash Array Control Enclosure           64 GB            24 x 2.5-inch
2077-12F   IBM Storwize V5000 LFF Expansion Enclosure                      N/A              12 x 3.5-inch
2077-24F   IBM Storwize V5000 SFF Expansion Enclosure                      N/A              24 x 2.5-inch
2077-AFF   IBM Storwize V5030F SFF Expansion Enclosure                     N/A              24 x 2.5-inch
2077-A9F   IBM Storwize V5030F High Density LFF Expansion Enclosure        N/A              92 x 3.5-inch

Three-year warranty
2078-112   IBM Storwize V5010 LFF Control Enclosure                        16 GB            12 x 3.5-inch
2078-124   IBM Storwize V5010 SFF Control Enclosure                        16 GB            24 x 2.5-inch
2078-212   IBM Storwize V5020 LFF Control Enclosure                        16 GB or 32 GB   12 x 3.5-inch
2078-224   IBM Storwize V5020 SFF Control Enclosure                        16 GB or 32 GB   24 x 2.5-inch
2078-312   IBM Storwize V5030 LFF Control Enclosure                        32 GB or 64 GB   12 x 3.5-inch
2078-324   IBM Storwize V5030 SFF Control Enclosure                        32 GB or 64 GB   24 x 2.5-inch
2078-AF3   IBM Storwize V5030F All-Flash Array Control Enclosure           64 GB            24 x 2.5-inch
2078-12F   IBM Storwize V5000 LFF Expansion Enclosure                      N/A              12 x 3.5-inch
2078-24F   IBM Storwize V5000 SFF Expansion Enclosure                      N/A              24 x 2.5-inch
2078-AFF   IBM Storwize V5030F SFF Expansion Enclosure                     N/A              24 x 2.5-inch
2078-A9F   IBM Storwize V5030F High Density LFF Expansion Enclosure        N/A              92 x 3.5-inch
Storwize V5030F control enclosures support only the attachment of Storwize V5030F expansion enclosures (Models AFF and A9F). Storwize V5000 expansion enclosures (Models 12E, 24E, 12F, 24F, and 92F) are not supported with Storwize V5030F control enclosures.
Storwize V5030F expansion enclosures are only supported for attachment to Storwize V5030F control enclosures. Storwize V5000 control enclosures (Models 12C, 24C, 112, 124, 212, 224, 312, and 324) do not support the attachment of Storwize V5030F expansion enclosures.
Table 1-5 shows the mix rules for 2U expansion enclosures and the 5U expansion enclosure. The values are the maximum number of drive slots per SAS expansion string, excluding any drive slots in the control enclosure itself.
Table 1-5 2U expansion enclosure and 5U expansion enclosure mix rules

Number of 5U enclosures   Number of 2U enclosures
                          0     1     2     3     4     5     6     7     8     9     10
0                         0     24    48    72    96    120   144   168   192   216   240
1                         92    116   140   164   188   212   236   260   -     -     -
2                         184   208   232   256   280   304   -     -     -     -     -
3                         276   300   324   -     -     -     -     -     -     -     -
4                         368   -     -     -     -     -     -     -     -     -     -
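The mix rules in Table 1-5 reduce to a simple lookup: each 2U enclosure adds 24 drive slots, each 5U enclosure adds 92, and the allowed combinations per string are capped as the table shows. The following Python sketch (a hypothetical helper, not part of any IBM tool) encodes those limits and computes the drive slots for a proposed mix:

# Sketch: validate a 2U/5U expansion mix per SAS string, per Table 1-5.
# Assumes 24-slot 2U enclosures and 92-slot 5U enclosures; helper names
# are hypothetical and for illustration only.

MAX_2U_FOR_5U = {0: 10, 1: 7, 2: 5, 3: 2, 4: 0}  # allowed 2U count per 5U count

def drive_slots_per_string(num_2u: int, num_5u: int) -> int:
    """Return the drive slots for this mix, or raise if it is not allowed."""
    if num_5u not in MAX_2U_FOR_5U:
        raise ValueError("a string supports at most four 5U enclosures")
    if num_2u > MAX_2U_FOR_5U[num_5u]:
        raise ValueError(f"at most {MAX_2U_FOR_5U[num_5u]} x 2U with {num_5u} x 5U")
    return 24 * num_2u + 92 * num_5u

print(drive_slots_per_string(5, 2))  # 304, matching the table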
The Storwize V5030 systems can be added to an existing IBM Storwize V5000 Gen1 cluster to form a two-I/O group configuration. This configuration can be used as a migration mechanism to upgrade from the IBM Storwize V5000 Gen1 to the IBM Storwize V5000 Gen2.
The IBM Storwize V5000 Gen1 models are described in Table 1-6 for completeness.
Table 1-6 IBM Storwize V5000 Gen1 models
Model Cache Drive slots
One-year warranty
2077-12C 16 GB 12 x 3.5-inch
2077-24C 16 GB 24 x 2.5-inch
2077-12E N/A 12 x 3.5-inch
2077-24E N/A 24 x 2.5-inch
Three-year warranty
2078-12C 16 GB 12 x 3.5-inch
2078-24C 16 GB 24 x 2.5-inch
2078-12E N/A 12 x 3.5-inch
2078-24E N/A 24 x 2.5-inch
Figure 1-1 shows the front view of the 2077/2078-12 and 12F enclosures.
Figure 1-1 IBM Storwize V5000 Gen2 front view for 2077/2078-12 and 12F enclosures
The drives are positioned in four columns of three horizontally mounted drive assemblies. The drive slots are numbered 1 - 12, starting at the upper left and moving left to right, top to bottom.
Figure 1-2 shows the front view of the 2077/2078-24 and 24F enclosures.
Figure 1-2 IBM Storwize V5000 Gen2 front view for 2077/2078-24 and 24F enclosure
The drives are positioned in one row of 24 vertically mounted drive assemblies. The drive slots are numbered 1 - 24, starting from the left. A vertical center drive bay molding is between slots 12 and 13.

1.3.1 IBM Storage Utility Offerings

The IBM 2078 Model U5A is the IBM Storwize V5030 with a three-year warranty, for use in the Storage Utility Offering space. This model is physically and functionally identical to the Storwize V5030 model 324, with the exception of target configurations and variable capacity billing. Variable capacity billing uses IBM Spectrum Control™ Storage Insights to monitor system usage, allowing allocated storage usage above a base subscription rate to be billed per terabyte, per month.
Allocated storage is identified as storage that is allocated to a specific host (and unusable to other hosts), whether data is written or not. For thick provisioning, the total allocated volume space is considered used. For thin provisioning, the data that is actually written is considered used.
IBM Storage Utility Offerings include the IBM FlashSystem® 900 (9843-UF3), IBM Storwize V5030 (2078-U5A), and Storwize V7000 (2076-U7A) storage utility models that enable variable capacity usage and billing.
These models provide a fixed total capacity, with a base and variable usage subscription of that total capacity. IBM Spectrum Control Storage Insights is used to monitor the system capacity usage and to report on capacity that is used beyond the base subscription capacity, which is referred to as variable usage. The variable capacity usage is billed on a quarterly basis. This approach enables customers to grow or shrink their usage and pay for only the configured capacity.
IBM Storage Utility Offering models are provided for customers who can benefit from a variable capacity system, where billing is based on actually provisioned space above the base. The base subscription is covered by a three-year lease that entitles the customer to utilize the base capacity at no additional cost. If storage needs increase beyond the base capacity, usage is billed based on the average daily provisioned capacity per terabyte, per month, on a quarterly basis.
Example
A customer has a Storwize V5030 utility model with 2 TB nearline disks, for a total system capacity of 48 TB. The base subscription for such a system is 16.8 TB. During the months where the average daily usage is below 16.8 TB, there is no additional billing.
The system monitors daily provisioned capacity and averages those daily usage rates over the month term. The result is the average daily usage for the month.
If a customer uses 25 TB, 42.6 TB, and 22.2 TB in three consecutive months, Storage Insights calculates the overage, rounding to the nearest terabyte, as shown in Table 1-7.
Table 1-7 Overage calculation (capacities in TB)

Average daily usage   Base   Overage   To be billed
25.0                  16.8   8.2       8
42.6                  16.8   25.8      26
22.2                  16.8   5.4       5
The capacity billed at the end of the quarter will be a total of 39 TB-months in this example.
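As a cross-check, the arithmetic in this example is simple enough to script. The following Python sketch (an illustration only, not IBM billing code; the variable names are hypothetical) reproduces the Table 1-7 calculation:

# Sketch: reproduce the Table 1-7 overage calculation (illustration only).
BASE_TB = 16.8                       # base subscription capacity in TB
monthly_avg_tb = [25.0, 42.6, 22.2]  # average daily provisioned capacity per month

billed = [round(max(avg - BASE_TB, 0)) for avg in monthly_avg_tb]
print(billed)       # [8, 26, 5]
print(sum(billed))  # 39 TB-months billed for the quarter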
Disk expansions (2076-24F for the Storwize V7000 and 2078-24F for the Storwize V5030) may be ordered with the system at the initial purchase, but may not be added through MES. The expansions must have like-type and capacity drives, and must be fully populated.
For example, on a Storwize V7000 utility model with twenty-four 7.68 TB flash drives in the controller, a 2076-24F with twenty-four 7.68 TB drives may be configured with the initial system. Expansion drawers do not apply to FlashSystem 900 (9843-UF3). Storwize V5030 and Storwize V7000 utility model systems support up to 760 drives in the system.
The usage data collected by Storage Insights is used by IBM to determine the actual physical data provisioned in the system. This data is compared to the base system capacity subscription, and any provisioned capacity beyond that base subscription is billed per terabyte, per month, on a quarterly basis. The calculated usage is based on the average use over a given month.
In a highly variable environment, such as managed or cloud service providers, this enables the system to be utilized only as much as is necessary during any given month. Usage can increase or decrease, and is billed accordingly. Provisioned capacity is considered capacity that is reserved by the system. In thick-provisioned environments (available on FlashSystem 900 and Storwize), this is the capacity that is allocated to a host, whether or not data has been written to it.
For thin-provisioned environments (available on the Storwize system), this is the data that is actually written and used. This is because of the different ways in which thick and thin provisioning utilize disk space.
These systems are available worldwide, but there are specific client and program differences by location. Consult with your IBM Business Partner or sales person for specifics.

1.4 IBM Storwize V5000 Gen1 and Gen2 compatibility

The Storwize V5030 systems can be added into existing Storwize V5000 Gen1 clustered systems. All systems within a cluster must use the same version of Storwize V5000 software, which is version 7.6.1 or later.
Restriction: The Storwize V5010 and Storwize V5020 are not compatible with V5000 Gen1 systems because they are not able to join an existing I/O group.
A single Storwize V5030 control enclosure can be added to a single Storwize V5000 cluster to bring the total number of I/O groups to two. They can be clustered by using either Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE). The possible I/O group configuration options for all Storwize V5000 models are shown in Table 1-8.
Table 1-8 IBM Storwize V5000 I/O group configurations
I/O group 0 I/O group 1
V5010 N/A
V5020 N/A
V5030 N/A
V5030 V5030
V5030 V5000 Gen1
V5000 Gen1 V5030
V5000 Gen1 N/A
V5000 Gen1 V5000 Gen1

1.5 IBM Storwize V5000 Gen2 hardware

The IBM Storwize V5000 Gen2 solution is a modular storage system that is built on a common enclosure platform that is shared by the control enclosures and expansion enclosures.
Figure 1-3 shows an overview of hardware components of the IBM Storwize V5000 Gen2 solution.
Figure 1-3 IBM Storwize V5000 Gen2 hardware components
Figure 1-4 shows the control enclosure rear view of an IBM Storwize V5000 Gen2 enclosure (the Storwize V5020).
Figure 1-4 Storwize V5020 control enclosure rear view
In Figure 1-4, you can see two power supply slots at the bottom of the enclosure. The power supplies are identical and exchangeable. Two canister slots are at the top of the chassis.
In Figure 1-5, you can see the rear view of an IBM Storwize V5000 Gen2 expansion enclosure.
Figure 1-5 IBM Storwize V5000 Gen2 expansion enclosure rear view
You can see that the only difference between the control enclosure and the expansion enclosure is the canister. The canisters of the expansion enclosure have only two SAS ports.
For more information about the expansion enclosure, see 1.5.5, “Expansion enclosure” on page 17.

1.5.1 Control enclosure

Each IBM Storwize V5000 Gen2 system has one control enclosure that contains two node canisters (nodes), disk drives, and two power supplies.
The two node canisters act as a single processing unit and form an I/O group that is attached to the SAN fabric, an iSCSI infrastructure, or that is directly attached to hosts through FC or SAS. The pair of nodes is responsible for serving I/O to a volume. The two nodes provide a highly available fault-tolerant controller so that if one node fails, the surviving node automatically takes over. Nodes are deployed in pairs that are called I/O groups.

One node is designated as the configuration node, but each node in the control enclosure holds a copy of the control enclosure state information.

The Storwize V5010 and Storwize V5020 support a single I/O group. The Storwize V5030 supports two I/O groups in a clustered system.

The terms node canister and node are used interchangeably throughout this book.

The battery is used if power is lost. The IBM Storwize V5000 Gen2 system uses this battery to power the canister while the cache data is written to the internal system flash. This memory dump is called a fire hose memory dump.

Note: The batteries of the IBM Storwize V5000 Gen2 are able to process two fire hose memory dumps in a row. After two dumps, you cannot power up the system immediately; you must wait until the batteries are charged above a level that allows them to run the next fire hose memory dump.

After the system is up again, this data is loaded back to the cache for destaging to the disks.

1.5.2 Storwize V5010

Figure 1-6 shows a single Storwize V5010 node canister.
Figure 1-6 Storwize V5010 node canister
Each Storwize V5010 node canister contains the following hardware:
• Battery
• Memory: 8 GB
• One 12 Gbps SAS port for expansions
• Two 10/100/1000 Mbps Ethernet ports
• One USB 2.0 port that is used to gather system information
• System flash
• Host interface card (HIC) slot (different options are possible)
Figure 1-6 shows the following features that are provided by the Storwize V5010 node canister:
• Two 10/100/1000 Mbps Ethernet ports. Port 1 must be used for management, and port 2 can optionally be used for management. Port 2 serves as a technician port (as denoted by the white box with "T" in it) for system initialization and service.

Note: All three models use a technician port to perform the initial setup. The implementation of the technician port varies between the models. On the Storwize V5010 and V5020, the second 1 GbE port (labelled T) is initially enabled as a technician port. After cluster creation, this port is disabled and can then be used for I/O or management. On the Storwize V5030, the onboard 1 GbE port (labelled T) is permanently enabled as a technician port. Connecting the technician port to the LAN disables the port. The Storwize V5010 and V5020 technician port can be re-enabled after the initial setup.

The following commands are used to enable or disable the technician port:

satask chserviceip -techport enable -force
satask chserviceip -techport disable
• Both ports can be used for iSCSI traffic and IP replication. For more information, see Chapter 5, “Host configuration” on page 199 and Chapter 10, “Copy services” on page 485.
• One USB port for gathering system information.
System initialization: Unlike the Storwize V5000 Gen1, you must perform the system initialization of the Storwize V5010 by using the technician port instead of the USB port.
• One 12 Gbps serial-attached SCSI (SAS 3.0) port to connect to the optional expansion enclosures. The Storwize V5010 supports up to 10 expansion enclosures.
Important: The canister SAS port on the Storwize V5010 does not support SAS host attachment. The Storwize V5010 supports SAS hosts by using an optional host interface card. For more information, see 1.5.6, “Host interface cards” on page 18.

Do not use the port that is marked with a wrench. This port is a service port only.

1.5.3 Storwize V5020

Figure 1-7 shows a single Storwize V5020 node canister.
Figure 1-7 Storwize V5020 node canister
Each node canister contains the following hardware:
• Battery
• Memory: 8 GB, upgradable to 16 GB
• Three 12 Gbps SAS ports (two for host attachment, one for expansions)
• Two 10/100/1000 Mbps Ethernet ports
• One USB 2.0 port that is used to gather system information
• System flash
• HIC slot (different options are possible)
Figure 1-7 shows the following features that are provided by the Storwize V5020 node canister:
• Two 10/100/1000 Mbps Ethernet ports. Port 1 must be used for management, and port 2 can optionally be used for management. Port 2 serves as a technician port (as denoted by the white box with "T" in it) for system initialization and service.

Note: All three models use a technician port to perform the initial setup. The implementation of the technician port varies between the models. On the Storwize V5010 and V5020, the second 1 GbE port (labelled T) is initially enabled as a technician port. After cluster creation, this port is disabled and can then be used for I/O or management. On the Storwize V5030, the onboard 1 GbE port (labelled T) is permanently enabled as a technician port. Connecting the technician port to the LAN disables the port. The Storwize V5010 and V5020 technician port can be re-enabled after the initial setup.

The following commands are used to enable or disable the technician port:

satask chserviceip -techport enable -force
satask chserviceip -techport disable
• Both ports can be used for iSCSI traffic and IP replication. For more information, see Chapter 5, “Host configuration” on page 199 and Chapter 10, “Copy services” on page 485.
• One USB port for gathering system information.
System initialization: Unlike the Storwize V5000 Gen1, you must perform the system initialization of the Storwize V5020 by using the technician port instead of the USB port.
• Three 12 Gbps serial-attached SCSI (SAS 3.0) ports. The ports are numbered 1 - 3 from left to right. Port 1 is used to connect to the optional expansion enclosures. Ports 2 and 3 can be used to connect directly to SAS hosts (both 6 Gb and 12 Gb hosts are supported). The Storwize V5020 supports up to 10 expansion enclosures.
Service port: Do not use the port that is marked with a wrench. This port is a service port only.

1.5.4 Storwize V5030

Figure 1-8 shows a single Storwize V5030 node canister.
Figure 1-8 Storwize V5030 node canister
Each node canister contains the following hardware:
• Battery
• Memory: 16 GB, upgradable to 32 GB
• Two 12 Gbps SAS ports for expansions
• One 10/100/1000 Mbps Ethernet technician port
• Two 1/10 Gbps Ethernet ports
• One USB 2.0 port that is used to gather system information
• System flash
• HIC slot (different options are possible)
Figure 1-8 shows the following features that are provided by the Storwize V5030 node canister:
• One Ethernet technician port (as denoted by the white box with "T" in it). This port can be used for system initialization and service only. It cannot be used for anything else. For more information, see Chapter 1, “Overview of the IBM Storwize V5000 Gen2 system” on page 1.
• Two 1/10 Gbps Ethernet ports. These ports are Copper 10GBASE-T with RJ45 connectors. Port 1 must be used for management. Port 2 can optionally be used for management. Both ports can be used for iSCSI traffic and IP replication. For more information, see Chapter 5, “Host configuration” on page 199 and Chapter 10, “Copy services” on page 485.
Important: The 1/10 Gbps Ethernet ports do not support speeds less than 1 Gbps (100 Mbps is not supported).

Ensure that you use the correct port connectors. The Storwize V5030 canister 10 Gbps connectors appear the same as the 1 Gbps connectors on the other Storwize V5000 models. These RJ45 connectors differ from the optical small form-factor pluggable (SFP+) connectors on the optional 10 Gbps HIC. When you plan to implement the Storwize V5030, ensure that any network switches provide the correct connector type.
• One USB port to gather system information.
System initialization: Unlike the Storwize V5000 Gen1, you must perform the system initialization of the Storwize V5030 by using the technician port instead of the USB port.
• Two 12 Gbps serial-attached SCSI (SAS 3.0) ports. The ports are numbered 1 and 2 from left to right and connect to the optional expansion enclosures. The Storwize V5030 supports up to 20 expansion enclosures. Ten expansion enclosures can be connected to each port.
Important: The canister SAS ports on the Storwize V5030 do not support SAS host attachment. The Storwize V5030 supports SAS hosts by using an HIC. See 1.5.6, “Host interface cards” on page 18.
Do not use the port that is marked with a wrench. This port is a service port only.

1.5.5 Expansion enclosure

The optional IBM Storwize V5000 Gen2 expansion enclosure contains two expansion canisters, disk drives, and two power supplies. The following types of expansion enclosures are available: the large form factor (LFF) 2U Expansion Enclosure Model 12F, the small form factor (SFF) 2U Expansion Enclosure Model 24F, the SFF 2U Expansion Enclosure for flash drives Model AFF, and the high-density 5U LFF Model 92F and its flash version, Model A9F. They are available with a one-year or a three-year warranty.
Figure 1-9 shows the rear of the 2U expansion enclosure.
Figure 1-9 2U expansion enclosure of the IBM Storwize V5000 Gen2
The expansion enclosure power supplies are the same as the control enclosure power supplies. A single power lead connector is on each power supply unit.
Each expansion canister provides two SAS interfaces that are used to connect to the control enclosure and any further optional expansion enclosures. The ports are numbered 1 on the left and 2 on the right. SAS port 1 is the IN port, and SAS port 2 is the OUT port.
The use of SAS connector 1 is mandatory because the expansion enclosure must be attached to a control enclosure or another expansion enclosure further up in the chain. SAS connector 2 is optional because it is used to attach to further expansion enclosures down the chain.
The Storwize V5010 and Storwize V5020 support a single chain of up to 10 expansion enclosures that attach to the control enclosure. The Storwize V5030 supports up to 40 expansion enclosures in a configuration that consists of two control enclosures, which are each attached to 20 expansion enclosures in two separate chains.
Table 1-9 shows the maximum number of supported expansion enclosures and the drive limits for each model.
Table 1-9 Expansion enclosure and drive limits

                                                    V5010   V5020   V5030
Maximum number of supported expansion enclosures    10      10      40
Maximum number of supported drives                  392     392     1,520
Each port includes two LEDs to show the status. The first LED indicates the link status and the second LED indicates the fault status.
For more information about LED and ports, see this website:
https://ibm.biz/BdjMhF
Restriction: The IBM Storwize V5000 Gen2 expansion enclosures can be used only with an IBM Storwize V5000 Gen2 control enclosure. The IBM Storwize V5000 Gen1 expansion enclosures cannot be used with an IBM Storwize V5000 Gen2 control enclosure.

1.5.6 Host interface cards

All IBM Storwize V5000 Gen2 models support Ethernet ports as standard for iSCSI connectivity. For the Storwize V5010 and Storwize V5020, these Ethernet ports are 1 GbE ports. For the Storwize V5030, these Ethernet ports are 10 GbE ports. The Storwize V5020 also includes 12 Gb SAS ports for host connectivity as standard.
Additional host connectivity options are available through an optional adapter card. Table 1-10 shows the available configurations for a single control enclosure.
Table 1-10 IBM Storwize V5000 Gen2 configurations available (ports per control enclosure)

V5010:
• 1 Gb Ethernet (iSCSI): 4 ports (standard); additional 8 ports (with optional adapter card)
• 10 Gb Ethernet Copper 10GBASE-T (iSCSI): N/A
• 12 Gb SAS: 8 ports (with optional adapter card)
• 16 Gb FC: 8 ports (with optional adapter card)
• 10 Gb Ethernet Optical SFP+ (iSCSI/FCoE): 8 ports (with optional adapter card)

V5020:
• 1 Gb Ethernet (iSCSI): 4 ports (standard); additional 8 ports (with optional adapter card)
• 10 Gb Ethernet Copper 10GBASE-T (iSCSI): N/A
• 12 Gb SAS: 4 ports (standard); additional 8 ports (with optional adapter card)
• 16 Gb FC: 8 ports (with optional adapter card)
• 10 Gb Ethernet Optical SFP+ (iSCSI/FCoE): 8 ports (with optional adapter card)

V5030:
• 1 Gb Ethernet (iSCSI): 8 ports (with optional adapter card)
• 10 Gb Ethernet Copper 10GBASE-T (iSCSI): 4 ports (standard)
• 12 Gb SAS: 8 ports (with optional adapter card)
• 16 Gb FC: 8 ports (with optional adapter card)
• 10 Gb Ethernet Optical SFP+ (iSCSI/FCoE): 8 ports (with optional adapter card)
Optional adapter cards: Only one pair of identical adapter cards is allowed for each control enclosure.

1.5.7 Disk drive types

IBM Storwize V5000 Gen2 enclosures support Flash Drives, SAS, and Nearline SAS drive types. Each drive has two ports (two PHYs) to provide fully redundant access from each node canister. I/O can be issued down both paths simultaneously.
Table 1-11 shows the IBM Storwize V5000 Gen2 disk drive types, disk revolutions per minute (RPMs), and sizes that are available at the time of writing.
Table 1-11 IBM Storwize V5000 Gen2 disk drive types

Drive type                                              RPM      Size
2.5-inch form factor Flash Drive                        N/A      400 GB, 800 GB, 1.6 TB, and 3.2 TB
2.5-inch form factor Read Intensive (RI) Flash Drive    N/A      1.92 TB, 3.84 TB, and 7.68 TB
2.5-inch form factor SAS                                10,000   900 GB, 1.2 TB, and 1.8 TB
2.5-inch form factor SAS                                15,000   300 GB, 600 GB, and 900 GB
2.5-inch form factor Nearline SAS                       7,200    2 TB
3.5-inch form factor SAS (a)                            10,000   900 GB, 1.2 TB, and 1.8 TB
3.5-inch form factor SAS (a)                            15,000   300 GB, 600 GB, and 900 GB
3.5-inch form factor Nearline SAS                       7,200    4 TB, 6 TB, 8 TB, and 10 TB

a. 2.5-inch drive in a 3.5-inch drive carrier

1.6 IBM Storwize V5000 Gen2 terms

In this section, we introduce the terms that are used for the IBM Storwize V5000 Gen2 throughout this book.

1.6.1 Hosts

A host system is a server that is connected to the IBM Storwize V5000 Gen2 through a Fibre Channel connection, an iSCSI connection, or a SAS connection.
Hosts are defined on IBM Storwize V5000 Gen2 by identifying their WWPNs for Fibre Channel and SAS hosts. The iSCSI hosts are identified by using their iSCSI names. The iSCSI names can be iSCSI qualified names (IQNs) or extended unique identifiers (EUIs). For more information, see Chapter 5, “Host configuration” on page 199.
Hosts can be Fibre Channel-attached through an existing Fibre Channel network infrastructure or direct-attached, iSCSI-attached through an existing IP network, or directly attached through SAS.

1.6.2 Node canister

A node canister provides host interfaces, management interfaces, and SAS interfaces to the control enclosure. A node canister has the cache memory, the internal storage to store software and logs, and the processing power to run the IBM Storwize V5000 Gen2 virtualization and management software. A clustered system consists of one or two node pairs. Each node pair forms one I/O group. I/O groups are explained in 1.6.3, “I/O groups” on page 20.
One of the nodes within the system, which is known as the configuration node, manages configuration activity for the clustered system. If this node fails, the system nominates the other node to become the configuration node.

1.6.3 I/O groups

Within the IBM Storwize V5000 Gen2, one or two pairs of node canisters are known as I/O groups. The IBM Storwize V5000 Gen2 supports either two or four node canisters in a clustered system, which provides either one or two I/O groups, depending on the model. See Table 1-8 on page 11 for more details.

When a host server performs I/O to one of its volumes, all of the I/O for that volume is directed to the I/O group where the volume was defined. Under normal conditions, these I/Os are also always processed by the same node within that I/O group.

Both nodes of the I/O group act as preferred nodes for their own specific subset of the total number of volumes that the I/O group presents to the host servers (a maximum of 2,048 volumes for each host). However, both nodes also act as a failover node for the partner node within the I/O group. Therefore, a node takes over the I/O workload from its partner node (if required) without affecting the server’s application.
In an IBM Storwize V5000 Gen2 environment (which uses active-active architecture), the I/O handling for a volume can be managed by both nodes of the I/O group. The I/O groups must be connected to the SAN so that all hosts can access all nodes. The hosts must use multipath device drivers to handle this capability.
Up to 256 host server objects can be defined in a one-I/O-group system, or up to 512 host server objects can be defined in a two-I/O-group system. More information about I/O groups is in Chapter 6, “Volume configuration” on page 287.
Important: The active/active architecture provides the availability to process I/Os for both controller nodes and allows the application to continue to run smoothly, even if the server has only one access route or path to the storage controller. This type of architecture eliminates the path/LUN thrashing that is typical of an active/passive architecture.

1.6.4 Clustered system

A clustered system consists of one or two pairs of node canisters. Each pair forms an I/O group. All configuration, monitoring, and service tasks are performed at the system level. The configuration settings are replicated across all node canisters in the clustered system. To facilitate these tasks, one or two management IP addresses are set for the clustered system. By using this configuration, you can manage the clustered system as a single entity.
A process exists to back up the system configuration data on to disk so that the clustered system can be restored in a disaster. This method does not back up application data. Only IBM Storwize V5000 Gen2 system configuration information is backed up.
System configuration backup: After the system configuration is backed up, save the backup data on to your local hard disk (or at least outside of the SAN). If you are unable to access the IBM Storwize V5000 Gen2, you do not have access to the backup data if it is on the SAN. To be safe, perform this configuration backup after each configuration change.
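One way to make the preceding advice routine is to script the backup and the copy off the system. The following Python sketch is illustrative only: the management address and user are placeholders, and the backup file path is an assumption, so verify the correct location for your software level (the svcconfig backup CLI command itself is standard):

# Sketch: generate a configuration backup and copy it off the SAN.
# The address is a placeholder and the /tmp path is an assumption;
# verify the backup file location for your software level.
import subprocess

CLUSTER = "superuser@v5000-cluster.example.com"  # placeholder management address

# Generate the configuration backup on the configuration node.
subprocess.run(["ssh", CLUSTER, "svcconfig backup"], check=True)

# Copy the backup XML to local disk, outside of the SAN.
subprocess.run(
    ["scp", f"{CLUSTER}:/tmp/svc.config.backup.xml", "svc.config.backup.xml"],
    check=True,
)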
The system can be configured by using the IBM Storwize V5000 Gen2 management software (GUI), CLI, or USB key.

1.6.5 RAID

The IBM Storwize V5000 Gen2 contains several internal drive objects, but these drives cannot be directly added to storage pools. The drives need to be included in a Redundant Array of Independent Disks (RAID) to provide protection against the failure of individual drives.

These drives are referred to as members of the array. Each array has a RAID level. RAID levels provide various degrees of redundancy and performance. The maximum number of members in the array varies based on the RAID level.
Traditional RAID (TRAID) has the concept of hot spare drives. When an array member drive fails, the system automatically replaces the failed member with a hot spare drive and rebuilds the array to restore its redundancy. Candidate and spare drives can be manually exchanged with array members.
Apart from traditional disk arrays, IBM Spectrum™ Virtualize V7.6 introduced Distributed RAIDs. Distributed RAID improves recovery time of failed disk drives in an array by the distribution of spare capacity between primary disks, rather than dedicating a whole spare drive for replacement.
Details about traditional and distributed RAID arrays are described in Chapter 4, “Storage pools” on page 143.

1.6.6 Managed disks

A managed disk (MDisk) refers to the unit of storage that IBM Storwize V5000 Gen2 virtualizes. This unit can be a logical volume on an external storage array that is presented to the IBM Storwize V5000 Gen2 or a (traditional or distributed) RAID array that consists of internal drives. The IBM Storwize V5000 Gen2 can then allocate these MDisks into storage pools.
An MDisk is invisible to a host system on the storage area network because it is internal to the IBM Storwize V5000 Gen2 system. An MDisk features the following modes:
򐂰 Array
Array mode MDisks are constructed from internal drives by using the RAID functionality. Array MDisks are always associated with storage pools.
򐂰 Unmanaged
LUNs that are presented by external storage systems to IBM Storwize V5000 Gen2 are discovered as unmanaged MDisks. The MDisk is not a member of any storage pools, which means that it is not used by the IBM Storwize V5000 Gen2 storage system.
򐂰 Managed
Managed MDisks are LUNs, which are presented by external storage systems to an IBM Storwize V5000 Gen2, that are assigned to a storage pool and provide extents so that volumes can use them. Any data that might be on these LUNs when they are added is lost.
򐂰 Image
Image MDisks are LUNs that are presented by external storage systems to an IBM
Storwize V5000 Gen2 and assigned directly to a volume with a one-to-one mapping of extents between the MDisk and the volume. For more information, see Chapter 6, “Volume configuration” on page 287.

1.6.7 Quorum disks

A quorum disk is an MDisk that contains a reserved area for use exclusively by the system. In the IBM Storwize V5000 Gen2, internal drives can be considered as quorum candidates. The clustered system uses quorum disks to break a tie when exactly half the nodes in the system remain after a SAN failure.
The clustered system automatically forms the quorum disk by taking a small amount of space from an MDisk. It allocates space from up to three different MDisks for redundancy, although only one quorum disk is active.
If the environment has multiple storage systems, allocate the quorum disks on different storage systems to avoid losing all of the quorum disks to the failure of a single storage system. You can manage the quorum disks by using the CLI.
IP quorum support provides an alternative for Storwize V5000 IBM HyperSwap® implementations. Instead of Fibre Channel storage on a third site, the IP network is used for communication between the IP quorum application and the node canisters in the system to cope with tie-break situations if the inter-site link fails. The IP quorum application is a Java application that runs on a host at the third site. It enables the use of a lower-cost IP network-attached host as a quorum disk for simplified implementation and operation.
Note: IP Quorum allows the user to replace a third-site Fibre Channel-attached quorum disk with an IP Quorum application. The Java application runs on a Linux host and is used to resolve split-brain situations. Quorum disks are still required in sites 1 and 2 for cookie crumb and metadata. The application can also be used with clusters in a standard topology configuration, but the primary use case is a customer with a cluster split over two sites (stretched or HyperSwap).
Java is required to run the IP quorum application. Your network must provide less than 80 ms round-trip latency. All nodes need a service IP address, and all service IP addresses must be pingable from the quorum host.
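As a sketch of the deployment flow (the host name and path are examples): generate the quorum application on the system, copy it to the third-site host, and start it there with Java:
IBM_Storwize:ITSO-V5000:superuser>mkquorumapp
(the application is written to /dumps/ip_quorum.jar on the configuration node)
quorumhost$ java -jar ip_quorum.jar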

1.6.8 Storage pools

A storage pool (up to 1024 per system) is a collection of MDisks (up to 128) that are grouped to provide capacity for volumes. All MDisks in the pool are split into extents of the same size. Volumes are then allocated out of the storage pool and are mapped to a host system.
MDisks can be added to a storage pool at any time to increase the capacity of the pool. MDisks can belong in only one storage pool. For more information, see Chapter 4, “Storage pools” on page 143.
Each MDisk in the storage pool is divided into a number of extents. The size of the extent is selected by the administrator when the storage pool is created and cannot be changed later. Extent sizes range from 16 MB to 8 GB.
Default extent size: The GUI of IBM Storwize V5000 Gen2 has a default extent size value of 1024 MB when you define a new storage pool.
The extent size directly affects the maximum volume size and storage capacity of the clustered system.
A system can manage 2^22 (4,194,304) extents. For example, with a 16 MB extent size, the system can manage up to 16 MB x 4,194,304 = 64 TB of storage.
The effect of extent size on the maximum volume and cluster size is shown in Table 1-12.
Table 1-12 Maximum volume and cluster capacity by extent size
Extent size (MB)   Maximum volume capacity for normal volumes (GB)   Maximum storage capacity of cluster
16                 2,048 (2 TB)                                      64 TB
32                 4,096 (4 TB)                                      128 TB
64                 8,192 (8 TB)                                      256 TB
128                16,384 (16 TB)                                    512 TB
256                32,768 (32 TB)                                    1 PB
512                65,536 (64 TB)                                    2 PB
1024               131,072 (128 TB)                                  4 PB
2048               262,144 (256 TB)                                  8 PB
4096               262,144 (256 TB)                                  16 PB
8192               262,144 (256 TB)                                  32 PB
Use the same extent size for all storage pools in a clustered system. This rule is a prerequisite if you want to migrate a volume between two storage pools. If the storage pool extent sizes are not the same, you must use volume mirroring to copy volumes between storage pools, as described in Chapter 4, “Storage pools” on page 143.
You can set a threshold warning for a storage pool that automatically issues a warning alert when the used capacity of the storage pool exceeds the set limit.
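For example, a pool with 1024 MB extents and an 80% warning threshold can be created from the CLI as follows (the pool and MDisk names are examples only):
svctask mkmdiskgrp -name Pool0 -ext 1024 -mdisk mdisk0:mdisk1
svctask chmdiskgrp -warning 80% Pool0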
Child storage pools
Instead of being created directly from MDisks, child pools are created from existing capacity that is allocated to a parent pool. As with parent pools, volumes can be created that specifically use the capacity that is allocated to the child pool. Parent pools grow automatically as more MDisks are allocated to them. However, child pools provide a fixed capacity pool of storage. You can use a child pool to manage a quota of storage for a particular purpose.
Child pools can be created by using the management GUI, CLI, or IBM Spectrum Control when you create VMware vSphere virtual volumes. For more information about child pools, see Chapter 4, “Storage pools” on page 143.
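As a sketch, a child pool that carves a fixed quota out of an existing parent pool can be created from the CLI as follows (the names and size are examples):
svctask mkmdiskgrp -name ChildPool0 -parentmdiskgrp Pool0 -size 500 -unit gb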
Single-tiered storage pool
MDisks that are used in a single-tiered storage pool must have the following characteristics to prevent performance problems and other problems:
򐂰 They must have the same hardware characteristics, for example, the same RAID type,
RAID array size, disk type, and disk revolutions per minute (RPMs).
򐂰 The disk subsystems that provide the MDisks must have similar characteristics, for
example, maximum input/output operations per second (IOPS), response time, cache, and throughput.
򐂰 You need to use MDisks of the same size and ensure that the MDisks provide the same
number of extents. If this configuration is not feasible, you must check the distribution of the volumes’ extents in that storage pool.
Multi-tiered storage pool
A multi-tiered storage pool has a mix of MDisks with more than one type of disk, for example, a storage pool that contains a mix of generic_hdd and generic_ssd MDisks.
A multi-tiered storage pool contains MDisks with different characteristics unlike the single-tiered storage pool. MDisks with similar characteristics then form the tiers within the pool. However, each tier needs to have MDisks of the same size and that provide the same number of extents.
A multi-tiered storage pool is used to enable automatic migration of extents between disk tiers by using the IBM Storwize V5000 Gen2 IBM Easy Tier function, as described in Chapter 9, “Advanced features for storage efficiency” on page 437.
This functionality can help improve the performance of host volumes on the IBM Storwize V5000.

1.6.9 Volumes

A volume is a logical disk that is presented to a host system by the clustered system. In our virtualized environment, the host system has a volume that is mapped to it by IBM Storwize V5000 Gen2. IBM Storwize V5000 Gen2 translates this volume into a number of extents, which are allocated across MDisks. The advantage with storage virtualization is that the host is decoupled from the underlying storage, so the virtualization appliance can move around the extents without affecting the host system.
The host system cannot directly access the underlying MDisks in the same manner as it can access RAID arrays in a traditional storage environment.
The following types of volumes are available:
򐂰 Striped
A striped volume is allocated one extent in turn from each MDisk in the storage pool. This process continues until the space that is required for the volume is satisfied.
It also is possible to supply a list of MDisks to use. Figure 1-10 shows how a striped volume is allocated, assuming that 10 extents are
required.
Figure 1-10 Striped volume
򐂰 Sequential
A sequential volume is a volume in which the extents are allocated one after the other from one MDisk to the next MDisk, as shown in Figure 1-11.
Figure 1-11 Sequential volume
򐂰 Image mode
Image mode volumes are special volumes that have a direct relationship with one MDisk. They are used to migrate existing data into and out of the clustered system to or from external FC SAN-attached storage.
When the image mode volume is created, a direct mapping is made between extents that are on the MDisk and the extents that are on the volume. The logical block address (LBA)
x on the MDisk is the same as the LBA x on the volume, which ensures that the data on
the MDisk is preserved as it is brought into the clustered system, as shown in Figure 1-12.
Figure 1-12 Image mode volume
Certain virtualization functions are not available for image mode volumes, so it is often useful to migrate the volume into a new storage pool. After it is migrated, the MDisk becomes a managed MDisk.
If you want to migrate data from an existing storage subsystem, use the storage migration wizard, which guides you through the process.
For more information, see Chapter 7, “Storage migration” on page 349.
If you add an MDisk that contains data to a storage pool, any data on the MDisk is lost. If you are presenting externally virtualized LUNs that contain data to an IBM Storwize V5000 Gen2, import them as image mode volumes to ensure data integrity or use the migration wizard.
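For example, a striped volume and an image mode volume can be created from the CLI as follows (the object names are examples; the MDisk for the image mode volume must still be unmanaged, and its capacity determines the volume size):
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -vtype striped -name vol01
svctask mkvdisk -mdiskgrp MigrationPool -iogrp 0 -vtype image -mdisk mdisk7 -name legacy_vol01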

1.6.10 iSCSI

iSCSI is an alternative method of attaching hosts to the IBM Storwize V5000 Gen2. The iSCSI function is a software function that is provided by the IBM Storwize V5000 Gen2 code, not hardware. In the simplest terms, iSCSI allows the transport of SCSI commands and data over an Internet Protocol network that is based on IP routers and Ethernet switches.
iSCSI is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and uses an existing IP network instead of requiring FC host bus adapters (HBAs) and a SAN fabric infrastructure. Concepts of names and addresses are carefully separated in iSCSI.
An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms initiator name and target name also refer to an iSCSI name.
An iSCSI address specifies the iSCSI name of an iSCSI node and a location of that node. The address consists of a host name or IP address, a TCP port number (for the target), and the iSCSI name of the node. An iSCSI node can have any number of addresses, which can change at any time, particularly if they are assigned by way of Dynamic Host Configuration Protocol (DHCP). An IBM Storwize V5000 node represents an iSCSI node and provides statically allocated IP addresses.
Each iSCSI node, that is, an initiator or target, has a unique IQN, which can have a size of up to 255 bytes. The IQN is formed according to the rules that were adopted for Internet nodes. The IQNs can be abbreviated by using a descriptive name, which is known as an alias. An alias can be assigned to an initiator or a target.
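On the IBM Storwize V5000 Gen2, the node IQNs follow the IBM naming scheme, and iSCSI IP addresses are assigned per Ethernet port. The following lines are a sketch (the cluster name, node name, addresses, and the trailing Ethernet port ID are placeholders):
Example IQN: iqn.1986-03.com.ibm:2145.itso-v5000.node1
svctask cfgportip -node node1 -ip 192.168.100.10 -mask 255.255.255.0 -gw 192.168.100.1 1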
For more information about configuring iSCSI, see Chapter 4, “Storage pools” on page 143.

1.6.11 Serial-attached SCSI

The serial-attached SCSI (SAS) standard is an alternative method of attaching hosts to the IBM Storwize V5000 Gen2. The IBM Storwize V5000 Gen2 supports direct SAS host attachment to address easy-to-use, affordable storage needs. Each SAS port device has a worldwide unique 64-bit SAS address and operates at 12 Gbps.

1.6.12 Fibre Channel

Fibre Channel (FC) is the traditional method that is used for data center storage connectivity. The IBM Storwize V5000 Gen2 supports FC connectivity at speeds of 4, 8, and 16 Gbps. Fibre Channel Protocol is used to encapsulate SCSI commands over the FC network. Each device in the network has a unique 64-bit worldwide port name (WWPN). The IBM Storwize V5000 Gen2 supports FC connections directly to a host server or to external FC switched fabrics.

1.7 IBM Storwize V5000 Gen2 features

In this section, we describe the features of the IBM Storwize V5000 Gen2. Different models offer a different range of features. See Table 1-3 on page 6 for a comparison.

1.7.1 Mirrored volumes

IBM Storwize V5000 Gen2 provides a function that is called storage volume mirroring, which enables a volume to have two physical copies. Each volume copy can belong to a different storage pool and be on a different physical storage system to provide a high-availability (HA) solution. Each mirrored copy can be either a generic, thin-provisioned, or compressed volume copy.
When a host system issues a write to a mirrored volume, IBM Storwize V5000 Gen2 writes the data to both copies. When a host system issues a read to a mirrored volume, IBM Storwize V5000 Gen2 requests it from the primary copy.
If one of the mirrored volume copies is temporarily unavailable, the IBM Storwize V5000 Gen2 automatically uses the alternative copy without any outage for the host system. When the mirrored volume copy is repaired, IBM Storwize V5000 Gen2 synchronizes the data again.
A mirrored volume can be converted into a non-mirrored volume by deleting one copy or by splitting away one copy to create a non-mirrored volume.
The use of mirrored volumes can also assist with migrating volumes between storage pools that have different extent sizes. Mirrored volumes can also provide a mechanism to migrate fully allocated volumes to thin-provisioned or compressed volumes without any host outages.
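For example, a second copy can be added to an existing volume with a single CLI command (the names are examples); the system then synchronizes the new copy in the background:
svctask addvdiskcopy -mdiskgrp Pool1 vol01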
The Volume Mirroring feature is included as part of the base software, and no license is required.

1.7.2 Thin provisioning

Volumes can be configured to be thin-provisioned or fully allocated. A thin-provisioned volume behaves as though it were a fully allocated volume in terms of read/write I/O. However, when a volume is created, the user specifies two capacities: the real capacity of the volume and its virtual capacity.
The real capacity determines the quantity of MDisk extents that are allocated for the volume. The virtual capacity is the capacity of the volume that is reported to IBM Storwize V5000 Gen2 and to the host servers.
The real capacity is used to store the user data and the metadata for the thin-provisioned volume. The real capacity can be specified as an absolute value or a percentage of the virtual capacity.
The Thin Provisioning feature can be used on its own to create over-allocated volumes, or it can be used with FlashCopy. Thin-provisioned volumes can be used with the mirrored volume feature, also.
A thin-provisioned volume can be configured to auto expand, which causes the IBM Storwize V5000 Gen2 to automatically expand the real capacity of the volume as it gets used. This feature prevents the volume from going offline. Auto expand attempts to maintain a fixed amount of unused real capacity on the volume. This amount is known as the contingency capacity.
When the thin-provisioned volume is initially created, the IBM Storwize V5000 Gen2 initially allocates only 2% of the virtual capacity in real physical storage. The contingency capacity and auto expand features seek to preserve this 2% of free space as the volume grows.
If the user modifies the real capacity, the contingency capacity is reset to be the difference between the used capacity and real capacity. In this way, the autoexpand feature does not cause the real capacity to grow much beyond the virtual capacity.
A volume that is created with a zero contingency capacity goes offline when it must expand. A volume with a non-zero contingency capacity stays online until it is used up.
To support the auto expansion of thin-provisioned volumes, the volumes themselves have a configurable warning capacity. When the used capacity of the volume exceeds the warning capacity, a warning is logged.
For example, if a warning of 80% is specified, the warning is logged when 20% of the free capacity remains. This approach is similar to the capacity warning that is available on storage pools.
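For example, a thin-provisioned volume with a 100 GB virtual capacity, a 2% initial real capacity, auto expand, and an 80% warning threshold can be created as follows (the names are examples):
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -warning 80% -name thin_vol01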
A thin-provisioned volume can be converted to either a fully allocated volume or a compressed volume by using volume mirroring (and vice versa).
The Thin Provisioning feature is included as part of the base software, and no license is required.

1.7.3 Real-time Compression

The Storwize V5030 model can create compressed volumes, allowing more data to be stored in the same physical space. IBM Real-time Compression™ (RtC) can be used for primary active volumes and with mirroring and replication (FlashCopy/Remote Copy). RtC is available on the Storwize V5030 model only.
Existing volumes can take advantage of Real-time Compression for an immediate capacity saving. An existing volume can be converted to a compressed volume by creating a compressed volume copy of the original volume and then deleting the original volume.
No changes to the existing environment are required to take advantage of RtC. It is transparent to hosts while the compression occurs within the IBM Storwize V5000 Gen2 system.
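For example, on a Storwize V5030, a compressed volume is created like a thin-provisioned volume with the addition of the -compressed flag (the names are examples):
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -compressed -name comp_vol01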
Software-only compression: The use of RtC on the Storwize V5030 requires dedicated CPU resources from the node canisters. If more performance is required for deploying RtC, consider purchasing the Storwize V7000 system. The Storwize V7000 system uses dedicated hardware options for compression acceleration.
The Storwize V5030 model requires the additional memory upgrade (32 GB for each node canister) to use RtC. When the first compressed volume is created, four of the six CPU cores are allocated to RtC. Of the 32 GB of memory on each node canister, roughly 9 - 10 GB is allocated to RtC. There are no hardware compression accelerators, as in the Storwize V7000 Gen2. The actual LZ4 compression is done by the CPUs, as was the case with the Storwize V7000 Gen1.
Table 1-13 shows how the cores are used with RtC.
Table 1-13 Core usage with RtC
             Compression disabled           Compression enabled
Model        Normal processing   RtC        Normal processing   RtC
V5010        2 cores             NA         NA                  NA
V5020        2 cores             NA         NA                  NA
V5030        6 cores             0 cores    2 cores             4 cores + HT
The faster CPU with more cores, the extra memory, and the hyper-threading capability of the Storwize V5030, together with improvements to the RtC software, result in good performance for the smaller configurations that are common to the market segment that this product serves. The feature is licensed per enclosure. Real-time Compression is not available on the Storwize V5010 or Storwize V5020 models.

1.7.4 Easy Tier

IBM Easy Tier provides a mechanism to seamlessly migrate extents to the most appropriate tier within the IBM Storwize V5000 Gen2 solution. This migration can be to different tiers of internal drives within IBM Storwize V5000 Gen2 or to external storage systems that are virtualized by IBM Storwize V5000 Gen2, for example, an IBM FlashSystem 900.
The Easy Tier function can be turned on or turned off at the storage pool and volume level.
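For example (the pool and volume names are placeholders):
svctask chmdiskgrp -easytier auto Pool0
svctask chvdisk -easytier on vol01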
You can demonstrate the potential benefit of Easy Tier in your environment before you install flash drives by using the IBM Storage Tier Advisor Tool. For more information about Easy Tier, see Chapter 9, “Advanced features for storage efficiency” on page 437.
The IBM Easy Tier feature is licensed per enclosure.

1.7.5 Storage Migration

By using the IBM Storwize V5000 Gen2 Storage Migration feature, you can easily move data from other existing Fibre Channel-attached external storage to the internal capacity of the IBM Storwize V5000 Gen2. You can migrate data from other storage to the IBM Storwize V5000 Gen2 storage system to realize the benefits of the IBM Storwize V5000 Gen2 with features, such as the easy-to-use GUI, internal virtualization, thin provisioning, and copy services.
The Storage Migration feature is included in the base software, and no license is required.

1.7.6 FlashCopy

The FlashCopy feature copies a source volume on to a target volume. The original contents of the target volume are lost. After the copy operation starts, the target volume has the contents of the source volume as it existed at a single point in time. Although the copy operation completes in the background, the resulting data at the target appears as though the copy was made instantaneously. FlashCopy is sometimes described as an instance of a time-zero (T0) copy or point-in-time (PiT) copy technology.
FlashCopy can be performed on multiple source and target volumes. FlashCopy permits the management operations to be coordinated so that a common single point in time is chosen for copying target volumes from their respective source volumes.
IBM Storwize V5000 Gen2 also permits multiple target volumes to be FlashCopied from the same source volume. This capability can be used to create images from separate points in time for the source volume, and to create multiple images from a source volume at a common point in time. Source and target volumes can be any volume type (generic, thin-provisioned, or compressed).
Reverse FlashCopy enables target volumes to become restore points for the source volume without breaking the FlashCopy relationship and without waiting for the original copy operation to complete. IBM Storwize V5000 Gen2 supports multiple targets and multiple rollback points.
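As a minimal CLI sketch (the names are examples; the target volume must already exist and be the same size as the source):
svctask mkfcmap -source vol01 -target vol01_copy -copyrate 50
svctask startfcmap -prep fcmap0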
The FlashCopy feature is licensed per enclosure.
For more information about FlashCopy copy services, see Chapter 10, “Copy services” on page 485.

1.7.7 Remote Copy

Remote Copy can be implemented in one of two modes, synchronous or asynchronous.
With the IBM Storwize V5000 Gen2, Metro Mirror and Global Mirror are the IBM branded terms for synchronous Remote Copy and asynchronous Remote Copy, respectively.
By using the Metro Mirror and Global Mirror copy services features, you can set up a relationship between two volumes so that updates that are made by an application to one volume are mirrored on the other volume. The volumes can be in the same system or on two different systems.
For both Metro Mirror and Global Mirror copy types, one volume is designated as the primary and the other volume is designated as the secondary. Host applications write data to the primary volume, and updates to the primary volume are copied to the secondary volume. Normally, host applications do not perform I/O operations to the secondary volume.
The Metro Mirror feature provides a synchronous copy process. When a host writes to the primary volume, it does not receive confirmation of I/O completion until the write operation completes for the copy on the primary and secondary volumes. This design ensures that the secondary volume is always up-to-date with the primary volume if a failover operation must be performed.
The Global Mirror feature provides an asynchronous copy process. When a host writes to the primary volume, confirmation of I/O completion is received before the write operation completes for the copy on the secondary volume. If a failover operation is performed, the application must recover and apply any updates that were not committed to the secondary volume. If I/O operations on the primary volume are paused for a brief time, the secondary volume can become an exact match of the primary volume.
Global Mirror can operate with or without cycling. When it is operating without cycling, write operations are applied to the secondary volume as soon as possible after they are applied to the primary volume. The secondary volume is less than 1 second behind the primary volume, which minimizes the amount of data that must be recovered in a failover. However, this approach requires that a high-bandwidth link is provisioned between the two sites.
When Global Mirror operates with cycling mode, changes are tracked and, where needed, copied to intermediate change volumes. Changes are transmitted to the secondary site periodically. The secondary volumes are much further behind the primary volume, and more data must be recovered in a failover. Because the data transfer can be smoothed over a longer time period, lower bandwidth is required to provide an effective solution.
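As a sketch, a Global Mirror relationship with cycling can be set up as follows (the volume and remote system names are examples, and a partnership with the remote system must already exist):
svctask mkrcrelationship -master vol01 -aux vol01_dr -cluster ITSO-V5000-DR -global
svctask chrcrelationship -cyclingmode multi rcrel0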
For more information about the IBM Storwize V5000 Gen2 copy services, see Chapter 10, “Copy services” on page 485.
The IBM Remote Copy feature is licensed for each enclosure.

1.7.8 IP replication

IP replication enables the use of lower-cost Ethernet connections for remote mirroring. The
capability is available as a chargeable option on all Storwize family systems.
The function is transparent to servers and applications in the same way that traditional Fibre Channel-based mirroring is transparent. All remote mirroring modes (Metro Mirror, Global Mirror, and Global Mirror with Change Volumes) are supported.
Configuration of the system is straightforward. The Storwize family systems normally find each other in the network, and they can be selected from the GUI.
IP replication includes Bridgeworks SANSlide network optimization technology, and it is available at no additional charge. Remember, Remote Mirror is a chargeable option but the price does not change with IP replication. Existing Remote Mirror users have access to the function at no additional charge.
IP connections that are used for replication can have long latency (the time to transmit a signal from one end to the other), which can be caused by distance or by many “hops” between switches and other appliances in the network. Traditional replication solutions transmit data, wait for a response, and then transmit more data, which can result in network utilization as low as 20% (based on IBM measurements). This scenario worsens as latency increases.
Bridgeworks SANSlide technology that is integrated with the IBM Storwize family requires no separate appliances, no additional cost, and no configuration steps. It uses artificial intelligence (AI) technology to transmit multiple data streams in parallel, adjusting automatically to changing network environments and workloads.
SANSlide improves network bandwidth utilization up to 3x so clients can deploy a less costly network infrastructure or take advantage of faster data transfer to speed up replication cycles, improve remote data currency, and enjoy faster recovery.
IP replication can be configured to use any of the available 1 GbE or 10 GbE Ethernet ports (apart from the technician port) on the IBM Storwize V5000 Gen2. See Table 1-10 on page 18 for port configuration options.
Copy services configuration limits
For the most up-to-date list of these limits, see the following website:
http://ibm.biz/BdjvHP

1.7.9 External virtualization

By using this feature, you can consolidate FC SAN-attached disk controllers from various vendors into pools of storage. In this way, the storage administrator can manage and provision storage to applications from a single user interface and use a common set of advanced functions across all of the storage systems under the control of the IBM Storwize V5000 Gen2. External virtualization is only available for the IBM Storwize V5030.
The External Virtualization feature is licensed per disk enclosure.

1.7.10 Encryption

IBM Storwize V5000 Gen2 provides optional encryption of data-at-rest functionality, which protects against the potential exposure of sensitive user data and user metadata that is stored on discarded, lost, or stolen storage devices. Encryption can be enabled and configured only on the Storwize V5020 and Storwize V5030 enclosures that support encryption. The Storwize V5010 does not offer encryption functionality.
Encryption is a licensed feature that requires a license key to enable it before it can be used.

1.8 Problem management and support

In this section, we introduce problem management and support topics.

1.8.1 IBM Support assistance

To use IBM Support assistance, you must have access to the internet. Support assistance enables support personnel to access the system to complete troubleshooting and maintenance tasks. You can configure either local support assistance, where support personnel visit your site to fix problems with the system, or remote support assistance.
Both local and remote support assistance use secure connections to protect data exchange between the support center and the system. More access controls can be added by the system administrator. You can use the management GUI or the command-line interface to view support assistance settings.

1.8.2 Event notifications

IBM Storwize V5000 Gen2 can use Simple Network Management Protocol (SNMP) traps, syslog messages, and e-mail to notify you and the IBM Support Center when significant events are detected. Any combination of these notification methods can be used simultaneously.
You can configure IBM Storwize V5000 Gen2 to send different types of notification to specific recipients and choose the alerts that are important to you. When you configure Call Home to the IBM Support Center, all events are sent through email only.

1.8.3 SNMP traps

SNMP is a standard protocol for managing networks and exchanging messages. IBM Storwize V5000 Gen2 can send SNMP messages that notify personnel about an event. You can use an SNMP manager to view the SNMP messages that IBM Storwize V5000 Gen2 sends. You can use the management GUI or the IBM Storwize V5000 Gen2 CLI to configure and modify your SNMP settings.
You can use the Management Information Base (MIB) file for SNMP to configure a network management program to receive SNMP messages that are sent by the IBM Storwize V5000 Gen2. This file can be used with SNMP messages from all versions of IBM Storwize V5000 Gen2 software.
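For example, an SNMP server that receives error and warning notifications can be defined as follows (the address and community string are examples):
svctask mksnmpserver -ip 192.168.1.50 -community public -error on -warning on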

1.8.4 Syslog messages

The syslog protocol is a standard protocol for forwarding log messages from a sender to a receiver on an IP network. The IP network can be IPv4 or IPv6. IBM Storwize V5000 Gen2 can send syslog messages that notify personnel about an event. IBM Storwize V5000 Gen2 can transmit syslog messages in expanded or concise format. You can use a syslog manager to view the syslog messages that IBM Storwize V5000 Gen2 sends. IBM Storwize V5000 Gen2 uses the User Datagram Protocol (UDP) to transmit the syslog message. You can use the management GUI or the CLI to configure and modify your syslog settings.
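For example (the server address is a placeholder):
svctask mksyslogserver -ip 192.168.1.51 -error on -warning on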

1.8.5 Call Home email

The Call Home feature transmits operational and error-related data to you and IBM through a Simple Mail Transfer Protocol (SMTP) server connection in the form of an event notification email. When configured, this function alerts IBM service personnel about hardware failures and potentially serious configuration or environmental issues. You can use the Call Home function if you have a maintenance contract with IBM or if the IBM Storwize V5000 Gen2 is within the warranty period.
To send email, you must configure at least one SMTP server. You can specify as many as five other SMTP servers for backup purposes. The SMTP server must accept the relaying of email from the IBM Storwize V5000 Gen2 clustered system IP address. You can then use the management GUI or the CLI to configure the email settings, including contact information and email recipients.
Set the reply address to a valid email address. Send a test email to check that all connections and infrastructure are set up correctly. You can disable the Call Home function at any time by using the management GUI or the CLI.
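A minimal CLI sketch of this setup follows (all addresses and contact details are examples):
svctask mkemailserver -ip 192.168.1.60 -port 25
svctask chemail -reply storage.admin@example.com -contact "John Doe" -primary 5550100 -location "Building 22, first floor"
svctask mkemailuser -address storage.admin@example.com -error on -warning on
svctask testemail -all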

1.9 More information resources

This section describes resources that are available for more information.

1.9.1 Useful IBM Storwize V5000 Gen2 websites

For more information about IBM Storwize V5000 Gen2, see the following websites:
򐂰 The IBM Storwize V5000 Gen2 home page:
https://ibm.biz/BdZsPJ
򐂰 IBM Storwize V5000 Gen2 Knowledge Center:
https://www.ibm.com/support/knowledgecenter/STHGUJ
򐂰 IBM Storwize V5000 Gen2 Online Announcement Letter:
https://ibm.biz/BdrbEZ
The Online Information Center also includes a Learning and Tutorial section where you can obtain videos that describe the use and implementation of the IBM Storwize V5000 Gen2.

Chapter 2. Initial configuration

This chapter describes the initial configuration steps for the IBM Storwize V5000 Gen2.
Specifically, this chapter provides information about the following topics:
򐂰 Hardware installation planning
򐂰 SAN configuration planning
򐂰 FC direct-attach planning
򐂰 SAS direct-attach planning
򐂰 LAN configuration planning
򐂰 Host configuration planning
򐂰 Miscellaneous configuration planning
򐂰 System management
򐂰 First-time setup
򐂰 Initial configuration

2.1 Hardware installation planning

After you verify that you have all of the hardware components that you purchased, it is important to perform the correct planning before the actual physical installation. The following checklist of requirements can be used to plan your installation:
Install the hardware as described in Chapter 2 of the IBM Storwize V5000 Gen2 Quick Installation Guide, GC27-8597, which is available here:
https://www.ibm.com/support/home/product/5455835/IBM%20Storwize%20V5000
An appropriate 19-inch rack must be available. Depending on the number of enclosures to
install, more than one might be required. Each enclosure measures 2 U. A single Storwize V5010 or Storwize V5020 control enclosure supports up to 10 expansion enclosures. A single Storwize V5030 control enclosure supports up to 20 expansion enclosures.
Redundant power outlets must be in the rack for each of the two power cords that are
required for each enclosure to be installed. Several power outlets are required, depending on the number of enclosures to be installed. The power cords conform to the IEC320 C13/C14 standards.
A minimum of four Fibre Channel ports that are attached to redundant fabrics are required.
For dual I/O group systems, a minimum of eight Fibre Channel ports are required.
Fibre Channel ports: Fibre Channel (FC) ports are required only if you are using FC hosts or clustered systems that are arranged as two I/O groups. You can use the IBM Storwize V5000 Gen2 with Ethernet-only cabling for Internet Small Computer System Interface (iSCSI) hosts or use serial-attached SCSI (SAS) cabling for hosts that are directly attached.
For the Storwize V5020 systems, up to two hosts can be directly connected by using SAS
ports 2 and 3 on each node canister, with SFF-8644 mini SAS HD cabling.
You must have a minimum of two Ethernet ports on the LAN, with four preferred for more
redundancy or iSCSI host access.
You must have a minimum of two Ethernet cable drops, with four preferred for more
redundancy or iSCSI host access. If you have two I/O groups, you must have a minimum of four Ethernet cable drops. Ethernet port 1 on each node canister must be connected to the LAN, with port two as optional.
LAN connectivity: Port 1 on each node canister must be connected to the same physical local area network (LAN) or be configured in the same virtual LAN (VLAN) and be on the same subnet or set of subnets.
Technician port: On the Storwize V5010 and V5020 models, port 2 is the technician port, which is used for system initialization and service. Port 2 must not be connected to the LAN until the system initialization or service is complete.
The Storwize V5030 model has a dedicated technician port.
The 10 Gb Ethernet (copper) ports of a Storwize V5030 system require a Category 6A
shielded cable that is terminated with an 8P8C modular connector (RJ45 compatible connector) to function at 10 Gb.
Verify that the default IP addresses that are configured on Ethernet port 1 on each of the
node canisters (192.168.70.121 on node 1 and 192.168.70.122 on node 2) do not conflict with existing IP addresses on the LAN. The default mask that is used with these IP addresses is 255.255.255.0, and the default gateway address that is used is
192.168.70.1.
You need a minimum of three IPv4 or IPv6 IP addresses for systems that are arranged as one I/O group and a minimum of five if you have two I/O groups. One address is for the clustered system and is used by the administrator for management, and one is needed for each node canister for service access.
IP addresses: An additional IP address must be used for backup configuration access. This other IP address allows a second system IP address to be configured on port 2 of either node canister, which the storage administrator can also use for management of the IBM Storwize V5000 Gen2 system.
A minimum of one and up to eight IPv4 or IPv6 addresses are needed if iSCSI-attached
hosts access volumes from the IBM Storwize V5000 Gen2.
At least two 0.6-meter (1.96 feet), 1.5-meter (4.9 feet), or 3-meter (9.8 feet) 12 Gb
mini-SAS cables are required for each expansion enclosure. The length of the cables depends on the physical rack location of the expansion enclosure relative to the control enclosures or other expansion enclosures.

2.1.1 Procedure to install the SAS cables

We show the procedures to install the SAS cables for the different models.
Storwize V5010 and Storwize V5020
The Storwize V5010 and Storwize V5020 support up to 10 expansion enclosures in a single chain. To install the cables, complete the following steps:
1. By using the supplied SAS cables, connect the control enclosure to the first expansion enclosure:
a. Connect SAS port 1 of the left node canister in the control enclosure to SAS port 1 of
the left expansion canister in the first expansion enclosure.
b. Connect SAS port 1 of the right node canister in the control enclosure to SAS port 1 of
the right expansion canister in the first expansion enclosure.
2. To connect a second expansion enclosure, use the supplied SAS cables to connect it to the previous enclosure in the chain:
a. Connect SAS port 2 of the left canister in the previous expansion enclosure to SAS
port 1 of the left expansion canister in the next expansion enclosure.
b. Connect SAS port 2 of the right canister in the previous expansion enclosure to SAS
port 1 of the right expansion canister in the next expansion enclosure.
3. Repeat the previous steps until all expansion enclosures are connected.
Figure 2-1 shows how to cable a Storwize V5010.
Figure 2-1 Storwize V5010 SAS cabling
Figure 2-2 shows how to cable a Storwize V5020.
Figure 2-2 Storwize V5020 SAS cabling
Storwize V5030
The Storwize V5030 supports up to 20 expansion enclosures per I/O group in two SAS chains of 10. Up to 40 expansion enclosures can be supported in a two I/O group configuration. To install the cables, complete the following steps:
1. By using the supplied SAS cables, connect the control enclosure to the first expansion enclosure by using the first chain:
a. Connect SAS port 1 of the left node canister in the control enclosure to SAS port 1 of
the left expansion canister in the first expansion enclosure.
b. Connect SAS port 1 of the right node canister in the control enclosure to SAS port 1 of
the right expansion canister in the first expansion enclosure.
2. To connect a second expansion enclosure, use the supplied SAS cables to connect the control enclosure to the expansion enclosure by using the second chain:
a. Connect SAS port 2 of the left node canister in the control enclosure to SAS port 1 of
the left expansion canister in the second expansion enclosure.
b. Connect SAS port 2 of the right node canister in the control enclosure to SAS port 1 of
the right expansion canister in the second expansion enclosure.
3. To connect additional expansion enclosures, alternate connecting them between chain one and chain two to keep the configuration balanced:
a. Connect SAS port 2 of the left canister in the previous expansion enclosure to SAS
port 1 of the left expansion canister in the next expansion enclosure.
b. Connect SAS port 2 of the right canister in the previous expansion enclosure to SAS
port 1 of the right expansion canister in the next expansion enclosure.
4. Repeat the steps until all expansion enclosures are connected.
Figure 2-3 shows how to cable a Storwize V5030.
Figure 2-3 Storwize V5030 cabling

2.2 SAN configuration planning

To connect a Storwize V5000 Gen2 system to a Fibre Channel (FC) SAN, you must first install the optional 16 Gb FC adapters in every node canister that you connect. Ensure that you use the correct fibre cables to connect the Storwize V5000 Gen2 to the Fibre Channel SAN. With the FC cards installed, the Storwize V5000 Gen2 can be used to interface with FC hosts, external storage controllers, and other Storwize systems that are visible on the SAN fabric.
The Storwize V5010 and V5020 models support a single I/O group only and can migrate from external storage controllers only. The Storwize V5030 supports up to two I/O groups that form a cluster over the FC fabric. The Storwize V5030 also supports full virtualization of external storage controllers.
The advised SAN configuration consists of a minimum of two fabrics that encompass all host ports and any ports on external storage systems that are to be virtualized by the IBM Storwize V5000 Gen2. The IBM Storwize V5000 Gen2 ports must have the same number of cables that are connected, and they must be evenly split between the two fabrics to provide redundancy if one of the fabrics goes offline (planned or unplanned).
Zoning must be implemented after the IBM Storwize V5000 Gen2, hosts, and optional external storage systems are connected to the SAN fabrics. To enable the node canisters to communicate with each other in band, create a zone with only the IBM Storwize V5000 Gen2 WWPNs (two from each node canister) on each of the two fabrics.
If an external storage system is to be virtualized, create a zone in each fabric with the IBM Storwize V5000 Gen2 worldwide port names (WWPNs) (two from each node canister) with up to a maximum of eight WWPNs from the external storage system. Assume that every host has a Fibre Channel connection to each fabric. Create a zone with the host WWPN and one WWPN from each node canister in the IBM Storwize V5000 Gen2 system in each fabric.
Important: It is critical that only one initiator host bus adapter (HBA) is in any zone.
For load balancing between the node ports on the IBM Storwize V5000 Gen2, alternate the host Fibre Channel ports between the ports of the IBM Storwize V5000 Gen2.
A maximum of eight paths through the SAN are allowed from each host to the IBM Storwize V5000 Gen2. Hosts where this number is exceeded are not supported. The restriction limits the number of paths that the multipathing driver must resolve. A host with only two HBAs must not exceed this limit with the correct zoning in a dual fabric SAN.
Maximum ports or WWPNs: The IBM Storwize V5000 Gen2 supports a maximum of 16 ports or WWPNs from a virtualized external storage system.
Figure 2-4 shows how to cable devices to the SAN. Optionally, ports 3 and 4 can be connected to SAN fabrics to provide additional redundancy and throughput. Refer to this example as the zoning is described.
Figure 2-4 SAN cabling and zoning diagram
Create a host/IBM Storwize V5000 Gen2 zone for each server that volumes are mapped to and from the clustered system, as shown in the following examples in Figure 2-4 on page 41:
򐂰 Zone Host A port 1 (HBA 1) with all node canister ports 1
򐂰 Zone Host A port 2 (HBA 2) with all node canister ports 2
򐂰 Zone Host B port 1 (HBA 1) with all node canister ports 3
򐂰 Zone Host B port 2 (HBA 2) with all node canister ports 4
Similar zones must be created for all other hosts with volumes on the IBM Storwize V5000 Gen2 I/O groups.
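On a Brocade fabric, for example, such a zone might be created as follows (the zone and configuration names, and the aliases that stand for the host and node WWPNs, are placeholders for your environment; use cfgcreate instead of cfgadd if the configuration does not yet exist):
zonecreate "HostA_P1__V5000", "HostA_P1; V5000_N1P1; V5000_N2P1"
cfgadd "Fabric1_cfg", "HostA_P1__V5000"
cfgenable "Fabric1_cfg"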
Verify the interoperability with which the IBM Storwize V5000 Gen2 connects to SAN switches or directors by following the requirements that are provided at this website:
https://ibm.biz/BdrDWV
Ensure that switches or directors are at the firmware levels that are supported by the IBM Storwize V5000 Gen2.
Important: The IBM Storwize V5000 Gen2 port login maximum that is listed in the restriction document must not be exceeded. The document is available at this website:
https://ibm.biz/BdjSJx
Connectivity issues: If any connectivity issues occur between the IBM Storwize V5000 Gen2 ports and the Brocade SAN switches or directors at 8 Gbps, see this website for the correct setting of the fillword port config parameter in the Brocade operating system:
https://ibm.biz/Bdrb4g

2.3 FC direct-attach planning

The IBM Storwize V5000 Gen2 can be used with a direct-attach Fibre Channel host configuration. The advised configuration for direct attachment is at least one Fibre Channel cable from the host that is connected to each node of the IBM Storwize V5000 Gen2 to provide redundancy if one of the nodes goes offline, as shown in Figure 2-5 on page 43.
Figure 2-5 FC direct-attach host configuration
If your direct-attach Fibre Channel host requires connectivity to both of the IBM Storwize V5000 Gen2 I/O groups (the Storwize V5030), we suggest that at least one Fibre Channel cable is used from the host to each of the node canisters of the IBM Storwize V5000 Gen2, as shown in Figure 2-6. This suggestion also applies to a cluster where one I/O group is a Storwize V5000 Gen1 and the other I/O group is a Storwize V5030.
Figure 2-6 FC direct-attach host configuration to I/O groups
Verify direct-attach interoperability with the IBM Storwize V5000 Gen2 and the supported server operating systems by following the requirements that are provided at this website:
https://ibm.biz/BdrDsy

2.4 SAS direct-attach planning

The Storwize V5000 Gen2 allows SAS host attachment by using an optional SAS card that must be installed in both node canisters. In addition, the Storwize V5020 has two onboard SAS ports for host attachment. The SAS expansion ports cannot be used for host attachment. Figure 2-7, Figure 2-8, and Figure 2-9 show the SAS ports that can be used for host attachment in yellow for each Storwize V5000 Gen2 model.
Figure 2-7 Storwize V5010 SAS host attachment
Figure 2-8 Storwize V5020 SAS host attachment
Figure 2-9 Storwize V5030 SAS host attachment
Inserting cables: You can insert the cables upside down despite the keyway. Ensure that
the blue tag on the SAS connector is underneath when you insert the cables.
We suggest that each SAS host is connected to both node canisters, because this approach provides redundancy in a path or canister failure.

2.5 LAN configuration planning

Two Ethernet ports per node canister are available for connection to the LAN on an IBM Storwize V5000 Gen2 system. Use Ethernet port 1 to access the management graphical user interface (GUI), the service assistant GUI for the node canister, and iSCSI host attachment. Port 2 can be used for the management GUI and iSCSI host attachment.
Each node canister in a control enclosure connects over an Ethernet cable from Ethernet port 1 of the canister to an enabled port on your Ethernet switch or router. Optionally, you can attach an Ethernet cable from Ethernet port 2 on the canister to your Ethernet network.
Configuring IP addresses: No issue exists with the configuration of multiple IPv4 or IPv6 addresses on an Ethernet port or with the use of the same Ethernet port for management and iSCSI access.
However, you cannot use the same IP address for management and iSCSI host use.
Table 2-1 shows possible IP configuration options of the Ethernet ports on the IBM Storwize V5000 Gen2 system.
Table 2-1 Storwize V5000 Gen2 IP address configuration options per node canister
Storwize V5000 Gen2 management node canister 1:
򐂰 Ethernet port 1: IPv4/6 management address, IPv4/6 service address, and IPv4/6 iSCSI address
򐂰 Ethernet port 2: IPv4/6 management address and IPv4/6 iSCSI address
Storwize V5000 Gen2 partner node canister 2:
򐂰 Ethernet port 1: IPv4/6 service address and IPv4/6 iSCSI address
򐂰 Ethernet port 2: IPv4/6 iSCSI address
IP management addresses: The IP management address that is shown on node canister
1 in Table 2-1 is an address on the configuration node. If a failover occurs, this address transfers to node canister 2, and this node canister becomes the new configuration node. The management addresses are managed by the configuration node canister only (1 or 2, and in this case, node canister 1).
Technician port: On the Storwize V5010 and V5020 models, port 2 serves as the technician port, which is used for system initialization and service. Port 2 must not be connected to the LAN until the system initialization or service is complete.
The Storwize V5030 model has a dedicated technician port.

2.5.1 Management IP address considerations

Because Ethernet port 1 from each node canister must connect to the LAN, a single management IP address for the clustered system is configured as part of the initial setup of the IBM Storwize V5000 Gen2 system.
The management IP address is associated with one of the node canisters in the clustered system and that node then becomes the configuration node. If this node goes offline (planned or unplanned), the management IP address fails over to the other node’s Ethernet port 1.
For more clustered system management redundancy, you need to connect Ethernet port 2 on each of the node canisters to the LAN, which allows for a backup management IP address to be configured for access, if necessary.
Figure 2-10 shows a logical view of the Ethernet ports that are available for the configuration of the one or two management IP addresses. These IP addresses are for the clustered system and associated with only one node, which is then considered the configuration node.
Figure 2-10 Ethernet ports that are available for configuration

2.5.2 Service IP address considerations

Ethernet port 1 on each node canister is used for system management and for service access, when required. In normal operation, the service IP addresses are not needed. However, if a node canister problem occurs, it might be necessary for service personnel to log on to the node to perform service actions.
Figure 2-11 shows a logical view of the Ethernet ports that are available for the configuration of the service IP addresses. Only port one on each node can be configured with a service IP address.
Figure 2-11 Service IP addresses that are available for configuration

2.6 Host configuration planning

Hosts must have two Fibre Channel connections for redundancy, but the IBM Storwize V5000 Gen2 also supports hosts with a single HBA port connection. However, if that HBA loses its link to the SAN fabric or the fabric fails, the host loses access to its volumes.
Even with a single connection to the SAN, the host has multiple paths to the IBM Storwize V5000 Gen2 volumes because that single connection must be zoned with at least one Fibre Channel port per node. Therefore, multipath software is required. Multipath software is required also for direct-attach SAS hosts. They can connect by using a single host port, but for redundancy, two SAS connections per host are advised.
If two connections per host are used, multipath software is also required on the host. If an iSCSI host is deployed, it also requires multipath software. All node canisters must be configured and connected to the network so that any iSCSI hosts see at least two paths to volumes. Multipath software is required to manage these paths.
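For example, FC, iSCSI, and SAS host objects are created with variants of the same CLI command (the names, WWPN, and IQN are placeholders), and volumes are then mapped to them:
svctask mkhost -name HostA -fcwwpn 2100000E1E30B0A8
svctask mkhost -name HostB -iscsiname iqn.1998-01.com.vmware:hostb-3a4b5c6d
svctask mkvdiskhostmap -host HostA vol01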
Various operating systems are supported by the IBM Storwize V5000 Gen2. For more information about various configurations supported, check the IBM System Storage Interoperation Center (SSIC) website at the following address:
https://ibm.biz/BdxQhe
For more information, see Chapter 5, “Host configuration” on page 199.

2.7 Miscellaneous configuration planning

During the initial setup of the IBM Storwize V5000 Gen2 system, the installation wizard asks for various information that needs to be available during the installation process. Several of these fields are mandatory to complete the initial configuration.
Collect the information in the following checklist before the initial setup is performed. The date and time can be manually entered, but to keep the clock synchronized, use a Network Time Protocol (NTP) service:
Document the LAN NTP server IP address that is used for the synchronization of devices.
To send alerts to storage administrators and to set up Call Home to IBM for service and support, you need the following information:
support, you need the following information:
Name of the primary storage administrator for IBM to contact, if necessary.
Email address of the storage administrator for IBM to contact, if necessary.
Phone number of the storage administrator for IBM to contact, if necessary.
Physical location of the IBM Storwize V5000 Gen2 system for IBM service (for example, Building 22, first floor).
Simple Mail Transfer Protocol (SMTP) or email server address to direct alerts to and
from the IBM Storwize V5000 Gen2.
For the Call Home service to work, the IBM Storwize V5000 Gen2 system must have
access to an SMTP server on the LAN that can forward emails to the default IBM service address: callhome1@de.ibm.com for Americas-based systems and callhome0@de.ibm.com for the rest of the world.
Email address of local administrators that must be notified of alerts.
IP address of the Simple Network Management Protocol (SNMP) server to direct alerts to, if required (for example, operations or Help desk).
After the IBM Storwize V5000 Gen2 initial configuration, you might want to add more users who can manage the system. You can create as many users as you need, but the following roles generally are configured for users:
򐂰 Security Admin
򐂰 Administrator
򐂰 CopyOperator
򐂰 Service
򐂰 Monitor
The user in the Security Admin role can perform any function on the IBM Storwize V5000 Gen2.
The user in the Administrator role can perform any function on the IBM Storwize V5000 Gen2 system, except manage users.
User creation: The Security Admin role is the only role that has the create users function. Limit this role to as few users as possible.
The user in the CopyOperator role can view anything in the system, but the user can configure and manage only the copy functions of the FlashCopy capabilities.
The user in the Monitor role can view object and system configuration information but cannot configure, manage, or modify any system resource.
The only other role that is available is the service role, which is used if you create a user ID for the IBM service support representative (SSR). With this user role, IBM service personnel can view anything on the system (as with the Monitor role) and perform service-related commands, such as adding a node back to the system after it is serviced or including disks that were excluded.

2.8 System management

The graphical user interface (GUI) is used to configure, manage, and troubleshoot the IBM Storwize V5000 Gen2 system. It is used primarily to configure Redundant Array of Independent Disks (RAID) arrays and logical drives, assign logical drives to hosts, replace and rebuild failed disk drives, and expand the logical drives.
It allows for troubleshooting and management tasks, such as checking the status of the storage server components, updating the firmware, and managing the storage server.
The GUI also offers advanced functions, such as FlashCopy, Volume Mirroring, Remote Mirroring, and Easy Tier. A command-line interface (CLI) for the IBM Storwize V5000 Gen2 system also is available.
This section describes system management by using the GUI and CLI.

2.8.1 Graphical user interface (GUI)

A web browser is used for GUI access, and it must be a supported web browser. At the time of writing, the Storwize V5000 Gen2 supports the following browsers:
򐂰 Mozilla Firefox 54
򐂰 Mozilla Firefox Extended Support Release (ESR) 52
򐂰 Microsoft Internet Explorer (IE) 11 and Microsoft Edge 40
򐂰 Google Chrome 59
Supported web browsers: Follow this link to find more information about supported browsers and to check the latest supported levels:
https://ibm.biz/BdjSJ2
Complete the following steps to open the management GUI from any web browser:
1. Browse to one of the following locations:
http(s)://host name of your cluster/
http(s)://cluster IP address of your cluster/
(An example is https://192.168.70.120.)
2. Use the password that you created during system setup to authenticate with the superuser account or any additional accounts that you created. The default user name and password for the management GUI are shown:
– User name: superuser
– Password: passw0rd
Note: The 0 character in the password is the number zero, not the letter O.
For more information, see Chapter 3, “Graphical user interface overview” on page 77.
After you complete the initial configuration that is described in 2.10, “Initial configuration” on page 55, the IBM Storwize V5000 Gen2 System overview window opens, as shown in Figure 2-12.
Figure 2-12 Setup wizard: Overview window

2.8.2 Command-line interface (CLI)

The command-line interface (CLI) is a flexible tool for system management that uses the Secure Shell (SSH) protocol. A public/private SSH key pair is optional for SSH access. The storage system can be managed by using the CLI, as shown in Example 2-1.
Example 2-1 System management by using the command-line interface
IBM_Storwize:ITSO-V5000:superuser>svcinfo lsenclosureslot
enclosure_id slot_id port_1_status port_2_status drive_present drive_id
1            1       online        online        yes           10
1            2       online        online        yes           11
1            3       online        online        yes           15
1            4       online        online        yes           16
1            5       online        online        yes           12
1            6       online        online        yes           4
1            7       online        online        yes           7
1            8       online        online        yes           8
1            9       online        online        yes           9
1            10      online        online        yes           5
1            11      online        online        yes           18
1            12      online        online        yes           14
1            13      online        online        yes           13
1            14      online        online        yes           2
1            15      online        online        yes           6
1            16      online        online        yes           3
1            17      online        online        yes           1
1            18      online        online        yes           0
1            19      online        online        yes           20
1            20      online        online        no
1            21      online        online        yes           19
1            22      online        online        yes           21
1            23      online        online        yes           22
1            24      online        online        yes           17
2            1       online        online        yes           25
2            2       online        online        yes           27
2            3       online        online        no
2            4       online        online        yes           31
2            5       online        online        yes           24
2            6       online        online        yes           26
2            7       online        online        yes           33
2            8       online        online        yes           32
2            9       online        online        yes           23
2            10      online        online        yes           28
2            11      online        online        yes           29
2            12      online        online        yes           30
IBM_Storwize:ITSO-V5000:superuser>
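As a minimal illustration (assuming the cluster management IP address that is used elsewhere in this chapter and the default superuser account), the CLI can be reached from any workstation with an SSH client:

ssh superuser@192.168.70.120

After authentication, any information or task command, such as lssystem, can be run at the prompt.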
You can set up the initial IBM Storwize V5000 Gen2 system by using the process and tools that are described in 2.9, “First-time setup” on page 51.

2.9 First-time setup

This section describes how to perform the first-time setup of the IBM Storwize V5000 Gen2 system.
Before you set up the initial IBM Storwize V5000 Gen2 system, ensure that the system is powered on.
Power on: See the following information to check the power status of the system:
https://ibm.biz/BdjSJz
Set up the IBM Storwize V5000 Gen2 system by using the technician Ethernet port:
1. Configure an Ethernet port on the personal computer to enable the Dynamic Host Configuration Protocol (DHCP) configuration of its IP address and Domain Name System (DNS) settings.
If you do not use DHCP, you must manually configure the personal computer. Specify the static IPv4 address 192.168.0.2, subnet mask 255.255.255.0, gateway 192.168.0.1, and DNS 192.168.0.1.
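As a hypothetical sketch for a Linux workstation (the interface name eth0 is an assumption; substitute the name of your own Ethernet port), the static fallback addresses can be set as follows:

# Assign the static address and default gateway for the technician port connection
ip addr add 192.168.0.2/24 dev eth0
ip route add default via 192.168.0.1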
2. Locate the Ethernet port that is labeled T on the rear of the node canister.
On the Storwize V5010 and Storwize V5020 systems, the second Ethernet port is also used as the technician port, as shown in Figure 2-13 and Figure 2-14.
Figure 2-13 Storwize V5010 technician port
Figure 2-14 Storwize V5020 technician port
The Storwize V5030 systems use a dedicated technician port, which is shown in Figure 2-15.
Figure 2-15 Storwize V5030 technician port
3. Connect an Ethernet cable between the port of the personal computer that is configured in step 1 and the technician port. After the connection is made, the system automatically configures the IP address and DNS settings for the personal computer if DHCP is available. If it is not available, the system uses the values that you provided in step 1.
4. After the Ethernet port of the personal computer connects, open a supported browser and browse to the address http://install. (If you do not have DHCP, open a supported browser and go to this static IP address: 192.168.0.1.) The browser is automatically directed to the initialization tool, as shown in Figure 2-16.
Figure 2-16 System initialization: Welcome
5. If you experience a problem when you try to connect due to a change in system states, wait 5 - 10 seconds and try again.
6. Click Next, as shown in Figure 2-17.
Figure 2-17 System initialization node usage
7. Choose the first option to set up the node as a new system and click Next to continue to the window that is shown in Figure 2-18.
Figure 2-18 System initialization: Create a New System
8. Complete all of the fields with the networking details for managing the system and click Next. When the task completes, as shown in Figure 2-19, click Close.
Figure 2-19 System initialization: Cluster creation
Note: The IBM Storwize V5000 Gen2 GUI shows the CLI as you go through the configuration steps.
9. The system takes approximately 10 minutes to reboot and reconfigure the Web Server as shown in Figure 2-20 on page 55. After this time, click Next to proceed to the final step.
Figure 2-20 System Initialization: Restarting Web Server
10.After you complete the initialization process, disconnect the cable between the personal computer and the technician port as instructed in Figure 2-21. Reestablish the connection to the customer network and click Next to be redirected to the management address that you provided to configure the system initially.
Figure 2-21 System initialization: Completion summary

2.10 Initial configuration

This section describes how to complete the initial configuration, including the following tasks:
򐂰 System components verification
򐂰 Email event notifications
򐂰 System name, date, and time settings
򐂰 License functions
򐂰 Initial storage configuration
򐂰 Initial configuration summary
If you completed the initial setup, that wizard automatically redirects you to the IBM Storwize V5000 Gen2 GUI. Otherwise, complete the following steps to complete the initial configuration process:
1. Start the service configuration wizard by using a web browser on a workstation and point it to the system management IP address that was defined in Figure 2-18 on page 54.
2. Enter a new secure password twice for the superuser user, as shown in Figure 2-22 and click Log in.
Figure 2-22 Setup wizard: Password prompt
3. Verify the prerequisites in the Welcome window as shown in Figure 2-23 and click Next.
Figure 2-23 Setup wizard: Welcome
4. Accept the license agreement after reading it carefully as shown in Figure 2-24 on page 58 and click Next.
Figure 2-24 Setup wizard: License agreement
5. Change the password for superuser from the default, as shown in Figure 2-25, then click Apply and Next.
Figure 2-25 Setup wizard: Change password
6. You see the message "The password was successfully changed", as shown in Figure 2-26.
Figure 2-26 Setup wizard: Password changed
7. In the System Name window, enter the system name and click Apply and Next, as shown in Figure 2-27.
Figure 2-27 Setup wizard: System Name
Note: Use the chsystem command to modify the attributes of the clustered system. This command can be used any time after a system is created.
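For example, a minimal sketch of renaming the system from the CLI (the system name is a placeholder):

# Rename the clustered system
svctask chsystem -name ITSO-V5000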
8. In the next window, the IBM Storwize V5000 Gen2 GUI provides help and guidance about additional licenses that are required for certain system functions. A license must be purchased for each enclosure that is attached to, or externally managed by, the IBM Storwize V5000 Gen2. For each of the functions, enter the number of enclosures, as shown in Figure 2-28. Then click Apply and Next.
Figure 2-28 Setup wizard: Licensed Functions
The following actions are required for each of the licensed functions:
– FlashCopy: Enter the number of enclosures that are licensed to use the FlashCopy function.
– Remote copy: Enter the number of Remote Mirroring licenses. This license setting enables the use of the Metro Mirror and Global Mirror functions. This value must be equal to the number of enclosures that are licensed for external virtualization, plus the number of attached internal enclosures.
– Easy Tier: Enter the number of enclosures that are licensed to use the Easy Tier function.
– External Virtualization: Enter the number of external enclosures that you are virtualizing. An external virtualization license is required for each physical enclosure that is attached to your system.
– Real-time Compression (RtC): Enter the number of enclosures that are licensed to use RtC.
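These enclosure counts can also be set from the CLI with the chlicense command. The following sketch is illustrative only; the counts are placeholders, and you should confirm the exact parameter names for your code level:

# Set the per-function enclosure license counts (example values)
svctask chlicense -flash 2
svctask chlicense -remote 2
svctask chlicense -virtualization 1
svctask chlicense -compression 2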
Encryption license: The encryption feature that is available on the Storwize V5020 and V5030 systems uses a special licensing system that differs from the licensing system for the other features. Encryption requires a license key that can be activated in step 10.
9. Two options are available for configuring the date and time. Select the required method and enter the date and time manually or specify a network address for a Network Time Protocol (NTP) server. After this selection, the Apply and Next option becomes active, as shown in Figure 2-29. Click Apply and Next.
Figure 2-29 Setup wizard: Date and Time
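If you prefer the CLI, an NTP server can also be set after the initial configuration with the chsystem command (the address below is a placeholder):

# Point the system clock at a LAN NTP server
svctask chsystem -ntpip 192.168.1.100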
10.If you purchased an Encryption License for a Storwize V5020 or Storwize V5030 system, select Yes as shown in Figure 2-30. One license is required for each control enclosure. Therefore, in a V5030 configuration with two I/O groups, two license keys are required.
Figure 2-30 Setup wizard: Encryption feature
11.The easiest way to activate the encryption license is to highlight each enclosure for which you want to activate the license, choose Actions → Activate License Automatically, and enter the authorization code that came with the purchase agreement for encryption. This action retrieves and applies a license key from ibm.com, as shown in Figure 2-31.
Figure 2-31 Setup wizard: Encryption license activation
12.If automatic activation cannot be performed, for example, if the Storwize V5000 Gen2 system is behind a firewall that prevents it from accessing the internet, choose Actions → Activate License Manually. Follow these steps:
a. Go to this website:
https://www.ibm.com/storage/dsfa
b. Select Storwize. Enter the machine type (2077 or 2078), serial number, and machine signature of the system. You can obtain this information by clicking Need Help.
c. Enter the authorization codes that were sent with your purchase agreement for the encryption function.
d. Copy or download the key and paste it into the management GUI to activate the license.
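A license key that was obtained manually can also be applied from the CLI with the activatefeature command (the key below is a placeholder):

# Apply an encryption license key that was downloaded from the DSFA website
svctask activatefeature -licensekey 0123-4567-89AB-CDEF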
13.When all licenses are active, click Next to set up the system location, as shown in Figure 2-32.
Figure 2-32 Setup wizard: system location
14.After entering the system location, click Next to set up the contact person for the system as shown in Figure 2-33, then click Apply and Next.
Figure 2-33 Setup wizard: contact person
15.You can configure your system to send email reports to IBM if an issue is detected that requires hardware replacement. This function is called Call Home. When this email is received, IBM automatically opens a problem report and contacts you to verify whether replacement parts are required.
Call Home: When Call Home is configured, the IBM Storwize V5000 Gen2 automatically creates a Support Contact with one of the following email addresses, depending on the country or region of installation:
򐂰 US, Canada, Latin America, and Caribbean Islands: callhome1@de.ibm.com
򐂰 All other countries or regions: callhome0@de.ibm.com
The IBM Storwize V5000 Gen2 can use Simple Network Management Protocol (SNMP) traps, syslog messages, and Call Home email to notify you and the IBM Support Center when significant events are detected. Any combination of these notification methods can be used simultaneously.
To set up Call Home, you need the location details of the IBM Storwize V5000 Gen2, Storage Administrator details, and at least one valid SMTP server IP address as shown in Figure 2-34.
Figure 2-34 Setup wizard: Email server details
Note: If you do not want to configure Call Home now, you can defer it by using the check box in the GUI and return to it later via Settings → Notifications.
If your system is under warranty or you bought a hardware maintenance agreement, we advise you to configure Call Home to enable proactive support of the IBM Storwize V5000 Gen2.
To enter more than one email server, click the plus sign (+) icon, as shown in Figure 2-34. Then, click Apply and Next to commit.
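The same email notification settings can be scripted from the CLI. The following is a minimal sketch with placeholder addresses and contact details:

# Define the SMTP server that forwards alerts and Call Home email
svctask mkemailserver -ip 192.168.1.25 -port 25
# Set the contact details that IBM uses when it responds to a Call Home report
svctask chemail -reply admin@example.com -contact "John Doe" -primary 5550100 -location "Building 22"
# Register the IBM Call Home address and enable error and inventory reporting
svctask mkemailuser -address callhome1@de.ibm.com -usertype support -error on -inventory on
# Activate the email notification function
svctask startemail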
16.The next window is for setting up support assistance if desired, as shown in Figure 2-35.
Figure 2-35 Initial setup: Support Assistance
In our setup, we chose to set up the support assistance later, because it is covered extensively in Chapter 12, “RAS, monitoring, and troubleshooting” on page 661.
17.The Summary window for the contact details, system location, email server, Call Home, and email notification options is shown in Figure 2-36.
Figure 2-36 Setup wizard: Summary
18.After you click Finish, the web browser is redirected to the landing page of management GUI as shown in Figure 2-37.
Figure 2-37 Landing page of management GUI

2.10.1 Adding enclosures after the initial configuration

When the initial installation of the IBM Storwize V5000 Gen2 is complete, all expansion enclosures and control enclosures that were purchased at that time must be installed as part of the initial configuration. This process enables the system to make the best use of the enclosures and drives that are available.
Adding a control enclosure
If you are expanding the IBM Storwize V5000 Gen2 after the initial installation by adding a second I/O group (a second control enclosure), you must install it in the rack and connect it to the SAN. Ensure that you rezone your Fibre Channel switches so that the new control enclosure and the existing control enclosure are connected. For more information about zoning the node canisters, see 2.2, “SAN configuration planning” on page 40.
Note: Adding a second I/O group (by adding a second control enclosure) is supported only on the IBM Storwize V5000 Gen2 model V5030.
After the hardware is installed, cabled, zoned, and powered on, a second control enclosure is visible from the IBM Storwize V5000 Gen2 GUI, as shown in Figure 2-38.
Figure 2-38 Second control enclosure
Complete the following steps to use the management GUI to configure the new enclosure:
1. In the main window, click Actions in the upper-left corner and select Add Enclosures. Alternatively, you can click the available control enclosure as shown in Figure 2-39.
Figure 2-39 Option to add a control enclosure
2. If the control enclosure is configured correctly, the new control enclosure is identified in the next window, as shown in Figure 2-40.
Figure 2-40 New control enclosure identification
3. Select the control enclosure and click Actions → Identify to turn on the identify LEDs of the new enclosure, if required. Otherwise, click Next.
4. The new control enclosure is added to the system as shown in Figure 2-41. Click Finish to complete the operation.
Figure 2-41 Added enclosure summary
5. When the new enclosure is added, the storage that is provided by the internal drives is available to use as shown in Figure 2-42.
Figure 2-42 Adding storage completed
6. After the wizard completes the addition of the new control enclosure, the IBM Storwize V5000 Gen2 shows the management GUI that contains two I/O groups, as shown in Figure 2-43.
Figure 2-43 IBM Storwize V5000 Gen2 GUI with two I/O groups
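The equivalent task can also be performed from the CLI. The following is a sketch only (the serial number and I/O group ID are placeholders; confirm the commands against your code level):

# List control enclosures that are visible to the system but not yet part of it
svcinfo lscontrolenclosurecandidate
# Add the candidate enclosure as the second I/O group
svctask addcontrolenclosure -iogrp 1 -sernum 78G0000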
Adding an expansion enclosure
Complete the following steps to add an expansion enclosure:
1. To add an expansion enclosure, change to the Monitoring tab and select System. If no new hardware is shown, check your cabling to ensure that the new expansion enclosure is connected correctly and refresh the window.
In the main window, click Actions in the upper-left corner and select Add Enclosures. Alternatively, you can click the available expansion enclosure as shown in Figure 2-44.
Figure 2-44 Adding an expansion enclosure
2. If the enclosure is cabled correctly, the wizard identifies the candidate expansion enclosure. Select the expansion enclosure and click Next, as shown in Figure 2-45.
Figure 2-45 Expansion enclosure cable check
3. Select the expansion enclosure and click Actions → Identify to turn on the identify LEDs of the new enclosure, if required. Otherwise, click Next.
4. The new expansion enclosure is added to the system as shown in Figure 2-46. Click Finish to complete the operation.
Figure 2-46 Added enclosure summary
5. After the expansion enclosure is added, the IBM Storwize V5000 Gen2 shows the management GUI that contains two enclosures, as shown in Figure 2-47.
Figure 2-47 IBM Storwize V5000 Gen2 GUI with two enclosures in a single I/O group
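From the CLI, a correctly cabled expansion enclosure can be verified and brought under management as follows. This is a sketch only; the enclosure ID is a placeholder:

# Verify that the new enclosure is visible and check its managed state
svcinfo lsenclosure
# Set the new expansion enclosure to managed
svctask chenclosure -managed yes 2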

2.10.2 Service Assistant Tool

The IBM Storwize V5000 Gen2, as a single I/O group, is configured initially with three IP addresses: one service IP address for each node canister, and a management IP address, which is set when the cluster is started.
The management IP and service IP addresses can be changed within the GUI as shown in Chapter 3, “Graphical user interface overview” on page 77.
IBM Service Assistant (SA) Tool is a web-based GUI that is used to service individual node canisters, primarily when a node has a fault and it is in a service state. A node cannot be active as part of a clustered system while the node is in a service state. The SA Tool is available even when the management GUI is not accessible. The following information and tasks are included:
򐂰 Status information about the connections and the node canister
򐂰 Basic configuration information, such as configuring IP addresses
򐂰 Service tasks, such as restarting the Common Information Model object manager (CIMOM) and updating the worldwide node name (WWNN)
򐂰 Details about node error codes and hints about how to fix the node error
Important: Service Assistant Tool can be accessed by using the superuser account only. You must access Service Assistant Tool under the direction of IBM Support only.
The Service Assistance GUI is available by using a service assistant IP address on each node. The SA GUI is also accessible through the cluster IP address by appending /service to the cluster management URL. If the system is down, the only other method of communicating with the node canisters is through the SA IP address directly. Each node can have a single SA IP address on Ethernet port 1. We advise that these IP addresses are configured on all of the Storwize V5000 Gen2 node canisters.
The default IP address of canister 1 is 192.168.70.121 with a subnet mask of 255.255.255.0.
The default IP address of canister 2 is 192.168.70.122 with a subnet mask of 255.255.255.0.
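A service IP address can also be changed from the service CLI with the satask chserviceip command. The following is a minimal sketch with placeholder addresses; use it only under the direction of IBM Support:

# Set the service IP address, gateway, and subnet mask for the local node canister
satask chserviceip -serviceip 192.168.70.121 -gw 192.168.70.1 -mask 255.255.255.0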
To open the SA GUI, enter one of the following URLs into any web browser:
򐂰 http(s)://cluster IP address of your cluster/service
򐂰 http(s)://service IP address of a node/service
The following examples open the SA GUI:
򐂰 Management address: http://1.2.3.4/service
򐂰 SA access address: http://1.2.3.5/service
When you access SA by using the <cluster address>/service, the configuration node canister SA GUI login window opens, as shown in Figure 2-48.
Figure 2-48 Service Assistant Tool login
The SA interface can be used to view status and run service actions on the node to which the user is connected, and on the other nodes in the system.
After you are logged in, you see the Service Assistant Tool Home window, as shown in Figure 2-49.
Figure 2-49 Service Assistant Tool Home window
The current canister node is displayed in the upper-left corner of the GUI. As shown in Figure 2-49, the current canister node is node 1. To change the canister, select the relevant node in the Change Node section of the window. You see that the details in the upper-left corner change to reflect the new canister.
The SA GUI provides access to service procedures and shows the status of the node canisters. We advise that you perform these procedures only if you are directed to use them by IBM Support.
For more information about how to use the SA Tool, see the following website:
https://ibm.biz/BdjSJq
Chapter 3. Graphical user interface overview
This chapter provides an overview of the graphical user interface (GUI) of IBM Spectrum Virtualize on the IBM Storwize V5000 Gen2 and shows you how to use the navigation tools.
Specifically, this chapter provides information about the following topics:
򐂰 Overview of IBM Spectrum Virtualize management software
򐂰 Dashboard
򐂰 Monitoring menu
򐂰 Pools menu
򐂰 Volumes menu
򐂰 Hosts menu
򐂰 Copy services
򐂰 Access menu
򐂰 Settings menu

3.1 Overview of IBM Spectrum Virtualize management software

A GUI can simplify storage management and provide a fast and more efficient management tool. The IBM Spectrum Virtualize V8.1 GUI has significant changes from previous versions, such as the icons, color palette, and object locations. However, usability remains a priority, as in all IBM Spectrum products, and it is maintained in the GUI.
JavaScript: You must enable JavaScript in your browser. For Mozilla Firefox, JavaScript is enabled by default and requires no additional configuration. For more information about configuring your web browser, go to this website:
https://ibm.biz/BdjS9Z

3.1.1 Access to the storage management software

To access the Storwize V5000 Gen2, complete the following steps:
1. To log on to the management software, type the IP address that was set during the initial setup process into the address line of your web browser. You can connect from any workstation that can communicate with the system. The login window opens (Figure 3-1).
Figure 3-1 Login window
We suggest that each user who operates IBM Spectrum Virtualize has an account that is not shared with anyone else. The default user accounts should be made unavailable for remote access, or their passwords should be changed from the defaults and known only to the system owner, or kept secured for emergency purposes only.