HP P2000 G3 Reference Manual

HP StorageWorks P2000 G3 MSA System
SMU Reference Guide
Part number: 500911-005
First edition: September 2010
Legal and notice information
© Copyright 2010 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Acknowledgements
Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.
UNIX is a registered trademark of The Open Group.
Contents
About this guide. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Intended audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Prerequisites. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Related documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Document conventions and symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
HP technical support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Product warranties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Subscription service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
HP web sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Documentation feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1 Getting started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Configuring and provisioning a new storage system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Browser setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Signing in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Tips for signing in and signing out . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Tips for using the main window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Tips for using the help window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
System concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
About user accounts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Related topics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
About vdisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Related topics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
About spares. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Related topics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
About volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Related topics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
About hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Related topics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
About volume mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
About volume cache options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Using write-back or write-through caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Optimizing read-ahead caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Related topics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
About managing remote systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Related topics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
About the Snapshot feature. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Related topics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
About the Volume Copy feature. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Related topics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
About the VDS and VSS hardware providers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
About RAID levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
About size representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Related topics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
About the system date and time. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Related topics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
About storage-space color codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
About Configuration View icons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
About vdisk reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
About data protection in a single-controller storage system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2 Configuring the system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Using the Configuration Wizard. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Step 1: Starting the wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Step 2: Changing default passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Step 3: Configuring network ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Step 4: Enabling system-management services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Step 5: Setting system information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Step 6: Configuring event notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Step 7: Configuring host ports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Step 8: Confirming configuration changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Installing a license . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Configuring system services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Changing management interface settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Configuring email notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Configuring SNMP notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Configuring user accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Adding users. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Modifying users. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Removing users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Configuring system settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Changing the system date and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Changing host interface settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Changing network interface settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Setting system information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Configuring advanced settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Changing disk settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Configuring SMART . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Configuring drive spin down for available disks and global spares. . . . . . . . . . . . . . . . . . . . . . . . 49
Scheduling drive spin down for all disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Configuring dynamic spares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Configuring the EMP polling rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Changing system cache settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Changing the synchronize-cache mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Changing the missing LUN response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Controlling host access to the system's write-back cache setting . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Changing auto-write-through cache triggers and behaviors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Configuring partner firmware update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Configuring system utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Configuring background scrub for vdisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Configuring background scrub for disks not in vdisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Configuring utility priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Configuring remote systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Adding a remote system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Removing a remote system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Configuring a vdisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Managing dedicated spares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Changing a vdisk's name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Changing a vdisk's owner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Configuring drive spin down for a vdisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Configuring a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Changing a volume's name or OpenVMS UID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Changing a volume's cache settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Configuring a snapshot. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Changing a snapshot’s name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Configuring a snap pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Changing a snap pool’s name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3 Provisioning the system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Using the Provisioning Wizard. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Step 1: Starting the wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Step 2: Specifying the vdisk name and RAID level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Step 3: Selecting disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Step 4: Defining volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Step 5: Setting the default mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Step 6: Confirming vdisk settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Creating a vdisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Deleting vdisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Managing global spares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Creating a volume set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Creating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Deleting volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Changing default mapping for multiple volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Explicitly mapping multiple volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Changing a volume's default mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Changing a volume's explicit mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Unmapping volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Expanding a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Creating multiple snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Creating a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Deleting snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Resetting a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Creating a volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Aborting a volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Rolling back a volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Creating a snap pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Deleting snap pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Adding a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Removing hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Changing a host's name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Changing host mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Configuring CHAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Modifying a schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Deleting schedules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4 Using system tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Updating firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Updating controller module firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Updating expansion module firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Updating disk firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Saving logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Resetting a host port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Rescanning disk channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Restoring system defaults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Clearing disk metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Restarting or shutting down controllers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Restarting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Shutting down . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Testing event notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Expanding a vdisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Verifying a vdisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Scrubbing a vdisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Removing a vdisk from quarantine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Expanding a snap pool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Checking links to a remote system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5 Viewing system status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Viewing information about the system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
System properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Enclosure properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Disk properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Vdisk properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Volume properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Snap-pool properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Snapshot properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Schedule properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Configuration limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Licensed features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Version properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Viewing the system event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Viewing information about all vdisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Viewing information about a vdisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Vdisk properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Disk properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Volume properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Snap-pool properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Viewing information about a volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Volume properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Mapping properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Schedule properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Viewing information about a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Snapshot properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Mapping properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Schedule properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Viewing information about a snap pool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Snap-pool properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Volume properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Snapshot properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Viewing information about all hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Viewing information about a host. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Host properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Mapping properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Viewing information about an enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Enclosure properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Disk properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Power supply properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Fan properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Controller module properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Controller module: network port properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Controller module: host port properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Controller module: expansion port properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Controller module: CompactFlash properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Drive enclosure: I/O module properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
I/O module: In port properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
I/O module: Out port properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Viewing information about a remote system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6 Using Remote Snap to replicate volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
About the Remote Snap replication feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Replication actions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Performing initial replication locally or remotely. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Remote replication disaster recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Remote replication licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Related topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Using the Replication Setup Wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Step 1: Starting the wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Step 2: Selecting the primary volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Step 3: Selecting the replication mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Step 4: Selecting the secondary volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Step 5: Confirming replication settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Replicating a volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Replicating a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Removing replication from a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Suspending replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Resuming replication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Aborting replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Detaching a secondary volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Stopping a vdisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Starting a vdisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Reattaching a secondary volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Exporting a replication image to a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Changing the primary volume for a replication set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Viewing replication properties, addresses, and images for a volume. . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Replication properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Replication addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Replication images. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Viewing information about a subordinate replication volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Replication properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Replication addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Replication image properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Viewing information about a replication image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Replication status properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Primary volume snapshot properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Secondary volume snapshot properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
A SNMP reference. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Standard MIB-II behavior. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Enterprise traps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
FA MIB 2.2 SNMP behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
External details for certain FA MIB 2.2 objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
External details for connUnitRevsTable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
External details for connUnitSensorTable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
External details for connUnitPortTable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Configuring SNMP event notification in SMU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
SNMP management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Enterprise trap MIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
FA MIB 2.2 and 4.0 differences. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
B Using FTP to download logs and update firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Downloading system logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Updating controller module firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Updating expansion module firmware. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Updating disk firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Installing a license file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
HP StorageWorks P2000 G3 MSA System SMU Reference Guide 7
Figures
1 Relationship between a master volume and its snapshots and snap pool. . . . . . . . . . . . . . . . . . . . . . 27
2 Rolling back a master volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3 Creating a volume copy from a master volume or a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4 Intersite and intrasite replication sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
5 Actions that occur during a series of replications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
6 Example of primary-volume failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Tables
1 Document conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2 SMU communication status icons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3 Settings for default users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4 Example applications and RAID levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5 RAID level comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
6 Vdisk expansion by RAID level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
7 Size representations in base 2 and base 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .32
8 Decimal (radix) point character by locale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
9 Storage-space color codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
10 Configuration View icons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
11 FA MIB 2.2 objects, descriptions, and values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
12 connUnitRevsTable index and description values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
13 connUnitSensorTable index, name, type, and characteristic values. . . . . . . . . . . . . . . . . . . . . . . . . . 128
14 connUnitPortTable index and name values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
About this guide
This guide provides information about managing an HP StorageWorks P2000 G3 MSA System by using its web interface, Storage Management Utility (SMU).
Intended audience
This guide is intended for storage system administrators.
Prerequisites
Prerequisites for using this product include knowledge of:
Network administration
Storage system configuration
Storage area network (SAN) management and direct attach storage (DAS)
Fibre Channel, Serial Attached SCSI (SAS), Internet SCSI (iSCSI), and Ethernet protocols
Related documentation
In addition to this guide, please refer to other documents for this product:
HP StorageWorks P2000 G3 MSA System Racking Instructions
HP StorageWorks P2000 G3 MSA System Installation Instructions
HP StorageWorks P2000 G3 MSA System Cable Configuration Guide
HP StorageWorks P2000 G3 MSA System FC User’s Guide
HP StorageWorks P2000 G3 MSA System FC/iSCSI User’s Guide
HP StorageWorks P2000 G3 MSA System SAS User’s Guide
HP StorageWorks P2000 G3 MSA System iSCSI User’s Guide
HP StorageWorks P2000 G3 MSA System CLI Reference Guide
HP StorageWorks P2000 G3 MSA System Event Descriptions Reference Guide
Online help for HP StorageWorks P2000 G3 MSA System management interfaces
You can find these documents from the Manuals page of the HP Business Support Center web site:
http://www.hp.com/support/manuals
Document conventions and symbols
Table 1 Document conventions

Convention                       Element
Medium blue text: Figure 1       Cross-reference links and e-mail addresses
Medium blue, underlined text     Web site addresses
(http://www.hp.com)
Bold font                        Key names
                                 Text typed into a GUI element, such as into a box
                                 GUI elements that are clicked or selected, such as menu and
                                 list items, buttons, and check boxes
Italics font                     Text emphasis
Monospace font                   File and directory names
                                 System output
                                 Code
                                 Text typed at the command line
Monospace, italic font           Code variables
                                 Command-line variables typed at the command line
Monospace, bold font             Emphasis of file and directory names, system output, code,
                                 and text
CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.
IMPORTANT: Provides clarifying information or specific instructions.
NOTE: Provides additional information.
TIP: Provides helpful hints and shortcuts.
HP technical support
Telephone numbers for worldwide technical support are listed on the HP support web site:
http://www.hp.com/support/
Collect the following information before calling:
Technical support registration number (if applicable)
Product serial numbers
Product model names and numbers
Applicable error messages
Operating system type and revision level
Detailed, specific questions
For continuous quality improvement, calls may be recorded or monitored.
Product warranties
For information about HP StorageWorks product warranties, see the warranty information website:
http://www.hp.com/go/storagewarranty
Subscription service
HP strongly recommends that customers sign up online using the Subscriber's choice web site:
http://www.hp.com/go/e-updates
Subscribing to this service provides you with e-mail updates on the latest product enhancements, newest versions of drivers, and firmware documentation updates, as well as instant access to numerous other product resources.
After signing up, you can quickly locate your products by selecting Business support and then Storage
under Product Category.
HP web sites
For other product information, see the following HP web sites:
http://www.hp.com
http://www.hp.com/go/storage
http://www.hp.com/support/manuals
http://www.hp.com/support/downloads
http://www.hp.com/storage/whitepapers
http://www.hp.com/go/p2000
Documentation feedback
HP welcomes your feedback.
To make comments and suggestions about product documentation, please send a message to
storagedocs.feedback@hp.com. All submissions become the property of HP.
1 Getting started
Storage Management Utility (SMU) is a web-based application for configuring, monitoring, and managing the storage system.
Each controller module in the storage system contains a web server, which you access when you sign in to SMU. In a dual-controller system, you can access all functions from either controller. If one controller becomes unavailable, you can continue to manage the storage system from the partner controller.
SMU is also referred to as the web-browser interface (WBI).
NOTE: It is possible to upgrade an MSA2000 storage system by replacing its controllers with P2000 G3 controllers, which use the version of SMU described in this guide. For upgrade information go to http://www.hp.com/go/p2000, click Resource Library, and view the white paper “Upgrading the HP StorageWorks MSA2000 G2 to the P2000 G3 MSA.”
Configuring and provisioning a new storage system
To configure and provision a storage system for the first time:
1. Configure your web browser for SMU and sign in, as described in Browser setup and Signing in below.
2. Set the system date and time, as described in Changing the system date and time on page 46.
3. Use the Configuration Wizard to configure other system settings, as described in Using the Configuration Wizard on page 37.
4. Use the Provisioning Wizard to create a virtual disk (vdisk) containing storage volumes, and optionally to map the volumes to hosts, as described in Using the Provisioning Wizard on page 59.
5. Use the Replication Setup Wizard to configure replication for a primary volume to a remote system, as described in Using the Replication Setup Wizard on page 110.
6. If you mapped volumes to hosts, verify the mappings by mounting the volumes from each host and performing simple read/write tests to the volumes.
7. Verify that controller modules and expansion modules have the latest firmware, as described in Viewing information about the system on page 89 and Updating firmware on page 79.
You can then make additional configuration and provisioning changes and view system status, as described in later chapters of this guide.
Browser setup
Use Mozilla Firefox 3 or later, or Microsoft Internet Explorer 7 or later.
To see the help window, you must enable pop-up windows.
To optimize the display, use a color monitor and set its color quality to the highest setting.
To navigate beyond the Sign In page (with a valid user account):
• Set the browser's local-intranet security option to medium or medium-low.
• Verify that the browser is set to allow cookies at least for the IP addresses of the storage-system network ports.
Signing in
To sign in:
1. In the web browser’s address field, type the IP address of a controller network port and press Enter. The
SMU Sign In page is displayed. If the Sign In page does not display, verify that you have entered the correct IP address.
2. On the Sign In page, enter the name and password of a configured user. The default user name and
password are manage and !manage. If you are logging in to SMU for the first time, the Language field displays user setting or English, either of which results in English.
Language preferences can be configured for the system and for individual users.
3. Click Sign In. If the system is available, the System Overview page is displayed; otherwise, a message
indicates that the system is unavailable.
Tips for signing in and signing out
Do not include a leading zero in an IP address. For example, enter 10.1.4.6 not 10.1.4.06.
Multiple users can be signed in to each controller simultaneously.
For each active SMU session an identifier is stored in the browser. Depending on how your browser
treats this session identifier, you might be able to run multiple independent sessions simultaneously. Each instance of Internet Explorer can run a separate SMU session; however, all instances of Firefox share the same session.
If you end a SMU session without clicking the Sign Out link near the top of the SMU window, the
session automatically ends when the user's automatic sign-out time expires. If this preference is set to Never, the session ends after 9999 minutes.
Tips for using the main window
The Configuration View panel displays logical and physical components of the storage system. To
perform a task, select the component to act on and then either:
• Right-click to display a context menu and select the task to perform. This is the method that help topics describe.
• Click a task category in the main panel and select the task to perform.
The System Status panel shows how many events of each severity have occurred in the system. To view
event details, click a severity icon. For more information see Viewing the system event log on page 90.
Many tables can be sorted by a specific column. To do so, click the column heading to sort low to high;
click again to sort high to low.
Do not use the browser's Back, Forward, Reload, or Refresh buttons. SMU has a single page whose
content changes as you perform tasks and automatically updates to show current data.
An asterisk (*) identifies a required setting.
The icon in the upper right corner of the main window shows the status of communication between
SMU, the Management Controller (MC), and the Storage Controller (SC), as described in the following table.
Table 2 SMU communication status icons
Icon Meaning
SMU can communicate with the Management Controller, which can communicate with the Storage Controller.
SMU cannot communicate with the Management Controller.
SMU can communicate with the Management Controller, which cannot communicate with the Storage Controller.
Below the communication status icon, a timer shows how long the session can be idle until you are
automatically signed out. This timer resets after each action you perform. One minute before automatic sign-out you are prompted to continue using SMU. The timer does not appear if the current user's Auto Sign Out preference is set to Never.
If a SMU session is active on a controller and the controller is power cycled or is forced offline by the
partner controller or certain other events occur, the session might hang. SMU might say that it is “Connecting” but stop responding, or the page may become blank with the browser status “Done.” After the controller comes back online, the session will not restart. To continue using SMU, close and reopen the browser and start a new SMU session.
Colors that identify how storage space is used are described in About storage-space color codes on
page 33.
Icons shown in the Configuration View panel are described in About Configuration View icons on
page 34.
Tips for using the help window
To display help for a component in the Configuration View panel, right-click the component and select
Help. To display help for the content in the main panel, click either Help in the menu bar or the help icon in the upper right corner of the panel.
In the help window, click the table of contents icon to show or hide the Contents pane.
A help topic remains displayed until you browse to another topic in the help window, display help for a
different item in the main window, or close the help window.
If you have viewed more than one help topic, you can click the arrow icons to display the previous or
next topic.
System concepts
About user accounts
The system provides three default user accounts and allows a maximum of 12 user accounts to be configured. Any account can be modified or removed, except that you cannot remove the user you are signed in as.
User accounts have these options:
User Name. A user name is case sensitive and cannot already exist in the system. A name cannot
include a comma, double quote, or backslash.
Password. A password is case sensitive. A password cannot include a comma, double quote, or
backslash. Though optional, passwords are highly recommended to ensure system security.
User Roles. Select Monitor to let the user view system settings, or Manage to let the user view and
change system settings. You cannot change the role of user manage.
User Type. Select Standard to allow access to standard functions, Advanced to allow access to all
functions except diagnostic functions, or Diagnostic to allow access to all functions.
NOTE: This release has no functions that require Advanced or Diagnostic access; a Standard user can
access all functions.
WBI Access. Allows access to the web-based management interface.
CLI Access. Allows access to the command-line management interface.
FTP Access. Allows access to the file transfer protocol interface, which provides a way to install
firmware updates and download logs.
Base Preference. The base for entry and display of storage-space sizes. In base 2, sizes are shown as
powers of 2, using 1024 as a divisor for each magnitude. In base 10, sizes are shown as powers of 10, using 1000 as a divisor for each magnitude. Operating systems usually show volume size in base 2. Disk drives usually show size in base 10. Memory (RAM and ROM) size is always shown in base 2.
Precision Preference. The number of decimal places (1–10) for display of storage-space sizes.
Unit Preference. Sets the unit for display of storage-space sizes. The Auto option lets the system
determine the proper unit for a size. Based on the precision setting, if the selected unit is too large to meaningfully display a size, the system uses a smaller unit for that size. For example, if the unit is set to TB and the precision is set to 1, the size 0.11709 TB is shown as 119.9 GB.
Temperature Preference. Specifies to use either the Celsius scale or the Fahrenheit scale for temperature
values.
Auto Sign Out. Select the amount of time that the user's session can be idle before the user is
automatically signed out: 5, 15, or 30 minutes, or Never (9999 minutes). The default is 30 minutes.
Locale. The user’s preferred display language, which overrides the system’s default display language.
Installed language sets include Chinese-simplified, Chinese-traditional, Dutch, English, French, German, Italian, Japanese, Korean, and Spanish.
Table 3 Settings for default users

Name     Password  Roles            Type      WBI  CLI  FTP  Base  Prec.  Units  Temp.    Auto Sign Out  Locale
monitor  !monitor  Monitor          Standard  Yes  Yes  No   10    1      Auto   Celsius  30 Min.        English
manage   !manage   Monitor, Manage  Standard  Yes  Yes  Yes  10    1      Auto   Celsius  30 Min.        English
ftp      !ftp      Monitor, Manage  Standard  No   No   Yes  10    1      Auto   Celsius  30 Min.        English
NOTE: To secure the storage system, set a new password for each default user.
Related topics
Configuring user accounts on page 44
About vdisks
A vdisk is a “virtual” disk that is composed of one or more disks, and has the combined capacity of those disks. The number of disks that a vdisk can contain is determined by its RAID level. All disks in a vdisk must be the same type (SAS or SATA, small or large form-factor). A maximum of 16 vdisks per controller can exist.
A vdisk can contain different models of disks, and disks with different capacities. For example, a vdisk can include a 500-GB disk and a 750-GB disk. If you mix disks with different capacities, the smallest disk determines the logical capacity of all other disks in the vdisk, regardless of RAID level. For example, if a RAID-0 vdisk contains one 500-GB disk and four 750-GB disks, the capacity of the vdisk is equivalent to approximately five 500-GB disks.
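The capacity rule above can be expressed as simple arithmetic (a hypothetical helper for illustration, not a product command):

```python
def vdisk_logical_capacity_gb(disk_sizes_gb):
    """The smallest member disk sets the logical capacity of every
    member of the vdisk, regardless of RAID level."""
    return min(disk_sizes_gb) * len(disk_sizes_gb)

# The example above: one 500-GB disk plus four 750-GB disks in a
# RAID-0 vdisk yields the equivalent of five 500-GB disks.
print(vdisk_logical_capacity_gb([500, 750, 750, 750, 750]))  # 2500
```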
Each disk has metadata that identifies whether the disk is a member of a vdisk, and identifies other members of that vdisk. This enables disks to be moved to different slots in a system; an entire vdisk to be moved to a different system; and a vdisk to be quarantined if disks are detected missing.
In a single-controller system, all vdisks are owned by that controller. In a dual-controller system, when a vdisk is created the system automatically assigns the owner to balance the number of vdisks each controller owns; or, you can select the owner. Typically it does not matter which controller owns a vdisk.
In a dual-controller system, when a controller fails, the partner controller assumes temporary ownership of the failed controller's vdisks and resources. If a fault-tolerant cabling configuration is used to connect the controllers to drive enclosures and hosts, both controllers' LUNs are accessible through the partner.
When you create a vdisk you can use the default chunk size or one that better suits your application. The chunk size is the amount of contiguous data that is written to a disk before moving to the next disk. After a vdisk is created its chunk size cannot be changed. For example, if the host is writing data in 16-KB transfers, that size would be a good choice for random transfers because one host read would generate the read of exactly one disk in the volume. That means if the requests are random-like, then the requests would be spread evenly over all of the disks, which is good for performance. If you have 16-KB accesses from the host and a 64-KB block size, then some of the hosts accesses would hit the same disk; each chunk contains four possible 16-KB groups of data that the host might want to read, which is not an optimal solution. Alternatively, if the host accesses were 128 KB, then each host read would have to access two disks in the vdisk. For random patterns, that ties up twice as many disks.
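The chunk-size trade-off described above can be sketched as follows (illustrative only; it ignores stripe alignment details):

```python
def disks_touched(host_io_kb, chunk_kb):
    """Disks an aligned host transfer spans for a given chunk size:
    ceil(host_io_kb / chunk_kb), one chunk per disk."""
    return -(-host_io_kb // chunk_kb)  # ceiling division

print(disks_touched(16, 16))   # 1: each random 16-KB request reads exactly one disk
print(disks_touched(128, 64))  # 2: each 128-KB request ties up two disks in the vdisk
```

With 16-KB host accesses and a 64-KB chunk, each request still touches one disk, but four different 16-KB offsets compete for the same chunk, which is why that combination is not optimal.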
When you create a vdisk you can also create volumes within it. A volume is a logical subdivision of a vdisk, and can be mapped to controller host ports for access by hosts. The storage system presents only volumes, not vdisks, to hosts.
You can create vdisks with or without volumes by using the Provisioning Wizard, or you can create vdisks manually.
Best practices for creating vdisks include:
To maximize capacity, use disks of similar size.
For greatest reliability, use disks of the same size and rotational speed.
For storage configurations using many disks, create a few vdisks each containing many disks instead of
many vdisks each containing a few disks.
To maximize capacity and disk usage (but not performance), you can create vdisks larger than 2 TB
and divide them into multiple volumes each having a capacity of 2 TB or less. This increases the usable capacity of storage configurations by reducing the total number of parity disks required when using parity-protected RAID levels. This differs from using a volume larger than 2 TB, which requires specific support by the host operating system, I/O adapter, and application.
For maximum use of a dual-controller system’s resources, each controller should own a similar number
of vdisks.
Set the chunk size to match the transfer block size of the host application.
Related topics
About RAID levels on page 30
About spares on page 22
About volumes on page 23
Vdisk topics in Provisioning the system on page 59
Configuring a vdisk on page 55
Verifying a vdisk on page 85
Scrubbing a vdisk on page 85
Viewing information about a vdisk (page 92), all vdisks (page 91), or the system (page 89)
Removing a vdisk from quarantine on page 86
About spares
A controller automatically reconstructs a redundant (fault-tolerant) vdisk (RAID 1, 3, 5, 6, 10, 50) when one or more of its disks fails and a compatible spare disk is available. A compatible disk has enough capacity to replace the failed disk and is the same type (SAS or SATA).
There are three types of spares:
Dedicated spare. Reserved for use by a specific vdisk to replace a failed disk. Most secure way to
provide spares for vdisks but expensive to reserve a spare for each vdisk.
Global spare. Reserved for use by any redundant vdisk to replace a failed disk.
Dynamic spare. An available compatible disk that is automatically assigned to replace a failed disk in
a redundant vdisk.
When a disk fails, the system looks for a dedicated spare first. If it does not find a dedicated spare, it looks for a global spare. If it does not find a compatible global spare and the dynamic spares option is enabled, it takes any available compatible disk. If no compatible disk is available, reconstruction cannot start.
A best practice is to designate spares for use if disks fail. Dedicating spares to vdisks is the most secure method, but it is also expensive to reserve spares for each vdisk. Alternatively, you can enable dynamic spares or assign global spares.
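The search order when a disk fails — dedicated spare, then global spare, then (if enabled) dynamic sparing — can be modeled like this (a simplified sketch; the dictionary fields are hypothetical):

```python
def pick_spare(failed, dedicated, global_spares, available, dynamic_enabled):
    """Return the first compatible disk in the documented search
    order, or None if reconstruction cannot start."""
    def compatible(d):
        # Compatible = same type (SAS or SATA) and enough capacity.
        return d["type"] == failed["type"] and d["size"] >= failed["size"]

    pools = [dedicated, global_spares]
    if dynamic_enabled:
        pools.append(available)  # any available compatible disk
    for pool in pools:
        for disk in pool:
            if compatible(disk):
                return disk
    return None

failed = {"type": "SAS", "size": 500}
# No dedicated spare, the only global spare is the wrong type, so
# dynamic sparing selects the available SAS disk.
print(pick_spare(failed, [], [{"type": "SATA", "size": 750}],
                 [{"type": "SAS", "size": 750}], dynamic_enabled=True))
```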
Related topics
Configuring dynamic spares on page 50
Managing dedicated spares on page 55
Managing global spares on page 62
Using the Provisioning Wizard on page 59
Creating a vdisk on page 61
Viewing information about a vdisk (page 92) or all vdisks (page 91)
About volumes
A volume is a logical subdivision of a vdisk, and can be mapped to controller host ports for access by hosts. A mapped volume provides the storage for a file system partition you create with your operating system or third-party tools. The storage system presents only volumes, not vdisks, to hosts. A vdisk can have a maximum of 128 volumes.
You can create a vdisk that has one volume or multiple volumes.
Single-volume vdisks work well in environments that need one large, fault-tolerant storage space for
data on one host. A large database accessed by users on a single host that is used only for that application is an example.
Multiple-volume vdisks work well when you have very large disks and you want to make the most
efficient use of disk space for fault tolerance (parity and spares). For example, you could create one 10-TB RAID-5 vdisk and dedicate one spare to the vdisk. This minimizes the amount of disk space allocated to parity and spares compared to the space required if you created five 2-TB RAID-5 vdisks. However, I/O to multiple volumes in the same vdisk can slow system performance.
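The space arithmetic behind this example can be checked directly (assuming 2-TB disks and one dedicated spare per vdisk, as in the text):

```python
def raid5_overhead_disks(num_vdisks, spares_per_vdisk=1):
    """Each RAID-5 vdisk costs one parity disk plus its dedicated spares."""
    return num_vdisks * (1 + spares_per_vdisk)

# One 10-TB RAID-5 vdisk vs. five 2-TB RAID-5 vdisks:
print(raid5_overhead_disks(1))  # 2 disks of overhead (1 parity + 1 spare)
print(raid5_overhead_disks(5))  # 10 disks of overhead (5 parity + 5 spares)
```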
When you create volumes you can specify their sizes. If the total size of a vdisk's volumes equals the size of the vdisk, you will not have any free space. Without free space, you cannot add or expand volumes. If you need to add or expand a volume in a vdisk without free space, you can delete a volume to create free space. Or, you can expand the vdisk and then either add a volume or expand a volume to use the new free space.
You can use a volume's default name or change it to identify the volume's purpose. For example, a volume used to store payroll information can be named Payroll.
You can create vdisks with volumes by using the Provisioning Wizard, or you can create volumes manually.
Related topics
About vdisks on page 21
About volume mapping on page 24
About volume cache options on page 25
Volume topics in Provisioning the system on page 59
Changing a volume's name or OpenVMS UID on page 56
Changing a volume's cache settings on page 57
Viewing information about a volume on page 94
About hosts
A host identifies an external port that the storage system is attached to. The external port may be a port in an I/O adapter (such as an FC HBA) in a server, or a port in a network switch.
The controllers automatically add hosts that have sent an inquiry command or a report luns command to the storage system. Hosts typically do this when they boot up or rescan for devices. When the command from the host occurs, the system saves the host ID. The ID for an FC or SAS host is its WWPN. The ID for an iSCSI host is typically, but not limited to, its IQN.
You must assign a name to an automatically added host to have the system retain it after a restart. Naming hosts also makes them easy to recognize for volume mapping. A maximum of 64 names can be assigned.
The Configuration View panel lists hosts by name, or if they are unnamed, by ID.
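The two host ID formats can be told apart mechanically: WWPNs are 64-bit values usually shown as 16 hexadecimal digits, while IQNs begin with the string `iqn.`. The following Python sketch illustrates this distinction; the function name and the prefix/length heuristic are illustrative assumptions, not SMU behavior:

```python
import re

def classify_host_id(host_id: str) -> str:
    """Rough classification of a host ID as the SMU might display it.
    WWPNs are 16 hex digits (with or without separators); IQNs start
    with 'iqn.'. This is an illustrative heuristic only."""
    s = host_id.strip().lower()
    if s.startswith("iqn."):
        return "iSCSI (IQN)"
    hex_only = s.replace(":", "").replace("-", "")
    if re.fullmatch(r"[0-9a-f]{16}", hex_only):
        return "FC or SAS (WWPN)"
    return "unknown"
```

For example, `20:70:00:c0:ff:d7:4c:07` classifies as a WWPN, while `iqn.1991-05.com.microsoft:host1` classifies as an IQN.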
A storage system with iSCSI ports can be protected from unauthorized access via iSCSI by enabling Challenge Handshake Authentication Protocol (CHAP). CHAP authentication occurs during an attempt by a host to log in to the system. This authentication requires an identifier for the host and a shared secret between the host and the system. Optionally, the storage system can also be required to authenticate itself to the host; this is called mutual CHAP. Steps involved in enabling CHAP include:
Decide on host node names (identifiers) and secrets. The host node name is typically, but not limited to, its IQN. A secret must have 12–16 characters.
Define CHAP entries in the storage system. If the node name is a host name, then it may be useful to display the hosts that are known to the system.
Enable CHAP on the storage system. Note that this applies to all iSCSI hosts, in order to avoid security exposures.
Define the CHAP secret in the host iSCSI initiator.
Request host login to the storage system. The host should be displayable by the system, as well as the ports through which connections were made.
If it becomes necessary to add more hosts after CHAP is enabled, additional CHAP node names and secrets can be added. If a host attempts to log in to the storage system, it will become visible to the system, even if the full login is not successful due to incompatible CHAP definitions. This information may be useful in configuring CHAP entries for new hosts. This information becomes visible when an iSCSI discovery session is established, because the storage system does not require discovery sessions to be authenticated.
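The challenge-response exchange that CHAP performs can be sketched in a few lines. This is a simplified model using the MD5 response computation from RFC 1994 (which iSCSI CHAP is based on); the function names and the example secret are illustrative, not part of the SMU:

```python
import hashlib

def chap_response(chap_id: int, secret: str, challenge: bytes) -> bytes:
    """CHAP (RFC 1994): response = MD5(identifier || secret || challenge).
    The 12-16 character secret length matches the rule stated above."""
    if not 12 <= len(secret) <= 16:
        raise ValueError("secret must have 12-16 characters")
    return hashlib.md5(bytes([chap_id]) + secret.encode() + challenge).digest()

def verify(chap_id: int, secret: str, challenge: bytes, response: bytes) -> bool:
    """The authenticator recomputes the hash and compares."""
    return chap_response(chap_id, secret, challenge) == response
```

The shared secret never crosses the wire; only the challenge and the hashed response do, which is why both sides must hold the same secret in advance.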
Related topics
Using the Configuration Wizard on page 37
Changing host interface settings on page 46
Adding a host on page 74
Removing hosts on page 75
Changing a host's name on page 75
Changing host mappings on page 75
Viewing information about a host (page 99) or all hosts (page 98)
About volume mapping
Each volume has default host-access settings that are set when the volume is created; these settings are called the default mapping. The default mapping applies to any host that has not been explicitly mapped using different settings. Explicit mappings for a volume override its default mapping.
Default mapping enables all attached hosts to see a volume using a specified LUN and access permissions set by the administrator. This means that when the volume is first created, all connected hosts can immediately access the volume using the advertised default mapping settings. This behavior is expected by some operating systems, such as Microsoft Windows, which can immediately discover the volume. The advantage of a default mapping is that all connected hosts can discover the volume with no additional work by the administrator. The disadvantage is that all connected hosts can discover the volume with no restrictions. Therefore, this process is not recommended for specialized volumes that require restricted access.
You can change a volume's default mapping, and create, modify, or delete explicit mappings. A mapping can specify read-write, read-only, or no access through one or more controller host ports to a volume. When a mapping specifies no access, the volume is masked. You can apply access privileges to one or more of the host ports on either controller. To maximize performance, map a volume to at least one host port on the controller that owns it. To sustain I/O in the event of controller failure, map to at least one host port on each controller.
For example, a payroll volume could be mapped with read-write access for the Human Resources host and be masked for all other hosts. An engineering volume could be mapped with read-write access for the Engineering host and read-only access for other departments’ hosts.
A LUN identifies a mapped volume to a host. Both controllers share one set of LUNs, and any unused LUN can be assigned to a mapping; however, each LUN can be used as the default LUN for only one volume. For example, if LUN 5 is the default LUN for Volume1, no other volume in the storage system can use LUN 5 as its default LUN. For explicit mappings, the rules differ: LUNs used in default mappings can be reused in explicit mappings for other volumes and other hosts.
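The LUN rules above can be modeled as a small bookkeeping structure. This is a toy sketch of the stated constraints, not the actual firmware logic; the class and method names are illustrative:

```python
class LunTable:
    """Toy model of the LUN rules: a LUN already used as one volume's
    default LUN cannot become another volume's default, but explicit
    mappings may reuse LUNs that appear in default mappings."""

    def __init__(self):
        self.defaults = {}   # volume name -> default LUN
        self.explicit = []   # (volume, host, LUN) tuples

    def set_default(self, volume: str, lun: int) -> None:
        if lun in self.defaults.values() and self.defaults.get(volume) != lun:
            raise ValueError(f"LUN {lun} is already another volume's default")
        self.defaults[volume] = lun

    def map_explicit(self, volume: str, host: str, lun: int) -> None:
        # Explicit mappings are not constrained by default-LUN uniqueness.
        self.explicit.append((volume, host, lun))
```

With this model, assigning LUN 5 as the default for a second volume raises an error, while an explicit mapping of LUN 5 for another volume succeeds.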
TIP: When an explicit mapping is deleted, the volume's default mapping takes effect. Therefore, it is recommended to use the same LUN for explicit mappings as for the default mapping.
IMPORTANT: In an FC/iSCSI combo system, do not connect hosts or map volumes to host ports used for replication. Attempting to do so could interfere with replication operation.
Volume mapping settings are stored in disk metadata. If enough of the disks used by a volume are moved into a different enclosure, the volume's vdisk can be reconstructed and the mapping data is preserved.
About volume cache options
You can set options that optimize reads and writes performed for each volume.
Using write-back or write-through caching
CAUTION: Only disable write-back caching if you fully understand how the host operating system, application, and adapter move data. If used incorrectly, you might hinder system performance.
You can change a volume's write-back cache setting. Write-back is a cache-writing strategy in which the controller receives the data to be written to disks, stores it in the memory buffer, and immediately sends the host operating system a signal that the write operation is complete, without waiting until the data is actually written to the disk. Write-back cache mirrors all of the data from one controller module cache to the other. Write-back cache improves the performance of write operations and the throughput of the controller.
When write-back cache is disabled, write-through becomes the cache-writing strategy. Using write-through cache, the controller writes the data to the disks before signaling the host operating system that the process is complete. Write-through cache has lower write operation and throughput performance than write-back, but it is the safer strategy, with minimum risk of data loss on power failure. However, write-through cache does not mirror the write data because the data is written to the disk before posting command completion and mirroring is not required. You can set conditions that cause the controller to change from write-back caching to write-through caching.
In both caching strategies, active-active failover of the controllers is enabled.
You can enable and disable the write-back cache for each volume. By default, volume write-back cache is enabled. Because controller cache is backed by super-capacitor technology, if the system loses power, data is not lost. For most applications, this is the correct setting. However, because mirroring the cache between controllers uses back-end bandwidth, if you are writing large chunks of sequential data (as would be done in video editing, telemetry acquisition, or data logging), write-through cache has much better performance. Therefore, you might want to experiment with disabling the write-back cache. You might see large performance gains (as much as 70 percent) if you are writing data under the following circumstances:
Sequential writes
Large I/Os in relation to the chunk size
Deep queue depth
If you are doing random access to this volume, leave the write-back cache enabled.
The best practice for a fault-tolerant configuration is to use write-back caching.
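The guidance above reduces to a simple decision rule. The following sketch encodes it as a heuristic; the function and its parameters are illustrative assumptions, not an HP-provided tool:

```python
def recommended_cache_mode(sequential: bool, large_io: bool, deep_queue: bool,
                           fault_tolerance_priority: bool = True) -> str:
    """Heuristic from the guidance above: large sequential writes with a
    deep queue may run faster write-through; everything else, and any
    configuration where fault tolerance comes first, stays write-back."""
    if sequential and large_io and deep_queue and not fault_tolerance_priority:
        return "write-through"
    return "write-back"
```

For instance, a data-logging workload willing to trade fault tolerance for throughput maps to write-through; a random-access database workload maps to write-back.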
Optimizing read-ahead caching
CAUTION: Only change read-ahead cache settings if you fully understand how the host operating system, application, and adapter move data so that you can adjust the settings accordingly.
You can optimize a volume for sequential reads or streaming data by changing its read-ahead cache settings. Read ahead is triggered by two back-to-back accesses to consecutive LBA ranges, whether forward (increasing LBAs) or reverse (decreasing LBAs).
You can change the amount of data read in advance after two back-to-back reads are made. Increasing the read-ahead cache size can greatly improve performance for multiple sequential read streams; however, increasing read-ahead size will likely decrease random read performance.
The Default option works well for most applications: it sets one chunk for the first access in a sequential read and one stripe for all subsequent accesses. The size of the chunk is based on the chunk size used when you created the vdisk (the default is 64 KB). Non-RAID and RAID-1 vdisks are considered to have a stripe size of 64 KB.
Specific size options let you select an amount of data for all accesses.
The Maximum option lets the controller dynamically calculate the maximum read-ahead cache size for the volume. For example, if a single volume exists, this setting enables the controller to use nearly half the memory for read-ahead cache.
Only use Maximum when disk latencies must be absorbed by cache. For example, for read-intensive applications, you will want data that is most often read to be in cache so that the response to the read request is very fast; otherwise, the controller has to locate which disks the data is on, move it up to cache, and then send it to the host. Do not use Maximum if more than two volumes are owned by the controller on which the read-ahead setting is being made. With more than two volumes, the volumes contend for cache: each volume constantly overwrites the other volumes' read data, which can consume much of the controller's processing power.
The Disabled option turns off read-ahead cache. This is useful if the host is triggering read ahead for what are actually random accesses. This can happen if the host breaks up random I/O into two smaller reads, triggering read ahead.
You can also change the optimization mode.
The standard read-ahead caching mode works well for typical applications where accesses are a combination of sequential and random; this method is the default. For example, use this mode for transaction-based and database update applications that write small files in random order.
For an application that is strictly sequential and requires extremely low latency, you can use Super Sequential mode. This mode makes more room for read-ahead data by allowing the controller to discard cache contents that have been accessed by the host. For example, use this mode for video playback and multimedia post-production video- and audio-editing applications that read and write large files in sequential order.
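The trigger and sizing rules described above can be sketched in code. This is an illustrative model only: the two-access trigger and the one-chunk/one-stripe sizing come from the text, while the function names and the four-chunks-per-stripe figure are assumptions (the real stripe width depends on the vdisk's disk count):

```python
def detect_sequential(prev: tuple, new: tuple) -> bool:
    """Read ahead triggers on two back-to-back accesses to consecutive
    LBA ranges, forward (increasing) or reverse (decreasing).
    Each access is (start_lba, length_in_blocks)."""
    p_start, p_len = prev
    n_start, n_len = new
    forward = n_start == p_start + p_len
    reverse = n_start + n_len == p_start
    return forward or reverse

def default_read_ahead(access_number: int, chunk_kb: int = 64,
                       chunks_per_stripe: int = 4) -> int:
    """Default mode: one chunk for the first sequential access,
    one stripe for all subsequent accesses. chunks_per_stripe is an
    assumed example value."""
    return chunk_kb if access_number == 1 else chunk_kb * chunks_per_stripe
```

So a read at LBA 108 following a read of 8 blocks at LBA 100 triggers read ahead (forward), as does a read ending at LBA 100 (reverse); an access at an unrelated LBA does not.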
Related topics
Changing a volume's cache settings on page 57
Changing system cache settings on page 51
Viewing information about a volume on page 94
About managing remote systems
You can add a management object to obtain information from a remote storage system. This allows a local system to track remote systems by their network-port IP addresses and cache their login credentials (the user name and password for a manage-level user on that system). The IP address can then be used in commands that need to interact with the remote system.
After a remote system has been added, you can check the connectivity of host ports in the local system to host ports in that remote system. A port in the local system can only link to ports with the same host interface, such as Fibre Channel (FC), in a remote system.
Communication between local and remote systems is an essential part of the remote replication feature.
Related topics
Adding a remote system on page 54
Removing a remote system on page 54
Viewing information about a remote system on page 104
Checking links to a remote system on page 87
About the Remote Snap replication feature on page 105
About the Snapshot feature
Snapshot is a licensed feature that provides data protection by enabling you to create and save snapshots of a volume. Each snapshot preserves the source volume's data state at the point in time when the snapshot was created. Snapshots can be created manually or by using the task scheduler.
When the first snapshot is taken of a standard volume, the system automatically converts the volume into a master volume and reserves additional space for snapshot data. This reserved space, called a snap pool, stores pointers to the source volume's data. Each master volume has its own snap pool. The system treats a snapshot like any other volume; the snapshot can be mapped to hosts with read-only access, read-write access, or no access, depending on the snapshot's purpose. Any additional unique data written to a snapshot is also stored in the snap pool.
The following figure shows how the data state of a master volume is preserved in the snap pool by two snapshots taken at different points in time. The dotted line used for the snapshot borders indicates that snapshots are logical volumes, not physical volumes as are master volumes and snap pools.
MasterVolume-1 Snap Pool-1
Snapshot-1
(Monday)
Snapshot-2
(Tuesday)
Figure 1 Relationship between a master volume and its snapshots and snap pool
The snapshot feature uses the single copy-on-write method to capture only data that has changed. That is, if a block is to be overwritten on the master volume, and a snapshot depends on the existing data in the block being overwritten, the data is copied from the master volume to the snap pool before the data is changed. All snapshots that depend on the older data are able to access it from the same location in the snap pool; this reduces the impact of snapshots when writing to a master volume. In addition, only a single copy-on-write operation is performed on the master volume.
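The single copy-on-write behavior can be made concrete with a small block-map model. This is an illustrative sketch, not the controller's implementation; the class and field names are assumptions:

```python
class CowSnapshot:
    """Minimal single copy-on-write model: before a master-volume block
    is overwritten, its old data is copied once into the snap pool,
    where every dependent snapshot reads it from the same location."""

    def __init__(self, master: dict):
        self.master = master     # block number -> data
        self.snap_pool = {}      # block number -> preserved data

    def write_master(self, block: int, data: str) -> None:
        if block in self.master and block not in self.snap_pool:
            # The copy-on-write happens only once per block,
            # regardless of how many later writes hit it.
            self.snap_pool[block] = self.master[block]
        self.master[block] = data

    def read_snapshot(self, block: int):
        # Unchanged blocks are read from the master; changed blocks
        # come from the preserved copy in the snap pool.
        return self.snap_pool.get(block, self.master.get(block))
```

Note that a second write to the same block does not copy again, which is the "single copy-on-write" property the text describes.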
The storage system allows a maximum number of snapshots to be retained, as determined by an installed license. For example, if your license allows four snapshots, when the fifth snapshot is taken an error message informs you that you have reached the maximum number of snapshots allowed on your system. Before you can create a new snapshot you must either delete an existing snapshot, or purchase and install a license that increases the maximum number of snapshots.
The snapshot service has two features for reverting data back to original data:
Deleting only modified data on a snapshot. For snapshots that have been made accessible as read-write, you can delete just the modified (write) data that was written directly to a snapshot. When the modified data is deleted, the snapshot data reverts to the original data that was snapped. This feature is useful for testing an application, for example. You might want to test some code, which writes data to the snapshot. Rather than having to take another snapshot, you can just delete any write data and start again.
Rolling back the data in a source volume. The rollback feature enables you to revert the data in a source volume to the data that existed when a specified snapshot was created (preserved data). Alternatively, the rollback can include data that has been modified (write data) on the snapshot since the snapshot was taken. For example, you might want to take a snapshot, mount that snapshot for read/write, and then install new software on that snapshot for test purposes. If the software installation is successful, you can roll back the master volume to the contents of the modified snapshot (preserved data plus the write data).
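The two rollback choices can be expressed as a tiny block-map sketch. This is illustrative only (the function name and the dictionary model are assumptions): preserved blocks always revert, and writes made to the snapshot afterward are applied only when requested:

```python
def roll_back(master: dict, preserved: dict, modified_on_snapshot: dict,
              include_modified: bool) -> dict:
    """Sketch of the two rollback choices: revert to preserved data only,
    or to preserved data plus writes made to the snapshot since it was
    taken. Blocks absent from both maps are unchanged since the snapshot."""
    result = dict(master)
    result.update(preserved)                 # revert to snapped data
    if include_modified:
        result.update(modified_on_snapshot)  # also apply snapshot write data
    return result
```

With Monday's preserved data and Tuesday's snapshot writes, excluding modified data yields the Monday state; including it yields the current snapshot contents.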
The following figure shows the difference between rolling back the master volume to the data that existed when a specified snapshot was created (preserved), and rolling back preserved and modified data.
When you use the rollback feature, you can choose to exclude the modified data, which will revert the data on the master volume to the preserved data when the snapshot was taken. Or you can choose to include the modified data since the snapshot was taken, which will revert the data on the master volume to the current snapshot.
Figure 2 Rolling back a master volume
Snapshot operations are I/O-intensive. Every write to a unique location in a master volume after a snapshot is taken will cause an internal read and write operation to occur in order to preserve the snapshot data. If you intend to create snapshots of, create volume copies of, or replicate volumes in a vdisk, ensure that the vdisk contains no more than four master volumes, snap pools, or both. For example: 2 master volumes and 2 snap pools; 3 master volumes and 1 snap pool; 4 master volumes and 0 snap pools.
Related topics
Installing a license on page 41
Creating a snapshot (page 69) or multiple snapshots (page 68)
Changing a snapshot’s default mapping (page 66) or explicit mappings (page 67)
Deleting snapshots on page 69
Resetting a snapshot on page 70
Viewing information about a snapshot (page 96), a vdisk (page 92), all vdisks (page 91), or the system (page 89)
Rolling back a volume on page 73
Deleting schedules on page 77
About the Volume Copy feature
Volume Copy enables you to copy a volume or a snapshot to a new standard volume.
While a snapshot is a point-in-time logical copy of a volume, the volume copy service creates a complete “physical” copy of a volume within a storage system. It is an exact copy of a source volume as it existed at the time the volume copy operation was initiated, consumes the same amount of space as the source volume, and is independent from an I/O perspective. Volume independence is a key distinction of a volume copy (versus a snapshot, which is a “virtual” copy and dependent on the source volume).
Benefits include:
Additional data protection. An independent copy of a volume (versus logical copy through snapshot) provides additional data protection against a complete master volume failure. If the source master volume fails, the volume copy can be used to restore the volume to the point in time the volume copy was taken.
Non-disruptive use of production data. With an independent copy of the volume, resource contention and the potential performance impact on production volumes is mitigated. Data blocks between the source and the copied volumes are independent (versus shared with snapshot) so that I/O is to each set of blocks respectively; application I/O transactions are not competing with each other when accessing the same data blocks.
The following figure illustrates how volume copies are created.
Creating a volume copy from a standard or master volume
1. Volume copy request is made with a standard volume or a master volume as the source.
2. If the source is a standard volume, it is converted to a master volume and a snap pool is created.
3. A new volume is created for the volume copy, and a hidden, transient snapshot is created.
4. Data is transferred from the transient snapshot to the new volume.
5. On completion, the transient snapshot is deleted and the new volume is a completely independent copy of the master volume, representing the data that was present when the volume copy was started.
Creating a volume copy from a snapshot
1. A master volume exists with one or more snapshots associated with it. Snapshots can be in their original state or they can be modified.
2. You can select any snapshot to copy, and you can specify that the modified or unmodified data be copied.
3. On completion, the new volume is a completely independent copy of the snapshot. The snapshot remains, though you can choose to delete it.
Figure 3 Creating a volume copy from a master volume or a snapshot
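The five-step flow for copying a standard or master volume can be sketched as follows. This is a purely illustrative model of the sequence described above, not the controller's code; the function name and the dictionary block model are assumptions:

```python
def volume_copy(source_volume: dict, is_standard: bool = False) -> dict:
    """Sketch of the volume-copy steps: convert a standard source to a
    master volume, take a hidden transient snapshot, transfer its data
    to a new volume, then delete the transient snapshot."""
    if is_standard:
        pass  # step 2: the standard volume becomes a master volume,
              # and a snap pool is created for it (not modeled here)
    transient_snapshot = dict(source_volume)  # step 3: hidden transient snapshot
    new_volume = dict(transient_snapshot)     # step 4: data transfer
    transient_snapshot = None                 # step 5: transient snapshot deleted
    return new_volume
```

The key property is independence: changing the source afterward does not affect the copy, which represents the data present when the operation started.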
Snapshot operations are I/O-intensive. Every write to a unique location in a master volume after a snapshot is taken will cause an internal read and write operation to occur in order to preserve the snapshot data. If you intend to create snapshots of, create volume copies of, or replicate volumes in a vdisk, ensure that the vdisk contains no more than four master volumes, snap pools, or both. For example: 2 master volumes and 2 snap pools; 3 master volumes and 1 snap pool; 4 master volumes and 0 snap pools.
Guidelines to keep in mind when performing a volume copy include:
The destination vdisk must be owned by the same controller as the source volume.
The destination vdisk must have free space that is at least as large as the amount of space allocated to the original volume. A new volume will be created using this free space for the volume copy.
The destination vdisk does not need to have the same attributes (such as disk type, RAID level) as the volume being copied.
Once the copy is complete, the new volume will no longer have any ties to the original.
Volume Copy makes a copy from a snapshot of the source volume; therefore, the snap pool for the source volume must have sufficient space to store snapshot data when performing this copy.
Related topics
Creating a volume copy on page 71
Aborting a volume copy on page 72
Viewing information about a volume on page 94
Deleting schedules on page 77
About the VDS and VSS hardware providers
Virtual Disk Service (VDS) enables host-based applications to manage vdisks and volumes. Volume Shadow Copy Service (VSS) enables host-based applications to manage snapshots. For more information, see the VDS and VSS hardware provider documentation for your product.
About RAID levels
The RAID controllers enable you to set up and manage vdisks, whose storage may be spread across multiple disks. This is accomplished through firmware resident in the RAID controller. RAID refers to vdisks in which part of the storage capacity may be used to store redundant data. The redundant data enables the system to reconstruct data if a disk in the vdisk fails.
Hosts see each partition of a vdisk, known as a volume, as a single disk. A volume is actually a portion of the storage space on disks behind a RAID controller. The RAID controller firmware makes each volume appear as one very large disk. Depending on the RAID level used for a vdisk, the disk presented to hosts has advantages in fault-tolerance, cost, performance, or a combination of these.
NOTE: Choosing the right RAID level for your application improves performance.
The following tables:
Provide examples of appropriate RAID levels for different applications
Compare the features of different RAID levels
Describe the expansion capability for different RAID levels
Table 4 Example applications and RAID levels
Application RAID level
Testing multiple operating systems or software development (where redundancy is not an issue) NRAID
Fast temporary storage or scratch disks for graphics, page layout, and image rendering 0
Workgroup servers 1 or 10
Video editing and production 3
Network operating system, databases, high availability applications, workgroup servers 5
Very large databases, web server, video on demand 50
Mission-critical environments that demand high availability and use large sequential workloads 6
Table 5 RAID level comparison
RAID level | Min. disks | Description | Strengths | Weaknesses
NRAID | 1 | Non-RAID, nonstriped mapping to a single disk | Ability to use a single disk to store additional data | Not protected, lower performance (not striped)
0 | 2 | Data striping without redundancy | Highest performance | No data protection: if one disk fails all data is lost
1 | 2 | Disk mirroring | Very high performance and data protection; minimal penalty on write performance; protects against single disk failure | High redundancy cost overhead: because all data is duplicated, twice the storage capacity is required
3 | 3 | Block-level data striping with dedicated parity disk | Excellent performance for large, sequential data requests (fast read); protects against single disk failure | Not well-suited for transaction-oriented network applications: single parity disk does not support multiple, concurrent write requests
5 | 3 | Block-level data striping with distributed parity | Best cost/performance for transaction-oriented networks; very high performance and data protection; supports multiple simultaneous reads and writes; can also be optimized for large, sequential requests; protects against single disk failure | Write performance is slower than RAID 0 or RAID 1
6 | 4 | Block-level data striping with double distributed parity | Best suited for large sequential workloads; non-sequential read and sequential read/write performance is comparable to RAID 5; protects against dual disk failure | Higher redundancy cost than RAID 5 because the parity overhead is twice that of RAID 5; not well-suited for transaction-oriented network applications; non-sequential write performance is slower than RAID 5
10 (1+0) | 4 | Stripes data across multiple RAID-1 sub-vdisks | Highest performance and data protection (protects against multiple disk failures) | High redundancy cost overhead: because all data is duplicated, twice the storage capacity is required; requires minimum of four disks
50 (5+0) | 6 | Stripes data across multiple RAID-5 sub-vdisks | Better random read and write performance and data protection than RAID 5; supports more disks than RAID 5; protects against multiple disk failures | Lower storage capacity than RAID 5
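To make the parity mechanism behind RAID 3, 5, 6, and 50 concrete, here is a small sketch of how XOR parity lets a vdisk rebuild a lost block. This is illustrative Python, not controller firmware, and it models only single-parity reconstruction:

```python
def xor_parity(blocks: list) -> bytes:
    """RAID-5 style parity: the XOR of all data blocks in a stripe."""
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return parity

def reconstruct(surviving_blocks: list, parity: bytes) -> bytes:
    """Any single lost block equals the XOR of the parity block with
    all surviving data blocks (XOR is its own inverse)."""
    return xor_parity(surviving_blocks + [parity])
```

Double distributed parity (RAID 6) extends this idea with a second, independently computed parity so that any two lost disks can be rebuilt.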
Table 6 Vdisk expansion by RAID level
RAID level | Expansion capability | Maximum disks
NRAID | Cannot expand. | 1
0, 3, 5, 6 | You can add 1–4 disks at a time. | 16
1 | Cannot expand. | 2
10 | You can add 2 or 4 disks at a time. | 16
50 | You can add one sub-vdisk at a time. The added sub-vdisk must contain the same number of disks as each of the existing sub-vdisks. |
About size representations
In SMU panels, parameters such as names of users and volumes have a maximum length in bytes. ASCII characters are 1 byte; most Latin (Western European) characters with diacritics are 2 bytes; most Asian characters are 3 bytes.
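Because name limits are counted in bytes rather than characters, the effective length of a name depends on its UTF-8 encoding. A one-line check makes this concrete (the function name is illustrative; the byte counts follow from UTF-8 encoding itself):

```python
def utf8_length(name: str) -> int:
    """Byte length as counted against SMU name limits: ASCII characters
    are 1 byte, most accented Latin characters are 2, and most Asian
    characters are 3 in UTF-8."""
    return len(name.encode("utf-8"))
```

So "Payroll" consumes 7 bytes, an accented character such as "é" consumes 2, and a character such as "日" consumes 3, meaning a name limit is reached sooner with non-ASCII names.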
Operating systems usually show volume size in base 2. Disk drives usually show size in base 10. Memory (RAM and ROM) size is always shown in base 2. In SMU, the base for entry and display of storage-space sizes can be set per user or per session. When entering storage-space sizes only, either base-2 or base-10 units can be specified.
Table 7 Size representations in base 2 and base 10
Base 2 unit | Size in bytes | Base 10 unit | Size in bytes
KiB (kibibyte) | 1,024 | KB (kilobyte) | 1,000
MiB (mebibyte) | 1,024^2 | MB (megabyte) | 1,000^2
GiB (gibibyte) | 1,024^3 | GB (gigabyte) | 1,000^3
TiB (tebibyte) | 1,024^4 | TB (terabyte) | 1,000^4
PiB (pebibyte) | 1,024^5 | PB (petabyte) | 1,000^5
EiB (exbibyte) | 1,024^6 | EB (exabyte) | 1,000^6
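A conversion between the two bases, following Table 7, can be sketched as follows (the function name is illustrative):

```python
def to_unit(size_bytes: int, unit: str) -> float:
    """Convert a byte count using base-2 (KiB, MiB, ...) or
    base-10 (KB, MB, ...) units, per Table 7."""
    base2 = {"KiB": 1, "MiB": 2, "GiB": 3, "TiB": 4, "PiB": 5, "EiB": 6}
    base10 = {"KB": 1, "MB": 2, "GB": 3, "TB": 4, "PB": 5, "EB": 6}
    if unit in base2:
        return size_bytes / 1024 ** base2[unit]
    if unit in base10:
        return size_bytes / 1000 ** base10[unit]
    raise ValueError(f"unknown unit: {unit}")
```

This also shows why the same volume reads differently in different tools: one billion bytes is exactly 1 GB but only about 0.931 GiB.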
The locale setting determines the character used for the decimal (radix) point, as shown below.
Table 8 Decimal (radix) point character by locale
Language | Character | Examples
English, Chinese, Japanese, Korean | Period (.) | 146.81 GB; 3.0 Gb/s
Dutch, French, German, Italian, Spanish | Comma (,) | 146,81 GB; 3,0 Gb/s
Related topics
About user accounts on page 20
About the system date and time
You can change the storage system's date and time, which are displayed in the System Status panel. It is important to set the date and time so that entries in system logs and event-notification email messages have correct time stamps.
You can set the date and time manually or configure the system to use Network Time Protocol (NTP) to obtain them from a network-attached server. When NTP is enabled, and if an NTP server is available, the system time and date can be obtained from the NTP server. This allows multiple storage devices, hosts, log files, and so forth to be synchronized. If NTP is enabled but no NTP server is present, the date and time are maintained as if NTP were not enabled.
NTP server time is provided in Coordinated Universal Time (UTC), which gives you several options:
If you want to synchronize the times and logs between storage devices installed in multiple time zones, set all the storage devices to use UTC.
If you want to use the local time for a storage device, set its time zone offset.
If a time server can provide local time rather than UTC, configure the storage devices to use that time server, with no further time adjustment.
Whether NTP is enabled or disabled, the storage system does not automatically make time adjustments, such as for U.S. daylight saving time. You must make such adjustments manually.
Related topics
Changing the system date and time on page 46
About storage-space color codes
SMU panels use the following color codes to identify how storage space is used.
Table 9 Storage-space color codes
Area | Meaning
Overview panels | Total space
Overview panels | Available/free space
Overview panels | Used space
Overview panels | Reserved/overhead space, used for parity and snap pools, for example
Vdisk panels | Space used by spares
Vdisk panels | Wasted space, due to use of mixed disk sizes
About Configuration View icons
The Configuration View panel uses the following icons to let you view physical and logical components of the storage system.
Table 10 Configuration View icons
Icon Meaning
Show all subcomponents
Hide all subcomponents
Show the component’s subcomponents
Hide the component’s subcomponents
Storage system
Enclosure
Host/initiator
Vdisk
Standard or master volume
Snapshot
Snap pool
Replication-prepared volume
Local primary volume
Local secondary volume
Local replication image
Remote primary volume
Remote secondary volume
Remote replication image
About vdisk reconstruction
If one or more disks fail in a redundant vdisk (RAID 1, 3, 5, 6, 10, or 50) and compatible spares are available, the storage system automatically uses the spares to reconstruct the vdisk. Vdisk reconstruction does not require I/O to be stopped, so the vdisk can continue to be used while the Reconstruct utility runs.
A properly sized spare is one whose capacity is equal to or greater than the smallest disk in the vdisk. A compatible spare has enough capacity to replace the failed disk and is the same type (SAS or SATA). If no compatible spares are available, reconstruction does not start automatically. To start reconstruction manually, replace each failed disk and then do one of the following:
Add each new disk as either a dedicated spare or a global spare. Remember that a global spare might be taken by a different critical vdisk than the one you intended.
Enable the Dynamic Spare Capability option to use the new disks without designating them as spares.
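The compatibility rule stated above (same disk type, and capacity at least that of the smallest disk in the vdisk) can be checked mechanically. This is an illustrative sketch; the function name and the dictionary fields are assumptions, not the SMU's data model:

```python
def compatible_spare(spare: dict, failed_disk: dict, vdisk_disks: list) -> bool:
    """A compatible spare is the same type as the failed disk (SAS or
    SATA) and at least as large as the smallest disk in the vdisk."""
    smallest = min(d["size_gb"] for d in vdisk_disks)
    return (spare["type"] == failed_disk["type"]
            and spare["size_gb"] >= smallest)
```

For example, a 146-GB SAS disk is not a compatible spare for a vdisk whose smallest member is 300 GB, and a SATA disk is never a compatible spare for a SAS vdisk.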
Reconstructing a RAID-6 vdisk to a fault-tolerant state requires two compatible spares to be available.
If two disks fail and only one compatible spare is available, an event indicates that reconstruction is about to start. The Reconstruct utility starts to run, using the spare, but its progress remains at 0% until a second compatible spare is available.
If a disk fails during online initialization, the initialization fails. In order to generate the two sets of parity that RAID 6 requires, the controller fails a second disk in the vdisk, which changes the vdisk status to Critical, and then assigns that disk as a spare for the vdisk. The Reconstruct utility starts to run, using the spare, but its progress remains at 0% until a second compatible spare is available.
The second available spare can be an existing global spare, another existing spare for the vdisk, or a replacement disk that you designate as a spare or that is automatically taken when dynamic sparing is enabled.
During reconstruction, you can continue to use the vdisk. When a global spare replaces a disk in a vdisk, the global spare’s icon in the enclosure view changes to match the other disks in that vdisk.
NOTE: Reconstruction can take hours or days to complete, depending on the vdisk RAID level and size,
disk speed, utility priority, and other processes running on the storage system. You can stop reconstruction only by deleting the vdisk.
About data protection in a single-controller storage system
A P2000 G3 MSA System can be purchased or operated with a single controller. Because single-controller mode is not a redundant configuration, this section presents some considerations concerning data protection.
A volume’s default caching mode is write back, as opposed to write through. In write-back mode, data is held in controller cache until it is written to disk. In write-through mode, data is written directly to disk.
If the controller fails while in write-back mode, unwritten cache data likely exists. The same is true if the controller enclosure or the target volume's enclosure is powered off without a proper shut down. Data remains in the controller's cache and associated volumes will be missing that data. This can result in data loss or in some cases volume loss; for example, if using snapshot functionality a snap pool might become inaccessible and the master volume could go offline.
If the controller can be brought back online long enough to perform a proper shut down, the controller should be able to write its cache to disk without causing data loss.
If the controller cannot be brought back online long enough to write its cache data to disk, you can move its CompactFlash cache card to a replacement controller. This enables the cache data to be available when the new controller comes online. The CompactFlash card is externally accessible from the back of the controller.
To avoid the possibility of data loss if the controller fails, you can change a volume's caching mode to write through. Although this causes significant performance degradation, it guards against data loss. Write-back mode is much faster, but it is not guaranteed against data loss in the case of a controller failure. If data protection is more important, use write-through caching; if performance is more important, use write-back caching.
For details about caching modes see About volume cache options on page 25. To change a volume’s caching mode, see Changing a volume's cache settings on page 57.
2 Configuring the system
Using the Configuration Wizard
The Configuration Wizard helps you initially configure the system or change system configuration settings.
The wizard guides you through the following steps. For each step you can view help by clicking the help icon in the wizard panel. As you complete steps they are highlighted at the bottom of the panel. If you cancel the wizard at any point, no changes are made.
• Change passwords for the default users
• Configure each controller's network port
• Enable or disable system-management services
• Enter information to identify the system
• Configure event notification
• Configure controller host ports
• Confirm changes and apply them
When you complete this wizard you are given the option to start the Provisioning Wizard to provision storage.
Step 1: Starting the wizard
1. In the Configuration View panel, right-click the system and select either Configuration > Configuration
Wizard or Wizards > Configuration Wizard. The wizard panel appears.
2. Click Next to continue.
Step 2: Changing default passwords
The system provides the default users manage and monitor. To secure the storage system, set a new password for each default user. A password is case sensitive. A password cannot include a comma, double quote, or backslash. Though optional, passwords are highly recommended to ensure system security.
Click Next to continue.
Step 3: Configuring network ports
You can configure addressing parameters for each controller's network port. You can set static IP values or use DHCP.
In DHCP mode, network port IP address, subnet mask, and gateway values are obtained from a DHCP server if one is available. If a DHCP server is unavailable, current addressing is unchanged. You must have some means of determining what addresses have been assigned, such as the list of bindings on the DHCP server.
Each controller has the following factory-default IP settings:
• DHCP: enabled
• Controller A IP address: 10.0.0.2
• Controller B IP address: 10.0.0.3
• IP subnet mask: 255.255.255.0
• Gateway IP address: 10.0.0.1
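These defaults place both controllers and the gateway on one /24 subnet, which can be checked with Python's standard ipaddress module. This is a verification sketch, not part of the product:

```python
import ipaddress

# Factory-default management network: 255.255.255.0 mask around the gateway
network = ipaddress.ip_network("10.0.0.0/255.255.255.0")

defaults = {
    "gateway": "10.0.0.1",
    "controller A": "10.0.0.2",
    "controller B": "10.0.0.3",
}
# Every default address must fall inside the management subnet
on_subnet = {name: ipaddress.ip_address(addr) in network
             for name, addr in defaults.items()}
```

A management host outside 10.0.0.0/24 (for example, 10.0.1.x) would need routing through the gateway to reach the controllers.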
When DHCP is enabled, the following initial values are set and remain set until the system is able to contact a DHCP server for new addresses:
• Controller A IP address: 10.0.0.2
• Controller B IP address: 10.0.0.3
• IP subnet mask: 255.255.255.0
• Gateway IP address: 0.0.0.0
CAUTION: Changing IP settings can cause management hosts to lose access to the storage system.
To use DHCP to obtain IP values for network ports
1. Set IP address source to DHCP.
2. Click Next to continue.
To set static IP values for network ports
1. Determine the IP address, subnet mask, and gateway values to use for each controller.
2. Set IP address source to manual.
3. Set the values for each controller. You must set a unique IP address for each network port.
4. Click Next to continue.
Step 4: Enabling system-management services
You can enable or disable management-interface services to limit the ways in which users and host-based management applications can access the storage system. Network management interfaces operate out-of-band and do not affect host I/O to the system. The network options are:
• Web Browser Interface (WBI). The primary interface for managing the system. You can enable use of HTTP, of HTTPS for increased security, or both.
• Command Line Interface (CLI). An advanced user interface for managing the system. You can enable use of Telnet, of SSH (secure shell) for increased security, or both.
• Storage Management Initiative Specification (SMI-S). Used for remote management of the system through your network. SMI-S is a Storage Networking Industry Association (SNIA) standard that enables interoperable management for storage networks and storage devices. SMI-S replaces multiple disparate managed object models, protocols, and transports with a single object-oriented model for each type of component in a storage network. The specification was created by SNIA to standardize storage management solutions. SMI-S enables management applications to support storage devices from multiple vendors quickly and reliably because they are no longer proprietary. SMI-S detects and manages storage elements by type, not by vendor. For more information about SMI-S, see “Introduction to SMI-S for HP Systems Insight Manager” at http://h18006.www1.hp.com/storage/pdfs/introsmis.pdf.
• File Transfer Protocol (FTP). A secondary interface for installing firmware updates, downloading logs, and installing a license.
• Simple Network Management Protocol (SNMP). Used for remote monitoring of the system through your network.
• Service Debug. Used for technical support only.
In-band management interfaces operate through the data path and can slightly reduce I/O performance. The in-band option is:
• In-band SES Capability. Used for in-band monitoring of system status based on SCSI Enclosure Services (SES) data.
If a service is disabled, it continues to run but cannot be accessed. To allow specific users to access WBI, CLI, or FTP, see About user accounts on page 20.
To change management interface settings
1. Enable the options that you want to use to manage the storage system, and disable the others.
2. Click Next to continue.
Step 5: Setting system information
Enter a name, contact person, location, and description for the system. The name is shown in the browser title bar or tab. The name, location, and contact are included in event notifications. All four values are recorded in system debug logs for reference by service personnel.
Click Next to continue.
Step 6: Configuring event notification
Configure up to four email addresses and three SNMP trap hosts to receive notifications of system events.
1. In the Email Configuration section, set the options:
• Notification Level. Select the minimum severity for which the system should send notifications:
Critical (only); Error (and Critical); Warning (and Error and Critical); Informational (all). The default is none, which disables email notification.
• SMTP Server address. The IP address of the SMTP mail server to use for the email messages. If the
mail server is not on the local network, make sure that the gateway IP address was set in the network configuration step.
• Sender Name. The sender name that, with the domain name, forms the “from” address for remote
notification. Because this name is used as part of an email address, do not include spaces. For example: Storage-1. If no sender name is set, a default name is created.
• Sender Domain. The domain name that, with the sender name, forms the “from” address for remote
notification. Because this name is used as part of an email address, do not include spaces. For example: MyDomain.com. If no domain name is set here, the default domain value is used. If the domain name is not valid, some email servers will not process the mail.
• Email Address fields. Up to four email addresses that the system should send notifications to. Email
addresses must use the format user-name@domain-name. For example: Admin@MyDomain.com.
2. In the SNMP Configuration section, set the options:
• Notification Level. Select the minimum severity for which the system should send notifications:
Critical (only); Error (and Critical); Warning (and Error and Critical); Informational (all). The default is none, which disables SNMP notification.
• Read Community. The SNMP read password for your network. The value is case sensitive and can include letters, numbers, hyphens, and underscores. The default is public.
• Write Community. The SNMP write password for your network. The value is case sensitive and can include letters, numbers, hyphens, and underscores. The default is private.
• Trap Host Address fields. IP addresses of up to three host systems that are configured to receive
SNMP traps.
3. Click Next to continue.
Step 7: Configuring host ports
To enable the system to communicate with hosts or with remote systems, you must configure the system's host-interface options. There are options for FC and iSCSI ports but not for SAS ports.
For FC ports you can set these options:
• Speed can be set to auto (the default), which auto-negotiates the proper link speed with the host, or to 8Gb (Gbit per second), 4Gb, or 2Gb. Use auto if the port is directly connected to a host or switch. Because a speed mismatch prevents communication between the port and host, set a speed only if you need to force the port to use a known speed for testing, or you need to specify a mutually supported speed for more than two FC devices connected in an arbitrated loop.
• Connection mode can be set to loop (the default), point-to-point, or auto. Loop protocol can be used in a physical loop or in a direct physical connection between two devices. Point-to-point protocol can only be used on a direct physical connection between exactly two devices. Auto sets the mode based on the detected connection type.
• Loop IDs can be set, per controller, to use soft or hard target addressing:
• Soft target addressing, which is the default, enables a LIP (loop initialization process) to determine the loop ID. Use this setting if the loop ID is permitted to change after a LIP or power cycle.
• Hard target addressing requests a specific loop ID that should remain after a LIP or power cycle. If the port cannot acquire the specified ID, it is assigned a soft target address. Use this option if you want ports to have specific addresses, if your system checks addresses in reverse order (lowest address first), or if an application requires that specific IDs be assigned to recognize the controller.
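The soft/hard addressing fallback described above can be modeled roughly as follows. This is a hypothetical sketch; the firmware's actual soft-address selection during a LIP is not documented here, so the lowest-free-ID choice is an assumption.

```python
def resolve_loop_id(requested_hard_id, ids_in_use):
    """Return the requested hard target ID if it is free; otherwise fall
    back to a soft target address assigned during loop initialization."""
    if requested_hard_id is not None and requested_hard_id not in ids_in_use:
        return requested_hard_id
    # Soft addressing: pick the first free ID in the valid range 0-125
    return next(i for i in range(126) if i not in ids_in_use)
```

A requested hard ID survives power cycles only while no other device on the loop holds it; otherwise the port behaves as if soft addressing were selected.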
For iSCSI ports you can set these options:
• IP Address. The port IP address.
• Netmask. The port netmask address.
• Gateway. The port gateway address.
• Authentication (CHAP). Enables or disables use of Challenge Handshake Authentication Protocol. Disabled by default.
• Jumbo Frames. Enables or disables support for jumbo frames. A normal frame can contain 1500 bytes whereas a jumbo frame can contain a maximum of 9000 bytes for larger data transfers. Disabled by default.
NOTE: Use of jumbo frames can succeed only if jumbo-frame support is enabled on all network
components in the data path.
• Link Speed.
• Auto: Auto-negotiates the proper speed. This is the default.
• 1 Gbit/s: Forces the speed to 1 Gbit/sec, overriding a downshift that can occur during auto-negotiation with 1-Gbit/sec HBAs. This setting does not apply to 10-Gbit/sec HBAs.
• iSCSI IP Version. Specifies whether IP values use Internet Protocol version 4 (IPv4) or version 6 (IPv6) format. IPv4 uses 32-bit addresses. IPv6 uses 128-bit addresses.
• iSNS. Enables or disables registration with a specified Internet Storage Name Service server, which provides name-to-IP-address mapping. Disabled by default.
• iSNS Address. Specifies the IP address of an iSNS server. The default address is all zeroes.
• Alternate iSNS Address. Specifies the IP address of an alternate iSNS server, which can be on a different subnet. The default address is all zeroes.
For SAS ports there are no host-interface options. Click Next to continue.
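The jumbo-frame trade-off described above is easy to quantify: fewer, larger frames mean less per-frame processing overhead. A rough calculation, ignoring protocol headers:

```python
import math

def frames_needed(payload_bytes: int, frame_payload: int) -> int:
    """Number of frames required to carry a payload of the given size."""
    return math.ceil(payload_bytes / frame_payload)

one_mib = 1024 * 1024
normal_frames = frames_needed(one_mib, 1500)   # 700 normal frames per MiB
jumbo_frames = frames_needed(one_mib, 9000)    # 117 jumbo frames per MiB
```

Roughly six times fewer frames per MiB transferred, which is why jumbo frames help only for larger data transfers and only when every component in the path supports them.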
To change FC host-interface settings
1. For controller host ports that are attached to hosts:
• Set the speed to the proper value to communicate with the host.
• Set the connection mode.
2. For each controller, set the loop ID to use soft or hard target addressing. To use soft target addressing,
select Soft?. To use hard target addressing, clear Soft? and enter an address in the range 0–125. You cannot set the same hard target address for both controllers. An asterisk indicates that the value shown will be changed.
3. Click Next to continue.
To change iSCSI host-interface settings
1. For each iSCSI port, set the IP address, netmask, and gateway. Ensure that each iSCSI host port in the
storage system is assigned a different IP address.
2. For all iSCSI ports, set the authentication, jumbo frames, link speed, and iSNS options.
3. Click Next to continue.
Step 8: Confirming configuration changes
Confirm that the values listed in the wizard panel are correct.
If they are not correct, click Previous to return to previous steps and make necessary changes.
If they are correct, click Finish to apply the setting changes and finish the wizard.
NOTE: If you changed a controller’s FC loop ID setting, you must restart the controller to make the change
take effect.
Installing a license
A license is required to expand Snapshot limits and to use Replication. The license is specific to a controller enclosure serial number and firmware version.
If a permanent license is not installed and you want to try these features before buying a permanent license, you can create a temporary license one time. A temporary license will expire 60 days from the time it is created. After creating a temporary license, each time you sign in to SMU, a message specifies the time remaining in the trial period. If you do not install a permanent license before the temporary license expires, you cannot create new items with these features; however, you can continue to use existing items.
After a temporary license is created or a permanent license is installed, the option to create a temporary license is no longer displayed.
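The 60-day trial window can be modeled as below. This is an illustrative sketch; SMU tracks expiry internally, and these function and constant names are assumptions.

```python
from datetime import date, timedelta

TRIAL_DAYS = 60

def days_remaining(created: date, today: date) -> int:
    """Days left in a temporary license's trial period (0 once expired)."""
    expires = created + timedelta(days=TRIAL_DAYS)
    return max((expires - today).days, 0)
```

This is the countdown that SMU reports each time you sign in during the trial period.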
To view information about system licenses
In the Configuration View panel, right-click the system and select Tools > Install License.
The System Licenses table shows the following information about licensed features:
Feature. The name of the licensed feature.
Base. Either:
• The number of components that users can create without a license.
• N/A. Not applicable.
License. Either:
• The number of user-created components that the installed license supports.
• Enabled or Disabled.
In Use. Either:
• The number of user-created components that exist.
• N/A. Not applicable.
Max Licensable. Either:
• The number of user-created components that the maximum license supports.
• N/A. Not applicable.
Expiration. One of the following:
• Never. License doesn’t expire.
• Number of days remaining for a temporary license.
• Expired. Temporary license has expired and cannot be renewed.
• N/A. No license installed.
The panel also shows the licensing serial number (the controller enclosure serial number) and the licensing version number (the controller firmware version). A license file must be generated for this serial number and version in order to install successfully.
To create a temporary license
1. In the Configuration View panel, right-click the system and select Tools > Install License. If the option to
create a temporary license is available, the End User License Agreement appears in the lower portion of the license panel.
2. Read the license agreement.
3. If you accept the terms of the license agreement, select the checkbox. A confirmation dialog appears.
4. Click Yes to start the trial period. The feature’s Expiration value shows the number of days remaining in
the trial period; the trial period will expire on the last day. When the trial period expires, the value changes to Expired or Expired/Renewable.
To install a permanent license
1. Ensure that:
• The license file is saved to a network location that SMU can access.
• You are signed into the controller enclosure that the file was generated for.
2. In the Configuration View panel, right-click the system and select Tools > Install License.
3. Click Browse to locate and select the license file.
4. Click Install License File. If installation succeeds, the System Licenses table is updated. The licensing
change takes effect immediately. The feature’s Expiration value shows Never.
Configuring system services
Changing management interface settings
You can enable or disable management interfaces to limit the ways in which users and host-based management applications can access the storage system. Network management interfaces operate out-of-band and do not affect host I/O to the system. The network options are:
• Web Browser Interface (WBI). The primary interface for managing the system. You can enable use of HTTP, of HTTPS for increased security, or both.
• Command Line Interface (CLI). An advanced user interface for managing the system. You can enable use of Telnet, of SSH (secure shell) for increased security, or both.
• Storage Management Initiative Specification (SMI-S). Used for remote management of the system through your network.
• File Transfer Protocol (FTP). A secondary interface for installing firmware updates, downloading logs, and installing a license.
• Simple Network Management Protocol (SNMP). Used for remote monitoring of the system through your network.
• Service Debug. Used for technical support only.
In-band management interfaces operate through the data path and can slightly reduce I/O performance. The in-band option is:
• In-band SES Capability. Used for in-band monitoring of system status based on SCSI Enclosure Services (SES) data.
If a service is disabled, it continues to run but cannot be accessed. To allow specific users to access WBI, CLI, or FTP, see About user accounts on page 20.
To change management interface settings
1. In the Configuration View panel, right-click the system and select Configuration > Services >
Management.
2. Enable the options that you want to use to manage the storage system, and disable the others.
3. Click Apply. If you disabled any options, a confirmation dialog appears.
4. Click Yes to continue; otherwise, click No. If you clicked Yes, a processing dialog appears. When
processing is complete a success dialog appears.
5. Click OK.
Configuring email notification
To configure email notification of events
1. In the Configuration View panel, right-click the system and select Configuration > Services > Email
Notification.
2. In the main panel, set the options:
• Notification Level. Select the minimum severity for which the system should send notifications:
Critical (only); Error (and Critical); Warning (and Error and Critical); Informational (all). The default is none (Disabled), which disables email notification.
• SMTP Server address. The IP address of the SMTP mail server to use for the email messages. If the
mail server is not on the local network, make sure that the gateway IP address is set in System Settings > Network Interfaces.
• Sender Name. The sender name that, with the domain name, forms the “from” address for remote
notification. Because this name is used as part of an email address, do not include spaces. For example: Storage-1. If no sender name is set, a default name is created.
• Sender Domain. The domain name that, with the sender name, forms the “from” address for remote
notification. Because this name is used as part of an email address, do not include spaces. For example: MyDomain.com. If no domain name is set here, the default domain value is used. If the domain name is not valid, some email servers will not process the mail.
• Email Address fields. Up to four email addresses that the system should send notifications to. Email
addresses must use the format user-name@domain-name. For example: Admin@MyDomain.com.
3. Click Apply.
4. Optionally, send a test message to the configured destinations as described on page 84.
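The way the “from” address is assembled from Sender Name and Sender Domain can be sketched as follows. This is a hypothetical helper mirroring the rules above, not SMU code:

```python
def notification_from_address(sender_name: str, sender_domain: str) -> str:
    """Build the notification "from" address; neither part may contain
    spaces because the result must be a valid email address."""
    for part in (sender_name, sender_domain):
        if " " in part:
            raise ValueError("sender name and domain must not contain spaces")
    return f"{sender_name}@{sender_domain}"
```

For example, a sender name of Storage-1 and a domain of MyDomain.com yield the address Storage-1@MyDomain.com.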
Configuring SNMP notification
To configure SNMP notification of events
1. In the Configuration View panel, right-click the system and select Configuration > Services > SNMP
Notification.
2. In the main panel, set the options:
• Notification Level. Select the minimum severity for which the system should send notifications:
Critical (only); Error (and Critical); Warning (and Error and Critical); Informational (all). The default is none, which disables SNMP notification.
• Read Community. The SNMP read password for your network. The value is case sensitive and can
include letters, numbers, hyphens, and underscores. The default is public.
• Write Community. The SNMP write password for your network. The value is case sensitive and can
include letters, numbers, hyphens, and underscores. The default is private.
• Trap Host Address fields. IP addresses of up to three host systems that are configured to receive
SNMP traps.
3. Click Apply.
4. Optionally, send a test message to the configured destinations as described on page 84.
Configuring user accounts
Adding users
To add a user
1. In the Configuration View panel, right-click the system and select Configuration > Users > Add User.
2. In the main panel, set the options:
• User Name. A user name is case sensitive and cannot already exist in the system. A name cannot include a comma, double quote, or backslash.
NOTE: The user name admin is reserved for internal use.
• Password. A password is case sensitive. A password cannot include a comma, double quote, or backslash. Though optional, passwords are highly recommended to ensure system security.
• User Roles. Select Monitor to let the user view system settings, or Manage to let the user view and change system settings. You cannot change the role of user manage.
• User Type. Select Standard to allow access to standard functions, or Advanced to allow access to all functions except diagnostic functions, or Diagnostic to allow access to all functions.
NOTE: This release has no functions that require Advanced or Diagnostic access; a Standard user
can access all functions.
• WBI Access. Allows access to the web-based management interface.
• CLI Access. Allows access to the command-line management interface.
• FTP Access. Allows access to the file transfer protocol interface, which provides a way to install firmware updates and download logs.
• Base Preference. Select the base for entry and display of storage-space sizes. In base 2, sizes are shown as powers of 2, using 1024 as a divisor for each magnitude. In base 10, sizes are shown as powers of 10, using 1000 as a divisor for each magnitude. Operating systems usually show volume size in base 2. Disk drives usually show size in base 10. Memory (RAM and ROM) size is always shown in base 2.
• Precision Preference. Select the number of decimal places (1–10) for display of storage-space sizes.
• Unit Preference. Select the unit for display of storage-space sizes. Select Auto to let the system determine the proper unit for a size. Based on the precision setting, if the selected unit is too large to meaningfully display a size, the system uses a smaller unit for that size. For example, if the unit is set to TB and the precision is set to 1, the size 0.11709 TB is shown as 119.9 GB.
• Temperature Preference. Specifies to use either the Celsius scale or the Fahrenheit scale for temperature values.
• Auto Sign Out. Select the amount of time that the user's session can be idle before the user is automatically signed out: 5, 15, or 30 minutes, or Never (9999 minutes). The default is 30 minutes.
• Locale. The user's preferred display language, which overrides the system's default display language. Installed language sets include Chinese-simplified, Chinese-traditional, Dutch, English, French, German, Italian, Japanese, Korean, and Spanish.
3. Click Add User.
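The base, precision, and Auto-unit behavior of the preference settings above can be sketched like this. It is an approximation of the display logic, not the actual SMU code:

```python
UNITS = ["B", "KB", "MB", "GB", "TB"]

def format_size(size_bytes: float, base: int = 2, precision: int = 1) -> str:
    """Scale a byte count by 1024 (base 2) or 1000 (base 10), stopping at
    the largest unit in which the value is at least 1."""
    divisor = 1024 if base == 2 else 1000
    value, unit = float(size_bytes), UNITS[0]
    for next_unit in UNITS[1:]:
        if value < divisor:
            break
        value /= divisor
        unit = next_unit
    return f"{value:.{precision}f} {unit}"
```

With base 2 and precision 1, a size of 0.11709 TB (0.11709 × 1024⁴ bytes) is displayed as 119.9 GB, matching the example above.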
Modifying users
To modify a user
1. In the Configuration View panel, right-click the system and select Configuration > Users > Modify User.
2. In the main panel, select the user to modify.
3. Set the options:
• User Name. A user name is case sensitive and cannot already exist in the system. A name cannot include a comma, double quote, or backslash.
• Password. A password is case sensitive. A password cannot include a comma, double quote, or backslash. Though optional, passwords are highly recommended to ensure system security.
• User Roles. Select Monitor to let the user view system settings, or Manage to let the user view and change system settings. You cannot change the role of user manage.
• User Type. Select Standard to allow access to standard functions, or Advanced to allow access to all functions except diagnostic functions, or Diagnostic to allow access to all functions.
NOTE: This release has no functions that require Advanced or Diagnostic access; a Standard user
can access all functions.
• WBI Access. Allows access to the web-based management interface.
• CLI Access. Allows access to the command-line management interface.
• FTP Access. Allows access to the file transfer protocol interface, which provides a way to install firmware updates and download logs.
• Base Preference. Select the base for entry and display of storage-space sizes. In base 2, sizes are shown as powers of 2, using 1024 as a divisor for each magnitude. In base 10, sizes are shown as powers of 10, using 1000 as a divisor for each magnitude. Operating systems usually show volume size in base 2. Disk drives usually show size in base 10. Memory (RAM and ROM) size is always shown in base 2.
• Precision Preference. Select the number of decimal places (1–10) for display of storage-space sizes.
• Unit Preference. Select the unit for display of storage-space sizes. Select Auto to let the system determine the proper unit for a size. Based on the precision setting, if the selected unit is too large to meaningfully display a size, the system uses a smaller unit for that size. For example, if the unit is set to TB and the precision is set to 1, the size 0.11709 TB is shown as 119.9 GB.
• Temperature Preference. Specifies to use either the Celsius scale or the Fahrenheit scale for temperature values.
• Auto Sign Out. Select the amount of time that the user's session can be idle before the user is automatically signed out: 5, 15, or 30 minutes, or Never (9999 minutes). The default is 30 minutes.
• Locale. The user's preferred display language, which overrides the system's default display language. Installed language sets include Chinese-simplified, Chinese-traditional, Dutch, English, French, German, Italian, Japanese, Korean, and Spanish.
4. Click Modify User.
Removing users
To remove a user
1. In the Configuration View panel, right-click the system and select Configuration > Users > Remove User.
2. In the main panel, select the user to remove. You cannot remove the manage user.
3. Click Remove User. A confirmation dialog appears.
4. Click Remove to continue; otherwise, click Cancel. If you clicked Remove, a processing dialog appears.
When processing is complete, the user is removed from the table.
5. Click OK.
Configuring system settings
Changing the system date and time
You can enter values manually for the system date and time, or you can set the system to use NTP as explained in About the system date and time on page 33. Date and time values use Coordinated Universal Time (UTC).
To use manual date and time settings
1. In the Configuration View panel, right-click the system and select Configuration > System Settings >
Date, Time. The date and time options appear.
2. Set the options:
• Time. Enter the time in the format hh:mm:ss, where hh is the hour (0–23), mm is the minutes (0–59),
and ss is the seconds (0–59).
• Month.
• Day.
• Year. Enter the year using four digits.
• Network Time Protocol (NTP). Select Disabled.
3. Click Apply.
To obtain the date and time from an NTP server
1. In the Configuration View panel, right-click the system and select Configuration > System Settings >
Date, Time. The date and time options appear.
2. Set the options:
• Network Time Protocol (NTP). Select Enabled.
• NTP Time Zone Offset. Optional. The system's time zone as an offset in hours from UTC. For
example, the Pacific Time Zone offset is -8 during Pacific Standard Time or -7 during Pacific Daylight Time.
• NTP Server Address. Optional. If the system should retrieve time values from a specific NTP server,
enter the address of an NTP server. If no IP server address is set, the system listens for time messages sent by an NTP server in broadcast mode.
3. Click Apply.
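Since the system stores date and time values in UTC, the NTP Time Zone Offset simply shifts displayed values by a whole number of hours. A minimal conversion sketch:

```python
from datetime import datetime, timedelta

def to_local(utc_time: datetime, offset_hours: int) -> datetime:
    """Apply an NTP time zone offset in hours to a UTC timestamp."""
    return utc_time + timedelta(hours=offset_hours)

# Pacific Standard Time is UTC-8: noon UTC displays as 4:00 a.m. local
local = to_local(datetime(2010, 9, 1, 12, 0), -8)
```

Remember to change the offset manually when daylight saving time begins or ends (for example, from -8 to -7 for Pacific Daylight Time), because the system applies only the fixed offset you set.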
Changing host interface settings
To enable the system to communicate with hosts or with remote systems, you must configure the system's host-interface options. There are options for FC and iSCSI ports but not for SAS ports.
To change FC host interface settings
1. In the Configuration View panel, right-click the system and select Configuration > System Settings >
Host Interfaces.
2. Set the speed to the proper value to communicate with the host. Speed can be set to auto (the default),
which auto-negotiates the proper link speed with the host, or to 8Gb (Gbit per second), 4Gb, or 2Gb. Use auto if the port is directly connected to a host or switch. Because a speed mismatch prevents communication between the port and host, set a speed only if you need to force the port to use a known speed for testing, or you need to specify a mutually supported speed for more than two FC devices connected in an arbitrated loop.
3. Set the connection mode to loop (the default), point-to-point, or auto. Loop protocol can be used in a
physical loop or in a direct physical connection between two devices. Point-to-point protocol can only be used on a direct physical connection between exactly two devices. Auto sets the mode based on the detected connection type.
4. Set the loop ID for each controller to request when the controller arbitrates during a LIP. A controller can
use soft or hard target addressing:
• Soft target addressing, which is the default, enables a LIP (loop initialization process) to determine the loop ID. Use this setting if the loop ID is permitted to change after a LIP or power cycle. To use this option, select Soft.
• Hard target addressing requests a specific loop ID that should remain after a LIP or power cycle. If the port cannot acquire the specified ID, it is assigned a soft target address. Use this option if you want ports to have specific addresses, if your system checks addresses in reverse order (lowest address first), or if an application requires that specific IDs be assigned to recognize the controller. To use this option, clear Soft and enter an address in the range 0–125. You cannot set the same hard target address for both controllers.
5. Click Apply. If you changed a loop ID setting, a message specifies that you must restart the controller to
make the change take effect. An asterisk indicates that the value shown will be changed.
To change iSCSI host interface settings
1. In the Configuration View panel, right-click the system and select Configuration > System Settings >
Host Interfaces.
2. Set the port-specific options:
• IP Address. For each controller, assign one port to one subnet and the other port to a second
subnet. Ensure that each iSCSI host port in the storage system is assigned a different IP address. For example:
• Controller A port 0: 10.10.10.100
• Controller A port 1: 10.11.10.120
• Controller B port 0: 10.10.10.110
• Controller B port 1: 10.11.10.130
• Netmask. IP subnet mask. The default is 255.255.255.0.
• Gateway. Gateway IP address. The default is 0.0.0.0.
CAUTION: Changing IP settings can cause data hosts to lose access to the storage system.
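The addressing scheme above can be sanity-checked before you apply it. This hypothetical Python sketch (not an SMU feature; the helper name is invented) verifies that each controller's two ports land on different subnets and that no address is reused:

```python
import ipaddress

# Example port plan from the text; the dictionary layout is illustrative only.
ports = {
    ("A", 0): "10.10.10.100",
    ("A", 1): "10.11.10.120",
    ("B", 0): "10.10.10.110",
    ("B", 1): "10.11.10.130",
}
NETMASK = "255.255.255.0"

def check_plan(ports, netmask):
    """Verify uniqueness and per-controller subnet separation."""
    addrs = list(ports.values())
    assert len(set(addrs)) == len(addrs), "each port needs a unique IP"
    for ctrl in ("A", "B"):
        nets = {
            ipaddress.ip_network(f"{ports[(ctrl, p)]}/{netmask}", strict=False)
            for p in (0, 1)
        }
        assert len(nets) == 2, f"controller {ctrl}: ports must be on different subnets"

check_plan(ports, NETMASK)
print("plan OK")
```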
3. Set the common options:
• Authentication (CHAP). Enables or disables use of Challenge Handshake Authentication Protocol.
Disabled by default.
• Jumbo Frames. Enables or disables support for jumbo frames. A normal frame can contain 1500
bytes whereas a jumbo frame can contain a maximum of 9000 bytes for larger data transfers. Disabled by default.
NOTE: Use of jumbo frames can succeed only if jumbo-frame support is enabled on all network
components in the data path.
• Link Speed.
• Auto: Auto-negotiates the proper speed. This is the default.
• 1 Gbit/s: Forces the speed to 1 Gbit/sec, overriding a downshift that can occur during auto-negotiation with 1-Gbit/sec HBAs. This setting does not apply to 10-Gbit/sec HBAs.
• iSCSI IP Version. Specifies whether IP values use Internet Protocol version 4 (IPv4) or version 6 (IPv6) format. IPv4 uses 32-bit addresses. IPv6 uses 128-bit addresses.
• iSNS. Enables or disables registration with a specified Internet Storage Name Service server, which provides name-to-IP-address mapping. Disabled by default.
• iSNS Address. Specifies the IP address of an iSNS server. The default address is all zeroes.
• Alternate iSNS Address. Specifies the IP address of an alternate iSNS server, which can be on a different subnet. The default address is all zeroes.
4. Click Apply.
Changing network interface settings
You can configure addressing parameters for each controller's network port. You can set static IP values or use DHCP.
In DHCP mode, network port IP address, subnet mask, and gateway values are obtained from a DHCP server if one is available. If a DHCP server is unavailable, current addressing is unchanged. You must have some means of determining what addresses have been assigned, such as the list of bindings on the DHCP server.
Each controller has the following factory-default IP settings:
DHCP: enabled
Controller A IP address: 10.0.0.2
Controller B IP address: 10.0.0.3
IP subnet mask: 255.255.255.0
Gateway IP address: 10.0.0.1
When DHCP is enabled, the following initial values are set and remain set until the system is able to contact a DHCP server for new addresses:
Controller A IP address: 10.0.0.2
Controller B IP address: 10.0.0.3
IP subnet mask: 255.255.255.0
Gateway IP address: 0.0.0.0
CAUTION: Changing IP settings can cause management hosts to lose access to the storage system.
To use DHCP to obtain IP values for network ports
1. In the Configuration View panel, right-click the system and select Configuration > System Settings >
Network Interfaces.
2. Set the IP address source to DHCP.
3. Click Apply. If the controllers successfully obtain IP values from the DHCP server, the new IP values are
displayed.
4. Record the new addresses.
5. Sign out and try to access SMU using the new IP addresses.
To set static IP values for network ports
1. Determine the IP address, subnet mask, and gateway values to use for each controller.
2. In the Configuration View panel, right-click the system and select Configuration > System Settings >
Network Interfaces.
3. Set the IP address source to manual.
4. Set the options for each controller. You must set a unique IP address for each network port.
5. Record the IP values you assign.
6. Click Apply.
7. Sign out and try to access SMU using the new IP addresses.
Setting system information
To set system information
1. In the Configuration View panel, right-click the system and select Configuration > System Settings >
System Information.
2. In the main panel, set the name, contact person or group, location, and other information about the
system. The name is shown in the browser title bar or tab. The name, location, and contact are included in event notifications. All four values are recorded in system debug logs for reference by service personnel.
3. Click Apply.
Configuring advanced settings
Changing disk settings
Configuring SMART
Self-Monitoring Analysis and Reporting Technology (SMART) provides data that enables you to monitor disks and analyze why a disk failed. When SMART is enabled, the system checks for SMART events one minute after a restart and every five minutes thereafter. SMART events are recorded in the event log.
To change the SMART setting
1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings >
Disk.
2. Set SMART Configuration to one of the following:
• Don’t Modify. Allows current disks to retain their individual SMART settings and does not change the setting for new disks added to the system.
• Enabled. Enables SMART for all current disks after the next rescan and automatically enables SMART for new disks added to the system. This option is the default.
• Disabled. Disables SMART for all current disks after the next rescan and automatically disables SMART for new disks added to the system.
3. Click Apply.
Configuring drive spin down for available disks and global spares
The drive spin down (DSD) feature monitors disk activity within system enclosures and spins down inactive disks. You can enable or disable DSD for available disks and global spares, and set the period of inactivity after which those disks automatically spin down.
To configure a time period to suspend and resume DSD for all disks, see Scheduling drive spin down for all
disks on page 50. To configure DSD for a vdisk, see Configuring drive spin down for a vdisk on page 56.
DSD affects disk operations as follows:
Spun-down disks are not polled for SMART events.
Operations requiring access to disks may be delayed while the disks are spinning back up.
To configure DSD for available disks and global spares
1. In the Configuration View panel, right-click the local system and select Configuration > Advanced
Settings > Disk.
2. Set the options:
• Either select (enable) or clear (disable) the Available and Spare Drive Spin Down Capability option.
If you are enabling DSD, a warning prompt appears; to use DSD, click Yes; to leave DSD disabled, click No.
• Set the Drive Spin Down Delay (minutes), which is the period of inactivity after which available disks
and global spares automatically spin down, from 1–360 minutes. If DSD is enabled and no delay value is set, the default is 15 minutes. The value 0 disables DSD.
3. Click Apply. When processing is complete a success dialog appears.
4. Click OK.
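The delay rules above (unset defaults to 15 minutes, 0 disables DSD, valid range 1-360) can be summarized in a small sketch; the function name is illustrative, not an SMU API:

```python
def effective_dsd_delay(delay=None):
    """Interpret the Drive Spin Down Delay per the rules above (sketch).

    None  -> DSD enabled with the 15-minute default
    0     -> DSD disabled (returns None)
    1-360 -> delay in minutes, returned unchanged
    """
    if delay is None:
        return 15
    if delay == 0:
        return None  # DSD disabled
    if not 1 <= delay <= 360:
        raise ValueError("delay must be 0 or 1-360 minutes")
    return delay

print(effective_dsd_delay())   # 15
print(effective_dsd_delay(0))  # None
```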
Scheduling drive spin down for all disks
For all disks that are configured to use drive spin down (DSD), you can configure a time period to suspend and resume DSD so that disks remain spun-up during hours of frequent activity.
To configure DSD for a vdisk, see Configuring drive spin down for a vdisk on page 56. To configure DSD for available disks and global spares, see Configuring drive spin down for available disks and global
spares on page 49.
DSD affects disk operations as follows:
Spun-down disks are not polled for SMART events.
Operations requiring access to disks may be delayed while the disks are spinning back up.
If a suspend period is configured and it starts while a disk has started spinning down, the disk spins up
again.
To schedule DSD for all disks
1. In the Configuration View panel, right-click the local system and select Configuration > Advanced
Settings > Disk.
2. Set the options:
• Select the Drive Spin Down Suspend Period option.
• Set a time to suspend and a time to resume DSD. For each, enter hour and minutes values and
select either AM, PM, or 24H (24-hour clock).
• If you want the schedule to apply only Monday through Friday, select the Exclude Weekend Days
from Suspend Period option.
3. Click Apply. When processing is complete a success dialog appears.
4. Click OK.
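A suspend period is a daily time window, possibly wrapping past midnight, with an optional weekend exclusion. This illustrative Python sketch (not SMU code) shows the logic:

```python
from datetime import time

def in_suspend_period(now, suspend, resume, weekday, exclude_weekends=False):
    """True if DSD should be suspended at `now` (a datetime.time).

    weekday: 0=Monday ... 6=Sunday, matching datetime.weekday().
    Handles suspend windows that wrap past midnight.
    """
    if exclude_weekends and weekday >= 5:
        return False
    if suspend <= resume:
        return suspend <= now < resume
    return now >= suspend or now < resume  # window wraps midnight

# Suspend DSD from 8:00 AM to 6:00 PM, weekdays only:
print(in_suspend_period(time(12, 0), time(8, 0), time(18, 0),
                        weekday=2, exclude_weekends=True))  # True
print(in_suspend_period(time(12, 0), time(8, 0), time(18, 0),
                        weekday=6, exclude_weekends=True))  # False
```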
Configuring dynamic spares
The dynamic spares feature lets you use all of your disks in redundant vdisks without designating a disk as a spare. With dynamic spares enabled, if a disk fails and you replace it with a compatible disk, the storage system rescans the bus, finds the new disk, automatically designates it a spare, and starts reconstructing the vdisk. A compatible disk has enough capacity to replace the failed disk and is the same type (SAS or SATA). If a dedicated spare, global spare, or compatible available disk is already present, the dynamic spares feature uses that disk to start the reconstruction and the replacement disk can be used for another purpose.
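The compatibility rule just described can be sketched as a simple check; the dictionary fields are illustrative only, not an SMU API:

```python
def is_compatible_spare(candidate, failed):
    """A disk can dynamically replace a failed one if it matches the
    failed disk's type (SAS or SATA) and has at least its capacity."""
    return (candidate["type"] == failed["type"]
            and candidate["capacity_gb"] >= failed["capacity_gb"])

failed = {"type": "SAS", "capacity_gb": 450}
print(is_compatible_spare({"type": "SAS", "capacity_gb": 600}, failed))   # True
print(is_compatible_spare({"type": "SATA", "capacity_gb": 750}, failed))  # False: wrong type
```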
To change the dynamic spares setting
1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings >
Disk.
2. Either select (enable) or clear (disable) the Dynamic Spare Capability option.
3. Click Apply.
Configuring the EMP polling rate
You can change the interval at which the storage system polls each attached enclosure's EMP for status changes. Typically you can use the default setting.
Increasing the interval might slightly improve processing efficiency, but changes in device status are
communicated less frequently. For example, this increases the amount of time before LEDs are updated to reflect status changes.
Decreasing the interval slightly decreases processing efficiency, but changes in device status are
communicated more frequently. For example, this decreases the amount of time before LEDs are updated to reflect status changes.
To change the EMP polling rate
1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings >
Disk.
2. Set the EMP Polling Rate interval. The default is 5 seconds.
3. Click Apply.
Changing system cache settings
Changing the synchronize-cache mode
You can control how the storage system handles the SCSI SYNCHRONIZE CACHE command. Typically you can use the default setting. However, if the system has performance problems or problems writing to databases or other applications, contact technical support to determine if you should change this option.
To change the synchronize-cache mode
1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings >
Cache.
2. Set Sync Cache Mode to either:
• Immediate. Good status is returned immediately and cache content is unchanged. This is the default.
• Flush to Disk. Good status is returned only after all write-back data for the specified volume is flushed to disk.
3. Click Apply.
Changing the missing LUN response
Some operating systems do not look beyond LUN 0 if they do not find a LUN 0 or cannot handle noncontiguous LUNs. The Missing LUN Response option handles these situations by enabling the host drivers to continue probing for LUNs until they reach the LUN to which they have access.
This option controls the SCSI sense data returned for volumes that are not accessible because they don't exist or have been hidden through volume mapping (this does not apply to volumes of offline vdisks). Use the default value unless a service technician asks you to change it to work around a host driver problem.
To change the missing LUN response
1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings >
Cache.
2. Set Missing LUN Response to either:
• Not Ready. Sends a reply that there is a LUN where a gap has been created but that it's “not ready.” Sense data returned is a Sense Key of 2h and an ASC/ASCQ of 04/03. This option is the default.
• Illegal Request. Sends a reply that there is a LUN but that the request is “illegal.” Sense data returned is a Sense Key of 5h and an ASC/ASCQ of 25/00.
3. Click Apply.
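The sense data quoted above for the two options can be tabulated as follows; this Python sketch is only a convenience for readers decoding host-side logs, not SMU code:

```python
# SCSI sense data returned for an inaccessible LUN, per the two options above.
MISSING_LUN_SENSE = {
    "Not Ready":       {"sense_key": 0x2, "asc": 0x04, "ascq": 0x03},  # default
    "Illegal Request": {"sense_key": 0x5, "asc": 0x25, "ascq": 0x00},
}

def sense_for(option):
    """Format the sense data the way the text quotes it."""
    s = MISSING_LUN_SENSE[option]
    return f"Sense Key {s['sense_key']:X}h, ASC/ASCQ {s['asc']:02X}/{s['ascq']:02X}"

print(sense_for("Not Ready"))        # Sense Key 2h, ASC/ASCQ 04/03
print(sense_for("Illegal Request"))  # Sense Key 5h, ASC/ASCQ 25/00
```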
Controlling host access to the system's write-back cache setting
You can prevent hosts from using SCSI MODE SELECT commands to change the system's write-back cache setting. Some operating systems disable write cache. If host control of write-back cache is disabled, the host cannot modify the cache setting. The default is Disabled.
This option is useful in some environments where the host disables the system's write-back cache, resulting in degraded performance.
To change host access to the write-back cache setting
1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings >
Cache.
2. Either select (enable) or clear (disable) the Host Control of Write-Back Cache option.
3. Click Apply.
Changing auto-write-through cache triggers and behaviors
You can set conditions that cause (“trigger”) a controller to change the cache mode from write-back to write-through, as described in About volume cache options on page 25. You can also specify actions for the system to take when write-through caching is triggered.
To change auto-write-through cache triggers and behaviors
1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings >
Cache.
2. In the Auto-Write Through Cache Trigger Conditions section, either select (enable) or clear (disable) the
options:
• Controller Failure. Changes to write-through if a controller fails. Disabled by default.
• Cache Power. Changes to write-through if cache backup power is not fully charged or fails. Enabled by default.
• CompactFlash. Changes to write-through if CompactFlash memory is not detected during POST, fails during POST, or fails during controller operation. Enabled by default.
• Power Supply Failure. Changes to write-through if a power supply unit fails. Disabled by default.
• Fan Failure. Changes to write-through if a cooling fan fails. Disabled by default.
• Overtemperature Failure. Forces a controller shutdown if a temperature is detected that exceeds system threshold limits. Disabled by default.
3. In the Auto-Write Through Cache Behaviors section, either select (enable) or clear (disable) the options:
• Revert when Trigger Condition Clears. Changes back to write-back caching after the trigger condition is cleared. Enabled by default.
• Notify Other Controller. Notifies the partner controller that a trigger condition occurred. Enable this option to have the partner also change to write-through mode for better data protection. Disable this option to allow the partner to continue using its current caching mode for better performance. Disabled by default.
4. Click Apply.
Configuring partner firmware update
In a dual-controller system in which partner firmware update is enabled, when you update firmware on one controller, the system automatically updates the partner controller. Disable partner firmware update only if requested by a service technician.
To change the partner firmware update setting
1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings >
Firmware.
2. Either select (enable) or clear (disable) the Partner Firmware Update option.
3. Click Apply.
Configuring system utilities
Configuring background scrub for vdisks
You can enable or disable background vdisk scrub, in which the system continuously analyzes disks in vdisks to detect, report, and store information about disk defects. Vdisk-level errors reported include: hard errors, media errors, and bad block replacements (BBRs). Disk-level errors reported include: metadata read errors, SMART events during scrub, bad blocks during scrub, and new disk defects during scrub. For RAID 3, 5, 6, and 50, the utility checks all parity blocks to find data-parity mismatches. For RAID 1 and 10, the utility compares the primary and secondary disks to find data inconsistencies. For NRAID and RAID 0, the utility checks for media errors.
You can use a vdisk while it is being scrubbed. Background vdisk scrub runs at background utility priority, which reduces to no activity if CPU usage is above a certain percentage or if I/O is occurring on the vdisk being scrubbed. A vdisk scrub may be in process on multiple vdisks at once. A new vdisk will first be scrubbed 20 minutes after creation. After a vdisk is scrubbed, scrub will start again after the interval specified by the Vdisk Scrub Interval option.
When a scrub is complete, an event with code 207 is logged that specifies whether errors were found. For details, see the Event Descriptions Reference Guide.
Enabling background vdisk scrub is recommended for both SATA and SAS disks.
TIP: If you choose to disable background vdisk scrub, you can still scrub a selected vdisk by using Media
Scrub Vdisk (page 85).
To configure background scrub for vdisks
1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings >
System Utilities.
2. Set the options:
• Either select (enable) or clear (disable) the Vdisk Scrub option. This option is enabled by default.
• Set the Vdisk Scrub Interval, which is the interval between background vdisk scrub finishing and
starting again, from 1–360 hours; the default is 24 hours.
3. Click Apply.
Configuring background scrub for disks not in vdisks
You can enable or disable background scrub for disks that are not in vdisks, in which the system continuously analyzes those disks to detect, report, and store information about disk defects. Errors reported include: metadata read errors, SMART events during scrub, bad blocks during scrub, and new disk defects during scrub. The interval between background disk scrub finishing and starting again is 24 hours.
Enabling background disk scrub is recommended for both SATA and SAS disks.
To configure background scrub for disks not in vdisks
1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings >
System Utilities.
2. Either select (enable) or clear (disable) the Disk Scrub option. This option is disabled by default.
3. Click Apply.
Configuring utility priority
You can change the priority at which the Verify, Reconstruct, Expand, and Initialize utilities run when there are active I/O operations competing for the system's controllers.
To change the utility priority
1. In the Configuration View panel, right-click the system and select Configuration > Advanced Settings >
System Utilities.
2. Set Utility Priority to one of the following:
• High. Use when your highest priority is to get the system back to a fully fault-tolerant state. This causes heavy I/O with the host to be slower than normal. This value is the default.
• Medium. Use when you want to balance data streaming with data redundancy.
• Low. Use when streaming data without interruption, such as for a web server, is more important than data redundancy. This enables a utility such as Reconstruct to run at a slower rate with minimal effect on host I/O.
• Background. Utilities run only when the processor has idle cycles.
3. Click Apply.
Configuring remote systems
Adding a remote system
You can add a management object to obtain information from a remote storage system. This allows a local system to track remote systems by their network-port IP addresses and cache their login credentials. The IP address can then be used in commands that need to interact with the remote system.
To add a remote system
1. In the Configuration View panel, either:
• Right-click the local system and select Configuration > Remote System > Add Remote System.
• Right-click a remote system and select Configuration > Add Remote System.
2. In the main panel set the options:
• IP address. IP address of a network port on the remote system.
• User Name. User name for a user that has Manage-level access on the remote system.
• Password. Optional. Password, if any, for the specified user.
3. Click Create Remote System. If the task succeeds, the new remote system appears in the Configuration
View panel.
Removing a remote system
You can remove the management objects for remote systems.
CAUTION: Before removing a remote system, ensure that it is not being used for remote replication.
To remove remote systems
1. In the Configuration View panel, either:
• Right-click the local system and select Configuration > Remote System > Delete Remote System.
• Right-click a remote system and select Configuration > Delete Remote System.
2. In the main panel, select the remote systems to remove. To select or clear all listed remote systems, toggle the checkbox in the heading row.
3. Click Delete Remote System(s). A confirmation dialog appears.
4. Click Delete to continue; otherwise, click Cancel. If you clicked Delete, a processing dialog appears. If
the task succeeds, the remote systems are removed from the table and from the Configuration View panel. When processing is complete a success dialog appears.
5. Click OK.
Configuring a vdisk
Managing dedicated spares
You can assign a maximum of four available disks to a redundant vdisk (RAID 1, 3, 5, 6, 10, 50) for use as spares by that vdisk only. A spare must be the same type (SAS or SATA, small or large form-factor) as other disks in the vdisk, and have sufficient capacity to replace the smallest disk in the vdisk.
If a disk in the vdisk fails, a dedicated spare is automatically used to reconstruct the vdisk. A redundant vdisk other than RAID-6 becomes Critical when one disk fails. A RAID-6 vdisk becomes Degraded when one disk fails and Critical when two disks fail. After the vdisk's parity or mirror data is completely written to the spare, the vdisk returns to fault-tolerant status. For RAID-50 vdisks, if more than one sub-vdisk becomes critical, reconstruction and use of assigned spares occur in the order sub-vdisks are numbered.
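The health transitions described above can be summarized in a sketch (illustrative only; it does not model failures beyond redundancy, which take a vdisk offline):

```python
def vdisk_status(raid_level, failed_disks):
    """Health of a redundant vdisk after disk failures, per the rules above.

    NRAID and RAID 0 are not redundant and are not modeled here.
    """
    if failed_disks == 0:
        return "Fault tolerant"
    if raid_level == "RAID-6":
        # RAID-6 survives two failures: Degraded after one, Critical after two.
        return "Degraded" if failed_disks == 1 else "Critical"
    # Other redundant levels become Critical on the first failure.
    return "Critical"

print(vdisk_status("RAID-5", 1))  # Critical
print(vdisk_status("RAID-6", 1))  # Degraded
print(vdisk_status("RAID-6", 2))  # Critical
```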
To change a vdisk's spares
1. In the Configuration View panel, right-click a vdisk and select Configuration > Manage Dedicated
Spares. The main panel shows information about the selected vdisk, its spares, and all disks in the
system. Existing spares are labeled SPARE.
• In the Disk Sets table, the number of white slots in the SPARE entry's Disks field shows how many
spares you can add to the vdisk.
• In the enclosure view or list, only existing spares and suitable available disks are selectable.
2. Select spares to remove, disks to add as spares, or both.
3. Click Modify Spares. If the task succeeds, the panel is updated to show which disks are now spares for
the vdisk.
Changing a vdisk's name
To change a vdisk's name
1. In the Configuration View panel, right-click a vdisk and select Configuration > Modify Vdisk Name. The
main panel shows the vdisk's name.
2. Enter a new name. A vdisk name is case sensitive and cannot already exist in the system. A name
cannot include a comma, double quote, or backslash.
3. Click Modify Name. The new name appears in the Configuration View panel.
Changing a vdisk's owner
Each vdisk is owned by one of the controllers, known as the preferred owner. Typically, you should not need to change vdisk ownership.
When a controller fails, the partner controller assumes temporary ownership of the failed controller's vdisks and resources, becoming the current owner. If the system uses a fault-tolerant cabling configuration, both controllers' LUNs are accessible through the partner.
CAUTION:
• Before changing the owning controller for a vdisk, you must stop host I/O to the vdisk’s volumes.
• Because a volume and its snap pool must be in vdisks owned by the same controller, if an ownership change will cause volumes and their snap pools to be owned by different controllers, the volumes will not be able to access their snap pools.
• Changing the owner of a vdisk does not affect the mappings of volumes in that vdisk.
To change a vdisk's owner
1. In the Configuration View panel, right-click a vdisk and select Configuration > Modify Vdisk Owner.
The main panel shows the vdisk's owner.
2. Select a new owner.
3. Click Modify Owner. A confirmation dialog appears.
4. Click Yes to continue; otherwise, click No. If you clicked Yes, a processing dialog appears. When
processing is complete a success dialog appears.
5. Click OK.
Configuring drive spin down for a vdisk
The drive spin down (DSD) feature monitors disk activity within system enclosures and spins down inactive disks. For a specific vdisk, you can enable or disable DSD and set the period of inactivity after which the vdisk's disks and dedicated spares automatically spin down.
To configure a time period to suspend and resume DSD for all disks, see Scheduling drive spin down for
all disks on page 50. To configure DSD for available disks and global spares, see Configuring drive spin down for available disks and global spares on page 49.
DSD affects disk operations as follows:
Spun-down disks are not polled for SMART events.
Operations requiring access to disks may be delayed while the disks are spinning back up.
If a suspend period is configured and it starts while a vdisk has started spinning down, the vdisk spins
up again.
To configure DSD for a vdisk
1. In the Configuration View panel, right-click a vdisk and select Configuration > Configure Vdisk Drive
Spin Down.
2. Set the options:
• Either select (enable) or clear (disable) the Enable Drive Spin Down option.
• Set the Drive Spin Down Delay (minutes), which is the period of inactivity after which the vdisk's
disks and dedicated spares automatically spin down, from 1–360 minutes. If DSD is enabled and no delay value is set, the default is 15 minutes. A value of 0 disables DSD.
3. Click Apply. When processing is complete a success dialog appears.
4. Click OK.
Configuring a volume
Changing a volume's name or OpenVMS UID
To change a volume's name
1. In the Configuration View panel, right-click a volume and select Configuration > Modify Volume Name.
2. Enter a new name. A volume name is case sensitive and cannot already exist in a vdisk. A name
cannot include a comma, double quote, or backslash.
3. Click Modify Name. The new name appears in the Configuration View panel.
To change a volume's OpenVMS UID
1. In the Configuration View panel, right-click a volume and select Configuration > Modify Volume Name.
2. Enter a number in the range 1–32767 to identify the volume to the OpenVMS host.
3. Click Modify UID.
Changing a volume's cache settings
CAUTION:
Only disable write-back caching if you fully understand how the host operating system, application,
and adapter move data. If used incorrectly, you might hinder system performance.
Only change read-ahead cache settings if you fully understand how the host operating system,
application, and adapter move data so that you can adjust the settings accordingly.
To change a volume's cache settings
1. In the Configuration View panel, right-click a volume and select Configuration > Modify Volume Cache
Settings.
2. In the main panel, set the read-ahead cache options:
• Write Policy. Select write-back or write-through. The default is write-back.
• Write Optimization. Select Standard or Super Sequential. The default is Standard.
• Read Ahead Size. Select Default, a specific size (64, 128, 256, or 512 KB; 1, 2, 4, 8, 16, or 32
MB), Maximum, or Disabled.
3. Click Modify Cache Settings.
Configuring a snapshot
Changing a snapshot’s name
To change a snapshot's name
1. In the Configuration View panel, right-click a snapshot and select Configuration > Modify Snapshot
Name.
2. Enter a new name. A snapshot name is case sensitive and cannot already exist in a vdisk. A name
cannot include a comma, double quote, or backslash.
3. Click Modify Name. The new name appears in the Configuration View panel.
Configuring a snap pool
Changing a snap pool’s name
To change a snap pool's name
1. In the Configuration View panel, right-click a snap pool and select Configuration > Modify Snap Pool
Name.
2. Enter a new name. A snap pool name is case sensitive and cannot already exist in a vdisk. A name
cannot include a comma, double quote, or backslash.
3. Click Modify Name. The new name appears in the Configuration View panel.
3 Provisioning the system
Using the Provisioning Wizard
The Provisioning Wizard helps you create a vdisk with volumes and to map the volumes to hosts. Before using this wizard, read documentation and Resource Library guidelines for your product to learn about vdisks, volumes, and mapping. Then plan the vdisks and volumes you want to create and the default mapping settings you want to use.
The wizard guides you through the following steps. For each step you can view help by clicking the help icon in the wizard panel. As you complete steps they are highlighted at the bottom of the panel. If you cancel the wizard at any point, no changes are made.
Specify a name and RAID level for the vdisk
Select disks to use in the vdisk
Specify the number and size of volumes to create in the vdisk
Specify the default mapping for access to the volume by hosts
Confirm changes and apply them
Step 1: Starting the wizard
1. In the Configuration View panel, right-click the system and select either Provisioning > Provisioning
Wizard or Wizards > Provisioning Wizard. The wizard panel appears.
2. Click Next to continue.
Step 2: Specifying the vdisk name and RAID level
A vdisk is a “virtual” disk that is composed of one or more disks, and has the combined capacity of those disks. The number of disks that a vdisk can contain is determined by its RAID level. All disks in a vdisk must be the same type (SAS or SATA, small or large form-factor). A maximum of 16 vdisks per controller can exist.
A vdisk can contain different models of disks, and disks with different capacities. For example, a vdisk can include a 500-GB disk and a 750-GB disk. If you mix disks with different capacities, the smallest disk determines the logical capacity of all other disks in the vdisk, regardless of RAID level. For example, if a RAID-0 vdisk contains one 500-GB disk and four 750-GB disks, the capacity of the vdisk is equivalent to approximately five 500-GB disks. To maximize capacity, use disks of similar size. For greatest reliability, use disks of the same size and rotational speed.
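The smallest-disk rule can be checked against the example in the text; this sketch is illustrative only and ignores RAID overhead:

```python
def usable_member_capacity(disk_sizes_gb):
    """Per the rule above, the smallest disk sets the logical capacity
    contributed by every member, regardless of RAID level."""
    return min(disk_sizes_gb) * len(disk_sizes_gb)

# RAID-0 example from the text: one 500-GB disk and four 750-GB disks
# yields the equivalent of five 500-GB disks.
print(usable_member_capacity([500, 750, 750, 750, 750]))  # 2500
```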
In a single-controller system, all vdisks are owned by that controller. In a dual-controller system, when a vdisk is created the system automatically assigns the owner to balance the number of vdisks each controller owns; or, you can select the owner. Typically it doesn’t matter which controller owns a vdisk.
In a dual-controller system, when a controller fails, the partner controller assumes temporary ownership of the failed controller's vdisks and resources. If the system uses a fault-tolerant cabling configuration, both controllers' LUNs are accessible through the partner.
When you create a vdisk you can also create volumes within it. A volume is a logical subdivision of a vdisk, and can be mapped to controller host ports for access by hosts. The storage system presents only volumes, not vdisks, to hosts.
To create a vdisk
1. Set the options:
• Vdisk name. Optionally change the default name for the vdisk. A vdisk name is case sensitive and
cannot already exist in the system. A name cannot include a comma, double quote, or backslash.
• Assign to. Optionally select a controller to be the preferred owner for the vdisk. The default, Auto,
automatically assigns the owner to load-balance vdisks between controllers.
• RAID Level. Select a RAID level for the vdisk.
HP StorageWorks P2000 G3 MSA System SMU Reference Guide 59
Page 60
• Number of sub-vdisks. For a RAID-10 or RAID-50 vdisk, optionally change the number of sub-vdisks that the vdisk should contain.
• Chunk size. For RAID 3, 5, 6, 10, or 50, optionally set the amount of contiguous data that is written to a vdisk member before moving to the next member of the vdisk. For RAID 50, this option sets the chunk size of each RAID-5 sub-vdisk. The chunk size of the RAID-50 vdisk is calculated as: configured-chunk-size x (subvdisk-members - 1). The default is 64KB. For NRAID and RAID 1, chunk size has no meaning and is therefore disabled.
2. Click Next to continue.
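The RAID-50 chunk-size formula above can be expressed as a small sketch (the function name is ours):

```python
def raid50_stripe_chunk_kb(configured_chunk_kb, subvdisk_members):
    """Effective chunk size of a RAID-50 vdisk, per the formula above:
    configured-chunk-size x (sub-vdisk members - 1).
    One member of each RAID-5 sub-vdisk holds parity, hence the "- 1".
    """
    return configured_chunk_kb * (subvdisk_members - 1)

# With the default 64-KB chunk and 5-disk RAID-5 sub-vdisks:
print(raid50_stripe_chunk_kb(64, 5))  # 256
```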
Step 3: Selecting disks
Select disks to include in the vdisk. The Disk Selection Sets table has one row for each sub-vdisk in a RAID-10 or RAID-50 vdisk, or a single row for a vdisk having another RAID level. The table also has a SPARE row where you can assign dedicated spares to the vdisk. In each row, the Disks field shows how many disks you have assigned and how many you can assign. As you select disks, the table shows the amount of storage space in the vdisk. For descriptions of storage-space color codes, see About storage-space color codes on page 33.
The Enclosures Front View table shows all disks in all enclosures. The Graphical tab shows disk information graphically; the Tabular tab shows disk information in a table. Disks you select are highlighted and color-coded to match the rows in the Disk Selection Sets table. Based on the type of disk you select first (SAS or SATA), only available disks of that type become selectable; you cannot mix SAS and SATA disks in a vdisk.
To select disks and spares
1. Select disks to populate each vdisk row. When you have selected enough disks, a checkmark appears
in the table's Complete field.
2. Optionally select up to four dedicated spares for the vdisk.
3. Click Next to continue.
Step 4: Defining volumes
A volume is a logical subdivision of a vdisk and can be mapped to controller host ports for access by hosts. A mapped volume provides the storage for a file system partition you create with your operating system or third-party tools. The storage system presents only volumes, not vdisks, to hosts.
You can create multiple volumes with the same base name, size, and default mapping settings. If you choose to define volumes in this step, you will define their mapping settings in the next step.
To define volumes
1. Set the options:
• Specify the number of volumes to create. If you do not want to create volumes, enter 0. After changing the value, press Tab.
• Optionally change the volume size. The default size is the total space divided by the number of volumes.
• Optionally change the base name for the volumes. A volume name is case sensitive and cannot already exist in a vdisk. A name cannot include a comma, double quote, or backslash.
2. Click Next to continue.
60 Provisioning the system
Step 5: Setting the default mapping
Specify default mapping settings to control whether and how hosts will be able to access the vdisk’s volumes. These settings include:
• A logical unit number (LUN), used to identify a mapped volume to hosts. Both controllers share one set of LUNs. Each LUN can be assigned as the default LUN for only one volume in the storage system; for example, if LUN 5 is the default for Volume1, LUN 5 cannot be the default LUN for any other volume.
• The level of access (read-write, read-only, or no access) that hosts will have to each volume. When a mapping specifies no access, the volume is masked.
• Controller host ports through which hosts will be able to access each volume. To maximize performance, it is recommended to map a volume to at least one host port on the controller that owns the volume's vdisk. To sustain I/O in the event of controller failure, it is recommended to map to at least one host port on each controller.
After a volume is created you can change its default mapping, and create, modify, or delete explicit mappings. An explicit mapping overrides the volume’s default mapping for a specific host.
NOTE: When mapping a volume to a host using the Linux ext3 file system, specify read-write access;
otherwise, the file system will be unable to mount the volume and will report an error such as “unknown partition table.”
To specify the default mapping
1. Select Map.
2. Set the base LUN for the volumes. If this LUN is available, it will be assigned to the first volume and the
next available LUNs in sequence will be assigned to any remaining volumes.
3. In the enclosure view or list, select controller host ports through which attached hosts can access each
volume.
4. Select the access level that hosts will have to each volume: read-write, read-only, or no-access
(masked).
5. Click Next to continue.
Step 6: Confirming vdisk settings
Confirm that the values listed in the wizard panel are correct.
If they are not correct, click Previous to return to previous steps and make necessary changes.
If they are correct, click Finish to apply the setting changes and finish the wizard.
Creating a vdisk
To create a vdisk
1. In the Configuration View panel, right-click the system or Vdisks and then select Provisioning > Create
Vdisk.
2. In the main panel set the options:
• Vdisk name. Optionally change the default name for the vdisk. A vdisk name is case sensitive and
cannot already exist in the system. A name cannot include a comma, double quote, or backslash.
• Assign to. Optionally select a controller to be the preferred owner for the vdisk. The default, Auto,
automatically assigns the owner to load-balance vdisks between controllers.
• RAID Level. Select a RAID level for the vdisk.
• Number of Sub-vdisks. For a RAID-10 or RAID-50 vdisk, optionally change the number of sub-vdisks
that the vdisk should contain.
• Chunk size. For RAID 3, 5, 6, 10, or 50, optionally set the amount of contiguous data that is written to a vdisk member before moving to the next member of the vdisk. For RAID 50, this option sets the chunk size of each RAID-5 sub-vdisk. The chunk size of the RAID-50 vdisk is calculated as: configured-chunk-size x (subvdisk-members - 1). The default is 64KB. For NRAID and RAID 1, chunk size has no meaning and is therefore disabled.
• Online Initialization. If this option is enabled, you can use the vdisk while it is initializing, but because the verify method is used to initialize the vdisk, initialization takes more time. If this option is disabled, you must wait for initialization to complete before using the vdisk, but initialization takes less time. Online initialization is fault tolerant.
3. Select disks to include in the vdisk. Only available disks have checkboxes. The number of disks you can
select is determined by the RAID level, and is specified in the Disk Selection Sets table. When you have selected enough disks, a checkmark appears in the table's Complete field.
4. Click Create Vdisk. If the task succeeds, the new vdisk appears in the Configuration View panel.
Deleting vdisks
CAUTION: Deleting a vdisk removes all of its volumes and their data.
To delete vdisks
1. Verify that hosts are not accessing volumes in the vdisks that you want to delete.
2. In the Configuration View panel, either:
• Right-click the system or Vdisks and then select Provisioning > Delete Vdisks.
• Right-click a vdisk and select Provisioning > Delete Vdisk.
3. In the main panel, select the vdisks to delete. To select or clear all vdisks, toggle the checkbox in the
heading row.
4. Click Delete Vdisk(s). A confirmation dialog appears.
5. Click Delete to continue; otherwise, click Cancel. If you clicked Delete, a processing dialog appears. As
vdisks are deleted they are removed from the table and from the Configuration View panel. When processing is complete a success dialog appears.
6. Click OK.
Managing global spares
You can designate a maximum of eight global spares for the system. If a disk in any redundant vdisk (RAID 1, 3, 5, 6, 10, 50) fails, a global spare is automatically used to reconstruct the vdisk. At least one vdisk must exist before you can add a global spare. A spare must have sufficient capacity to replace the smallest disk in an existing vdisk.
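The spare-eligibility rules above can be sketched as follows (an illustration only; the function and parameter names are ours):

```python
def can_add_global_spare(spare_size_gb, vdisk_disk_sizes, current_spares):
    """Check the constraints stated above: at most eight global
    spares system-wide, at least one vdisk must exist, and the spare
    must have sufficient capacity to replace the smallest disk in an
    existing vdisk."""
    if len(current_spares) >= 8:
        return False                  # system-wide limit of eight spares
    if not vdisk_disk_sizes:
        return False                  # at least one vdisk must exist
    smallest = min(min(sizes) for sizes in vdisk_disk_sizes)
    return spare_size_gb >= smallest

# A 500-GB disk can spare for a vdisk whose smallest member is 500 GB.
print(can_add_global_spare(500, [[500, 750], [750, 750]], []))  # True
```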
The vdisk remains in Critical status until the parity or mirror data is completely written to the spare, at which time the vdisk returns to Fault Tolerant status. For RAID-50 vdisks, if more than one sub-vdisk becomes critical, reconstruction and use of spares occur in the order sub-vdisks are numbered.
To change the system's global spares
1. In the Configuration View panel, right-click the system and select Provisioning > Manage Global
Spares. The main panel shows information about available disks in the system. Existing spares are
labeled GLOBAL SP.
• In the Disk Sets table, the number of white slots in the Disks field shows how many spares you can
add.
• In the enclosure view or list, only existing global spares and suitable available disks are selectable.
2. Select spares to remove, disks to add as spares, or both.
3. Click Modify Spares. If the task succeeds, the panel is updated to show which disks are now global
spares.
Creating a volume set
In a vdisk that has sufficient free space, you can create multiple volumes with the same base name and size. Optionally, you can specify a default mapping for the volumes; otherwise, they will be created unmapped.
IMPORTANT: In an FC/iSCSI combo system, do not connect hosts or map volumes to host ports used for
replication. Attempting to do so could interfere with replication operation.
To create a volume set
1. In the Configuration View panel, right-click a vdisk and select Provisioning > Create Volume Set.
2. In the main panel, set the options:
• Volume Set Base-name. Optionally change the base name for the volumes. The volume names will consist of the base name and a number that increments from 000. If a name in the series is already in use, the next name in the series is assigned. For example, for a two-volume set starting with Volume000, if Volume001 already exists, the second volume is named Volume002. A base name is case sensitive and cannot already be used by another vdisk. A name cannot include a comma, double quote, or backslash.
• Total Volumes. Specify the number of volumes to create.
• Size. Optionally change the volume size. The default size is the total space divided by the number of volumes.
• Map. Select this option to specify a default mapping for the volumes:
• Access. Select the access level that hosts will have to the volumes.
• LUN. If the access level is set to read-write or read-only, set a LUN for the first volume. The next
available LUN is assigned to the next volume mapped through the same ports. For example, for a two-volume set starting with LUN 100, if 101 is already assigned to a volume mapped through the same ports, the second volume is assigned 102.
• In the enclosure view or list, select controller host ports through which attached hosts can access
the volumes.
3. Click Apply. If the task succeeds, the new volumes appear in the Configuration View panel.
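The naming, sizing, and LUN rules described above can be sketched in Python (an illustration; the function and parameter names are ours, not part of the SMU):

```python
def plan_volume_set(base, count, total_size_gb, used_names,
                    base_lun, used_luns):
    """Plan names, sizes, and LUNs for a volume set per the rules
    above: names count up from <base>000, skipping names already in
    use; LUNs count up from base_lun, skipping LUNs already assigned
    through the same ports; the default size is the total space
    divided by the number of volumes."""
    size = total_size_gb // count
    names, luns = [], []
    n, lun = 0, base_lun
    while len(names) < count:
        name = f"{base}{n:03d}"
        n += 1
        if name in used_names:
            continue                  # skip a name already in use
        while lun in used_luns:
            lun += 1                  # skip a LUN already assigned
        names.append(name)
        luns.append(lun)
        lun += 1
    return [(nm, size, l) for nm, l in zip(names, luns)]

# The guide's examples: Volume001 taken -> second volume is Volume002;
# LUN 101 taken -> second volume gets LUN 102.
print(plan_volume_set("Volume", 2, 200, {"Volume001"}, 100, {101}))
```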
Creating a volume
You can add a volume to a vdisk that has sufficient free space, and define default mapping settings.
IMPORTANT: In an FC/iSCSI combo system, do not connect hosts or map volumes to host ports used for
replication. Attempting to do so could interfere with replication operation.
To create a volume in a vdisk
1. In the Configuration View panel, right-click a vdisk and select Provisioning > Create Volume.
2. In the main panel, set the options:
• Volume name. Optionally change the default name. A volume name is case sensitive and cannot already exist in a vdisk. A name cannot include a comma, double quote, or backslash.
• Size. Optionally change the default size, which is all free space in the vdisk.
• OpenVMS Volume. Select this option if an OpenVMS host will access the volume.
• OpenVMS Volume UID. If OpenVMS Volume is selected, enter a number in the range 1–32767 to identify the volume to the host.
• Snappable. If the system is licensed to use Snapshots and you want to create snapshots of this volume, select this option. This creates the volume as a master volume instead of a standard volume, and enables the Enable Snapshots and Replication Prepare options.
• Enable Snapshots. Select either:
• Standard Policy. This option creates a snap pool named spvolume-name whose size is either 20% of the volume size or the minimum snap-pool size, whichever is larger.
• Reserve Size. Specify the size of the snap pool to create in the vdisk and associate with the new volume. The default size is either 20% of the volume size or the minimum snap-pool size, whichever is larger.
• Attach Pool. Select an existing snap pool to associate with the new volume.
• Replication Prepare. If the system is licensed to use remote replication and you want to use this volume as a secondary volume, select this option. Selecting this option disables the Map option.
• Map. Select this option to change the default mapping for the volume:
• Access. Select the access level that hosts will have to the volume.
• LUN. If the access level is set to read-write or read-only, set a LUN for the volume.
• In the enclosure view or list, select controller host ports through which attached hosts can access
the volume.
3. Click Apply. If the task succeeds, the new volume appears in the Configuration View panel. If you
specified an option to create a snap pool, the new snap pool also appears in that panel.
Deleting volumes
You can use the Delete Volumes panel to delete standard and master volumes.
CAUTION: Deleting a volume removes its mappings and schedules and deletes its data.
To delete volumes
1. Verify that hosts are not accessing the volumes that you want to delete.
2. In the Configuration View panel, either:
• Right-click the system or Vdisks or a vdisk and then select Provisioning > Delete Volumes.
• Right-click a volume and select Provisioning > Delete Volume.
3. In the main panel, select the volumes to delete. To select or clear all volumes, toggle the checkbox in
the heading row.
4. Click Delete Volume(s).
5. Click Delete to continue; otherwise, click Cancel. If you clicked Delete, a processing dialog appears. As
volumes are deleted they are removed from the table and from the Configuration View panel. When processing is complete a success dialog appears.
6. Click OK.
NOTE: The system might be unable to delete a large number of volumes in a single operation. If you
specified to delete a large number of volumes, verify that all were deleted. If some of the specified volumes remain, repeat the deletion on those volumes.
Changing default mapping for multiple volumes
For all volumes in all vdisks or a selected vdisk, you can change the default access to those volumes by all hosts. When multiple volumes are selected, LUN values are sequentially assigned starting with a LUN value that you specify. For example, if the starting LUN value is 1 for 30 selected volumes, the first volume's mapping is assigned LUN 1 and so forth, and the last volume's mapping is assigned LUN 30. For LUN assignment to succeed, ensure that no value in the sequence is already in use. When specifying access through specific ports, the ports and host must be the same type (for example, FC).
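The sequential LUN assignment described above can be checked with a short sketch (the function and parameter names are ours):

```python
def lun_sequence(start_lun, volume_count, luns_in_use):
    """Compute the LUN range the mapping above will assign, and
    verify that no value in the sequence is already in use."""
    seq = list(range(start_lun, start_lun + volume_count))
    conflicts = sorted(set(seq) & set(luns_in_use))
    if conflicts:
        raise ValueError(f"LUNs already in use: {conflicts}")
    return seq

# 30 volumes starting at LUN 1 occupy LUNs 1 through 30.
print(lun_sequence(1, 30, {40, 41})[-1])  # 30
```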
CAUTION: Volume mapping changes take effect immediately. Make changes that limit access to volumes
when the volumes are not in use. Before changing a mapping's LUN, be sure to unmount a mapped volume from a host system.
IMPORTANT: In an FC/iSCSI combo system, do not connect hosts or map volumes to host ports used for
replication. Attempting to do so could interfere with replication operation.
NOTE: You cannot map the secondary volume of a replication set.
NOTE: When mapping a volume to a host using the Linux ext3 file system, specify read-write access;
otherwise, the file system will be unable to mount the volume and will report an error such as “unknown partition table.”
To change default mapping for multiple volumes
1. In the Configuration View panel, right-click Vdisks or a vdisk and then select Provisioning > Map
Volume Defaults.
2. In the main panel, select the volumes to change. To select or clear all volumes, toggle the checkbox in
the heading row.
3. Select Map.
4. Either:
• Map the volumes to all hosts by setting a starting LUN, selecting ports, and setting access to
read-only or read-write.
• Mask the volumes from all hosts by setting a starting LUN, selecting ports, and setting access to
no-access.
5. Click Apply. A message specifies whether the change succeeded or failed.
6. Click OK.
Explicitly mapping multiple volumes
For all volumes in all vdisks or a selected vdisk, you can change access to those volumes by a specific host. When multiple volumes are selected, LUN values are sequentially assigned starting with a LUN value that you specify. For example, if the starting LUN value is 1 for 30 selected volumes, the first volume's mapping is assigned LUN 1 and so forth, and the last volume's mapping is assigned LUN 30. For LUN assignment to succeed, ensure that no value in the sequence is already in use. When specifying access through specific ports, the ports and host must be the same type (for example, FC).
CAUTION: Volume mapping changes take effect immediately. Make changes that limit access to volumes
when the volumes are not in use. Before changing a mapping's LUN, be sure to unmount a mapped volume from a host system.
IMPORTANT: In an FC/iSCSI combo system, do not connect hosts or map volumes to host ports used for
replication. Attempting to do so could interfere with replication operation.
NOTE: You cannot map the secondary volume of a replication set.
NOTE: When mapping a volume to a host using the Linux ext3 file system, specify read-write access;
otherwise, the file system will be unable to mount the volume and will report an error such as “unknown partition table.”
To explicitly map multiple volumes
1. In the Configuration View panel, right-click Vdisks or a vdisk and then select Provisioning > Map
Volumes.
2. In the main panel, select the volumes to change. To select or clear all volumes, toggle the checkbox in
the heading row.
3. In the Maps for Selected Volumes table, select the host to change access for.
4. Select Map.
5. Either:
• Map the volumes to the host by setting a starting LUN, selecting ports, and setting access to
read-only or read-write.
• Mask the volumes from the host by setting a starting LUN, selecting ports, and setting access to
no-access.
6. Click Apply. A message specifies whether the change succeeded or failed.
7. Click OK.
Changing a volume's default mapping
CAUTION: Volume mapping changes take effect immediately. Make changes that limit access to volumes
when the volumes are not in use. Be sure to unmount a mapped volume from a host system before changing the mapping's LUN.
NOTE: You cannot map the secondary volume of a replication set.
NOTE: When mapping a volume to a host using the Linux ext3 file system, specify read-write access;
otherwise, the file system will be unable to mount the volume and will report an error such as “unknown partition table.”
To view the default mapping
In the Configuration View panel, right-click a volume and select Provisioning > Default Mapping. The main panel shows the volume's default mapping:
Ports. Controller host ports through which the volume is mapped to the host.
LUN. Volume identifier presented to the host.
Access. Volume access type: read-write, read-only, no-access (masked), or not-mapped.
To modify the default mapping
1. Select Map.
2. Set the LUN and select the ports and access type.
3. Click Apply. A message specifies whether the change succeeded or failed.
4. Click OK. Each mapping that uses the default settings is updated.
To delete the default mapping
1. Clear Map.
2. Click Apply. A message specifies whether the change succeeded or failed.
3. Click OK. Each mapping that uses the default settings is updated.
Changing a volume's explicit mappings
CAUTION: Volume mapping changes take effect immediately. Make changes that limit access to volumes
when the volumes are not in use. Be sure to unmount a mapped volume from a host system before changing the mapping's LUN.
NOTE: You cannot map the secondary volume of a replication set.
NOTE: When mapping a volume to a host using the Linux ext3 file system, specify read-write access;
otherwise, the file system will be unable to mount the volume and will report an error such as “unknown partition table.”
To view volume mappings
In the Configuration View panel, right-click a volume and select Provisioning > Explicit Mappings. The main panel shows the following information about the volume's mappings:
Type. Explicit or Default. Settings for an explicit mapping override the default mapping.
Host ID. WWPN or IQN.
Name. Host name.
Ports. Controller host ports through which the host is mapped to the volume.
LUN. Volume identifier presented to the host.
Access. Volume access type: read-write, read-only, no-access (masked), or not-mapped.
To create an explicit mapping
1. In the Maps for Volume table, select a host.
2. Select Map.
3. Set the LUN and select the ports and access type.
4. Click Apply. A message specifies whether the change succeeded or failed.
5. Click OK. The mapping becomes Explicit with the new settings.
To modify an explicit mapping
1. In the Maps for Volume table, select the Explicit mapping to change.
2. Set the LUN and select the ports and access type.
3. Click Apply. A message specifies whether the change succeeded or failed.
4. Click OK. The mapping settings are updated.
To delete an explicit mapping
1. In the Maps for Volume table, select the Explicit mapping to delete.
2. Clear Map.
3. Click Apply. A message specifies whether the change succeeded or failed.
4. Click OK. The mapping returns to the Default mapping.
Unmapping volumes
You can delete all of the default and explicit mappings for multiple volumes.
CAUTION: Volume mapping changes take effect immediately. Make changes that limit access to volumes
when the volumes are not in use. Before changing a mapping's LUN, be sure to unmount a mapped volume from a host system.
To unmap volumes
1. In the Configuration View panel, right-click Vdisks or a vdisk and then select Provisioning > Unmap
Volumes.
2. In the main panel, select the volumes to unmap. To select or clear all volumes, toggle the checkbox in
the heading row.
3. Click Unmap Volume(s). A message specifies whether the change succeeded or failed.
4. Click OK. Default and explicit mappings are deleted and the volumes' access type changes to
not-mapped.
Expanding a volume
You can expand a standard volume if its vdisk has free space and sufficient resources. Because volume expansion does not require I/O to be stopped, the volume can continue to be used during expansion.
NOTE: This command is not supported for master volumes.
To expand a volume
1. In the Configuration View panel, right-click a standard volume and select Tools > Expand Volume.
2. In the main panel, specify the amount of free space to add to the volume.
3. Click Expand Volume. If the specified value exceeds the amount of free space in the vdisk, a dialog lets
you expand the volume to the limit of free space in the vdisk. If the task succeeds, the volume's size is updated in the Configuration View panel.
Creating multiple snapshots
If the system is licensed to use Snapshots, you can select multiple volumes and immediately create a snapshot of each volume.
NOTE: The first time a snapshot is created of a standard volume, the volume is converted to a master
volume and a snap pool is created in the volume’s vdisk. The snap pool’s size is either 20% of the volume size or the minimum snap-pool size, whichever is larger. Before creating or scheduling snapshots, verify that the vdisk has enough free space to contain the snap pool.
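The default snap-pool sizing rule in the note above can be sketched as follows. The guide does not state the minimum snap-pool size here, so it is left as a parameter rather than guessed:

```python
def default_snap_pool_size_gb(volume_size_gb, minimum_gb):
    """Default snap-pool size per the note above: 20% of the volume
    size or the minimum snap-pool size, whichever is larger."""
    return max(0.20 * volume_size_gb, minimum_gb)

# For a 100-GB volume with an assumed 6-GB minimum, the 20% rule wins:
print(default_snap_pool_size_gb(100, 6))  # 20.0
```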
To create multiple snapshots
1. In the Configuration View panel, right-click the system or Vdisks or a vdisk and then select Provisioning
> Create Multiple Snapshots.
2. In the main panel, select each volume to take a snapshot of. To select or clear all volumes, toggle the
checkbox in the heading row.
3. Click Create Snapshots. If the task succeeds, the snapshots appear in the Configuration View panel.
Creating a snapshot
If the system is licensed to use Snapshots, you can create a snapshot now or schedule the snapshot task.
NOTE: The first time a snapshot is created of a standard volume, the volume is converted to a master
volume and a snap pool is created in the volume’s vdisk. The snap pool’s size is either 20% of the volume size or the minimum snap-pool size, whichever is larger. Before creating or scheduling snapshots, verify that the vdisk has enough free space to contain the snap pool.
To create a snapshot now
1. In the Configuration View panel, right-click a volume and select Provisioning > Create Snapshot.
2. In the main panel, select Now.
3. Optionally change the default name for the snapshot. A snapshot name is case sensitive and cannot
already exist in a vdisk. A name cannot include a comma, double quote, or backslash.
4. Click Create Snapshot. If the task succeeds, the snapshot appears in the Configuration View panel.
To schedule a create snapshot task
1. In the Configuration View panel, right-click a volume and select Provisioning > Create Snapshot.
2. In the main panel, select Scheduled.
3. Set the options:
• Snapshot prefix. Optionally change the default prefix to identify snapshots created by this task. The prefix is case sensitive and cannot include a comma, double quote, or backslash. Automatically created snapshots are named prefix_s001 through prefix_s1023.
• Snapshots to Retain. Select the number of snapshots to retain. When the task runs, the retention count is compared with the number of existing snapshots:
• If the retention count has not been reached, the snapshot is created.
• If the retention count has been reached, the volume's oldest snapshot is unmapped, reset, and
renamed to the next name in the sequence.
• Start Schedule. Specify a date and a time in the future for the schedule to start running.
• Date must use the format yyyy-mm-dd.
• Time must use the format hh:mm followed by either AM, PM, or 24H (24-hour clock). For
example, 13:00 24H is the same as 1:00 PM.
• Recurrence. Specify the interval at which the task should run. For better performance, if this task will run under heavy I/O conditions or on more than three volumes, set the retention count and the schedule interval to similar values; for example, if the retention count is 10, set the interval to 10 minutes.
• Time Constraint. Specify a time range within which the task should run.
• Date Constraint. Specify days when the task should run.
• End Schedule. Specify when the task should stop running.
4. Click Schedule Snapshots. If processing succeeds, the schedule is saved and can be viewed in the
overview panel for the volume or system.
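The retention behavior above can be sketched as follows (an illustration; the function name and return values are ours):

```python
def next_snapshot_action(prefix, existing, retain):
    """Decide what a scheduled snapshot task does next, per the rules
    above: snapshots are named prefix_s001 through prefix_s1023; once
    the retention count is reached, the oldest snapshot is reused
    (unmapped, reset, and renamed to the next name in the sequence)
    instead of creating a new one."""
    numbers = sorted(int(n[len(prefix) + 2:]) for n in existing)
    next_num = (numbers[-1] % 1023) + 1 if numbers else 1
    next_name = f"{prefix}_s{next_num:03d}"
    if len(existing) < retain:
        return ("create", next_name)
    oldest = f"{prefix}_s{numbers[0]:03d}"
    return ("reset-and-rename", oldest, next_name)

print(next_snapshot_action("db", ["db_s001", "db_s002"], 2))
```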
Deleting snapshots
You can use the Delete Snapshots panel to delete standard and replication snapshots.
When you delete a snapshot, all data uniquely associated with that snapshot is deleted and associated space in the snap pool is freed for use. Snapshots can be deleted in any order, irrespective of the order in which they were created.
CAUTION: Deleting a snapshot removes its mappings and schedules and deletes its data.
CAUTION: If a replication snapshot’s type is shown as a “sync point” for its replication set, consider carefully whether you want to delete that snapshot. If you delete the current sync point and a replication-set failure occurs, a prior sync point is used. If you delete the only sync point, the next replication requires a full sync (all data re-replicated from the primary volume to a secondary volume).
To delete snapshots
1. Verify that hosts are not accessing the snapshots that you want to delete.
2. In the Configuration View panel, right-click either the system or a vdisk or a master volume or a primary
volume or a secondary volume or a snapshot or a replication image and then select Provisioning > Delete Snapshot.
3. In the main panel, select the snapshots to delete.
4. Click Delete Snapshot(s).
5. Click OK to continue; otherwise, click Cancel. If you clicked OK, a processing dialog appears. When
the snapshots are deleted they are removed from the table and from the Configuration View panel. When processing is complete a success dialog appears.
6. Click OK.
Resetting a snapshot
If the system is licensed to use Snapshots, as an alternative to taking a new snapshot of a volume, you can replace the data in a snapshot with the current data in the source volume. The snapshot's name and mapping settings are not changed. The snapshot data is stored in the source volume's snap pool.
CAUTION: To avoid data corruption, before resetting a snapshot it must be unmounted from hosts.
You can reset a snapshot now or schedule the reset task.
To reset a snapshot now
1. Unmount the snapshot from hosts.
2. In the Configuration View panel, right-click a snapshot and select Provisioning > Reset Snapshot.
3. In the main panel, select Now.
4. Click Reset Snapshot. A confirmation dialog appears.
5. Click Yes to continue; otherwise, click No. If you clicked Yes, a processing dialog appears. When
processing is complete a success dialog appears.
6. Click OK.
7. Optionally, remount the snapshot.
To schedule a reset snapshot task
1. In the Configuration View panel, right-click a snapshot and select Provisioning > Reset Snapshot.
2. In the main panel, select Scheduled.
3. Set the options:
• Start Schedule. Specify a date and a time in the future for the schedule to start running.
• Date must use the format yyyy-mm-dd.
• Time must use the format hh:mm followed by either AM, PM, or 24H (24-hour clock). For example, 13:00 24H is the same as 1:00 PM.
• Recurrence. Specify how often the task should run. It is not recommended to set the interval to less than two minutes.
• Time Constraint. Specify a time range within which the task should run.
• Date Constraint. Specify days when the task should run.
• End Schedule. Specify when the task should stop running.
4. Click Reset Snapshot. If the task succeeds, the schedule is saved and can be viewed in the overview panel for the snapshot or system.
5. Set a reminder to unmount the snapshot before the scheduled task runs.
Creating a volume copy
You can copy a volume or a snapshot to a new standard volume. The destination volume must be in a vdisk owned by the same controller as the source volume. If the source volume is a snapshot, you can choose whether to include its modified data (data written to the snapshot since it was created). The destination volume is completely independent of the source volume.
The first time a volume copy is created of a standard volume, the volume is converted to a master volume and a snap pool is created in the volume’s vdisk. The snap pool's size is either 20% of the volume size or the minimum snap-pool size, whichever is larger. Before creating or scheduling copies, verify that the vdisk has enough free space to contain the snap pool.
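The snap-pool sizing rule above (the larger of 20% of the volume size or the system's minimum snap-pool size) can be sketched as follows. The default minimum used here is an assumed placeholder; check your system's actual minimum snap-pool size.

```python
def snap_pool_size(volume_size_gb, min_snap_pool_gb=5.37):
    """Snap pool size rule: the larger of 20% of the volume size or the
    minimum snap-pool size. min_snap_pool_gb is an assumed placeholder."""
    return max(0.20 * volume_size_gb, min_snap_pool_gb)
```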
For a master volume, the volume copy creates a transient snapshot, copies the data from the snapshot, and deletes the snapshot when the copy is complete. For a snapshot, the volume copy is performed directly from the source; this source data may change if modified data is to be included in the copy and the snapshot is mounted and I/O is occurring to it.
To ensure the integrity of a copy of a master volume, either unmount the volume or, at minimum, perform a system cache flush on the host and refrain from writing to the volume. Because a system cache flush is not natively supported on all operating systems, temporarily unmounting is recommended. The volume copy captures all data on disk at the time of the request, so any data held in the OS cache will not be copied; unmounting forces the OS to flush its cache. After the volume copy has started, it is safe to remount the volume and resume I/O.
To ensure the integrity of a copy of a snapshot with modified data, either unmount the snapshot or perform a system cache flush. The snapshot will not be available for read or write access until the volume copy is complete. If modified write data is not to be included in the copy, then you may safely leave the snapshot mounted. During a volume copy using snapshot modified data, the system takes the snapshot offline, as shown by the Snapshot Overview panel.
The volume copy's progress is shown in the Volume Overview panel.
You can create a volume copy now or schedule the copy task.
To create a volume copy now
1. In the Configuration View panel, right-click a volume and select Provisioning > Create Volume Copy.
2. In the main panel, select Now.
3. Set the options:
• New Volume Name. Optionally change the default name for the destination volume. A volume name is case sensitive and cannot already exist in a vdisk. A name cannot include a comma, double quote, or backslash.
• Residing On Vdisk. Optionally change the destination vdisk.
• With Modified Data. If the source volume is a snapshot, select this option to include the snapshot’s modified data in the copy. Otherwise, the copy will contain only the data that existed when the snapshot was created.
4. Click Copy the Volume. A confirmation dialog appears.
5. Click Yes to continue; otherwise, click No. If you clicked Yes and With Modified Data is selected and
the snapshot has modified data, a second confirmation dialog appears.
HP StorageWorks P2000 G3 MSA System SMU Reference Guide 71
6. Click Yes to continue; otherwise, click No. If you clicked Yes, the volume copy operation starts. While
the operation is in progress, the destination volume is offline and its type is shown as “standard*”. If you unmounted a snapshot to copy its modified data, wait until processing is complete before you remount it. If the task succeeds, the destination volume's type becomes standard and the volume appears in the Configuration View panel.
7. Optionally map the volume to hosts.
To schedule a volume copy task
1. In the Configuration View panel, right-click a volume and select Provisioning > Create Volume Copy.
2. In the main panel, select Scheduled.
3. Set the options:
• New Volume Prefix. Optionally change the default prefix to identify volumes created by this task. The prefix is case sensitive and cannot include a comma, double quote, or backslash. Automatically created volumes are named prefix_c001 through prefix_c1023.
• Residing On Vdisk. Optionally change the destination vdisk.
• With Modified Data. If the source volume is a snapshot, select this option to include the snapshot’s modified data in the copy. Otherwise, the copy will contain only the data that existed when the snapshot was created.
• Start Schedule. Specify a date and a time in the future for the schedule to start running.
• Date must use the format yyyy-mm-dd.
• Time must use the format hh:mm followed by either AM, PM, or 24H (24-hour clock). For
example, 13:00 24H is the same as 1:00 PM.
• Recurrence. Specify how often the task should run. It is not recommended to set the interval to less than two minutes.
• Time Constraint. Specify a time range within which the task should run.
• Date Constraint. Specify days when the task should run.
• End Schedule. Specify when the task should stop running.
4. Click Schedule Volume Copy. If the task succeeded, the schedule is saved and can be viewed in the
overview panel for the volume or system.
5. If you will copy snapshot modified data, set a reminder to unmount the snapshot before the scheduled task runs.
Aborting a volume copy
You can cancel an in-progress volume copy operation. When the cancellation is complete, the destination volume is deleted.
To abort a volume copy
1. In the Configuration View panel, right-click the source volume and then select Provisioning > Abort
Volume Copy. The Volume Overview panel shows the operation's progress.
2. Click Abort Volume Copy. A message confirms that the operation has been aborted.
3. Click OK. The destination volume is removed from the Configuration View panel.
Rolling back a volume
You can roll back (revert) the data in a volume to the data that existed when a specified snapshot was created. You also have the option of including its modified data (data written to the snapshot since it was created). For example, you might want to take a snapshot, mount it for read/write, and then install new software on the snapshot for testing. If the software installation is successful, you can roll back the volume to the contents of the modified snapshot.
CAUTION:
Before rolling back a volume you must unmount it from data hosts to avoid data corruption. If you want
to include snapshot modified data in the roll back, you must also unmount the snapshot.
Whenever you perform a roll back, the data that existed on the volume is replaced by the data on the
snapshot; that is, all data on the volume written since the snapshot was taken is lost. As a precaution, take a snapshot of the volume before starting a roll back.
Only one roll back can run on a volume at a time. Additional roll backs are queued until the current roll back is complete. However, after a roll back is requested, the volume is available for use as if the roll back had already completed.
During a roll back operation using snapshot modified data, the snapshot must be unmounted and cannot be accessed. Unmounting ensures that all data cached by the host is written to the snapshot; if the unmount is not performed at the host level prior to starting the roll back, data may remain in host cache, and thus not be rolled back to the master volume. As a precaution against inadvertently accessing the snapshot, the system also takes the snapshot offline, as shown by the Snapshot Overview panel. The snapshot becomes inaccessible in order to prevent any data corruption to the master volume. The snapshot can be remounted once the roll back is complete. The roll back’s progress is shown in the Roll Back Volume panel.
To roll back a volume
1. Unmount the volume from hosts.
2. If the roll back will include snapshot modified data, unmount the snapshot from hosts.
3. In the Configuration View panel, right-click a volume and select Provisioning > Roll Back Volume.
4. In the main panel, set the options:
• For Volume.
• From Snapshot Volume. Enter the name of the snapshot to roll back to.
• With Modified Data. Select this option to include the snapshot’s modified data in the roll back. Otherwise, the master volume will contain only the data that existed when the snapshot was created.
5. Click Roll Back Volume. The roll back starts. You can now remount the volume. The panel shows the roll
back's progress.
6. When the roll back is complete, if you unmounted the snapshot you can remount it.
Creating a snap pool
Before you can convert a standard volume to a master volume or create a master volume for snapshots, a snap pool must exist. A snap pool and its associated master volumes can be in different vdisks, but must be owned by the same controller.
To create a snap pool
1. In the Configuration View panel, right-click a vdisk and select Provisioning > Create Snap Pool.
2. In the main panel set the options:
• Snap Pool Name. Optionally change the default name for the snap pool. A snap pool name is case
sensitive and cannot already exist in the system. A name cannot include a comma, double quote, or backslash.
• Size. Optionally change the default size, which is all free space in the vdisk.
3. Click Create Snap Pool. If the task succeeds, the new snap pool appears in the Configuration View panel.
Deleting snap pools
Before you can delete a snap pool you must delete any associated snapshots, and either delete the associated master volume or convert the master volume to a standard volume.
To delete snap pools
1. Verify that no master volume or snapshots are associated with the snap pool.
2. In the Configuration View panel, either:
• Right-click the local system or Vdisks or a vdisk and select Provisioning > Delete Snap Pools.
• Right-click a snap pool and select Provisioning > Delete Snap Pool.
3. In the main panel, select the snap pool to delete.
4. Click Delete Snap Pool(s).
5. Click Delete to continue; otherwise, click Cancel. If you clicked Delete, a processing dialog appears.
When the snap pool is deleted it is removed from the table and from the Configuration View panel. When processing is complete a success dialog appears.
6. Click OK.
Adding a host
To add a host
1. Determine the host's WWPN or IQN.
2. In the Configuration View panel, right-click the system or Hosts and then select Provisioning > Add Host.
3. In the main panel set the options:
• Host ID (WWN/IQN). Enter the host's WWPN or IQN. A WWPN value can include a colon
between each pair of digits but the colons will be discarded.
• Host Name. Optionally change the default name to one that helps you easily identify the host; for
example, MailServer_P1. A host name is case sensitive and cannot already exist in the system. A name cannot include a comma, double quote, or backslash.
• Profile. Select the appropriate option that specifies whether the host allows use of LUN 0 for
mappings:
• Standard: LUN 0 can be assigned to a mapping. This is the default.
• OpenVMS: LUN 0 cannot be assigned to a mapping.
• HP-UX: LUN 0 can be assigned to a mapping and the host uses Flat Space Addressing.
4. Click Add Host. If the task succeeds, the new host appears in the Configuration View panel.
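The host ID and name rules above (colons in a WWPN are accepted but discarded; a host name cannot include a comma, double quote, or backslash) can be sketched as simple helpers. These function names are illustrative, not part of any product API; uniqueness of the name must still be checked against the system.

```python
def normalize_host_id(host_id):
    """Strip colons from a WWPN, matching the behavior described above.
    IQNs are returned unchanged."""
    if host_id.lower().startswith("iqn."):
        return host_id
    return host_id.replace(":", "")

def valid_host_name(name):
    """A host name cannot include a comma, double quote, or backslash."""
    return not any(c in name for c in ',"\\')
```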
Removing hosts
To remove hosts
1. Verify that the hosts you want to remove are not accessing volumes.
2. In the Configuration View panel, either:
• Right-click the system or Hosts and then select Provisioning > Remove Hosts.
• Right-click a host and select Provisioning > Remove Host.
3. In the main panel, select the hosts to remove. To select or clear all hosts, toggle the checkbox in the
heading row.
4. Click Remove Host(s). A confirmation dialog appears.
5. Click Remove to continue; otherwise, click Cancel. If you clicked Remove, a processing dialog appears.
If the task succeeds, the hosts are removed from the table and from the Configuration View panel. When processing is complete a success dialog appears.
6. Click OK.
Changing a host's name
To change a host's name
1. In the Configuration View panel, right-click a host and select Provisioning > Rename Host.
2. Enter a new name that helps you easily identify the host; for example, MailServer_P1. A host name is
case sensitive and cannot already exist in the system. A name cannot include a comma, double quote, or backslash.
3. Click Modify Name.
Changing host mappings
For each volume that is mapped to the selected host, you can create, modify, and delete explicit mappings. To change a volume's default mapping, see Changing a volume's default mapping on page 66.
CAUTION: Volume mapping changes take effect immediately. Make changes that limit access to volumes
when the volumes are not in use. Be sure to unmount a mapped volume from a host system before changing the mapping's LUN.
NOTE: You cannot map the secondary volume of a replication set.
NOTE: When mapping a volume to a host using the Linux ext3 file system, specify read-write access;
otherwise, the file system will be unable to mount the volume and will report an error such as “unknown partition table.”
To view host mappings
In the Configuration View panel, right-click a host and select Provisioning > Manage Host Mappings. The main panel shows the following information about volumes mapped to the host:
• Type. Explicit or Default. Settings for an explicit mapping override the default mapping.
• Name. Volume name.
• Serial Number. Volume serial number.
• Ports. Controller host ports through which the volume is mapped to the host.
• LUN. Volume identifier presented to the host.
• Access. Volume access type: read-write, read-only, no-access (masked), or not-mapped.
To create an explicit mapping
1. In the Maps for Host table, select the Default mapping to override.
2. Select Map.
3. Set the LUN and select the ports and access type.
4. Click Apply. A message specifies whether the change succeeded or failed.
5. Click OK. The mapping becomes Explicit with the new settings.
To modify an explicit mapping
1. In the Maps for Host table, select the Explicit mapping to change.
2. Set the LUN and select the ports and access type.
3. Click Apply. A message specifies whether the change succeeded or failed.
4. Click OK. The mapping settings are updated.
To delete an explicit mapping
1. In the Maps for Host table, select the Explicit mapping to delete.
2. Clear Map.
3. Click Apply. A message specifies whether the change succeeded or failed.
4. Click OK. The mapping returns to the Default mapping.
Configuring CHAP
For iSCSI, you can use Challenge-Handshake Authentication Protocol (CHAP) to perform authentication between the initiator and target of a login request.
To perform this identification, a database of CHAP entries must exist on each device. Each CHAP entry can specify one name-secret pair to authenticate the initiator only (one-way CHAP) or two pairs to authenticate both the initiator and the target (mutual CHAP). For a login request from an iSCSI host to a storage system, the host is the initiator and the storage system is the target.
To enable or disable CHAP for all iSCSI hosts, see Changing host interface settings on page 46.
To add or modify a CHAP entry
1. In the Configuration View panel, right-click Hosts or a specific host and then select Provisioning >
Configure CHAP. If any CHAP entries exist, a table shows them by node name.
2. Optionally, select an entry whose name you want to change to create a new entry. The entry's values
appear in the option fields.
3. Set the options:
• Node Name (IQN). The initiator's IQN.
• Secret. The secret that the target uses to authenticate the initiator. The secret is case sensitive and
can include 12–16 bytes.
• Name, if mutual CHAP. Optional; for mutual CHAP only. Specifies the target name, which is
typically the target's IQN. The name is case sensitive, can include a maximum of 223 bytes, and must differ from the initiator name. To find a controller iSCSI port’s IQN, select the controller enclosure, view the Enclosure Overview panel (page 99), select the Rear Graphical tab, select an iSCSI port, and view the Target ID field.
• Secret, if mutual CHAP. Optional; for mutual CHAP only. Specifies the secret that the initiator uses to
authenticate the target. The secret is case sensitive, can include 12–16 bytes, and must differ from the initiator secret. A storage system's secret is shared by both controllers.
4. Click Add/Modify Entry. If the task succeeds, the new or modified entry appears in the CHAP entries
table.
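The CHAP constraints above (secrets of 12–16 bytes, a mutual-CHAP target name of at most 223 bytes that differs from the initiator name, and secrets that differ from each other) can be checked with a sketch like the following. The function name is hypothetical; the SMU enforces these rules itself.

```python
def validate_chap_entry(initiator_name, initiator_secret,
                        target_name=None, target_secret=None):
    """Check CHAP entry constraints; raises ValueError on violation.
    One-way CHAP uses the first two arguments; mutual CHAP uses all four."""
    if not 12 <= len(initiator_secret.encode()) <= 16:
        raise ValueError("initiator secret must be 12-16 bytes")
    if (target_name is None) != (target_secret is None):
        raise ValueError("mutual CHAP needs both a target name and secret")
    if target_name is not None:
        if len(target_name.encode()) > 223:
            raise ValueError("target name must be at most 223 bytes")
        if target_name == initiator_name:
            raise ValueError("target name must differ from initiator name")
        if not 12 <= len(target_secret.encode()) <= 16:
            raise ValueError("target secret must be 12-16 bytes")
        if target_secret == initiator_secret:
            raise ValueError("target secret must differ from initiator secret")
    return True
```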
To delete a CHAP entry
1. In the Configuration View panel, right-click Hosts or a specific host and then select Provisioning >
Configure CHAP. If any CHAP entries exist, a table shows them by node name.
2. Select the entry to delete.
3. Click Delete Entry. If the task succeeds, the entry is removed from the CHAP entries table.
Modifying a schedule
To modify a schedule
1. In the Configuration View panel, right-click the system or a volume or a snapshot and select
Provisioning > Modify Schedule.
2. In the main panel, select the schedule to modify.
3. Set the options:
• Snapshot Prefix. Optionally change the default prefix to identify snapshots created by this task. The
prefix is case sensitive and cannot include a comma, double quote, or backslash. Automatically created snapshots are named prefix_s001 through prefix_s1023.
• Snapshots to Retain. Select the number of snapshots to retain. When the task runs, the retention
count is compared with the number of existing snapshots:
• If the retention count has not been reached, the snapshot is created.
• If the retention count has been reached, the volume's oldest snapshot is unmapped, reset, and renamed to the next name in the sequence.
• Start Schedule. Specify a date and a time in the future for the schedule to start running.
• Date must use the format yyyy-mm-dd.
• Time must use the format hh:mm followed by either AM, PM, or 24H (24-hour clock). For example, 13:00 24H is the same as 1:00 PM.
• Recurrence. Specify how often the task should run. It is not recommended to set the interval to less than two minutes. The default and minimum interval to replicate a volume is 30 minutes.
• Time Constraint. Specify a time range within which the task should run.
• Date Constraint. Specify days when the task should run.
• End Schedule. Specify when the task should stop running.
4. Click Modify Schedule.
5. Click Yes to continue; otherwise, click No. If you clicked Yes, a processing dialog appears. When
processing is complete a success dialog appears.
6. Click OK.
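The retention behavior described above can be sketched as follows: if the retention count has not been reached, a new snapshot is created; otherwise the oldest snapshot is reused (unmapped, reset, and renamed to the next name in the sequence prefix_s001 through prefix_s1023). This is a simplified illustration; the actual sequence numbering and ordering are handled by the system.

```python
def next_snapshot_action(existing_snapshots, retain, prefix):
    """Decide what a scheduled snapshot task does when it runs.
    existing_snapshots is assumed to be ordered oldest-first."""
    if len(existing_snapshots) < retain:
        name = "%s_s%03d" % (prefix, len(existing_snapshots) + 1)
        return ("create", name)
    oldest = existing_snapshots[0]          # reuse the oldest snapshot
    next_num = len(existing_snapshots) + 1  # simplified sequence numbering
    return ("reuse", oldest, "%s_s%03d" % (prefix, next_num))
```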
Deleting schedules
If a component has a scheduled task that you no longer want to occur, you can delete the schedule. When a component is deleted, its schedules are also deleted.
To delete task schedules
1. In the Configuration View panel, right-click the system or a volume or a snapshot and select
Provisioning > Delete Schedule.
2. In the main panel, select the schedule to remove.
3. Click Delete Schedule. A confirmation dialog appears.
4. Click Yes to continue; otherwise, click No. If you clicked Yes, a processing dialog appears. If the task
succeeds, the schedules are removed from the table and from the Configuration View panel. When processing is complete a success dialog appears.
5. Click OK.
4 Using system tools
Updating firmware
You can view the current versions of firmware in controller modules, expansion modules (in drive enclosures), and disks, and install new versions.
TIP: To ensure success of an online update, select a period of low I/O activity. This helps the update complete as quickly as possible and avoids disruption to hosts and applications due to timeouts. Attempting to update a storage system that is processing a large, I/O-intensive batch job will likely cause hosts to lose connectivity with the storage system.
NOTE: If a vdisk is quarantined, firmware update is not permitted because of the risk of losing unwritten data that remains in cache for the vdisk’s volumes. If you want to retain that data, you must dequarantine the vdisk and then clear the cache. If you do not need to retain the data, you can delete the vdisk. After doing either of these, you can update firmware.
Updating controller module firmware
A controller enclosure can contain one or two controller modules. In a dual-controller system, both controllers should run the same firmware version. You can update the firmware in each controller module by loading a firmware file obtained from the HP web download site, http://www.hp.com/go/p2000. To install an HP ROM Flash Component or firmware Smart Component, follow the instructions on the HP web site; otherwise, to install a firmware binary file, follow the steps below.
If you have a dual-controller system and the Partner Firmware Update option is enabled, when you update one controller the system automatically updates the partner controller. If Partner Firmware Update is disabled, after updating software on one controller you must log into the partner controller's IP address and perform this firmware update on that controller also.
To update controller module firmware
1. Obtain the appropriate firmware file and download it to your computer or network.
2. If the system has a single controller, stop I/O to vdisks before starting the firmware update.
3. In the Configuration View panel, right-click the system and select Tools > Update Firmware. The table
titled Current Controller Versions shows the currently installed versions.
4. Click Browse and select the firmware file to install.
5. Click Install Controller-Module Firmware File. It takes approximately 10 minutes for the firmware to load
and for the automatic restart to complete on the controller you are connected to. Wait for the progress messages to specify that the update has completed.
If the controller enclosure has attached drive enclosures, allow additional time for each expansion module to be updated. It typically takes 4.5 minutes to update an EMP in each D2700 drive enclosure, 9 minutes to update an EMP in each MSA70 drive enclosure, 2.5 minutes to update an EMP in each P2000 drive enclosure, or 3 minutes to update an EMP in each MSA2000 drive enclosure. Wait for the progress messages to specify that the update has completed.
CAUTION: Do not perform a power cycle or controller restart during a firmware update. If the
update is interrupted or there is a power failure, the module might become inoperative. If this occurs, contact technical support. The module might need to be returned to the factory for reprogramming.
6. Clear your web browser’s cache and log back in to SMU.
7. If Partner Firmware Update is enabled, allow an additional 20 minutes for the partner controller to be
updated.
8. Verify that the proper firmware version appears for each controller module.
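The per-enclosure EMP update times quoted above can be used to estimate how long a full update will take. This is a rough planning sketch only; the table and function are illustrative, and it assumes one EMP update per listed enclosure.

```python
# Approximate per-EMP update times in minutes, as quoted in the manual.
EMP_UPDATE_MINUTES = {"D2700": 4.5, "MSA70": 9, "P2000": 2.5, "MSA2000": 3}

def estimated_emp_update_minutes(enclosures):
    """Rough total time to update EMPs in the given drive enclosures,
    e.g. ["D2700", "MSA70"]."""
    return sum(EMP_UPDATE_MINUTES[model] for model in enclosures)
```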
Updating expansion module firmware
A drive enclosure can contain one or two expansion modules. Each expansion module contains an enclosure management processor (EMP). All modules of the same model should run the same firmware version.
You can update the firmware in each expansion module by loading a firmware file obtained from the HP web download site, http://www.hp.com/go/p2000. To install an HP ROM Flash Component or firmware Smart Component, follow the instructions on the HP web site; otherwise, to install a firmware binary file, follow the steps below.
To update expansion module firmware
1. Obtain the appropriate firmware file and download it to your computer or network.
2. If the system has a single controller, stop I/O to vdisks before starting the firmware update.
3. In the Configuration View panel, right-click the system and select Tools > Update Firmware. The table
titled Current Versions of All Expansion Modules (EMPs) shows the currently installed versions.
4. Select the expansion modules to update.
5. Click Browse and select the firmware file to install.
6. Click Install Expansion-Module Firmware File. It typically takes 4.5 minutes to update an EMP in each
D2700 drive enclosure, 9 minutes to update an EMP in each MSA70 drive enclosure, 2.5 minutes to update an EMP in each P2000 drive enclosure, or 3 minutes to update an EMP in each MSA2000 drive enclosure. Wait for the progress messages to specify that the update has completed.
CAUTION: Do not perform a power cycle or controller restart during the firmware update. If the
update is interrupted or there is a power failure, the module might become inoperative. If this occurs, contact technical support. The module might need to be returned to the factory for reprogramming.
7. If you updated firmware in an HP MSA70 drive enclosure, power cycle that enclosure to complete the
update process.
8. Verify that the proper firmware version appears for each updated expansion module.
Updating disk firmware
You can update disk firmware by loading a firmware file obtained from the HP web download site, http://www.hp.com/go/p2000. To install an HP ROM Flash Component or firmware Smart Component, follow the instructions on the HP web site; otherwise, to install a firmware binary file, follow the steps below.
A dual-ported disk can be updated from either controller. A single-ported disk that is in a vdisk or is a dedicated spare for a vdisk must be updated from the controller that owns the vdisk. Attempting to update a single-ported disk from the non-owning controller will not cause any change to the disk.
Disks in single-ported MSA70 drive enclosures must be updated from the controller to which the MSA70 is connected.
NOTE: Disks of the same model in the storage system must have the same firmware revision.
To update disk firmware
1. Obtain the appropriate firmware file and download it to your computer or network.
2. Check the disk manufacturer’s documentation to determine whether disks must be power cycled after
firmware update.
3. Stop I/O to the storage system. During the update all volumes will be temporarily inaccessible to hosts.
If I/O is not stopped, mapped hosts will report I/O errors. Volume access is restored after the update completes.
4. In the Configuration View panel, right-click the system and select Tools > Update Firmware. The table
titled Current Versions (Revisions) of All Disk Drives shows the currently installed versions.
5. Select the disks to update.
6. Click Install Disk Firmware File. It typically takes several minutes for the firmware to load. Wait for the
progress messages to specify that the update has completed.
CAUTION: Do not power cycle enclosures or restart a controller during the firmware update. If the
update is interrupted or there is a power failure, the disk might become inoperative. If this occurs, contact technical support.
7. If the updated disks must be power cycled:
a. Shut down both controllers; see Restarting or shutting down controllers on page 83.
b. Power cycle all enclosures as described in your product’s user guide.
8. Verify that each disk has the correct firmware revision.
NOTE: If you loaded firmware to a Seagate 750-Gbyte Barracuda ES SATA drive, after spin-up it will be busy for about 50 seconds completing its update. Then it will be ready for host I/O.
Saving logs
To help service personnel diagnose a system problem, you might be asked to provide system log data. Using SMU, you can save log data to a compressed zip file. The file will contain the following data:
• Device status summary, which includes basic status and configuration data for the system
• Each controller's event log
• Each controller's debug log
• Each controller's boot log, which shows the startup sequence
• Critical error dumps from each controller, if critical errors have occurred
• CAPI traces from each controller
NOTE: The controllers share one memory buffer for gathering log data and for loading firmware. Do not
try to perform more than one save-logs operation at a time, or to perform a firmware-update operation while performing a save-logs operation.
To save logs
1. In the Configuration View panel, right-click the system and select Tools > Save Logs.
2. In the main panel:
a. Enter your name, email address, and phone number so support personnel will know who provided
the log data.
b. Enter comments describing the problem and specifying the date and time when the problem occurred. This information helps service personnel when they analyze the log data. Comment text can be up to 500 bytes long.
3. Click Save Logs.
NOTE: In Microsoft Internet Explorer if the download is blocked by a security bar, select its
Download File option. If the download does not succeed the first time, return to the Save Logs panel
and retry the save operation.
Log data is collected, which takes several minutes.
4. When prompted to open or save the file, click Save.
• If you are using Firefox and have a download directory set, the file store.zip is saved there.
• Otherwise, you are prompted to specify the file location and name. The default file name is store.zip. Change the name to identify the system, controller, and date.
NOTE: Because the file is compressed, you must uncompress it before you can view the files it contains.
To examine diagnostic data, first view store_yyyy_mm_dd__hh_mm_ss.logs.
Resetting a host port
Making a configuration or cabling change on a host might cause the storage system to stop accepting I/O requests from that host. For example, this problem can occur after moving host cables from one HBA to another on the host. To fix such a problem you might need to reset controller host ports (channels).
For a Fibre Channel host port configured to use FC-AL (loop) topology, a reset issues a loop initialization primitive (LIP). For iSCSI, resetting a port might reset other ports. For SAS, resetting a host port issues a COMINIT/COMRESET sequence and might reset other ports.
To reset a host port
1. In the Configuration View panel, right-click the system and select Tools > Reset Host Port.
2. Select the port to reset. For example, to reset controller A port 1, select A1.
3. Click Reset Host Port.
Rescanning disk channels
A rescan forces a rediscovery of disks and enclosures in the storage system. If two Storage Controllers are online, rescan also reassigns the enclosure IDs of attached enclosures based on controller A's enclosure cabling order. A manual rescan may be needed after system power-up to display enclosures in the proper order. A manual rescan temporarily pauses all I/O processes, then resumes normal operation. It can take up to two minutes for the enclosure IDs to be corrected.
A manual rescan is not needed after inserting or removing disks; the controllers automatically detect these changes. When disks are inserted they are detected after a short delay, which allows the disks to spin up.
To rescan disk channels
1. Verify that both controllers are operating normally.
2. In the Configuration View panel, right-click the system and select Tools > Rescan Disk Channels.
3. Click Rescan.
Restoring system defaults
If the system is not working properly and you cannot determine why, you can restore its default configuration settings. You then can reconfigure the settings that are necessary to use the system.
To restore defaults, use the CLI’s restore defaults command, as described in the CLI reference guide.
Clearing disk metadata
Each disk has metadata that identifies whether the disk is a member of a vdisk, and identifies other members of that vdisk. If a disk's metadata says the disk is a member of a vdisk but other members' metadata say the disk isn't a member, the disk becomes a leftover. The system overview and enclosure overview pages show the disk's How Used value as LEFTOVR. A leftover disk’s Fault/UID LED is illuminated amber.
Before you can use the disk in a new vdisk or as a spare, you must clear the disk's metadata.
To clear metadata from leftover disks
1. In the Configuration View panel, right-click the system and then select Tools > Clear Disk Metadata.
2. In the main panel, select disks to clear metadata from.
3. Click Clear Metadata. When processing is complete a success dialog appears.
4. Click OK.
Restarting or shutting down controllers
You can restart the processors in a controller module when SMU informs you that you have changed a configuration setting that requires restarting or when the controller is not working properly. Shut down the processors in a controller module before you remove it from an enclosure, or before you power off its enclosure for maintenance, repair, or a move.
A restart can be performed on either the Storage Controller processor or the Management Controller processor. A shut down affects both processors.
Restarting
If you restart a Storage Controller, it attempts to shut down with a proper failover sequence, which includes stopping all I/O operations and flushing the write cache to disk, and then the controller restarts. The Management Controller is not restarted so it can provide status information to external interfaces.
If you restart a Management Controller, communication with it is lost until it successfully restarts. If the restart fails, the partner MC remains active with full ownership of operations and configuration information.
CAUTION: If you restart both controller modules, you and users lose access to the system and its data until
the restart is complete.
To perform a restart
1. In the Configuration View panel, right-click the local system and select Tools > Shut Down or Restart Controller.
2. In the main panel, set the options:
   • Select the Restart operation.
   • Select the type of controller processor to restart.
   • Select whether to restart the processor in controller A, B, or both.
3. Click Restart now. A confirmation dialog appears.
4. Click Yes to continue; otherwise, click No. If you clicked Yes, a second confirmation dialog appears.
5. Click Yes to continue; otherwise, click No. If you clicked Yes, a message describes restart activity.
NOTE: If an iSCSI port is connected to a Microsoft Windows host, the following event is recorded
in the Windows event log: A connection to the target was lost, but Initiator successfully reconnected to the target.
Shutting down
Shutting down the Storage Controller in a controller module ensures that a proper failover sequence is used, which includes stopping all I/O operations and writing any data in write cache to disk. If the Storage Controller in both controller modules is shut down, hosts cannot access the system's data. Perform a shut down before removing a controller module or powering down the system.
CAUTION: You can continue to use the CLI when either or both Storage Controllers are shut down, but
information shown might be invalid.
To perform a shut down
1. In the Configuration View panel, right-click the local system and select Tools > Shut Down or Restart Controller.
2. In the main panel, set the options:
   • Select the Shut down operation.
   • Select whether to shut down the processor in controller A, B, or both.
3. Click Shut down now. A confirmation dialog appears.
4. Click Yes to continue; otherwise, click No. If you clicked Yes, a second confirmation dialog appears.
5. Click Yes to continue; otherwise, click No. If you clicked Yes, a message describes shutdown activity.
NOTE: If an iSCSI port is connected to a Microsoft Windows host, the following event is recorded
in the Windows event log: Initiator failed to connect to the target.
Testing event notification
You can send a test message to verify that email addresses and/or SNMP trap hosts are properly configured to receive event-notification messages. In order to receive messages, the email or SNMP configuration settings must include a notification level other than “none (disabled).”
To send a test message
1. In the Configuration View panel, right-click the local system and select Tools > Send Test Notification.
2. Click Send. If the task succeeds, verify that the test message reached the destinations.
Expanding a vdisk
You can expand the capacity of a vdisk by adding disks to it, up to the maximum number of disks that the storage system supports. Host I/O to the vdisk can continue while the expansion proceeds. You can then create or expand a volume to use the new free space, which becomes available when the expansion is complete. You can expand only one vdisk at a time. The RAID level determines whether the vdisk can be expanded and the maximum number of disks the vdisk can have.
NOTE: Expansion can take hours or days to complete, depending on the vdisk's RAID level and size, disk
speed, utility priority, and other processes running on the storage system. You can stop expansion only by deleting the vdisk.
Before expanding a vdisk
Back up the vdisk's data so that if you need to stop expansion and delete the vdisk, you can move the data into a new, larger vdisk.
To expand a vdisk
1. In the Configuration View panel, right-click a vdisk and select Tools > Expand Vdisk. Information
appears about the selected vdisk and all disks in the system.
• In the Disk Selection Sets table, the number of white slots in the vdisk's Disks field shows how many disks you can add to the vdisk.
• In the enclosure view or list, only suitable available disks are selectable.
2. Select disks to add.
3. Click Expand Vdisk. A processing dialog appears.
4. Click OK. The expansion’s progress is shown in the View > Overview panel.
Verifying a vdisk
If you suspect that a redundant (mirror or parity) vdisk has a problem, you can run the Verify utility to check the vdisk's integrity. For example, if the storage system was operating outside the normal temperature range, you might want to verify its vdisks. The Verify utility checks whether the redundancy data in the vdisk is consistent with the user data in the vdisk. For RAID 3, 5, 6, and 50, the utility checks all parity blocks to find data-parity mismatches. For RAID 1 and 10, the utility compares the primary and secondary disks to find data inconsistencies.
Verification can last over an hour, depending on the size of the vdisk, the utility priority, and the amount of I/O activity. When verification is complete, the number of inconsistencies found is reported with event code 21 in the event log. Such inconsistencies can indicate that a disk in the vdisk is going bad. For information about identifying a failing disk, use the SMART option (see Configuring SMART on page 49). You can use a vdisk while it is being verified.
If too many utilities are running for verification to start, either wait until those utilities have completed and try again, or abort a utility to free system resources. If you abort verification, you cannot resume it; you must start it over.
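Conceptually, the parity check for RAID 3/5-style vdisks recomputes XOR parity for each stripe and compares it with the stored parity block, counting mismatches. A minimal sketch under that assumption; this is not the controller's actual implementation:

```python
from functools import reduce

def parity_mismatches(stripes):
    """Count data-parity mismatches across stripes, analogous to what
    Verify reports for parity RAID levels. Each stripe is a tuple of
    (list of data blocks, stored parity block); blocks are equal-length
    bytes objects. Illustrative sketch only."""
    mismatches = 0
    for data_blocks, stored_parity in stripes:
        # XOR all data blocks together to recompute the parity block.
        computed = reduce(
            lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data_blocks
        )
        if computed != stored_parity:
            mismatches += 1
    return mismatches
```

A nonzero result would correspond to the inconsistency count reported with event code 21.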
To verify a vdisk
1. In the Configuration View panel, right-click a redundant vdisk and select Tools > Verify Vdisk.
2. Click Start Verify Utility. A message confirms that verification has started.
3. Click OK. The panel shows the verification's progress.
To abort vdisk verification
1. In the Configuration View panel, right-click a redundant vdisk and select Tools > Verify Vdisk.
2. Click Abort Verify Utility. A message confirms that verification has been aborted.
3. Click OK.
Scrubbing a vdisk
The system-level Vdisk Scrub option (see Configuring background scrub for vdisks on page 53) automatically checks all vdisks for disk defects. If this option is disabled, you can still perform a scrub on a selected vdisk.
The scrub utility analyzes a vdisk to detect, report, and store information about disk defects. Vdisk-level errors reported include: hard errors, media errors, and bad block replacements (BBRs). Disk-level errors reported include: metadata read errors, SMART events during scrub, bad blocks during scrub, and new disk defects during scrub. For RAID 3, 5, 6, and 50, the utility checks all parity blocks to find data-parity mismatches. For RAID 1 and 10, the utility compares the primary and secondary disks to find data inconsistencies. For NRAID and RAID 0, the utility checks for media errors. This utility does not fix defects.
You can use a vdisk while it is being scrubbed. A scrub can last over an hour, depending on the size of the vdisk, the utility priority, and the amount of I/O activity. However, a “foreground” scrub performed by Media Scrub Vdisk is typically faster than a background scrub performed by Vdisk Scrub.
When a scrub is complete, an event with code 207 is logged that specifies whether errors were found. For details, see the Event Descriptions Reference Guide.
To scrub a vdisk
1. In the Configuration View panel, right-click a vdisk and select Tools > Media Scrub Vdisk.
2. Click Start Media Scrub Utility. A message confirms that the scrub has started.
3. Click OK. The panel shows the scrub's progress.
To abort a vdisk scrub
1. In the Configuration View panel, right-click a vdisk and select Tools > Media Scrub Vdisk.
NOTE: If the vdisk is being scrubbed but the Abort Media Scrub Utility button is grayed out, a
background scrub is in progress. To stop the background scrub, disable the Vdisk Scrub option as described in Configuring background scrub for vdisks on page 53.
2. Click Abort Media Scrub Utility. A message confirms that the scrub has been aborted.
3. Click OK.
Removing a vdisk from quarantine
A vdisk having a fault-tolerant RAID level becomes quarantined if at least one of its disks is missing after the storage system is powered up. Quarantine does not occur for NRAID or RAID-0 vdisks; if known-failed disks are missing; or if disks are missing after failover or recovery.
Quarantine isolates the vdisk from host access, and prevents the system from making the vdisk critical and starting reconstruction. Examples of when quarantine might occur are:
• The system is powered up and at least one disk is slow to spin up. A vdisk requiring any of these disks will be quarantined, then automatically dequarantined when all disks are ready.
• The system is powered up but one drive enclosure is not powered up. If a vdisk has disks in the missing enclosure, the vdisk becomes quarantined. When the drive enclosure is powered up, the vdisk is automatically dequarantined.
• Certain transient back-end conditions occur that require a short time to stabilize before vdisks can be used.
Quarantine status is determined by the number of missing disks:
• Quarantined offline (QTOF): The vdisk is offline and quarantined because multiple disks are missing and user data is incomplete.
• Quarantined critical (QTCR): The vdisk is offline and quarantined because at least one disk is missing; however, the vdisk could be accessed. For instance, one disk is missing from a mirror or RAID-5.
• Quarantined with down disks (QTDN): The vdisk is offline and quarantined because at least one disk is missing; however, the vdisk could be accessed and would be fault tolerant. For instance, one disk is missing from a RAID-6.
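The three statuses can be summarized as a function of how many disks are missing relative to the RAID level's fault tolerance. A hypothetical sketch, not the controller's actual logic:

```python
def quarantine_status(missing, fault_tolerance):
    """Classify a quarantined vdisk by missing-disk count.
    `fault_tolerance` is how many disks the RAID level can lose and
    stay fault tolerant (e.g. 1 for RAID 5, 2 for RAID 6).
    Illustrative sketch only."""
    if missing == 0:
        return None  # all disks present; not quarantined
    if missing < fault_tolerance:
        return "QTDN"  # accessible and still fault tolerant
    if missing == fault_tolerance:
        return "QTCR"  # accessible but no longer fault tolerant
    return "QTOF"      # too many disks missing; user data incomplete
```

For example, a RAID-6 vdisk (tolerance 2) missing one disk maps to QTDN, while a RAID-5 vdisk (tolerance 1) missing one disk maps to QTCR.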
A quarantined vdisk’s disks are write-locked and the vdisk is not available to hosts until the vdisk is dequarantined. A vdisk can remain quarantined indefinitely without risk of data loss.
NOTE: The only tasks allowed for a quarantined vdisk are Dequarantine Vdisk and Delete Vdisk. If you
delete a quarantined vdisk and its missing disks are later found, the vdisk reappears as quarantined or offline and you must delete it again (to clear those disks).
A vdisk is dequarantined when it is brought back online, which can occur in three ways:
• The missing disks are found and the vdisk is automatically dequarantined.
• A vdisk with QTCR or QTDN status is automatically dequarantined after one minute. The missing disks are failed and the vdisk status changes to critical or degraded.
• Dequarantine Vdisk is used to manually dequarantine the vdisk. If the missing disks are later found, they are marked as leftovers.
A quarantined vdisk can be fully recovered if the missing disks are restored. Make sure that all disks are properly seated, that no disks have been inadvertently removed, and that no cables have been unplugged. Sometimes not all disks in the vdisk power up. Check that all enclosures have restarted after a power failure. If these problems are found and then fixed, the vdisk recovers and no data is lost.
If the missing disks cannot be restored (for example, they failed), you can dequarantine the vdisk to restore operation in some cases. If you dequarantine a vdisk that is not missing too many disks, its status changes to critical. Then, if spares of the appropriate size are available, reconstruction begins.
If a replacement disk is missing on power up, the vdisk becomes quarantined; when the disk is found, the vdisk is dequarantined and reconstruction starts. If reconstruction was in process, it continues where it left off.
CAUTION: If the vdisk does not have enough disks to continue operation, when the vdisk is removed from
quarantine it goes offline and its data cannot be recovered. To continue operation, a RAID-3 or RAID-5 vdisk can be missing one disk; a RAID-6 vdisk can be missing one or two disks; a RAID-10 or RAID-50 vdisk can be missing one disk per sub-vdisk. For example, a 16-disk RAID-10 vdisk can remain online (critical) with 8 disks missing if one disk per mirror is missing.
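The per-level limits in the CAUTION can be expressed as a small helper. An illustrative sketch (the RAID-level labels are assumptions about input format):

```python
def max_missing_for_operation(raid, sub_vdisks=1):
    """Maximum missing disks before a vdisk cannot continue operation,
    per the CAUTION above: RAID 3/5 can miss one disk, RAID 6 one or
    two, RAID 10/50 one per sub-vdisk. Illustrative sketch only."""
    if raid in ("RAID3", "RAID5"):
        return 1
    if raid == "RAID6":
        return 2
    if raid in ("RAID10", "RAID50"):
        return sub_vdisks  # one missing disk allowed per sub-vdisk
    return 0  # NRAID / RAID 0: no redundancy
```

This reproduces the example above: a 16-disk RAID-10 vdisk with 8 mirrors can have up to 8 missing disks, one per mirror.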
To remove a vdisk from quarantine
1. In the Configuration View panel, right-click a quarantined vdisk and select Tools > Dequarantine Vdisk.
2. Click Dequarantine Vdisk. Depending on the number of disks that remain active in the vdisk, its health
might change to Degraded (RAID 6 only) and its status changes to FTOL, CRIT, or FTDN. For status descriptions, see Vdisk properties on page 92.
Expanding a snap pool
By default, snap pools are configured to automatically expand when they become 90% full.
However, if a snap pool’s policy is not set to Auto Expand and the snap pool is running out of free space, you can manually expand the snap pool.
For expansion to succeed, the vdisk must have free space and sufficient resources. Because expansion does not require I/O to be stopped, the snap pool can continue to be used during expansion.
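The default auto-expand behavior amounts to a simple threshold check. A hedged sketch of that 90%-full rule (not the SMU API):

```python
def needs_expansion(used_bytes, size_bytes, threshold=0.90):
    """Return True when a snap pool has passed the auto-expand
    threshold. The 90% default comes from the text above; the
    function itself is an illustrative sketch."""
    return used_bytes / size_bytes >= threshold
```

With the Auto Expand policy disabled, crossing this threshold is the point at which a manual expansion, as described below, becomes necessary.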
To expand a snap pool
1. In the Configuration View panel, right-click a volume and select Tools > Expand Snap Pool.
2. In the main panel, specify the amount of free space to add to the snap pool.
3. Click Expand Snap Pool. If the task succeeds, the snap pool's size is updated in the Configuration View
panel.
Checking links to a remote system
After a remote system has been added, you can check the connectivity of host ports in the local system to host ports in that remote system. A port in the local system can link only to ports with the same host interface, such as Fibre Channel (FC), in the remote system. When you check links, the panel shows the following information for each connected host port in the local system:
• The link type
• The ID of the port in the local system
• The ID of each accessible port in the remote system
If a host port is not shown, then either:
• It is disconnected
• Its link type is not supported by both systems
To check links to a remote system
1. In the Configuration View panel, right-click a remote system and then select Tools > Check Remote
System Links.
2. Click Check Links.
5 Viewing system status
Viewing information about the system
In the Configuration View panel, right-click the system and select View > Overview. The System Overview table shows:
• The system’s health:
  • OK. The system is operating normally.
  • Degraded. At least one component is degraded.
  • Fault. At least one component has a fault.
  • N/A. Health status is not available.
• The system's total storage space
• The health, quantity, and storage space of enclosures, disks, and vdisks
• The quantity and storage space of volumes and snap pools
• The quantity of snapshots and task schedules
• Configuration limits, licensed features, and versions of controller firmware and hardware
NOTE: If an I/O module in an MSA70 drive enclosure has a firmware revision below 2.18, the
enclosure's health is shown as degraded and the health reason identifies the I/O module that needs to be updated.
For descriptions of storage-space color codes, see About storage-space color codes on page 33.
Select a component to see more information about it.
System properties
When you select the System component a table shows the system's health, name, contact, location, information (description), vendor name, product ID, product brand, SCSI vendor ID, and supported locales (languages).
A second table shows the system's redundancy mode and status, and each controller's status.
Enclosure properties
When you select the Enclosure component a table shows each enclosure's health, ID, WWN, vendor, model, and quantity of disk slots.
Disk properties
When you select the Disks component a table shows each disk's health, enclosure ID, slot number, serial number, vendor, model, firmware revision, type, usage, status, and size. How Used values are described in the disk properties section of Viewing information about a vdisk on page 92.
Vdisk properties
When you select the Vdisks component a table shows each vdisk's health, name, size, free space, RAID level, status, and disk type. Status values are described in the vdisk properties section of Viewing
information about a vdisk on page 92.
Volume properties
When you select the Volumes component a table shows each volume's name, serial number, size, and vdisk name.
Snap-pool properties
When you select the Snap Pools component a table shows each snap pool's name, serial number, size, free space, master volumes, snapshots, and vdisk name.
Snapshot properties
When you select the Snapshots component a table shows each snapshot's name, serial number, source volume, snap-pool name, amounts of snap data, unique data, and shared data, and vdisk name.
• Snap data is the total amount of data associated with the specific snapshot (data copied from a source volume to a snapshot and data written directly to a snapshot).
• Unique data is the amount of data that has been written to the snapshot since the last snapshot was taken. If the snapshot has not been written to or is deleted, this value is zero bytes.
• Shared data is the amount of data that is potentially shared with other snapshots and the associated amount of space that will be freed if the snapshot is deleted. This represents the amount of data written directly to the snapshot. It also includes data copied from the source volume to the storage area for the oldest snapshot, since that snapshot does not share data with any other snapshot. For a snapshot that is not the oldest, if the modified data is deleted or if it had never been written to, this value is zero bytes.
Schedule properties
When you select the Schedules component a table shows each schedule's name, specification, status, next run time, task type, task status, and task state.
For the selected schedule, three tables appear. The first table shows schedule details and the second table shows task details. For a task of type TakeSnapshot, the third table shows the name and serial number of each snapshot that the task has taken and is retaining.
Configuration limits
When you select the Configuration Limits component a table shows the maximum quantities of vdisks, volumes, LUNs, disks, and host ports that the system supports.
Licensed features
When you select the Licensed Features component a table shows the status of licensed features.
Version properties
When you select the Versions component a table shows the versions of firmware and hardware in the system.
Viewing the system event log
In the Configuration View panel, right-click the system and select View > Event Log. The System Events panel shows the 100 most recent events that have been logged by either controller. All events are logged, regardless of event-notification settings. Click the buttons above the table to view all events, or only critical, warning, or informational events.
The event log table shows the following information:
• Severity.
  • Critical. A failure occurred that may cause a controller to shut down. Correct the problem immediately.
  • Error. A failure occurred that may affect data integrity or system stability. Correct the problem as soon as possible.
  • Warning. A problem occurred that may affect system stability but not data integrity. Evaluate the problem and correct it if necessary.
  • Informational. A configuration or state change occurred, or a problem occurred that the system corrected. No action is required.
• Time. Date and time when the event occurred, shown as year-month-day hour:minutes:seconds in Coordinated Universal Time (UTC). Time stamps have one-second granularity.
• Event ID. An identifier for the event. The prefix A or B identifies the controller that logged the event.
• Code. Event code that helps you and support personnel diagnose problems. For event-code descriptions and recommended actions, see the event descriptions reference guide.
• Message. Information about the event.
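For scripted triage, an event row with the columns above could be parsed as follows. The tab-separated input format and field order are assumptions for illustration; only the timestamp format and the A/B event-ID prefix come from the descriptions above:

```python
from datetime import datetime, timezone

def parse_event(row):
    """Parse one event-log row into a dict. The prefix of the event ID
    ('A' or 'B') identifies the controller that logged the event, and
    the timestamp is UTC with one-second granularity. The tab-separated
    layout is a hypothetical export format, not an SMU feature."""
    severity, ts, event_id, code, message = row.split("\t")
    return {
        "severity": severity,
        "time": datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").replace(
            tzinfo=timezone.utc
        ),
        "controller": event_id[0],  # 'A' or 'B'
        "event_id": event_id,
        "code": int(code),
        "message": message,
    }
```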
NOTE: If you are having a problem with the system or a vdisk, check the event log before calling
technical support. Event messages might enable you to resolve the problem.
When reviewing events, do the following:
1. For any critical, error, or warning events, look for recommended actions in the event descriptions reference guide. Identify the primary event and any events that might be its cause. For example, an over-temperature event could cause a disk failure.
2. View the event log and locate other critical/error/warning events in the sequence for the controller that
reported the event. Repeat this step for the other controller if necessary.
3. Review the events that occurred before and after the primary event.
During this review you are looking for any events that might indicate the cause of the critical/error/warning event. You are also looking for events that resulted from the critical/error/warning event, known as secondary events.
4. Review the events following the primary and secondary events.
You are looking for any actions that might have already been taken to resolve the problems reported by the events.
Viewing information about all vdisks
In the Configuration View panel, right-click Vdisks and select View > Overview. The Vdisks Overview table shows the overall health, quantity, capacity, and space usage of existing vdisks. For descriptions of storage-space color codes, see About storage-space color codes on page 33.
For each vdisk, the Vdisks table shows the following details:
• Health.
  • OK. The vdisk is online with all disks working.
  • Degraded. The vdisk is being reconstructed, as shown by its Current Job property; or, a RAID-6 vdisk has degraded performance due to one missing disk but remains fault tolerant. You can use a degraded RAID-6 vdisk but resolve the problem as soon as possible.
  • Fault. The vdisk can perform I/O functions for hosts but is not fault tolerant. Review the status information and take the appropriate action, such as replacing a disk. You can use the vdisk but resolve the problem as soon as possible.
  • Shut down. The vdisk is shut down (stopped).
  • N/A. Health status is not available.
• Name. Vdisk name.
• Size. Total storage space in the vdisk.
• Free. Available space in the vdisk.
• RAID. RAID level of the vdisk and all of its volumes.
• Status.
  • CRIT: Critical. The vdisk is online but isn't fault tolerant because some of its disks are down.
  • FTDN: Fault tolerant with down disks. The vdisk is online and fault tolerant, but some of its disks are down.
  • FTOL: Fault tolerant and online.
  • OFFL: Offline. Either the vdisk is using offline initialization, or its disks are down and data may be lost.
  • QTCR: Quarantined critical. The vdisk is offline and quarantined because at least one disk is missing; however, the vdisk could be accessed. For instance, one disk is missing from a mirror or RAID-5.
  • QTDN: Quarantined with down disks. The vdisk is offline and quarantined because at least one disk is missing; however, the vdisk could be accessed and would be fault tolerant. For instance, one disk is missing from a RAID-6.
  • QTOF: Quarantined offline. The vdisk is offline and quarantined because multiple disks are missing and user data is incomplete.
  • UNKN: The vdisk is shut down (stopped).
  • UP: Up. The vdisk is online and does not have fault-tolerant attributes.
• Disk Type. SAS (dual port), SAS-S (single port), SATA (dual port), or SATA-S (single port).
• Preferred Owner. Controller that owns the vdisk and its volumes during normal operation.
• Current Owner. Either the preferred owner during normal operation or the partner controller when the preferred owner is offline.
• Disks. Quantity of disks in the vdisk.
• Spares. Quantity of dedicated spares in the vdisk.
Viewing information about a vdisk
In the Configuration View panel, right-click a vdisk and select View > Overview. The Vdisks Overview table shows:
• The overall health, capacity, and space usage of the vdisk
• The overall health, quantity, capacity, and space usage of disks in the vdisk
• The quantity, capacity, and space usage of volumes in the vdisk
• The quantity, capacity, and space usage of snap pools in the vdisk
For descriptions of storage-space color codes, see About storage-space color codes on page 33.
Select a component to see more information about it.
Vdisk properties
When you select the Vdisk component, the Properties for Vdisk table shows:
• Health.
  • OK. The vdisk is online with all disks working.
  • Degraded. The vdisk is being reconstructed, as shown by its Current Job property; or, a RAID-6 vdisk has degraded performance due to one missing disk but remains fault tolerant. You can use a degraded RAID-6 vdisk but resolve the problem as soon as possible.
  • Fault. The vdisk can perform I/O functions for hosts but is not fault tolerant. Review the status information and take the appropriate action, such as replacing a disk. You can use the vdisk but resolve the problem as soon as possible.
  • Shut down. The vdisk is shut down (stopped).
  • N/A. Health status is not available.
• Health Reason. Shows more information about the vdisk's status.
• Name. Vdisk name.
• Size. Total storage space in the vdisk.
• Free. Available space in the vdisk.
• Current Owner. Either the preferred owner during normal operation or the partner controller when the preferred owner is offline.
• Preferred Owner. Controller that owns the vdisk and its volumes during normal operation.
• Serial Number. Vdisk serial number.
• RAID. RAID level of the vdisk and all of its volumes.
• Disks. Quantity of disks in the vdisk.
• Spares. Quantity of dedicated spares in the vdisk.
• Chunk Size.
  • For RAID levels except NRAID, RAID 1, and RAID 50, the configured chunk size for the vdisk.
  • For NRAID and RAID 1, chunk size has no meaning and is therefore shown as not applicable (N/A).
  • For RAID 50, the vdisk chunk size is calculated as: configured-chunk-size x (subvdisk-members - 1). For a vdisk configured to use a 32-KB chunk size and 4-disk sub-vdisks, the value would be 96 KB (32 KB x 3).
• Created. Date and time when the vdisk was created.
• Minimum Disk Size. Capacity of the smallest disk in the vdisk.
• Status.
  • CRIT: Critical. The vdisk is online but isn't fault tolerant because some of its disks are down.
  • FTDN: Fault tolerant with down disks. The vdisk is online and fault tolerant, but some of its disks are down.
  • FTOL: Fault tolerant and online.
  • OFFL: Offline. Either the vdisk is using offline initialization, or its disks are down and data may be lost.
  • QTCR: Quarantined critical. The vdisk is offline and quarantined because at least one disk is missing; however, the vdisk could be accessed. For instance, one disk is missing from a mirror or RAID-5.
  • QTDN: Quarantined with down disks. The vdisk is offline and quarantined because at least one disk is missing; however, the vdisk could be accessed and would be fault tolerant. For instance, one disk is missing from a RAID-6.
  • QTOF: Quarantined offline. The vdisk is offline and quarantined because multiple disks are missing and user data is incomplete.
  • UNKN: The vdisk is shut down (stopped).
  • UP: Up. The vdisk is online and does not have fault-tolerant attributes.
• Current Job. If a utility is running on the vdisk, this field shows the utility's name and progress.
• Drive Spin Down Vdisk Enable. Shows whether drive spin down is enabled or disabled for this vdisk.
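The RAID-50 chunk-size rule noted under Chunk Size can be checked with a one-line helper (illustrative sketch):

```python
def raid50_chunk_size_kb(configured_kb, subvdisk_members):
    """Effective RAID-50 vdisk chunk size in KB, per the rule above:
    configured chunk size x (sub-vdisk members - 1), since one member
    of each sub-vdisk holds parity rather than data."""
    return configured_kb * (subvdisk_members - 1)
```

For the example in the text, a 32-KB configured chunk size with 4-disk sub-vdisks yields 96 KB.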
Disk properties
When you select the Disks component, a Disk Sets table and enclosure view appear. The Disk Sets table shows:
• Total Space. Total storage space in the vdisk, followed by a color-coded measure of how the space is used.
• Type. For RAID 10 or RAID 50, the sub-vdisk that the disk is in; for other RAID levels, the disk's RAID level; or SPARE.
• Disk Type. SAS (dual port), SAS-S (single port), SATA (dual port), or SATA-S (single port).
• Disks. Quantity of disks in the vdisk or sub-vdisk.
• Size. Total capacity of the disks in the vdisk or sub-vdisk.
The enclosure view table has two tabs. The Tabular tab shows:
• Health. Shows whether the disk is healthy or has a problem.
  • OK. The disk is operating normally.
  • Fault. The disk has failed.
  • Degraded. The disk's operation is degraded. If you find no related event in the event log, this may indicate a hardware problem.
  • N/A. Health status is not available.
• Name. System-defined disk name using the format Disk-enclosure-number.disk-slot-number.
• Type. SAS (dual port), SAS-S (single port), SATA (dual port), or SATA-S (single port).
• State. Shows how the disk is used:
  • If the disk is in a vdisk, its RAID level
  • AVAIL: Available
  • SPARE: Spare assigned to a vdisk
  • GLOBAL SP: Global spare
  • LEFTOVR: Leftover
  Also shows any job running on the disk:
  • EXPD: The vdisk is being expanded
  • INIT: The vdisk is being initialized
  • RCON: The vdisk is being reconstructed
  • VRFY: The vdisk is being verified
  • VRSC: The vdisk is being scrubbed
• Size. Disk capacity.
• Enclosure. Name of the enclosure containing the disk.
• Serial Number. Disk serial number.
• Status. Up (operational) or Not Present.
The Graphical tab shows the locations of the vdisk's disks in system enclosures and each disk’s Health and State.
Volume properties
When you select the Volumes component, the Volumes table shows:
• The volume’s name, serial number, and size
• The name of the vdisk containing the volume
Snap-pool properties
When you select the Snap Pools component, the Snap Pools table shows:
• The snap pool's name, serial number, size, and free space
• The quantity of master volumes and snapshots associated with the snap pool
• The name of the vdisk containing the snap pool
Viewing information about a volume
In the Configuration View panel, right-click a volume and select View > Overview. The Volume Overview table shows:
• The capacity and space usage of the volume
• The quantity of mappings for the volume
• The quantity of task schedules for the volume
• The quantities of replication addresses and replication images for the volume, as described in Viewing replication properties, addresses, and images for a volume on page 118
For descriptions of storage-space color codes, see About storage-space color codes on page 33.
Select a component to see more information about it.
Volume properties
When you select the Volume component, the Properties for Volume table shows:
• Vdisk Name. Name of the vdisk that the volume is in.
• Name. Volume name.
• Size. Volume size.
• Preferred Owner. Controller that owns the vdisk and its volumes during normal operation.
• Current Owner. Either the preferred owner during normal operation or the partner controller when the preferred owner is offline.
• Serial Number. Volume serial number.
• Cache Write Policy. Write-back or write-through. See Using write-back or write-through caching on page 25.
• Cache Optimization. Standard or super-sequential. See Optimizing read-ahead caching on page 25.
• Read Ahead Size. See Optimizing read-ahead caching on page 25.
• Type. Standard volume, master volume, or snapshot.
• Progress. If the volume is being created by a volume-copy operation, the percent complete.
• Volume Description. For OpenVMS, a numeric value (set in SMU) that identifies the volume to an OpenVMS host. For HP-UX, a text value (set in-band by a host application) that identifies the volume. Blank if not set.
Mapping properties
When you select the Maps component, the Maps for Volume table shows:
Type. Explicit or Default. Settings for an explicit mapping override the default mapping.
Host ID. WWPN or IQN.
Name. Host name.
Ports. Controller host ports through which the volume is mapped to the host.
LUN. Volume identifier presented to the host.
Access. Volume access type: read-write, read-only, no-access (masked), or not-mapped.
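The precedence rule above (an explicit mapping overrides the default mapping for that host) can be sketched as follows. This is an illustrative model only, not the controller's implementation; the dictionary structure and host IDs are hypothetical.

```python
# Hypothetical sketch of explicit-vs-default mapping resolution.
# An explicit mapping for a host, if one exists, wins over the volume's
# default mapping; all field names here are illustrative.

def effective_mapping(default_map, explicit_maps, host_id):
    """Return the mapping a given host sees for a volume."""
    for m in explicit_maps:
        if m["host_id"] == host_id:
            return m          # explicit mapping overrides the default
    return default_map        # no explicit entry: the default applies

default = {"type": "Default", "lun": 5, "access": "read-write"}
explicit = [{"type": "Explicit", "host_id": "iqn.1991-05.com.example:host1",
             "lun": 5, "access": "read-only"}]

print(effective_mapping(default, explicit,
                        "iqn.1991-05.com.example:host1")["access"])  # read-only
print(effective_mapping(default, explicit,
                        "iqn.1991-05.com.example:host2")["access"])  # read-write
```

The same resolution applies whether the host ID is an FC WWPN or an iSCSI IQN.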
Schedule properties
If any schedules exist for this volume, when you select the Schedules component, the Schedules table shows each schedule's name, specification, status, next run time, task type, task status, and task state. For the selected schedule, two tables appear.
The Schedule Details table shows:
Schedule Name. Schedule name.
Schedule Specification. The schedule's start time and recurrence or constraint settings.
Status.
• Uninitialized: Schedule is not yet ready to run.
• Ready: Schedule is ready to run.
• Suspended: Schedule is suspended.
• Expired: Schedule has expired.
• Invalid: Schedule is invalid.
Next Time.
The Task Details table shows different properties depending on the task type. Properties shown for all task types are:
Task Name. Task name.
Task Type. ReplicateVolume, ResetSnapshot, TakeSnapshot, or VolumeCopy.
Status.
• Uninitialized: Task is not yet ready to run.
• Ready: Task is ready to run.
• Active: Task is running.
• Error: Task has an error.
• Invalid: Task is invalid.
Task State. Current step of task processing. Steps vary by task type.
Source Volume. Name of the volume to snap, copy, or replicate.
HP StorageWorks P2000 G3 MSA System SMU Reference Guide 95
Source Volume Serial. Source volume serial number.
Destination Vdisk. Name of the destination vdisk for a volume copy.
Destination Vdisk Serial. Destination vdisk serial number.
Prefix. Label that identifies snapshots, volume copies, or replication images created by this task.
Count. Number of snapshots to retain with this prefix. When a new snapshot exceeds this limit, the
oldest snapshot with the same prefix is deleted.
Last Created. Name of the last snapshot, volume copy, or replication image created by the task.
Last Used Snapshot. For a task whose replication mode is last-snapshot, the name of the last snapshot
used for replication.
Snapshot Name. Name of the snapshot to reset.
Snapshot Serial. Snapshot serial number.
Mode. Replication mode:
• new-snapshot: Replicate a new snapshot of the primary volume.
• last-snapshot: Replicate the most recent existing snapshot of the primary volume.
For a TakeSnapshot task, the Retained Set table shows the name and serial number of each snapshot that the task has taken and is retaining.
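The Prefix and Count behavior described above (when a new snapshot exceeds the retention limit, the oldest snapshot with the same prefix is deleted) can be sketched like this. The naming scheme and data model are assumptions for illustration, not the controller's internals.

```python
# Illustrative sketch of prefix/count snapshot retention: taking a new
# snapshot past the Count limit deletes the oldest snapshot with that
# prefix. Snapshot names here are hypothetical.

def take_snapshot(retained, prefix, count, seq):
    """Append a new snapshot named <prefix>_s<seq>; enforce the retention count."""
    retained.append(f"{prefix}_s{seq:04d}")
    while len(retained) > count:
        retained.pop(0)       # drop the oldest snapshot with this prefix
    return retained

snaps = []
for seq in range(1, 6):       # take 5 snapshots with Count = 3
    take_snapshot(snaps, "daily", 3, seq)
print(snaps)  # ['daily_s0003', 'daily_s0004', 'daily_s0005']
```

The retained list corresponds to what a TakeSnapshot task's Retained Set table would show after the run.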
Viewing information about a snapshot
In the Configuration View panel, right-click a snapshot and select View > Overview. The Snapshot Overview table shows:
The capacity and space usage of the snapshot
The quantity of mappings for the snapshot
The quantity of task schedules for the snapshot
For descriptions of storage-space color codes, see About storage-space color codes on page 33.
Select a component to see more information about it.
Snapshot properties
When you select the Snapshot component, the Properties for Snapshot table shows:
Vdisk Name.
Serial Number. Snapshot serial number.
Name. Snapshot name.
Creation Date/Time.
Status.
Status-Reason.
Master Volume Name. Name of the volume that the snapshot was taken of.
Snap-pool Name.
Snap Data. The total amount of data associated with the specific snapshot (data copied from a source
volume to a snapshot and data written directly to a snapshot).
Unique Data. The amount of data that has been written to the snapshot since the last snapshot was
taken. If the snapshot has not been written to, or if it is deleted, this value is zero bytes.
Shared Data. The amount of data that is potentially shared with other snapshots, and the associated
amount of space that will be freed if the snapshot is deleted. This represents the amount of data written directly to the snapshot. It also includes data copied from the source volume to the storage area for the oldest snapshot, since that snapshot does not share data with any other snapshot. For a snapshot that is not the oldest, if the modified data is deleted or was never written to, this value is zero bytes.
Mapping properties
When you select the Maps component, the Maps for Volume table shows:
Type. Explicit or Default. Settings for an explicit mapping override the default mapping.
Host ID. WWPN or IQN.
Name. Host name.
Ports. Controller host ports through which the volume is mapped to the host.
LUN. Volume identifier presented to the host.
Access. Volume access type: read-write, read-only, no-access (masked), or not-mapped.
Schedule properties
If any schedules exist for the snapshot, when you select the Schedules component, the Schedules table shows information about each schedule. For the selected schedule, the Schedule Details table shows:
Schedule Name.
Schedule Specification.
Schedule Status.
Next Time.
Task Type.
Task Status.
Task State.
Source Volume.
Source Volume Serial.
Prefix.
Count.
Last Created.
Viewing information about a snap pool
In the Configuration View panel, right-click a snap pool and select View > Overview. The Snap Pool Overview table shows:
The capacity and space usage of the snap pool
The quantity of volumes using the snap pool
The quantity of snapshots in the snap pool
For descriptions of storage-space color codes, see About storage-space color codes on page 33.
Select a component to see more information about it.
Snap-pool properties
When you select the Snap Pool component, two tables appear. The first table shows the snap pool's name, serial number, size (total capacity), vdisk name, free space, number of snapshots, and status. The status values are:
Available: The snap pool is available for use.
Offline: The snap pool is not available for use, as in the case where its disks are not present.
Corrupt: The snap pool's data integrity has been compromised; the snap pool can no longer be used.
The second table shows the snap pool's threshold values and associated policies. Three thresholds are defined:
Warning: The snap pool is moderately full. When this threshold is reached, an event is generated to
alert the administrator.
Error: The snap pool is nearly full and unless corrective action is taken, snapshot data loss is probable.
When this threshold is reached, an event is generated to alert the administrator and the associated snap-pool policy is triggered.
Critical: The snap pool is 99% full and data loss is imminent. When this threshold is reached, an event
is generated to alert the administrator and the associated snap-pool policy is triggered.
The following policies are defined:
Auto Expand: Automatically expand the snap pool by the indicated expansion-size value. This is the
default policy for the Error threshold. If the snap pool’s space usage reaches the percentage specified by its error threshold, the system will
log Warning event 230 and will try to automatically expand the snap pool by the snap pool’s expansion-size value. If the snap pool cannot be expanded because there is not enough available space in its vdisk, the system will log Warning event 444 and will automatically delete the oldest snapshot that is not a current sync point.
Delete Oldest Snapshot: Delete the oldest snapshot.
Delete Snapshots: Delete all snapshots. This is the default policy for the Critical threshold.
Halt Writes: Halt writes to the snap pool.
Notify Only: Generate an event to notify the administrator. This is the only policy for the Warning
threshold.
No Change: Take no action.
NOTE: For details about setting snap-pool thresholds and policies, see the CLI reference guide.
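The three-threshold model above can be sketched as a simple severity check. The Critical percentage (99%) comes from the text; the Warning and Error percentages and the policy table are assumptions for illustration, since actual thresholds and policies are configurable per snap pool (see the CLI reference guide).

```python
# Minimal sketch of snap-pool threshold evaluation. Critical is fixed at
# 99% per the guide; the Warning/Error percentages below are assumed
# example values, and the policies shown are the documented defaults.

THRESHOLDS = [                               # checked from most to least severe
    ("Critical", 99, "Delete Snapshots"),    # 99% full, per the guide
    ("Error",    90, "Auto Expand"),         # assumed example percentage
    ("Warning",  75, "Notify Only"),         # assumed example percentage
]

def evaluate_snap_pool(percent_full):
    """Return (threshold, policy) for the pool's space usage, or None if below all thresholds."""
    for name, limit, policy in THRESHOLDS:
        if percent_full >= limit:
            return name, policy
    return None

print(evaluate_snap_pool(80))   # ('Warning', 'Notify Only')
print(evaluate_snap_pool(99))   # ('Critical', 'Delete Snapshots')
```

In every case an event alerts the administrator; at Error and Critical the associated policy is also triggered.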
Volume properties
When you select the Client Volumes component, a table shows each volume's name, serial number, size, vdisk name, and vdisk serial number.
Snapshot properties
When you select the Resident Snapshots component, a table shows each snapshot's name, serial number, and amounts of snap data, unique data, and shared data.
Snap data is the total amount of data associated with the specific snapshot (data copied from a source volume to a snapshot and data written directly to a snapshot).
Unique data is the amount of data that has been written to the snapshot since the last snapshot was taken. If the snapshot has not been written to, or if it is deleted, this value is zero bytes.
Shared data is the amount of data that is potentially shared with other snapshots, and the associated amount of space that will be freed if the snapshot is deleted. This represents the amount of data written directly to the snapshot. It also includes data copied from the source volume to the storage area for the oldest snapshot, since that snapshot does not share data with any other snapshot. For a snapshot that is not the oldest, if the modified data is deleted or was never written to, this value is zero bytes.
Viewing information about all hosts
In the Configuration View panel, right-click Hosts and select View > Overview. The Hosts table shows the quantity of hosts configured in the system.
For each host, the Hosts Overview table shows the following details:
Host ID. WWPN or IQN.
Name. User-defined nickname for the host.
Discovered. If the host was discovered and its entry was automatically created, Yes. If the host entry
was manually created, No.
Mapped. If volumes are mapped to the host, Yes; otherwise, No.
Host Type. FC or iSCSI.
Profile.
• Standard: LUN 0 can be assigned to a mapping.
• OpenVMS: LUN 0 cannot be assigned to a mapping.
• HP-UX: LUN 0 can be assigned to a mapping and the host uses Flat Space Addressing.
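The LUN 0 rule per profile can be summarized in a small validation sketch. The profile names come from the guide; the lookup table and function are hypothetical illustrations, not SMU behavior.

```python
# Hypothetical sketch of the per-profile LUN 0 rule: Standard and HP-UX
# hosts may be assigned LUN 0 in a mapping, OpenVMS hosts may not.

ALLOWS_LUN0 = {"Standard": True, "OpenVMS": False, "HP-UX": True}

def lun_allowed(profile, lun):
    """True if the given LUN may be used in a mapping for this host profile."""
    if lun == 0:
        return ALLOWS_LUN0[profile]
    return True               # nonzero LUNs are unaffected by the profile

print(lun_allowed("OpenVMS", 0))   # False
print(lun_allowed("Standard", 0))  # True
```

Note that the HP-UX profile additionally implies Flat Space Addressing on the host side, which this sketch does not model.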
Viewing information about a host
In the Configuration View panel, right-click a host and select View > Overview. The Host Overview table shows:
Host properties
The quantity of mappings for the host
Select a component to see more information about it.
Host properties
When you select the Host component, the Properties for Host table shows:
Host ID. WWPN or IQN.
Name. User-defined nickname for the host.
Discovered. If the host was discovered and its entry was automatically created, Yes. If the host entry
was manually created, No.
Mapped. If volumes are mapped to the host, Yes; otherwise, No.
Host Type. FC or iSCSI.
Profile.
• Standard: LUN 0 can be assigned to a mapping.
• OpenVMS: LUN 0 cannot be assigned to a mapping.
• HP-UX: LUN 0 can be assigned to a mapping and the host uses Flat Space Addressing.
Mapping properties
When you select the Maps component, the Maps for Host table shows:
Type. Explicit or Default. Settings for an explicit mapping override the default mapping.
Name. Volume name.
Serial Number. Volume serial number.
Ports. Controller host ports through which the volume is mapped to the host.
LUN. Volume identifier presented to the host.
Access. Volume access type: read-write, read-only, no-access (masked), or not-mapped.
Viewing information about an enclosure
In the Configuration View panel, right-click an enclosure and select View > Overview. You can view information about the enclosure and its components in a front or rear graphical view, or in a front or rear tabular view.
Front Graphical. Shows a graphical view of the front of each enclosure and its disks.
Front Tabular. Shows a tabular view of each enclosure and its disks.
Rear Graphical. Shows a graphical view of components at the rear of the enclosure.
Rear Tabular. Shows a tabular view of components at the rear of the enclosure.
In any of these views, select a component to see more information about it. Components vary by enclosure model. If any components are unhealthy, a table at the bottom of the panel identifies them.
Enclosure properties
When you select an enclosure, a table shows:
Health.
• OK: The enclosure is operating normally.
• Degraded: At least one component is degraded.
• Fault: At least one component has a fault.
• N/A: Health status is not available.
Health Reason.
Enclosure ID.
Vendor.
Model.
Disk Slots.
Enclosure WWN.
Mid-plane Serial Number.
Part Number.
Manufacturing Date.
Manufacturing Location.
Revision.
EMP A Revision. Firmware revision of the Enclosure Management Processor in controller module A’s
Expander Controller.
EMP B Revision. Firmware revision of the Enclosure Management Processor in controller module B’s
Expander Controller.
EMP A Bus ID.
EMP B Bus ID.
EMP A Target ID.
EMP B Target ID.
Enclosure Power (watts).
Disk properties
When you select a disk, a table shows:
Health.
• OK: The disk is operating normally.
• Degraded: The disk’s operation is degraded. If you find no related event in the event log, this may indicate a hardware problem.
• Fault: The disk has failed.
• N/A: Health status is not available.
Health Reason.
Enclosure ID.
Slot.
How Used.
• AVAIL: Available.
• GLOBAL SP: Global spare.
• LEFTOVR: Leftover.
• VDISK: Used in a vdisk.
• VDISK SP: Spare assigned to a vdisk.
Status. Up, Spun Down, Warning, Error, Not Present, Unknown, or Disconnected.