
Dell PowerVault MD 34XX/38XX Series Storage Arrays
Administrator's Guide
Notes, Cautions, and Warnings
NOTE: A NOTE indicates important information that helps you make better use of your computer.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
Copyright © 2014 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. Dell™ and the Dell logo are trademarks of Dell Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
2014 - 02
Rev. A01
Contents
1 Introduction..............................................................................................................15
Dell PowerVault Modular Disk Storage Manager .............................................................................. 15
User Interface.......................................................................................................................................15
Enterprise Management Window....................................................................................................... 16
Inheriting The System Settings......................................................................................................17
Array Management Window................................................................................................................17
Dell PowerVault Modular Disk Configuration Utility..........................................................................18
Other Information You May Need...................................................................................................... 18
2 About Your MD Series Storage Array................................................................. 21
Physical Disks, Virtual Disks, And Disk Groups................................................................................... 21
Physical Disks.................................................................................................................................21
Physical Disk States........................................................................................................................21
Virtual Disks And Disk Groups.......................................................................................................22
Virtual Disk States.......................................................................................................................... 22
Disk Pools............................................................................................................................................ 23
Thin Virtual Disks.................................................................................................................................23
RAID Levels..........................................................................................................................................23
Maximum Physical Disk Support Limitations............................................................................... 23
RAID Level Usage.......................................................................................................................... 24
RAID 0............................................................................................................................................24
RAID 1.............................................................................................................................................24
RAID 5............................................................................................................................................ 24
RAID 6............................................................................................................................................ 24
RAID 10.......................................................................................................................................... 25
Segment Size.......................................................................................................................................25
Virtual Disk Operations....................................................................................................................... 25
Virtual Disk Initialization................................................................................................................ 25
Consistency Check........................................................................................................................25
Media Verification..........................................................................................................................26
Cycle Time.....................................................................................................................................26
Virtual Disk Operations Limit........................................................................................................ 26
Disk Group Operations....................................................................................................................... 26
RAID Level Migration.....................................................................................................................26
Segment Size Migration.................................................................................................................27
Virtual Disk Capacity Expansion....................................................................................................27
Disk Group Expansion................................................................................................................... 27
Disk Group Defragmentation........................................................................................................27
Disk Group Operations Limit.........................................................................................................27
RAID Background Operations Priority................................................................................................28
Virtual Disk Migration And Disk Roaming...........................................................................................28
Disk Migration................................................................................................................................28
Disk Roaming.................................................................................................................................29
Host Server-To-Virtual Disk Mapping.......................................................................................... 30
Host Types.....................................................................................................................................30
Advanced Features..............................................................................................................................30
Types Of Snapshot Functionality Supported............................................................................... 30
Snapshot Virtual Disks (Legacy).....................................................................................................31
Snapshot (Legacy) Repository Virtual Disk................................................................................... 31
Virtual Disk Copy............................................................................................................................31
Virtual Disk Recovery.....................................................................................................................32
Using Snapshot And Virtual Disk Copy Together.........................................................................32
Multi-Path Software............................................................................................................................ 33
Preferred And Alternate Controllers And Paths............................................................................33
Virtual Disk Ownership..................................................................................................................33
Load Balancing....................................................................................................................................34
Monitoring System Performance....................................................................................................... 34
Interpreting Performance Monitor Data.......................................................................................35
Viewing Real-time Graphical Performance Monitor Data...........................................................38
Customizing the Performance Monitor Dashboard.................................................................... 38
Specifying Performance Metrics...................................................................................................39
Viewing Real-time Textual Performance Monitor.......................................................................40
Saving Real-time Textual Performance Data............................................................................... 41
Starting and Stopping Background Performance Monitor.......................................................... 41
Viewing Information about the Current Background Performance Monitor Session................42
Viewing Current Background Performance Monitor Data..........................................................42
Saving the Current Background Performance Monitor Data......................................................43
Viewing Saved Background Performance Monitor Data.............................................................43
What are invalid objects in the Performance Monitor?...............................................................44
3 Discovering And Managing Your Storage Array............................................. 47
Out-Of-Band Management................................................................................................................47
In-Band Management......................................................................................................................... 47
Access Virtual Disk........................................................................................................................ 48
Storage Arrays..................................................................................................................................... 48
Automatic Discovery Of Storage Arrays.......................................................................................48
Manual Addition Of A Storage Array.............................................................................................48
Setting Up Your Storage Array............................................................................................................49
Locating Storage Arrays................................................................................................................50
Naming Or Renaming Storage Arrays.......................................................................................... 50
Setting A Password........................................................................................................................ 51
Adding Or Editing A Comment To An Existing Storage Array......................................................51
Removing Storage Arrays..............................................................................................................52
Enabling Premium Features.......................................................................................................... 52
Displaying Failover Alert................................................................................................................52
Changing The Cache Settings On The Storage Array..................................................................53
Changing Expansion Enclosure ID Numbers............................................................................... 53
Changing The Enclosure Order....................................................................................................53
Configuring Alert Notifications...........................................................................................................54
Configuring E-mail Alerts..............................................................................................................54
Configuring SNMP Alerts...............................................................................................................55
Battery Settings................................................................................................................................... 58
Changing The Battery Settings..................................................................................................... 59
Setting The Storage Array RAID Controller Module Clocks.............................................................. 59
4 Using iSCSI................................................................................................................61
Changing The iSCSI Target Authentication........................................................................................61
Entering Mutual Authentication Permissions..................................................................................... 61
Creating CHAP Secrets....................................................................................................................... 62
Initiator CHAP Secret.................................................................................................................... 62
Target CHAP Secret...................................................................................................................... 62
Valid Characters For CHAP Secrets.............................................................................................. 62
Changing The iSCSI Target Identification..........................................................................................63
Changing The iSCSI Target Discovery Settings................................................................................. 63
Configuring The iSCSI Host Ports...................................................................................................... 64
Advanced iSCSI Host Port Settings.....................................................................................................65
Viewing Or Ending An iSCSI Session.................................................................................................. 65
Viewing iSCSI Statistics And Setting Baseline Statistics.....................................................................66
Edit, Remove, Or Rename Host Topology.........................................................................................66
5 Event Monitor..........................................................................................................69
Enabling Or Disabling The Event Monitor..........................................................................................69
Windows........................................................................................................................................69
Linux...............................................................................................................................................70
6 About Your Host......................................................................................................71
Configuring Host Access.....................................................................................................................71
Using The Host Mappings Tab............................................................................................................72
Defining A Host..............................................................................................................................72
Removing Host Access....................................................................................................................... 73
Managing Host Groups....................................................................................................................... 73
Creating A Host Group........................................................................................................................73
Adding A Host To A Host Group...................................................................................................74
Removing A Host From A Host Group......................................................................................... 74
Moving A Host To A Different Host Group...................................................................................74
Removing A Host Group............................................................................................................... 75
Host Topology...............................................................................................................................75
Starting Or Stopping The Host Context Agent.............................................................................75
I/O Data Path Protection.................................................................................................................... 76
Managing Host Port Identifiers........................................................................................................... 76
7 Disk Groups, Standard Virtual Disks, And Thin Virtual Disks.......................79
Creating Disk Groups And Virtual Disks............................................................................................. 79
Creating Disk Groups....................................................................................................................80
Locating A Disk Group...................................................................................................................81
Creating Standard Virtual Disks.....................................................................................................81
Changing The Virtual Disk Modification Priority..........................................................................82
Changing The Virtual Disk Cache Settings...................................................................................83
Changing The Segment Size Of A Virtual Disk.............................................................................84
Changing The IO Type..................................................................................................................85
Thin Virtual Disks.................................................................................................................................86
Advantages Of Thin Virtual Disks..................................................................................................86
Physical Vs Virtual Capacity On A Thin Virtual Disk.....................................................................86
Thin Virtual Disk Requirements And Limitations.......................................................................... 87
Thin Volume Attributes................................................................................................................. 87
Thin Virtual Disk States..................................................................................................................88
Comparison—Types Of Virtual Disks And Copy Services............................................................88
Rollback On Thin Virtual Disks......................................................................................................89
Initializing A Thin Virtual Disk........................................................................................................89
Changing A Thin Virtual Disk To A Standard Virtual Disk............................................................ 92
Choosing An Appropriate Physical Disk Type....................................................................................92
Physical Disk Security With Self Encrypting Disk............................................................................... 92
Creating A Security Key.................................................................................................................94
Changing A Security Key...............................................................................................................96
Saving A Security Key.................................................................................................................... 96
Validate Security Key..................................................................................................................... 97
Unlocking Secure Physical Disks.................................................................................................. 97
Erasing Secure Physical Disks....................................................................................................... 97
Configuring Hot Spare Physical Disks................................................................................................98
Hot Spares And Rebuild................................................................................................................ 99
Global Hot Spares......................................................................................................................... 99
Hot Spare Operation.....................................................................................................................99
Hot Spare Drive Protection.........................................................................................................100
Enclosure Loss Protection................................................................................................................100
Drawer Loss Protection.....................................................................................................................101
Host-To-Virtual Disk Mapping..........................................................................................................102
Creating Host-To-Virtual Disk Mappings...................................................................................102
Modifying And Removing Host-To-Virtual Disk Mapping.........................................................103
Changing Controller Ownership Of The Virtual Disk................................................................ 104
Removing Host-To-Virtual Disk Mapping..................................................................................105
Changing The RAID Controller Module Ownership Of A Disk Group...................................... 105
Changing The RAID Level Of A Disk Group............................................................................... 105
Removing A Host-To-Virtual Disk Mapping Using Linux DMMP.............................................. 106
Restricted Mappings..........................................................................................................................107
Storage Partitioning.......................................................................................................................... 108
Disk Group And Virtual Disk Expansion............................................................................................109
Disk Group Expansion.................................................................................................................109
Virtual Disk Expansion.................................................................................................................109
Using Free Capacity.................................................................................................................... 109
Using Unconfigured Capacity..................................................................................................... 110
Disk Group Migration........................................................................................................................ 110
Export Disk Group....................................................................................................................... 110
Import Disk Group........................................................................................................................111
Storage Array Media Scan..................................................................................................................112
Changing Media Scan Settings....................................................................................................112
Suspending The Media Scan....................................................................................................... 113
8 Disk Pools And Disk Pool Virtual Disks............................................................115
Difference Between Disk Groups And Disk Pools............................................................................ 115
Disk Pool Restrictions........................................................................................................................115
Creating A Disk Pool Manually..........................................................................................................116
Automatically Managing The Unconfigured Capacity In Disk Pools............................................... 117
Locating Physical Disks In A Disk Pool............................................................................................. 118
Renaming A Disk Pool....................................................................................................................... 118
Configuring Alert Notifications For A Disk Pool............................................................................... 119
Adding Unassigned Physical Disks To A Disk Pool...........................................................................119
Configuring The Preservation Capacity Of A Disk Pool.................................................................. 120
Changing The Modification Priority Of A Disk Pool........................................................................ 120
Changing The RAID Controller Module Ownership Of A Disk Pool................................................121
Checking Data Consistency.............................................................................................................. 121
Deleting A Disk Pool..........................................................................................................................122
Viewing Storage Array Logical Components And Associated Physical Components...................123
Secure Disk Pools..............................................................................................................................124
Changing Capacity On Existing Thin Virtual Disks...........................................................................124
Creating A Thin Virtual Disk From A Disk Pool.................................................................................125
9 Using SSD Cache...................................................................................................127
How SSD Cache Works..................................................................................................................... 127
Benefits Of SSD Cache......................................................................................................................127
Choosing SSD Cache Parameters.....................................................................................................127
SSD Cache Restrictions.....................................................................................................................128
Creating An SSD Cache.................................................................................................................... 128
Viewing Physical Components Associated With An SSD Cache.....................................................129
Locating Physical Disks In An SSD Cache........................................................................................ 129
Adding Physical Disks To An SSD Cache..........................................................................................129
Removing Physical Disks From An SSD Cache................................................................................ 130
Suspending Or Resuming SSD Caching...........................................................................................130
Changing I/O Type In An SSD Cache...............................................................................................130
Renaming An SSD Cache.................................................................................................................. 131
Deleting An SSD Cache..................................................................................................................... 131
Using The Performance Modeling Tool............................................................................................131
10 Premium Feature—Snapshot Virtual Disk.................................................... 133
Snapshot Virtual Disk Vs. Snapshot Virtual Disk (Legacy)................................................................ 133
Snapshot Images And Groups.......................................................................................................... 133
Snapshot Virtual Disk Read/Write Properties................................................................................... 134
Snapshot Groups And Consistency Groups.....................................................................................134
Snapshot Groups.........................................................................................................................134
Snapshot Consistency Groups....................................................................................................135
Understanding Snapshot Repositories............................................................................................. 135
Consistency Group Repositories................................................................................................ 135
Ranking Repository Candidates..................................................................................................136
Using Snapshot Consistency Groups With Remote Replication...............................................136
Creating Snapshot Images................................................................................................................136
Creating A Snapshot Image.........................................................................................................137
Canceling A Pending Snapshot Image....................................................................................... 138
Deleting A Snapshot Image.........................................................................................................138
Scheduling Snapshot Images............................................................................................................139
Creating A Snapshot Schedule................................................................................................... 139
Editing A Snapshot Schedule......................................................................................................140
Performing Snapshot Rollbacks........................................................................................................141
Snapshot Rollback Limitations.................................................................................................... 141
Starting A Snapshot Rollback......................................................................................................142
Resuming A Snapshot Image Rollback.......................................................................................142
Canceling A Snapshot Image Rollback.......................................................................................143
Viewing The Progress Of A Snapshot Rollback..........................................................................143
Changing Snapshot Rollback Priority.........................................................................................144
Creating A Snapshot Group..............................................................................................................144
Creating A Consistency Group Repository (Manually).............................................................. 146
Changing Snapshot Group Settings........................................................................................... 147
Renaming A Snapshot Group..................................................................................................... 148
Deleting A Snapshot Group........................................................................................................ 148
Converting A Snapshot Virtual Disk To Read-Write........................................................................ 148
Viewing Associated Physical Components Of An Individual Repository Virtual Disk.................... 149
Creating A Consistency Group.........................................................................................................149
Creating A Consistency Group Repository (Manually)...............................................................151
Renaming A Consistency Group.................................................................................................152
Deleting A Consistency Group....................................................................................................152
Changing The Settings Of A Consistency Group.......................................................................153
Adding A Member Virtual Disk To A Consistency Group...........................................................153
Removing A Member Virtual Disk From A Consistency Group................................................. 154
Creating A Snapshot Virtual Disk Of A Snapshot Image.................................................................. 155
Snapshot Virtual Disk Limitations................................................................................................155
Creating A Snapshot Virtual Disk................................................................................................ 156
Creating A Snapshot Virtual Disk Repository..............................................................................157
Changing The Settings Of A Snapshot Virtual Disk....................................................................158
Disabling A Snapshot Virtual Disk Or Consistency Group Snapshot Virtual Disk..................... 158
Re-creating A Snapshot Virtual Disk Or Consistency Group Snapshot Virtual Disk.................159
Renaming A Snapshot Virtual Disk Or Consistency Group Snapshot Virtual Disk................... 160
Deleting A Snapshot Virtual Disk Or Consistency Group Snapshot Virtual Disk.......................161
Creating A Consistency Group Snapshot Virtual Disk......................................................................161
Creating A Consistency Group Snapshot Virtual Disk Repository (Manually)...........................163
Disabling A Snapshot Virtual Disk Or Consistency Group Snapshot Virtual Disk..................... 165
Re-creating A Snapshot Virtual Disk Or Consistency Group Snapshot Virtual Disk.................166
Changing The Modification Priority Of An Overall Repository Virtual Disk.............................. 167
Changing The Media Scan Setting Of An Overall Repository Virtual Disk................................ 167
Changing The Pre-read Consistency Check Setting Of An Overall Repository Virtual Disk... 168
Increasing The Capacity Of An Overall Repository................................................................... 168
Decreasing The Capacity Of The Overall Repository................................................................ 170
Performing A Revive Operation...................................................................................................171
11 Premium Feature—Snapshot Virtual Disks (Legacy).................................. 173
Scheduling A Snapshot Virtual Disk..................................................................................................174
Common Reasons For Scheduling A Snapshot Virtual Disk......................................................174
Guidelines For Creating Snapshot Schedules.............................................174
Creating A Snapshot Virtual Disk Using The Simple Path................................................................ 175
About The Simple Path................................................................................................................175
Preparing Host Servers To Create The Snapshot Using The Simple Path.................................175
Creating A Snapshot Virtual Disk Using The Advanced Path........................................................... 177
About The Advanced Path...........................................................................................................177
Preparing Host Servers To Create The Snapshot Using The Advanced Path............................177
Creating The Snapshot Using The Advanced Path.................................................................... 179
Specifying Snapshot Virtual Disk Names..........................................................................................180
Snapshot Repository Capacity......................................................................................................... 180
Re-Creating Snapshot Virtual Disks..................................................................................................182
Disabling A Snapshot Virtual Disk.....................................................................................................182
Preparing Host Servers To Re-Create A Snapshot Virtual Disk................................................. 183
Re-Creating A Snapshot Virtual Disk................................................................................................184
12 Premium Feature—Virtual Disk Copy............................................................ 185
Using Virtual Disk Copy With Snapshot Or Snapshot (Legacy) Premium Feature......................... 186
Types Of Virtual Disk Copies............................................................................................................ 186
Offline Copy................................................................................................................................ 186
Online Copy................................................................................................................................ 186
Creating A Virtual Disk Copy For An MSCS Shared Disk..................................................................187
Virtual Disk Read/Write Permissions.................................................................................................187
Virtual Disk Copy Restrictions...........................................................................................................187
Creating A Virtual Disk Copy............................................................................................................ 188
Setting Read/Write Permissions On Target Virtual Disk............................................................ 188
Before You Begin........................................................................................................................ 189
Virtual Disk Copy And Modification Operations........................................................................ 189
Create Copy Wizard....................................................................................................................189
Failed Virtual Disk Copy.............................................................................................................. 189
Preferred RAID Controller Module Ownership................................................................................189
Failed RAID Controller Module.........................................................................................................190
Copy Manager...................................................................................................................................190
Copying The Virtual Disk.................................................................................................................. 190
Storage Array Performance During Virtual Disk Copy..................................................................... 191
Setting Copy Priority..........................................................................................................................191
Stopping A Virtual Disk Copy............................................................................................................192
Recopying A Virtual Disk...................................................................................................................192
Preparing Host Servers To Recopy A Virtual Disk...................................................................... 192
Recopying The Virtual Disk......................................................................................................... 193
Removing Copy Pairs........................................................................................................................194
13 Device Mapper Multipath For Linux...............................................................195
Overview............................................................................................................................................195
Using DM Multipathing Devices........................................................................................................195
Prerequisites................................................................................................................................ 196
Device Mapper Configuration Steps................................................................................................ 196
Scan For Newly Added Virtual Disks........................................................................................... 197
Display The Multipath Device Topology Using The Multipath Command................................197
Create A New fdisk Partition On A Multipath Device Node...................................................... 198
Add A New Partition To Device Mapper.....................................................................................198
Create A File System On A Device Mapper Partition................................................................. 198
Mount A Device Mapper Partition.............................................................................................. 199
Ready For Use..............................................................................................................................199
Linux Host Server Reboot Best Practices.........................................................................................199
Important Information About Special Partitions..............................................................................199
Limitations And Known Issues......................................................................................................... 200
Troubleshooting................................................................................................................................201
14 Configuring Asymmetric Logical Unit Access............................................ 203
ALUA Performance Considerations................................................................................................. 203
Automatic Transfer Of Ownership...................................................................................................203
Native ALUA Support On Microsoft Windows And Linux................................................................203
Enabling ALUA On VMware ESXi......................................................................................................204
Manually Adding SATP Rule In ESXi 5.x......................................................................................204
Verifying ALUA On VMware ESXi......................................................................................................204
Verifying If Host Server Is Using ALUA For MD Storage Array........................................................ 204
Setting Round-Robin Load Balancing Policy On ESXi-Based Storage Arrays................................205
15 Premium Feature—Remote Replication.......................................................207
About Asynchronous Remote Replication.......................................................................................207
Remote Replicated Pairs And Replication Repositories.................................................................. 207
Types Of Remote Replication.......................................................................................................... 208
Differences Between Remote Replication Features..................................................................208
Upgrading To Asynchronous Remote Replication From Remote Replication (Legacy)......... 208
Remote Replication Requirements And Restrictions......................................................................209
Restrictions On Using Remote Replication............................................................................... 209
Setting Up Remote Replication........................................................................................................209
Activating Remote Replication Premium Features..........................................................................209
Deactivating Remote Replication.....................................................................................................210
Remote Replication Groups..............................................................................................................211
Purpose Of A Remote Replication Group...................................................................................211
Remote Replication Group Requirements And Guidelines........................................................211
Creating A Remote Replication Group....................................................................................... 211
Replicated Pairs................................................................................................................................. 212
Guidelines For Choosing Virtual Disks In A Replicated Pair.......................................................212
Creating Replicated Pairs............................................................................................................ 213
Removing A Replicated Pair From A Remote Replication Group..............................................214
16 Management Firmware Downloads...............................................................215
Downloading RAID Controller And NVSRAM Packages.................................................................. 215
Downloading Both RAID Controller And NVSRAM Firmware..........................................................215
Downloading Only NVSRAM Firmware............................................................................................ 217
Downloading Physical Disk Firmware.............................................................................................. 218
Downloading MD3060e Series Expansion Module EMM Firmware............................................... 219
Self-Monitoring Analysis And Reporting Technology (SMART)...................................................... 220
Media Errors And Unreadable Sectors.............................................................................................220
17 Firmware Inventory............................................................................................223
Viewing The Firmware Inventory......................................................................................................223
18 System Interfaces............................................................................................... 225
Virtual Disk Service............................................................................................................................225
Volume Shadow-Copy Service........................................................................................................ 225
19 Storage Array Software..................................................................................... 227
Start-Up Routine............................................................................................................................... 227
Device Health Conditions.................................................................................................................227
Trace Buffers.....................................................................................................................................230
Retrieving Trace Buffers............................................................................................................. 230
Collecting Physical Disk Data........................................................................................................... 231
Creating A Support Data Collection Schedule...........................................................................231
Suspending Or Resuming A Support Data Collection Schedule...............................................231
Removing A Support Data Collection Schedule........................................................................232
Event Log...........................................................................................................................................232
Viewing The Event Log............................................................................................................... 233
Recovery Guru.................................................................................................................................. 233
Storage Array Profile......................................................................................................................... 233
Viewing The Physical Associations...................................................................................................235
Recovering From An Unresponsive Storage Array Condition.........................................................235
Locating A Physical Disk................................................................................................................... 237
Locating An Expansion Enclosure.................................................................................................... 237
Capturing The State Information..................................................................................................... 238
SMrepassist Utility............................................................................................................................. 238
Unidentified Devices.........................................................................................................................239
Recovering From An Unidentified Storage Array.............................................................................239
Starting Or Restarting The Host Context Agent Software.............................................................. 240
Starting The SMagent Software In Windows............................................................................. 240
Starting The SMagent Software In Linux.................................................................................... 241
20 Getting Help........................................................................................................243
Contacting Dell.................................................................................................................................243

1 Introduction

CAUTION: See the Safety, Environmental, and Regulatory Information document for important safety information before following any procedures listed in this document.
The following MD Series systems are supported by the latest version of Dell PowerVault Modular Disk Storage Manager (MD Storage Manager):
2U MD Series systems:
– Dell PowerVault MD 3400/3420
– Dell PowerVault MD 3800i/3820i
– Dell PowerVault MD 3800f/3820f
4U (dense) MD Series systems:
– Dell PowerVault MD 3460
– Dell PowerVault MD 3860i
– Dell PowerVault MD 3860f
NOTE: Your Dell MD Series storage array supports two expansion enclosures (180 physical disks) after you install the Additional Physical Disk Support Premium Feature Key. To order the Additional Physical Disk Support Premium Feature Key, contact Dell Support.

Dell PowerVault Modular Disk Storage Manager

Dell PowerVault Modular Disk Storage Manager (MD Storage Manager) is a graphical user interface (GUI) application used to configure and manage one or more MD Series storage arrays. The MD Storage Manager software is located on the MD Series resource DVD.

User Interface

The MD Storage Manager interface consists of two primary windows:
Enterprise Management Window (EMW) — The EMW provides high-level management of multiple
storage arrays. You can launch the Array Management Windows for the storage arrays from the EMW.
Array Management Window (AMW) — The AMW provides management functions for a single storage
array.
The EMW and the AMW consist of the following:
The title bar at the top of the window — Shows the name of the application.
The menu bar, beneath the title bar — You can select menu options from the menu bar to perform
tasks on a storage array.
The toolbar, beneath the menu bar — You can select options in the toolbar to perform tasks on a
storage array.
NOTE: The toolbar is available only in the EMW.
The tabs, beneath the toolbar — Tabs are used to group the tasks that you can perform on a storage
array.
The status bar, beneath the tabs — The status bar shows status messages and status icons related to
the storage array.
NOTE: By default, the toolbar and status bar are not displayed. To view the toolbar or the status bar, select View → Toolbar or View → Status Bar.

Enterprise Management Window

The EMW provides high-level management of storage arrays. When you start the MD Storage Manager, the EMW is displayed. The EMW has the following tabs:
Devices tab — Provides information about discovered storage arrays.
Setup tab — Presents the initial setup tasks that guide you through adding storage arrays and
configuring alerts.
The Devices tab has a Tree view on the left side of the window that shows discovered storage arrays, unidentified storage arrays, and the status conditions for the storage arrays. Discovered storage arrays are managed by the MD Storage Manager. Unidentified storage arrays are available to the MD Storage Manager but not configured for management. The right side of the Devices tab has a Table view that shows detailed information for the selected storage array.
In the EMW, you can:
Discover hosts and managed storage arrays on the local sub-network.
Manually add and remove hosts and storage arrays.
Blink or locate the storage arrays.
Name or rename discovered storage arrays.
Add comments for a storage array in the Table view.
Schedule or automatically save a copy of the support data when the client monitor process detects an
event.
Store your EMW view preferences and configuration data in local configuration files. The next time
you open the EMW, data from the local configuration files is used to restore your customized view
and preferences.
Monitor the status of managed storage arrays and indicate status using appropriate icons.
Add or remove management connections.
Configure alert notifications for all selected storage arrays through e-mail or SNMP traps.
Report critical events to the configured alert destinations.
Launch the AMW for a selected storage array.
Run a script to perform batch management tasks on specific storage arrays.
Import the operating system theme settings into the MD Storage Manager.
Upgrade firmware on multiple storage arrays concurrently.
Obtain information about the firmware inventory including the version of the RAID controller
modules, physical disks, and the enclosure management modules (EMMs) in the storage array.
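The batch-management capability in the list above is typically driven through the SMcli command-line interface that installs alongside MD Storage Manager. The sketch below only builds the invocations and runs them in sequence; the array addresses, the `-S` (silent) option, and the health-status script command are illustrative assumptions — consult the CLI Guide for the exact syntax on your system.

```python
import subprocess

def build_smcli_command(array_address, script_commands):
    # Build one SMcli invocation that runs MD Storage Manager script
    # commands against a single array (address and commands are examples).
    return ["SMcli", array_address, "-c", script_commands, "-S"]

def run_on_arrays(addresses, script_commands, dry_run=True):
    # Apply the same script commands to several arrays in sequence.
    # With dry_run=True nothing executes; the commands are just returned.
    commands = [build_smcli_command(a, script_commands) for a in addresses]
    if not dry_run:
        for command in commands:
            subprocess.run(command, check=True)  # requires SMcli on PATH
    return commands

# Example: query the health status of two hypothetical arrays.
batch = run_on_arrays(["192.168.10.5", "192.168.10.6"],
                      "show storageArray healthStatus;")
```

Running with `dry_run=False` would execute each command in turn, which is one simple way to apply the same maintenance task across every managed array.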

Inheriting The System Settings

Use the Inherit System Settings option to import the operating system theme settings into the MD Storage Manager. Importing system theme settings affects the font type, font size, color, and contrast in the MD Storage Manager.
1. From the EMW, open the Inherit System Settings window in one of these ways:
– Select Tools → Inherit System Settings.
– Select the Setup tab, and under Accessibility, click Inherit System Settings.
2. Select Inherit system settings for color and font.
3. Click OK.

Array Management Window

You can launch the AMW from the EMW. The AMW provides management functions for a single storage array. You can have multiple AMWs open simultaneously to manage different storage arrays.
In the AMW, you can:
Select storage array options — For example, renaming a storage array, changing a password, or
enabling a background media scan.
Configure virtual disks and disk pools from the storage array capacity, define hosts and host groups,
and grant host or host group access to sets of virtual disks called storage partitions.
Monitor the health of storage array components and report detailed status using applicable icons.
Perform recovery procedures for a failed logical component or a failed hardware component.
View the Event Log for a storage array.
View profile information about hardware components, such as RAID controller modules and physical
disks.
Manage RAID controller modules — For example, changing ownership of virtual disks or placing a
RAID controller module online or offline.
Manage physical disks — For example, assigning hot spares and locating physical disks.
Monitor storage array performance.
To launch the AMW:
1. In the EMW, on the Devices tab, right-click on the relevant storage array.
The context menu for the selected storage is displayed.
2. In the context menu, select Manage Storage Array.
The AMW for the selected storage array is displayed.
NOTE: You can also launch the AMW by:
– Double-clicking on a storage array displayed in the Devices tab of the EMW.
– Selecting a storage array displayed in the Devices tab of the EMW, and then selecting Tools → Manage Storage Array.
The AMW has the following tabs:
Summary tab — You can view the following information about the storage array:
– Status
– Hardware
– Storage and copy services
– Hosts and mappings
– Information on storage capacity
– Premium features
Performance tab — You can track a storage array’s key performance data and identify performance
bottlenecks in your system. You can monitor the system performance in the following ways:
– Real-time graphical
– Real-time textual
– Background (historical)
Storage & Copy Services tab — You can view and manage the organization of the storage array by
virtual disks, disk groups, free capacity nodes, and any unconfigured capacity for the storage array.
Host Mappings tab — You can define the hosts, host groups, and host ports. You can change the
mappings to grant virtual disk access to host groups and hosts and create storage partitions.
Hardware tab — You can view and manage the physical components of the storage array.
Setup tab — Shows a list of initial setup tasks for the storage array.

Dell PowerVault Modular Disk Configuration Utility

NOTE: Dell PowerVault Modular Disk Configuration Utility (MDCU) is supported only on MD Series storage arrays that use the iSCSI protocol.
MDCU is an iSCSI Configuration Wizard that can be used in conjunction with MD Storage Manager to simplify the configuration of iSCSI connections. The MDCU software is available on the MD Series resource media.

Other Information You May Need

WARNING: See the safety and regulatory information that shipped with your system. Warranty information may be included within this document or as a separate document.
NOTE: All the documents, unless specified otherwise, are available at dell.com/support/manuals.
The Getting Started Guide provides an overview of setting up and cabling your storage array.
The Deployment Guide provides installation and configuration instructions for both software and
hardware.
The Owner’s Manual provides information about system features and describes how to troubleshoot
the system and install or replace system components.
The CLI Guide provides information about using the command line interface (CLI).
The MD Series resource media contains all system management tools.
The Dell PowerVault MD Series Support Matrix provides information on supported software and
hardware for MD systems.
Information Updates or readme files are included to provide last-minute updates to the enclosure
or documentation, or advanced technical reference material intended for experienced users or
technicians.
For video resources on PowerVault MD storage arrays, go to dell.com/techcenter.
For the full name of an abbreviation or acronym used in this document, see the Glossary at dell.com/support/manuals.
NOTE: Always check for updates on dell.com/support/manuals and read the updates first because they often supersede information in other documents.

2 About Your MD Series Storage Array

This chapter describes storage array concepts that help you configure and operate your Dell MD Series storage array.

Physical Disks, Virtual Disks, And Disk Groups

Physical disks in your storage array provide the physical storage capacity for your data. Before you can begin writing data to the storage array, you must configure the physical storage capacity into logical components, called disk groups and virtual disks.
A disk group is a set of physical disks upon which multiple virtual disks are created. The maximum number of physical disks supported in a disk group is:
96 disks for RAID 0, RAID 1, and RAID 10
30 disks for RAID 5 and RAID 6
You can create disk groups from unconfigured capacity on your storage array.
A virtual disk is a partition in a disk group that is made up of contiguous data segments of the physical disks in the disk group. A virtual disk consists of data segments from all physical disks in the disk group.
All virtual disks in a disk group support the same RAID level. The storage array supports up to 255 virtual disks (minimum size of 10 MB each) that can be assigned to host servers. Each virtual disk is assigned a Logical Unit Number (LUN) that is recognized by the host operating system.
Virtual disks and disk groups are set up according to how you plan to organize your data. For example, you can have one virtual disk for inventory, a second virtual disk for financial and tax information, and so on.
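As a quick illustration of the limits just described, the following sketch validates a planned disk group and virtual disk against them. The numeric limits come from this guide; the RAID-level labels and return strings are illustrative choices, not part of the product interface.

```python
# Limits stated above: maximum physical disks per disk group by RAID level,
# plus per-array virtual disk constraints.
MAX_DISKS_PER_GROUP = {"RAID 0": 96, "RAID 1": 96, "RAID 10": 96,
                       "RAID 5": 30, "RAID 6": 30}
MAX_VIRTUAL_DISKS_PER_ARRAY = 255
MIN_VIRTUAL_DISK_MB = 10

def validate_disk_group(raid_level, physical_disk_count):
    # Check a planned disk group against the documented disk-count limit.
    limit = MAX_DISKS_PER_GROUP.get(raid_level)
    if limit is None:
        return "unknown RAID level"
    if physical_disk_count > limit:
        return f"too many disks: {raid_level} supports at most {limit}"
    return "ok"

def validate_virtual_disk(size_mb, existing_virtual_disks):
    # Check a planned virtual disk against the documented array-wide limits.
    if size_mb < MIN_VIRTUAL_DISK_MB:
        return f"virtual disks must be at least {MIN_VIRTUAL_DISK_MB} MB"
    if existing_virtual_disks >= MAX_VIRTUAL_DISKS_PER_ARRAY:
        return "array already holds the maximum number of virtual disks"
    return "ok"
```

For example, a 31-disk RAID 5 group fails validation, while the same disk count is acceptable for RAID 0, RAID 1, or RAID 10.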

Physical Disks

Only physical disks supported by Dell can be used in the storage array. If the storage array detects an unsupported physical disk, it marks the disk as unsupported, and the physical disk becomes unavailable for all operations.
For the list of supported physical disks, see the Support Matrix at dell.com/support/manuals.

Physical Disk States

The following table describes the physical disk states recognized by the storage array and reported in the MD Storage Manager.
Status | Mode | Description
Optimal | Assigned | The physical disk in the indicated slot is configured as part of a disk group.
Optimal | Unassigned | The physical disk in the indicated slot is unused and available to be configured.
Optimal | Hot Spare Standby | The physical disk in the indicated slot is configured as a hot spare.
Optimal | Hot Spare in use | The physical disk in the indicated slot is in use as a hot spare within a disk group.
Failed | Assigned, Unassigned, Hot Spare in use, or Hot Spare Standby | The physical disk in the indicated slot has failed because of an unrecoverable error, an incorrect drive type or drive size, or its operational state being set to failed.
Replaced | Assigned | The physical disk in the indicated slot has been replaced and is ready to be, or is actively being, configured into a disk group.
Pending Failure | Assigned, Unassigned, Hot Spare in use, or Hot Spare Standby | A Self-Monitoring Analysis and Reporting Technology (SMART) error has been detected on the physical disk in the indicated slot.
Offline | Not applicable | The physical disk has either been spun down or had a rebuild aborted by user request.
Identify | Assigned, Unassigned, Hot Spare in use, or Hot Spare Standby | The physical disk is being identified.

Virtual Disks And Disk Groups

When configuring a storage array, you must:
Organize the physical disks into disk groups.
Create virtual disks within these disk groups.
Provide host server access.
Create mappings to associate the virtual disks with the host servers.
NOTE: Host server access must be created before mapping virtual disks.
Disk groups are always created in the unconfigured capacity of a storage array. Unconfigured capacity is the available physical disk space not already assigned in the storage array.
Virtual disks are created within the free capacity of a disk group. Free capacity is the space in a disk group that has not been assigned to a virtual disk.

Virtual Disk States

The following table describes the various states of the virtual disk, recognized by the storage array.
Table 1. RAID Controller Virtual Disk States
Optimal — The virtual disk contains physical disks that are online.
Degraded — A virtual disk with a redundant RAID level contains an inaccessible physical disk. The system can still function properly, but performance may be affected and additional disk failures may result in data loss.
Offline — A virtual disk with one or more member disks in an inaccessible (failed, missing, or offline) state. Data on the virtual disk is no longer accessible.
Force online — The storage array forces a virtual disk that is in an Offline state to an Optimal state. If all the member physical disks are not available, the storage array forces the virtual disk to a Degraded state. The storage array can force a virtual disk to an Online state only when a sufficient number of physical disks are available to support the virtual disk.

Disk Pools

Disk pooling allows you to distribute data from each virtual disk randomly across a set of physical disks. Each disk pool must have a minimum of 11 physical disks. There is no other fixed upper limit on the number of physical disks in a disk pool, but a disk pool cannot contain more physical disks than the maximum supported by the storage array.

Thin Virtual Disks

Thin virtual disks can be created from an existing disk pool. Creating thin virtual disks allows you to set up a large virtual space, but only use the actual physical space as you need it.

RAID Levels

RAID levels determine the way in which data is written to physical disks. Different RAID levels provide different levels of accessibility, redundancy, and capacity.
Using multiple physical disks has the following advantages over using a single physical disk:
Placing data on multiple physical disks (striping) allows input/output (I/O) operations to occur simultaneously, which improves performance.
Storing redundant data on multiple physical disks using mirroring or parity supports reconstruction of lost data if an error occurs, even if that error is the failure of a physical disk.
Each RAID level provides different performance and protection. You must select a RAID level based on the type of application, access, fault tolerance, and data you are storing.
The storage array supports RAID levels 0, 1, 5, 6, and 10. The maximum and minimum number of physical disks that can be used in a disk group depends on the RAID level:
120 (180 with PFK) for RAID 0, 1, and 10
30 for RAID 5 and 6

Maximum Physical Disk Support Limitations

Although PowerVault MD Series storage arrays with the premium feature kit can support up to 180 physical disks, RAID 0 and RAID 10 configurations with more than 120 physical disks are not supported. MD Storage Manager does not enforce the 120-physical-disk limit when you set up a RAID 0 or RAID 10 configuration. Exceeding the 120-physical-disk limit may cause your storage array to be unstable.
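As a sketch, the disk-count limits described in this section can be expressed as a small validation helper. The function, its structure, and its messages are illustrative only and are not part of MD Storage Manager; the RAID 10 even-disk rule it checks is described later in this chapter.

```python
# Illustrative check of the disk-count limits in this guide; not part of
# MD Storage Manager. Maximums: 120 disks for RAID 0/1/10 (even with the
# premium feature kit), 30 disks for RAID 5/6.
MAX_DISKS = {0: 120, 1: 120, 10: 120, 5: 30, 6: 30}

def validate_disk_group(raid_level, disk_count):
    """Return a list of problems with the proposed disk group (empty if OK)."""
    problems = []
    if raid_level not in MAX_DISKS:
        problems.append(f"unsupported RAID level: {raid_level}")
        return problems
    if disk_count > MAX_DISKS[raid_level]:
        problems.append(
            f"RAID {raid_level} supports at most {MAX_DISKS[raid_level]} disks")
    if raid_level == 10 and (disk_count < 4 or disk_count % 2):
        problems.append("RAID 10 needs an even number of disks, four or more")
    return problems

print(validate_disk_group(10, 121))  # violates both the cap and the even rule
```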

RAID Level Usage

To ensure best performance, you must select an optimal RAID level when you create a disk group. The optimal RAID level for your disk array depends on:
Number of physical disks in the disk array
Capacity of the physical disks in the disk array
Need for redundant access to the data (fault tolerance)
Disk performance requirements

RAID 0

CAUTION: Do not attempt to create virtual disk groups exceeding 120 physical disks in a RAID 0 configuration even if the premium feature is activated on your storage array. Exceeding the 120-physical-disk limit may cause your storage array to be unstable.
RAID 0 uses disk striping to provide high data throughput, especially for large files in an environment that requires no data redundancy. RAID 0 breaks the data down into segments and writes each segment to a separate physical disk. I/O performance is greatly improved by spreading the I/O load across many physical disks. Although it offers the best performance of any RAID level, RAID 0 lacks data redundancy. Choose this option only for non-critical data, because failure of one physical disk results in the loss of all data. Examples of RAID 0 applications include video editing, image editing, prepress applications, or any application that requires high bandwidth.
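The striping described above can be sketched in a few lines: data is broken into segments and written round-robin across the physical disks. This is a simplified model, not MD Storage Manager code, and the sizes are illustrative.

```python
# Minimal model of RAID 0 striping. Note there is no redundancy: losing
# any one disk loses every stripe, and therefore all data.

def stripe(data: bytes, disk_count: int, segment_size: int):
    """Distribute data across disks, one segment at a time, round-robin."""
    disks = [bytearray() for _ in range(disk_count)]
    for i in range(0, len(data), segment_size):
        segment = data[i:i + segment_size]
        disks[(i // segment_size) % disk_count].extend(segment)
    return disks

disks = stripe(b"ABCDEFGHIJKL", disk_count=3, segment_size=2)
# Segments AB, CD, EF, GH, IJ, KL land on disks 0, 1, 2, 0, 1, 2 in turn.
print([bytes(d) for d in disks])  # [b'ABGH', b'CDIJ', b'EFKL']
```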

RAID 1

RAID 1 uses disk mirroring so that data written to one physical disk is simultaneously written to another physical disk. RAID 1 offers fast performance and the best data availability, but also the highest disk overhead. RAID 1 is recommended for small databases or other applications that do not require large capacity. For example, accounting, payroll, or financial applications. RAID 1 provides full data redundancy.

RAID 5

RAID 5 uses parity and striping data across all physical disks (distributed parity) to provide high data throughput and data redundancy, especially for small random access. RAID 5 is a versatile RAID level and is suited for multi-user environments where typical I/O size is small and there is a high proportion of read activity such as file, application, database, web, e-mail, news, and intranet servers.

RAID 6

RAID 6 is similar to RAID 5 but provides an additional parity disk for better redundancy. RAID 6 is the most versatile RAID level and is suited for multi-user environments where typical I/O size is small and there is a high proportion of read activity. RAID 6 is recommended when large size physical disks are used or large number of physical disks are used in a disk group.

RAID 10

CAUTION: Do not attempt to create virtual disk groups exceeding 120 physical disks in a RAID 10 configuration even if the premium feature is activated on your storage array. Exceeding the 120-physical-disk limit may cause your storage array to be unstable.
RAID 10, a combination of RAID 1 and RAID 0, uses disk striping across mirrored disks. It provides high data throughput and complete data redundancy. An even number of physical disks (four or more) is required to create a RAID 10 disk group or virtual disk. Because RAID levels 1 and 10 use disk mirroring, half of the capacity of the physical disks is utilized for mirroring, leaving the remaining half for actual storage. RAID 10 is automatically used when a RAID level of 1 is chosen with four or more physical disks. RAID 10 works well for medium-sized databases or any environment that requires high performance, fault tolerance, and moderate-to-medium capacity.
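The capacity trade-offs of the RAID levels described above come down to simple arithmetic. The sketch below assumes equal-sized disks and ignores metadata overhead, so real usable capacity is slightly lower; it is illustrative only.

```python
# Back-of-the-envelope usable capacity per RAID level, equal-sized disks.

def usable_capacity(raid_level: int, disk_count: int, disk_gb: float) -> float:
    total = disk_count * disk_gb
    if raid_level == 0:
        return total                       # striping only, no redundancy
    if raid_level in (1, 10):
        return total / 2                   # mirroring uses half the raw capacity
    if raid_level == 5:
        return (disk_count - 1) * disk_gb  # one disk's worth of parity
    if raid_level == 6:
        return (disk_count - 2) * disk_gb  # two disks' worth of parity
    raise ValueError(f"unsupported RAID level: {raid_level}")

# Eight 4000 GB disks:
for level in (0, 5, 6, 10):
    print(level, usable_capacity(level, 8, 4000))
```

RAID 0 yields the full 32000 GB, RAID 5 gives up one disk, RAID 6 two disks, and RAID 10 half the raw capacity.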

Segment Size

Disk striping enables data to be written across multiple physical disks. Disk striping enhances performance because striped disks are accessed simultaneously.
The segment size or stripe element size specifies the size of data in a stripe written to a single disk. The storage array supports stripe element sizes of 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, and 256 KB. The default stripe element size is 128 KB.
Stripe width, or depth, refers to the number of disks involved in an array where striping is implemented. For example, a four-disk group with disk striping has a stripe width of four.
NOTE: Although disk striping delivers excellent performance, striping alone does not provide data redundancy.
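The relationship between segment size and stripe width can be shown with a short calculation. The helper below is illustrative only; the supported segment sizes are the ones listed in this section.

```python
# One full stripe = segment size x number of disks holding data in the stripe.

SUPPORTED_SEGMENT_KB = (8, 16, 32, 64, 128, 256)

def full_stripe_kb(segment_kb: int, data_disks: int) -> int:
    """Size of one full stripe across the disk group, in KB."""
    if segment_kb not in SUPPORTED_SEGMENT_KB:
        raise ValueError(f"unsupported segment size: {segment_kb} KB")
    return segment_kb * data_disks

# A four-disk group with the default 128 KB segment size:
print(full_stripe_kb(128, 4))  # 512
```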

Virtual Disk Operations

Virtual Disk Initialization

Every virtual disk must be initialized. Initialization can be done in the foreground or the background. A maximum of four virtual disks can be initialized concurrently on each RAID controller module.
Background initialization — The storage array executes a background initialization when the virtual
disk is created to establish parity, while allowing full host server access to the virtual disks. Background
initialization does not run on RAID 0 virtual disks. The background initialization rate is controlled by
MD Storage Manager. To change the rate of background initialization, you must stop any existing
background initialization. The rate change is implemented when the background initialization restarts
automatically.
Foreground initialization — The storage array performs a complete initialization of the virtual disk, writing zeros across the entire virtual disk. The virtual disk is not available for host server access until the foreground initialization completes.

Consistency Check

A consistency check verifies the correctness of data in a redundant array (RAID levels 1, 5, 6, and 10). For example, in a system with parity, checking consistency involves computing the data on one physical disk and comparing the results to the contents of the parity physical disk.
A consistency check is similar to a background initialization. The difference is that background initialization cannot be started or stopped manually, while consistency check can.
NOTE: It is recommended that you run data consistency checks on a redundant array at least once a month. This allows detection and automatic replacement of unreadable sectors. Finding an unreadable sector during a rebuild of a failed physical disk is a serious problem, because the system does not have the redundancy to recover the data.
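A consistency check on a parity-protected stripe amounts to recomputing parity from the data segments and comparing the result with the stored parity segment. The XOR model below is a deliberate simplification of what the controller firmware does, for illustration only.

```python
# Model of a parity consistency check on one stripe.

def xor_parity(segments):
    """XOR all data segments together byte by byte to get the parity segment."""
    parity = bytearray(len(segments[0]))
    for seg in segments:
        for i, b in enumerate(seg):
            parity[i] ^= b
    return bytes(parity)

def consistent(data_segments, stored_parity):
    """True if the stored parity matches the recomputed parity."""
    return xor_parity(data_segments) == stored_parity

data = [b"\x0f\x0f", b"\xf0\xf0", b"\xff\x00"]
parity = xor_parity(data)
print(consistent(data, parity))       # True
print(consistent(data, b"\x00\x00"))  # False (would be reported as an error)
```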

Media Verification

Another background task performed by the storage array is media verification of all configured physical disks in a disk group. The storage array uses the Read operation to perform verification on the space configured in virtual disks and the space reserved for the metadata.

Cycle Time

The media verification operation runs only on selected disk groups, independent of other disk groups. Cycle time is the time taken to complete verification of the metadata region of the disk group and all virtual disks in the disk group for which media verification is configured. The next cycle for a disk group starts automatically when the current cycle completes. You can set the cycle time for a media verification operation between 1 and 30 days. The storage controller throttles the media verification I/O accesses to disks based on the cycle time.
The storage array tracks the cycle for each disk group independent of other disk groups on the controller and creates a checkpoint. If the media verification operation on a disk group is preempted or blocked by another operation on the disk group, the storage array resumes after the current cycle. If the media verification process on a disk group is stopped due to a RAID controller module restart, the storage array resumes the process from the last checkpoint.
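The throttling implied by the cycle time can be estimated with simple arithmetic: to finish one verification pass within the configured cycle, the controller must sustain at least capacity divided by cycle time of verification I/O. This is an illustration of the idea, not the controller's actual throttling algorithm.

```python
# Rough minimum verification rate needed to complete one cycle on time.

def min_verify_rate_mb_s(group_capacity_gb: float, cycle_days: int) -> float:
    """Sustained MB/s needed to verify the disk group within the cycle time."""
    if not 1 <= cycle_days <= 30:
        raise ValueError("cycle time must be between 1 and 30 days")
    seconds = cycle_days * 24 * 3600
    return group_capacity_gb * 1024 / seconds

# A 10 TB (10,000 GB) disk group verified over a 30-day cycle:
print(round(min_verify_rate_mb_s(10_000, 30), 2))  # 3.95
```

Even a large disk group needs only a few MB/s of background reads, which is why media verification can run continuously with little impact on host I/O.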

Virtual Disk Operations Limit

The maximum number of active, concurrent virtual disk processes per RAID controller module installed in the storage array is four. This limit is applied to the following virtual disk processes:
Background initialization
Foreground initialization
Consistency check
Rebuild
Copy back
If a redundant RAID controller module fails with existing virtual disk processes, the processes on the failed controller are transferred to the peer controller. A transferred process is placed in a suspended state if there are four active processes on the peer controller. The suspended processes are resumed on the peer controller when the number of active processes falls below four.
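The four-process limit and the suspend-on-failover behavior described above can be modeled as a small scheduler. The class and names below are a toy illustration, not MD Storage Manager internals.

```python
# Toy model of the per-controller limit on concurrent virtual disk processes.

MAX_ACTIVE = 4  # per RAID controller module

class Controller:
    def __init__(self):
        self.active = []     # running virtual disk processes
        self.suspended = []  # processes waiting for a free slot

    def start(self, process: str):
        if len(self.active) < MAX_ACTIVE:
            self.active.append(process)
        else:
            self.suspended.append(process)

    def finish(self, process: str):
        self.active.remove(process)
        if self.suspended:  # resume the oldest suspended process
            self.active.append(self.suspended.pop(0))

    def absorb(self, failed: "Controller"):
        """Take over all processes from a failed peer controller."""
        for p in failed.active + failed.suspended:
            self.start(p)

a, b = Controller(), Controller()
for p in ("rebuild-0", "rebuild-1", "init-2"):
    a.start(p)
for p in ("copyback-3", "check-4"):
    b.start(p)
a.absorb(b)  # peer fails: 3 + 2 processes, so one must be suspended
print(len(a.active), len(a.suspended))  # 4 1
```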

Disk Group Operations

RAID Level Migration

You can migrate from one RAID level to another depending on your requirements. For example, fault-tolerant characteristics can be added to a stripe set (RAID 0) by converting it to a RAID 5 set. The MD Storage Manager provides information about RAID attributes to assist you in selecting the appropriate RAID level. You can perform a RAID level migration while the system is still running and without rebooting, which maintains data availability.

Segment Size Migration

Segment size refers to the amount of data (in kilobytes) that the storage array writes on a physical disk in a virtual disk before writing data on the next physical disk. Valid values for the segment size are 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, and 256 KB.
Dynamic segment size migration enables the segment size of a given virtual disk to be changed. A default segment size is set when the virtual disk is created, based on such factors as the RAID level and expected usage. You can change the default value if segment size usage does not match your needs.
When considering a segment size change, two scenarios illustrate different approaches to the limitations:
If I/O activity stretches beyond the segment size, you can increase it to reduce the number of disks
required for a single I/O. Using a single physical disk for a single request frees disks to service other
requests, especially when you have multiple users accessing a database or storage environment.
If you use the virtual disk in a single-user, large I/O environment (such as for multimedia application
storage), performance can be optimized when a single I/O request is serviced with a single data stripe
(the segment size multiplied by the number of physical disks in the disk group used for data storage).
In this case, multiple disks are used for the same request, but each disk is only accessed once.
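The two scenarios above suggest a rule of thumb: for multi-user, small-I/O workloads, pick the smallest supported segment that holds a typical I/O on a single disk; for single-user, large-I/O workloads, size the segment so one request spans one full stripe. The helpers below express that heuristic only; they are not the MD Storage Manager algorithm.

```python
# Illustrative segment-size selection for the two scenarios in this section.

SUPPORTED_KB = (8, 16, 32, 64, 128, 256)

def segment_for_small_io(typical_io_kb: int) -> int:
    """Smallest supported segment that serves a typical I/O from one disk."""
    for size in SUPPORTED_KB:
        if size >= typical_io_kb:
            return size
    return SUPPORTED_KB[-1]

def segment_for_large_io(typical_io_kb: int, data_disks: int) -> int:
    """Segment size so one large I/O spans one full stripe (I/O / disks)."""
    return segment_for_small_io(typical_io_kb // data_disks)

print(segment_for_small_io(20))      # 32: multi-user database style workload
print(segment_for_large_io(512, 4))  # 128: single-user multimedia style workload
```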

Virtual Disk Capacity Expansion

When you configure a virtual disk, you select a capacity based on the amount of data you expect to store. However, you may need to increase the virtual disk capacity for a standard virtual disk by adding free capacity to the disk group. This creates more unused space for new virtual disks or to expand existing virtual disks.

Disk Group Expansion

Because the storage array supports hot-swappable physical disks, you can add two physical disks at a time for each disk group while the storage array remains online. Data remains accessible on virtual disk groups, virtual disks, and physical disks throughout the operation. The data and increased unused free space are dynamically redistributed across the disk group. RAID characteristics are also reapplied to the disk group as a whole.

Disk Group Defragmentation

Defragmenting consolidates the free capacity in the disk group into one contiguous area. Defragmentation does not change the way in which the data is stored on the virtual disks.

Disk Group Operations Limit

The maximum number of active, concurrent disk group processes per installed RAID controller module is one. This limit is applied to the following disk group processes:
Virtual disk RAID level migration
Segment size migration
Virtual disk capacity expansion
Disk group expansion
Disk group defragmentation
If a redundant RAID controller module fails with an existing disk group process, the process on the failed controller is transferred to the peer controller. A transferred process is placed in a suspended state if there is an active disk group process on the peer controller. The suspended processes are resumed when the active process on the peer controller completes or is stopped.
NOTE: If you try to start a disk group process on a controller that does not have an existing active process, the start attempt fails if the first virtual disk in the disk group is owned by the other controller and there is an active process on the other controller.

RAID Background Operations Priority

The storage array supports a common configurable priority for the following RAID operations:
Background initialization
Rebuild
Copy back
Virtual disk capacity expansion
RAID level migration
Segment size migration
Disk group expansion
Disk group defragmentation
The priority of each of these operations can be changed to address performance requirements of the environment in which the operations are to be executed.
NOTE: Setting a high priority level impacts storage array performance. It is not advisable to set priority levels at the maximum level. Priority must also be assessed in terms of impact to host server access and time to complete an operation. For example, the longer a rebuild of a degraded virtual disk takes, the greater the risk for potential secondary disk failure.

Virtual Disk Migration And Disk Roaming

Virtual disk migration is moving a virtual disk or a hot spare from one array to another by detaching the physical disks and re-attaching them to the new array. Disk roaming is moving a physical disk from one slot to another on the same array.

Disk Migration

You can move virtual disks from one array to another without taking the target array offline. However, the disk group being migrated must be offline prior to performing the disk migration. If the disk group is not offline prior to migration, the source array holding the physical and virtual disks within the disk group marks them as missing. However, the disk groups themselves migrate to the target array.
An array can import a virtual disk only if it is in an optimal state. You can move virtual disks that are part of a disk group only if all members of the disk group are being migrated. The virtual disks automatically become available after the target array has finished importing all the disks in the disk group.
When you migrate a physical disk or a disk group from:
One MD storage array to another MD storage array of the same type (for example, from an MD3460
storage array to another MD3460 storage array), the MD storage array you migrate to, recognizes any
data structures and/or metadata you had in place on the migrating MD storage array.
Any storage array different from the MD storage array you migrate to (for example, from an MD3460
storage array to an MD3860i storage array), the receiving storage array (MD3860i storage array in the
example) does not recognize the migrating metadata and that data is lost. In this case, the receiving
storage array initializes the physical disks and marks them as unconfigured capacity.
NOTE: Only disk groups and associated virtual disks with all member physical disks present can be migrated from one storage array to another. It is recommended that you only migrate disk groups that have all their associated member virtual disks in an optimal state.
NOTE: The number of physical disks and virtual disks that a storage array supports limits the scope of the migration.
Use either of the following methods to move disk groups and virtual disks:
Hot virtual disk migration — Disk migration with the destination storage array power turned on.
Cold virtual disk migration — Disk migration with the destination storage array power turned off.
NOTE: To ensure that the migrating disk groups and virtual disks are correctly recognized when the target storage array has an existing physical disk, use hot virtual disk migration.
When attempting virtual disk migration, follow these recommendations:
Moving physical disks to the destination array for migration — When inserting drives into the
destination storage array during hot virtual disk migration, wait for the inserted physical disk to be
displayed in the MD Storage Manager, or wait for 30 seconds (whichever occurs first), before inserting
the next physical disk.
WARNING: Without the interval between drive insertions, the storage array may become unstable and manageability may be temporarily lost.
Migrating virtual disks from multiple storage arrays into a single storage array — When migrating
virtual disks from multiple or different storage arrays into a single destination storage array, move all
of the physical disks from the same storage array as a set into the new destination storage array.
Ensure that all of the physical disks from a storage array are migrated to the destination storage array
before starting migration from the next storage array.
NOTE: If the drive modules are not moved as a set to the destination storage array, the newly relocated disk groups may not be accessible.
Migrating virtual disks to a storage array with no existing physical disks — Turn off the destination
storage array, when migrating disk groups or a complete set of physical disks from a storage array to
another storage array that has no existing physical disks. After the destination storage array has been
turned on and has successfully recognized the newly migrated physical disks, migration operations
can continue.
NOTE: Disk groups from multiple storage arrays must not be migrated at the same time to a storage array that has no existing physical disks. Use cold virtual disk migration for the disk groups from one storage array.
Enabling premium features before migration — Before migrating disk groups and virtual disks, enable
the required premium features on the destination storage array. If a disk group is migrated from a
storage array that has a premium feature enabled and the destination array does not have this feature
enabled, an Out of Compliance error message can be generated.

Disk Roaming

You can move physical disks within an array. The RAID controller module automatically recognizes the relocated physical disks and logically places them in the proper virtual disks that are part of the disk group. Disk roaming is permitted when the RAID controller module is either online or powered off.
NOTE: The disk group must be exported before moving the physical disks.

Host Server-To-Virtual Disk Mapping

The host server attached to a storage array accesses various virtual disks on the storage array through its host ports. Specific virtual disk-to-LUN mappings to an individual host server can be defined. In addition, the host server can be part of a host group that shares access to one or more virtual disks. You can manually configure a host server-to-virtual disk mapping. When you configure host server-to-virtual disk mapping, consider these guidelines:
You can define one host server-to-virtual disk mapping for each virtual disk in the storage array.
Host server-to-virtual disk mappings are shared between RAID controller modules in the storage
array.
A unique LUN must be used by a host group or host server to access a virtual disk.
Not every operating system has the same number of LUNs available for use.
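The guidelines above boil down to two constraints: each virtual disk has at most one mapping, and a LUN may be used only once per host server (or host group). A sketch of that bookkeeping, with an illustrative data structure that is not MD Storage Manager's:

```python
# Model of host server-to-virtual disk mapping constraints.

def add_mapping(mappings, host, virtual_disk, lun):
    """mappings: dict of (host, lun) -> virtual disk.

    Enforces one mapping per virtual disk and unique LUNs per host.
    """
    if any(vd == virtual_disk for vd in mappings.values()):
        raise ValueError(f"{virtual_disk} is already mapped")
    if (host, lun) in mappings:
        raise ValueError(f"LUN {lun} is already in use on {host}")
    mappings[(host, lun)] = virtual_disk

m = {}
add_mapping(m, "server1", "vd_inventory", 0)
add_mapping(m, "server1", "vd_finance", 1)      # new LUN on the same host: OK
try:
    add_mapping(m, "server1", "vd_payroll", 1)  # LUN 1 reused: rejected
except ValueError as e:
    print(e)
```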

Host Types

A host server is a server that accesses a storage array. Host servers are mapped to the virtual disks and use one or more iSCSI initiator ports. Host servers have the following attributes:
Host name — A name that uniquely identifies the host server.
Host group (used in Cluster solutions only) — Two or more host servers associated together to share
access to the same virtual disks.
NOTE: This host group is a logical entity you can create in the MD Storage Manager. All host servers in a host group must be running the same operating system.
Host type — The operating system running on the host server.

Advanced Features

The RAID enclosure supports several advanced features:
Virtual Disk Snapshots.
Virtual Disk Copy.
NOTE: The premium features listed above must be activated separately. If you have purchased these features, an activation card is supplied that contains instructions for enabling this functionality.

Types Of Snapshot Functionality Supported

The following types of virtual disk snapshot premium features are supported on the MD storage array:
Snapshot Virtual Disks using multiple point-in-time (PiT) groups — This feature also supports snapshot
groups, snapshot images, and consistency groups.
Snapshot Virtual Disks (Legacy) using a separate repository for each snapshot
For more information, see Premium Feature—Snapshot Virtual Disk and Premium Feature—Snapshot Virtual Disks (Legacy).

Snapshot Virtual Disks, Snapshot Images, And Snapshot Groups

A snapshot image is a logical image of the content of an associated base virtual disk created at a specific point-in-time. This type of image is not directly readable or writable to a host since the snapshot image is used to save data from the base virtual disk only. To allow the host to access a copy of the data in a snapshot image, you must create a snapshot virtual disk. This snapshot virtual disk contains its own
repository, which is used to save subsequent modifications made by the host application to the base virtual disk without affecting the referenced snapshot image.
Snapshot images can be created manually or automatically by establishing a schedule that defines the date and time you want to create the snapshot image. The following objects can be included in a snapshot image:
Standard virtual disks
Thin provisioned virtual disks
Consistency groups
To create a snapshot image, you must first create a snapshot group and reserve snapshot repository space for the virtual disk. The repository space is based on a percentage of the current virtual disk reserve.
You can delete the oldest snapshot image in a snapshot group either manually or you can automate the process by enabling the Auto-Delete setting for the snapshot group. When a snapshot image is deleted, its definition is removed from the system, and the space occupied by the snapshot image in the repository is released and made available for reuse within the snapshot group.

Snapshot Virtual Disks (Legacy)

A snapshot is a point-in-time image of a virtual disk. The snapshot provides an image of the virtual disk at the time the snapshot was created. You create a snapshot so that an application (for example, a backup application) can access the snapshot and read the data while the source virtual disk remains online and user-accessible. When the backup is completed, the snapshot virtual disk is no longer needed. You can create up to four snapshots per virtual disk.
Snapshots are used to recover previous versions of files that have changed since the snapshot was taken. Snapshots are implemented using a copy-on-write algorithm, which makes a backup copy of data the instant a write occurs to the virtual disk. Data on a virtual disk is copied to the snapshot repository before it is modified. Snapshots are instantaneous and incur less overhead than a full physical copy process.
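The copy-on-write behavior can be modeled in a few lines: before a block of the base virtual disk is overwritten, its original contents are preserved in the repository, and a snapshot read prefers the repository over the base. This is a minimal conceptual model, not the array's implementation.

```python
# Minimal copy-on-write model of a legacy snapshot.

class Snapshot:
    def __init__(self, base: dict):
        self.base = base        # block number -> data on the base virtual disk
        self.repository = {}    # blocks preserved as of snapshot time

    def write_base(self, block: int, data: bytes):
        # Copy-on-write: preserve the original block once, then overwrite.
        if block not in self.repository:
            self.repository[block] = self.base[block]
        self.base[block] = data

    def read_snapshot(self, block: int) -> bytes:
        # Unmodified blocks are read from the base; modified ones from the
        # repository, so the snapshot always shows the point-in-time image.
        return self.repository.get(block, self.base[block])

base = {0: b"old0", 1: b"old1"}
snap = Snapshot(base)
snap.write_base(0, b"new0")
print(base[0], snap.read_snapshot(0))  # b'new0' b'old0'
```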

Snapshot (Legacy) Repository Virtual Disk

When you create a snapshot virtual disk, a snapshot repository virtual disk is created automatically. A snapshot repository is a virtual disk created in the storage array as a resource for a snapshot virtual disk. A snapshot repository virtual disk contains snapshot virtual disk metadata and copy-on-write data for a particular snapshot virtual disk. The repository supports one snapshot only.
You cannot select a snapshot repository virtual disk as a source virtual disk or as a target virtual disk in a virtual disk copy. If you select a Snapshot source virtual disk as the target virtual disk of a virtual disk copy, you must disable all snapshot virtual disks associated with the source virtual disk.
CAUTION: Before using the Snapshot Virtual Disks Premium Feature in a Windows Clustered configuration, you must map the snapshot virtual disk to the cluster node that owns the source virtual disk. This ensures that the cluster nodes correctly recognize the snapshot virtual disk.
Mapping the snapshot virtual disk to the node that does not own the source virtual disk before the snapshot enabling process is completed can result in the operating system misidentifying the snapshot virtual disk. This can result in data loss or an inaccessible snapshot.

Virtual Disk Copy

Virtual disk copy is a premium feature you can use to:
Back up data.
Copy data from disk groups that use smaller-capacity physical disks to disk groups that use greater-capacity physical disks.
Restore snapshot virtual disk data to the source virtual disk.
Virtual disk copy generates a full copy of data from the source virtual disk to the target virtual disk in a storage array.
Source virtual disk — When you create a virtual disk copy, a copy pair consisting of a source virtual
disk and a target virtual disk is created on the same storage array. When a virtual disk copy is started,
data from the source virtual disk is copied completely to the target virtual disk.
Target virtual disk — When you start a virtual disk copy, the target virtual disk maintains a copy of the
data from the source virtual disk. You can choose whether to use an existing virtual disk or create a
new virtual disk as the target virtual disk. If you choose an existing virtual disk as the target, all data on
the target is overwritten. A target virtual disk can be a standard virtual disk or the source virtual disk of
a failed or disabled snapshot virtual disk.
NOTE: The target virtual disk capacity must be equal to or greater than the source virtual disk capacity.
When you begin the disk copy process, you must define the rate at which the copy is completed. Giving the copy process top priority slightly impacts I/O performance, while giving it the lowest priority makes the copy process take longer to complete. You can modify the copy priority while the disk copy is in progress. For more information, see the online help.
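The target-disk rules above, together with the snapshot repository restriction noted earlier in this chapter, can be summarized as a small validation sketch. The function and the dictionary fields are illustrative only.

```python
# Sketch of virtual disk copy pair validation: the target must be at least
# as large as the source, and a snapshot repository virtual disk can be
# neither source nor target.

def validate_copy_pair(source, target):
    """source/target: dicts with 'capacity_gb' and 'is_snapshot_repository'."""
    if source["is_snapshot_repository"] or target["is_snapshot_repository"]:
        raise ValueError("snapshot repository disks cannot be part of a copy")
    if target["capacity_gb"] < source["capacity_gb"]:
        raise ValueError("target must be at least as large as the source")
    return True

src = {"capacity_gb": 500, "is_snapshot_repository": False}
tgt = {"capacity_gb": 750, "is_snapshot_repository": False}
print(validate_copy_pair(src, tgt))  # True
```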

Virtual Disk Recovery

You can use the Edit host server-to-virtual disk mappings feature to recover data from the backup virtual disk. This functionality enables you to unmap the original source virtual disk from its host server, then map the backup virtual disk to the same host server.
Ensure that you record the LUN used to provide access to the source virtual disk. You need this information when you define a host server-to-virtual disk mapping for the target (backup) virtual disk. Also, be sure to stop all I/O activity to the source virtual disk before beginning the virtual disk recovery procedure.

Using Snapshot And Virtual Disk Copy Together

You can use the Snapshot Virtual Disk or Snapshot Virtual Disk (Legacy) and Virtual Disk Copy premium features together to back up data on the same storage array, or to restore the data on the snapshot virtual disk to its original source virtual disk.
You can copy data from a virtual disk in one of the two ways:
By taking a point-in-time snapshot of the data
By copying the data to another virtual disk using a virtual disk copy
You can select a snapshot virtual disk as the source virtual disk for a virtual disk copy. This configuration is one of the best ways you can apply the snapshot virtual disk feature, since it enables complete backups without any impact to the storage array I/O.
You cannot use a snapshot repository virtual disk as a source virtual disk or as a target virtual disk in a virtual disk copy. If you select the source virtual disk as the target virtual disk of a virtual disk copy, you must disable all snapshot virtual disks associated with the source virtual disk.

Multi-Path Software

Multi-path software (also referred to as the failover driver) is the software resident on the host server that provides management of the redundant data path between the host server and the storage array. For the multi-path software to correctly manage a redundant path, the configuration must have redundant iSCSI connections and cabling.
The multi-path software identifies the existence of multiple paths to a virtual disk and establishes a preferred path to that disk. If any component in the preferred path fails, the multi-path software automatically re-routes I/O requests to the alternate path so that the storage array continues to operate without interruption.
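The failover behavior described above can be sketched in a few lines; this is an illustrative model only (the class names, path names, and states are assumptions, not part of any Dell failover driver API):

```python
from dataclasses import dataclass

@dataclass
class Path:
    """One physical route between the host server and the storage array."""
    name: str
    healthy: bool = True

@dataclass
class MultipathDevice:
    """Sends I/O down the preferred path, failing over to an alternate."""
    preferred: Path
    alternates: list

    def route_io(self, request: str) -> str:
        # Use the preferred path while every component in it is healthy.
        if self.preferred.healthy:
            return f"{request} via {self.preferred.name}"
        # Otherwise re-route to the first healthy alternate path so the
        # storage array continues to operate without interruption.
        for alt in self.alternates:
            if alt.healthy:
                return f"{request} via {alt.name} (failover)"
        raise IOError("no available path to the virtual disk")

dev = MultipathDevice(Path("controller-0/port-0"), [Path("controller-1/port-0")])
print(dev.route_io("read LBA 2048"))   # read LBA 2048 via controller-0/port-0
dev.preferred.healthy = False          # simulate a failed component
print(dev.route_io("read LBA 2048"))   # read LBA 2048 via controller-1/port-0 (failover)
```

Note that the host sees one device throughout; only the route underneath changes when a component fails.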
NOTE: Multi-path software is available on the MD Series storage arrays resource DVD.

Preferred And Alternate Controllers And Paths

A preferred controller is a RAID controller module designated as the owner of a virtual disk or disk group. The preferred controller is automatically selected by the MD Storage Manager when a virtual disk is created. You can change the preferred RAID controller module owner of a virtual disk after it is created. If a host is connected to only one RAID controller module, the preferred owner must manually be assigned to the RAID controller module that the host can access.
Ownership of a virtual disk is moved from the preferred controller to the secondary controller (also called the alternate controller) when the preferred controller is:
Physically removed
Updating firmware
Involved in an event that caused failover to the alternate controller
Paths used by the preferred RAID controller module to access either the disks or the host server are called the preferred paths; redundant paths are called the alternate paths. If a failure causes the preferred path to become inaccessible, the storage array automatically uses the alternate path to access data, and the enclosure status LED blinks amber.

Virtual Disk Ownership

The MD Storage Manager can be used to automatically build and view virtual disks. It uses optimal settings to stripe the disk group. Virtual disks are assigned to alternating RAID controller modules when they are created. This default assignment provides a simple means of load balancing the workload of the RAID controller modules.
Ownership can later be modified to balance workload according to actual usage. If virtual disk ownership is not manually balanced, it is possible for one controller to have the majority of the work, while the other controller is idle. Limit the number of virtual disks in a disk group. If multiple virtual disks are in a disk group, consider:
The impact each virtual disk has on other virtual disks in the same disk group.
The patterns of usage for each virtual disk.
Different virtual disks have higher usage at different times of day.

Load Balancing

A load balance policy is used to determine which path is used to process I/O. Multiple options for setting the load balance policies let you optimize I/O performance when mixed host interfaces are configured.
You can choose one of these load balance policies to optimize I/O performance:
Round-robin with subset — The round-robin with subset I/O load balance policy routes I/O requests,
in rotation, to each available data path to the RAID controller module that owns the virtual disks. This
policy treats all paths to the RAID controller module that owns the virtual disk equally for I/O activity.
Paths to the secondary RAID controller module are ignored until ownership changes. The basic
assumption for the round-robin policy is that the data paths are equal. With mixed host support, the
data paths may have different bandwidths or different data transfer speeds.
Least queue depth with subset — The least queue depth with subset policy is also known as the least
I/Os or least requests policy. This policy routes the next I/O request to a data path that has the least
outstanding I/O requests queued. For this policy, an I/O request is simply a command in the queue.
The type of command or the number of blocks that are associated with the command are not
considered. The least queue depth with subset policy treats large block requests and small block
requests equally. The data path selected is one of the paths in the path group of the RAID controller
module that owns the virtual disk.
Least path weight with subset (Windows operating systems only) — The least path weight with subset
policy assigns a weight factor to each data path to a virtual disk. An I/O request is routed to the path
with the lowest weight value to the RAID controller module that owns the virtual disk. If two or more
data paths to the virtual disk have the same weight value, the round-robin with subset policy is used
to route I/O requests between the paths with the same weight value.
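The round-robin with subset and least queue depth with subset selections described above can be sketched as follows. This is a simplified, hypothetical model of the selection logic only; real failover drivers operate on SCSI commands, and the path names here are illustrative:

```python
import itertools

class PathGroup:
    """Data paths to the RAID controller module that owns the virtual disk.
    Paths to the secondary RAID controller module are not in this group and
    are ignored until ownership changes."""

    def __init__(self, paths):
        self.paths = list(paths)
        self.queued = {p: 0 for p in self.paths}  # outstanding I/Os per path
        self._rotation = itertools.cycle(self.paths)

    def round_robin_with_subset(self):
        # Treat all paths to the owning controller equally, in rotation.
        return next(self._rotation)

    def least_queue_depth_with_subset(self):
        # Route the next request to the path with the fewest queued commands;
        # command type and block count are not considered.
        return min(self.paths, key=lambda p: self.queued[p])

group = PathGroup(["path-A", "path-B"])
print(group.round_robin_with_subset())        # path-A
print(group.round_robin_with_subset())        # path-B
group.queued["path-A"] = 3                    # path-A has more queued I/Os
print(group.least_queue_depth_with_subset())  # path-B
```

The example shows why least queue depth can beat round-robin with mixed host interfaces: a slower path accumulates queued commands and is naturally avoided, whereas round-robin assumes all paths are equal.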

Monitoring System Performance

Performance Monitor allows you to track a storage array’s key performance data and identify performance bottlenecks in your system. You can use Performance Monitor to perform these tasks:
View in real time the values of the data collected for a monitored device. This capability helps you to
determine if the device is experiencing any problems.
See a historical view of a monitored device to identify when a problem started or what caused a
problem.
Specify the performance metric and the objects that you want to monitor.
View data in tabular format (actual values of the collected metrics) or graphical format (as line graphs),
or export the data to a file.
Three types of performance monitoring exist:
Real-time graphical – Plots performance data on a graph in near real-time.
Real-time textual – Shows performance data in a table in near real-time.
Background (historical) – Plots graphical performance data over a longer period of time. You can
view background performance data for a session that is currently in progress or for a session that you
previously saved.
The following characteristics apply to each type of performance monitoring:

Real-time graphical
Sampling interval: 5 seconds
Length of time displayed: 5-minute rolling window
Maximum number of objects displayed: 5
Ability to save data: No
How monitoring starts and stops: Starts automatically when the AMW opens. Stops automatically when the AMW closes.

Real-time textual
Sampling interval: 5–3600 seconds
Length of time displayed: Most current value
Maximum number of objects displayed: No limit
Ability to save data: Yes
How monitoring starts and stops: Starts and stops manually. Also stops when the View Real-time Textual Performance Monitor dialog closes or the AMW closes.

Background (historical)
Sampling interval: 10 minutes
Length of time displayed: 7-day rolling window
Maximum number of objects displayed: 5
Ability to save data: Yes
How monitoring starts and stops: Starts and stops manually. Also stops when the EMW closes or a firmware download starts.
Keep these guidelines in mind when using Performance Monitor:
Each time the sampling interval elapses, the Performance Monitor queries the storage array again and
updates the data. The impact to storage array performance is minimal.
The background monitoring process samples and stores data for a seven-day period. If a
monitored object changes during this time, the object does not have a complete set of data points
spanning the full seven days. For example, virtual disk sets can change as virtual disks are created,
deleted, mapped, or unmapped, or physical disks can be added, removed, or fail.
Performance data is collected and displayed only for an I/O host visible (mapped) virtual disk, a
snapshot group repository virtual disk, and a consistency group repository virtual disk. Data for a
snapshot (legacy) repository virtual disk or a replication repository virtual disk is not collected.
The values reported for a RAID controller module or storage array might be greater than the sum of
the values reported for all of the virtual disks. The values reported for a RAID controller module or
storage array include both host I/Os and I/Os internal to the storage array (metadata reads and writes),
whereas the values reported for a virtual disk include only host I/Os.
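The seven-day rolling window used by background monitoring behaves like a bounded buffer: once full, each new 10-minute sample displaces the oldest one. A minimal sketch follows; the interval and retention values come from the text above, but the buffer itself is illustrative, not MD Storage Manager code:

```python
from collections import deque

SAMPLE_INTERVAL_MIN = 10
RETENTION_DAYS = 7
MAX_SAMPLES = RETENTION_DAYS * 24 * 60 // SAMPLE_INTERVAL_MIN  # 1008 samples

history = deque(maxlen=MAX_SAMPLES)  # oldest samples are dropped automatically

def record_sample(sample):
    """Store one 10-minute sample; data older than 7 days is discarded."""
    history.append(sample)

for i in range(MAX_SAMPLES + 5):   # simulate slightly more than 7 days
    record_sample({"total_ios": i})

print(len(history))                # 1008: only the last 7 days remain
print(history[0]["total_ios"])     # 5: the first five samples were dropped
```

This also explains why a monitored object that changes mid-week lacks a full set of data points: its samples simply start or stop partway through the window.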

Interpreting Performance Monitor Data

Performance Monitor provides you with data about devices. You can use this data to make storage array performance tuning decisions, as described below.

Total I/Os — This data is useful for monitoring the I/O activity of a specific RAID controller module and a specific virtual disk, which can help identify possible high-traffic I/O areas.
You might notice a disparity in the total I/Os (workload) of RAID controller modules. For example, the workload of one RAID controller module is heavy or is increasing over time while that of the other RAID controller module is lighter or more stable. In this case, you might want to change the RAID controller module ownership of one or more virtual disks to the RAID controller module with the lighter workload. Use the virtual disk total I/O statistics to determine which virtual disks to move.
You might want to monitor the workload across the storage array. Monitor the Total I/Os in the background performance monitor. If the workload continues to increase over time while application performance decreases, you might need to add storage arrays. By adding storage arrays to your enterprise, you can continue to meet application needs at an acceptable performance level.

IOs/sec — Factors that affect input/output operations per second (IOs/sec or IOPS) include:
Access pattern (random or sequential)
I/O size
RAID level
Cache block size
Whether read caching is enabled
Whether write caching is enabled
Dynamic cache read prefetch
Segment size
The number of physical disks in the disk groups or storage array
The transfer rates of the RAID controller module are determined by the application I/O size and the I/O rate. Generally, small application I/O requests result in a lower transfer rate but provide a faster I/O rate and shorter response time. With larger application I/O requests, higher throughput rates are possible. Understanding your typical application I/O patterns can help you determine the maximum I/O transfer rates for a specific storage array.
You can see performance improvements caused by changing the segment size in the IOPS statistics for a virtual disk. Experiment to determine the optimal segment size, or use the file system size or database block size. For more information about segment size and performance, see the related topics listed at the end of this topic.
The higher the cache hit rate, the higher the I/O rates. Higher write I/O rates are experienced with write caching enabled compared to disabled. In deciding whether to enable write caching for an individual virtual disk, look at the current IOPS and the maximum IOPS. You should see higher rates for sequential I/O patterns than for random I/O patterns. Regardless of your I/O pattern, enable write caching to maximize the I/O rate and to shorten the application response time. For more information about read/write caching and performance, see the related topics listed at the end of this topic.

MBs/sec — See IOs/sec.

I/O Latency, ms — Latency is useful for monitoring the I/O activity of a specific physical disk and a specific virtual disk and can help you identify physical disks that are bottlenecks.
Physical disk type and speed influence latency. With random I/O, faster spinning physical disks spend less time moving to and from different locations on the disk.
Too few physical disks result in more queued commands and a greater period of time for the physical disk to process the command, increasing the general latency of the system.
Larger I/Os have greater latency due to the additional time involved with transferring data.
Higher latency might indicate that the I/O pattern is random in nature. Physical disks with random I/O have greater latency than those with sequential streams.
If a disk group is shared among several virtual disks, the individual virtual disks might need their own disk groups to improve the sequential performance of the physical disks and decrease latency.
If a disparity in latency exists among the physical disks of a common disk group, it might indicate a slow physical disk.
With disk pools, larger latencies are introduced and uneven workloads might exist between physical disks, making the latency values less meaningful and, in general, higher.

Cache Hit Percentage — A higher cache hit percentage is desirable for optimal application performance. A positive correlation exists between the cache hit percentage and I/O rates.
The cache hit percentage of all of the virtual disks might be low or trending downward. This trend might indicate inherent randomness in access patterns. In addition, at the storage array level or the RAID controller module level, this trend might indicate the need to install more RAID controller module cache memory if you do not have the maximum amount of memory installed.
If an individual virtual disk is experiencing a low cache hit percentage, consider enabling dynamic cache read prefetch for that virtual disk. Dynamic cache read prefetch can increase the cache hit percentage for a sequential I/O workload.

Viewing Real-time Graphical Performance Monitor Data

You can view real-time graphical performance as a single graph or as a dashboard that shows six graphs on one screen.
A real-time performance monitor graph plots a single performance metric over time for up to five objects. The x-axis of the graph represents time. The y-axis of the graph represents the metric value. When the metric value exceeds 99,999, it appears in thousands (K), beginning with 100K until the number reaches 9999K, at which time it appears in millions (M). For amounts greater than 9999K but less than 100M, the value appears in tenths (for example, 12.3M).
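The y-axis scaling rule above can be expressed as a small helper. This is a sketch of the display logic as described in the text, not code from MD Storage Manager:

```python
def format_metric(value: int) -> str:
    """Scale a metric value for display per the rules described above."""
    if value <= 99_999:
        return str(value)                    # shown as-is
    if value < 10_000_000:                   # 100K through 9999K
        return f"{value // 1000}K"
    if value < 100_000_000:                  # above 9999K, below 100M: tenths
        return f"{value / 1_000_000:.1f}M"
    return f"{value // 1_000_000}M"          # 100M and above: whole millions

print(format_metric(99_999))      # 99999
print(format_metric(123_456))     # 123K
print(format_metric(12_300_000))  # 12.3M
```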
1. To view the dashboard, in the Array Management Window (AMW), click the Performance tab. The Performance tab opens showing six graphs.
2. To view a single performance graph, in the Array Management Window (AMW), select Monitor → Health → Monitor Performance → Real-time performance monitor → View graphical.
The View Real-time Graphical Performance Monitor dialog opens.
3. In the Select metric drop-down list, select the performance data that you want to view.
You can select only one metric.
4. In the Select an object(s) list, select the objects for which you want to view performance data. You can select up to five objects to monitor on one graph.
Use Ctrl-Click and Shift-Click to select multiple objects. Each object is plotted as a separate line on the graph.
NOTE: If you do not see a line that you defined on the graph, it might be overlapping another line.
5. When you are done viewing the performance graph, click Close.

Customizing the Performance Monitor Dashboard

The dashboard on the Performance tab initially contains five predefined portlets and one undefined portlet. You can customize all of the portlets to display the performance data that is most meaningful to you.
1. In the Array Management Window (AMW), select the Performance tab.
2. Do one of the following actions:
– Double-click the portlet that you want to change.
– Click the Maximize icon on the portlet that you want to change.
– In Portlet 6, select the Create new real-time performance graph link. This option is only available if Portlet 6 is undefined.
The View Real-time Graphical Performance Monitor dialog appears.
3. In the Select metric drop-down list, select the performance data that you want to view. You can select only one metric at a time. If you opened the dialog from an existing graph, the current
metric and object are preselected.
4. In the Select an object(s) list, select the objects for which you want to view performance data. You can select up to five objects to monitor on one graph. Use Ctrl-Click and Shift-Click to select
multiple objects. Each object is plotted on a separate line on the graph.
NOTE: If you do not see a line that you defined on the graph, it might be overlapping another line.
5. To save the changed portlet to the dashboard, click Save to Dashboard, and then click OK.
The Save to Dashboard option is not available if you did not make any changes, if a metric and an object are not both selected, or if the dialog was not invoked from a portlet on the dashboard. The dashboard on the Performance tab updates with the new portlet.
6. To close the dialog, click Cancel.

Specifying Performance Metrics

You can collect the following performance data:
Total I/Os – Total I/Os performed by this object since the beginning of the polling session.
I/Os per second – The number of I/O requests serviced per second during the current polling interval
(also called an I/O request rate).
MBs per second – The transfer rate during the current polling interval. The transfer rate is the amount
of data in megabytes that can be moved through the I/O data connection in a second (also called throughput).
NOTE: A kilobyte is equal to 1024 bytes and a megabyte is equal to 1024 x 1024 bytes. Some applications calculate kilobytes as 1,000 bytes and megabytes as 1,000,000 bytes. The numbers reported by the monitor might be lower by this difference.
I/O Latency – The time it takes for an I/O request to complete, in milliseconds. For physical disks, I/O
latency includes seek, rotation, and transfer time.
Cache Hit Percentage – The percentage of total I/Os that are processed with data from the cache
rather than requiring I/O from disk. Includes read requests that find all the data in the cache and write requests that cause an overwrite of cache data before it has been committed to disk.
SSD Cache Hit Percentage – The percentage of read I/Os that are processed with data from the SSD
physical disks.
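The unit note above accounts for an apparent shortfall when comparing the monitor against an application that counts megabytes decimally. The same byte count, reported both ways (the transfer size is an illustrative example):

```python
BINARY_MB = 1024 * 1024    # how the monitor counts a megabyte
DECIMAL_MB = 1_000_000     # how some applications count a megabyte

transferred = 500 * BINARY_MB   # one fixed number of bytes

print(transferred / BINARY_MB)   # 500.0   -> what the monitor reports
print(transferred / DECIMAL_MB)  # 524.288 -> what a decimal-unit application reports
```

The monitor's figure is lower because it divides the same byte count by a larger unit.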
The metrics available include the current value, minimum value, maximum value, and average value. The current value is the most recent data point collected. The minimum, maximum, and average values are determined based on the start of performance monitoring. For real-time performance monitoring, the start is when the Array Management Window (AMW) opened. For background performance monitoring, the start is when background performance monitoring started.
Performance metrics at the storage array level are the sum of metrics on the RAID controller modules. Metrics for the RAID controller module and disk group are computed by aggregating the data retrieved for each virtual disk at the disk group/owning RAID controller module level. The values reported for a RAID controller module or a storage array might be greater than the sum of the values reported for all of the virtual disks. The values reported for a RAID controller module or storage array include both host I/Os and I/Os internal to the storage array (metadata reads and writes), whereas the values reported for a virtual disk include only host I/Os.
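The aggregation described above can be sketched as follows; the controller names and I/O counts are illustrative, and the point is that each controller total adds internal metadata I/O on top of the host I/Os of the virtual disks it owns:

```python
# Host I/O counts per virtual disk, grouped by owning RAID controller module
virtual_disk_ios = {
    "controller-0": {"vd1": 1200, "vd2": 800},
    "controller-1": {"vd3": 500},
}
internal_ios = {"controller-0": 150, "controller-1": 90}  # metadata reads/writes

controller_totals = {
    ctrl: sum(vds.values()) + internal_ios[ctrl]
    for ctrl, vds in virtual_disk_ios.items()
}
array_total = sum(controller_totals.values())

print(controller_totals)  # {'controller-0': 2150, 'controller-1': 590}
print(array_total)        # 2740, versus 2500 host I/Os across the virtual disks
```

This is why a controller or storage array value can legitimately exceed the sum of its virtual disk values.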
On a performance monitor graph, you can specify one metric and up to five objects. Not all metrics apply to all objects. The following list specifies the objects to which each metric applies:

Total I/Os – Storage arrays, RAID controller modules, virtual disks, snapshot virtual disks, thin virtual disks, disk groups or disk pools, and physical disks
IOs/sec – Storage arrays, RAID controller modules, virtual disks, snapshot virtual disks, thin virtual disks, disk groups or disk pools, and physical disks
MBs/sec – Storage arrays, RAID controller modules, virtual disks, snapshot virtual disks, thin virtual disks, disk groups or disk pools, and physical disks
I/O Latency – Virtual disks, snapshot virtual disks, thin virtual disks, and physical disks
Cache hit % – Storage arrays, RAID controller modules, virtual disks, snapshot virtual disks, thin virtual disks, and disk groups or disk pools

Viewing Real-time Textual Performance Monitor

1. In the Array Management Window (AMW), do one of the following:
– Click the Performance tab, and then click the Launch real-time textual performance monitor
link.
– Select Monitor → Health → Monitor Performance → Real-time performance monitor → View textual.
The View Real-time Textual Performance Monitor dialog appears.
2. To select the objects to monitor and the sampling interval, click the Settings button. The Settings button is available only when the real-time textual performance monitor is not started.
The Performance Summary Settings dialog appears.
3. In the Select an object(s) list, select the objects for which you want to view performance data. You can select as many objects as you want. Use Ctrl-Click and Shift-Click to select multiple
objects. To select all objects, select the Select All checkbox.
4. In the Sampling Interval list, select the sampling interval that you want.
The sampling interval can be from 5 seconds to 3600 seconds. Select a short sampling interval, such as 5 seconds, for a near-real-time picture of performance; however, be aware that a short sampling interval can affect performance. If you are saving the results to a file to examine later, select a longer interval, such as 30 to 60 seconds, to minimize system overhead and performance impact.
5. Click OK.
6. To start collecting performance data, click Start.
Data collection begins.
NOTE: For an accurate elapsed time, do not use the Synchronize RAID controller module Clocks option while using Performance Monitor. If you do, it is possible for the elapsed time to be negative.
7. To stop collecting performance data, click Stop, and then click Close.

Saving Real-time Textual Performance Data

Unlike real-time graphical performance monitoring, real-time textual performance monitoring enables you to save the collected data. Saving the data saves only one set of data, from the most recent sampling interval.
1. In the Array Management Window (AMW), do one of the following:
– Click the Performance tab, and then click the Launch real-time textual performance monitor
link.
– Select Monitor → Health → Monitor Performance → Real-time performance monitor → View textual.
The View Real-time Textual Performance Monitor dialog appears.
2. To select the objects to monitor and the sampling interval, click the Settings button. The Settings button is available only when the real-time textual performance monitor is not started. The Performance Summary Settings dialog appears.
3. In the Select an object(s) list, select the objects for which you want to view performance data. You can select as many objects as you want. Use Ctrl-Click and Shift-Click to select multiple
objects. To select all objects, select the Select All checkbox.
4. In the Sampling Interval list, select the sampling interval that you want.
The sampling interval can be from 5 seconds to 3600 seconds. Select a short sampling interval, such as 5 seconds, for a near-real-time picture of performance; however, be aware that a short sampling interval can affect performance. If you are saving the results to a file to examine later, select a longer interval, such as 30 to 60 seconds, to minimize system overhead and performance impact.
5. Click OK.
6. To start collecting performance data, click Start.
Data collection begins.
7. Continue data collection for the desired period of time.
8. To stop collecting performance data, click Stop.
9. To save the performance data, click Save As.
The Save As button is enabled only when performance monitoring is stopped. The Save Performance Statistics dialog appears.
10. Select a location, enter a filename, and then click Save.
You can save the file either as a text file with a default extension of .perf, which you can open with any text editor, or as a comma-separated values file with a default extension of .csv, which you can open with any spreadsheet application.
11. To close the dialog, click Close.

Starting and Stopping Background Performance Monitor

1. In the Array Management Window (AMW), click the Performance tab.
2. Click the Launch background performance monitor link. The View Current Background Performance Monitor dialog appears.
3. Click the Start link.
A warning appears stating that performance data is available for a maximum period of seven days and older data is deleted.
4. To confirm, click OK. To indicate that background performance monitoring is in progress, the Start link changes to Stop,
and the system shows an In Progress icon next to the Stop link.
NOTE: For accurate data, do not change the system date or time while using background performance monitor. If you must change the system date, stop and restart the background performance monitor.
5. To manually stop background performance monitoring, click the Stop link. Background performance monitoring automatically stops when you close the Enterprise
Management Window (EMW). Background performance monitoring also might stop when you start a firmware download. You are prompted to save the background performance monitoring data when this happens.
NOTE: When you close the EMW, you might be monitoring more than one storage array. Performance data is not saved for any storage array that is in the Unresponsive state.
A dialog appears asking you whether you want to save the performance data.
6. Do one of the following:
– To save the data, click Yes, select a directory, enter a filename, and then click Save.
– To discard the data, click No.
7. To close the View Current Background Performance Monitor dialog, click Close.

Viewing Information about the Current Background Performance Monitor Session

Before performing this task, make sure that background performance monitoring is in progress. You can tell that background performance monitoring is in progress by the presence of the In Progress icon next to the Stop link in the View Current Background Performance Monitor dialog.
1. In the Array Management Window (AMW), click the Performance tab.
2. Click the Launch background performance monitor link. The View Current Background Performance Monitor dialog appears.
3. Hold the pointer over the Stop link.
A tooltip appears displaying the time background performance monitoring was started, the length of time background performance monitoring has been in progress, and the sampling interval.
NOTE: For an accurate elapsed time, do not use the Synchronize RAID controller module Clocks option while using Performance Monitor. If you do, it is possible for the elapsed time to be negative.

Viewing Current Background Performance Monitor Data

A background performance monitor graph plots a single performance metric over time for up to five objects. The x-axis of the graph represents time. The y-axis of the graph represents the metric value. When the metric value exceeds 99,999, it appears in thousands (K), beginning with 100K until the number
reaches 9999K, at which time it appears in millions (M). For amounts greater than 9999K but less than 100M, the value appears in tenths (for example, 12.3M).
1. In the Array Management Window (AMW), click the Performance tab.
2. Click the Launch background performance monitor link.
The View Current option is available only when performance monitoring is in progress. You can tell that background performance monitoring is in progress by the presence of the In Progress icon next to the Stop link. The View Current Background Performance Monitor dialog appears.
3. In the Select metric drop-down list, select the performance data that you want to view. You can select only one metric at a time.
4. In the Select an object(s) list, select the objects for which you want to view performance data. You can select up to five objects to monitor on one graph. Use Ctrl-Click and Shift-Click to select
multiple objects. Each object is plotted on a separate line on the graph. The resulting graph shows all of the data points from the current background performance
monitoring session.
NOTE: If you do not see a line that you defined on the graph, it might be overlapping another line. If you perform the View Current option before the first sampling interval elapses (10 minutes), the graph will show that it is initializing.
5. (Optional) To change the time period plotted on the graph, make selections in the Start Date, Start Time, End Date, and End Time fields.
6. To close the dialog, click Close.

Saving the Current Background Performance Monitor Data

1. In the Array Management Window (AMW), click the Performance tab.
2. Click the Launch background performance monitor link. The View Current Background Performance Monitor dialog appears.
3. Click the Save link.
The Save link is enabled only when performance data exists in the buffer. The Save Background Performance Data dialog appears.
4. You can save the file in the default location with the default filename that uses the name of the storage array and a timestamp, or you can select a location, enter a filename, and then click Save.
The file is saved as a comma separated values file with a default extension of .csv. You can open a comma separated values file with any spreadsheet application. Be aware that your spreadsheet application might have a limit on the number of rows a file can have.

Viewing Saved Background Performance Monitor Data

The physical disk or network location that contains the saved performance data file must have sufficient free space; otherwise, the file does not load. A background performance monitor graph plots a single performance metric over time for up to five objects. The x-axis of the graph represents time. The y-axis of the graph represents the metric value. When the metric value exceeds 99,999, it appears in thousands
(K), beginning with 100K until the number reaches 9999K, at which time it appears in millions (M). For amounts greater than 9999K but less than 100M, the value appears in tenths (for example, 12.3M).
1. In the Array Management Window (AMW), click the Performance tab.
2. Click the Launch background performance monitor link. The View Current Background Performance Monitor dialog appears.
3. Click the Launch saved background performance monitor link. The Load Background Performance dialog appears.
4. Navigate to the .csv file that you want to open, and then click Open. The View Saved Background Performance Monitor dialog opens.
5. In the Select metric drop-down list, select the performance data that you want to view.
You can select only one metric at a time.
6. In the Select an object(s) list, select the objects for which you want to view background performance data.
You can select up to five objects to monitor on one graph. Use Ctrl-Click and Shift-Click to select multiple objects. Each object is plotted as a separate line on the graph. The graph shows all of the data points in the saved file.
NOTE: If you do not see a line that you defined on the graph, it might be overlapping another line.
7. (Optional) To change the time period plotted on the graph, make selections in the Start Date, Start Time, End Date, and End Time drop-down lists.
8. To close the dialog, click Close.

What are invalid objects in the Performance Monitor?

When viewing a performance graph, you might see objects marked with an asterisk (*). An asterisk indicates that the object is no longer valid. When an object becomes invalid, the performance graph contains missing data points. The data that was collected before the object became invalid is still available for viewing.
If the invalid object returns, the Performance Monitor resumes collecting data for the object. If the invalid object represents a deleted object, its performance graph no longer updates. When this
event happens, you should redefine the graph to monitor a valid object. Invalid objects can be caused by a number of factors:
The virtual disk was deleted.
The virtual disk was unmapped.
A disk group is being imported.
The RAID controller module is in simplex mode.
The RAID controller module is offline.
The RAID controller module failed.
The RAID controller module was removed.
The physical disk failed.
The physical disk was removed.
Sometimes, it is possible to have two objects with the same name. Two virtual disks can have the same name if you delete a virtual disk and then later create another virtual disk with the same name. The original virtual disk’s name contains an asterisk indicating that the virtual disk no longer exists. The new virtual disk has the same name, but without an asterisk. Two physical disks will have the same name if you
replace a physical disk. The original physical disk’s name contains an asterisk indicating that it is invalid and no longer exists. The new physical disk has the same name without an asterisk.

Discovering And Managing Your Storage Array

You can manage a storage array in two ways:
Out-of-band management
In-band management

Out-Of-Band Management

In the out-of-band management method, data is separate from commands and events. Data travels through the host-to-controller interface, while commands and events travel through the management port Ethernet cables.
This management method lets you configure the maximum number of virtual disks that are supported by your operating system and host adapters.
A maximum of eight storage management stations can concurrently monitor an out-of-band managed storage array. This limit does not apply to systems that manage the storage array through the in-band management method.
When you use out-of-band management, you must set the network configuration for each RAID controller module’s management Ethernet port. This includes the Internet Protocol (IP) address, subnetwork mask (subnet mask), and gateway. If you are using a Dynamic Host Configuration Protocol (DHCP) server, you can enable automatic network configuration, but if you are not using a DHCP server, you must enter the network configuration manually.
NOTE: RAID controller module network configurations can be assigned using a DHCP server (the default setting). However, if a DHCP server is not available for 150 seconds, the RAID controller modules assign static IP addresses. By default, the addresses assigned are 192.168.128.101 for controller 0 and 192.168.128.102 for controller 1.
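If a controller has fallen back to its static default addresses, it can help to confirm that the management station sits on the same subnet as those defaults before attempting discovery. A minimal sketch using Python's standard `ipaddress` module; the station address and /24 prefix below are assumptions for illustration:

```python
# Sketch: check whether a management station address shares a subnet with
# the default static controller IPs (192.168.128.101 and 192.168.128.102).
import ipaddress

DEFAULT_CONTROLLER_IPS = ["192.168.128.101", "192.168.128.102"]

def on_same_subnet(station_ip: str, prefix: int = 24) -> bool:
    """Return True if the station shares a subnet with both default IPs."""
    net = ipaddress.ip_network(f"{station_ip}/{prefix}", strict=False)
    return all(ipaddress.ip_address(ip) in net for ip in DEFAULT_CONTROLLER_IPS)

print(on_same_subnet("192.168.128.50"))  # station on the default subnet -> True
print(on_same_subnet("10.0.0.5"))        # station on a different subnet -> False
```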

In-Band Management

With in-band management, commands, events, and data all travel through the host-to-controller interface. Unlike out-of-band management, commands and events are mixed with data.
NOTE: For detailed information on setting up in-band and out-of-band management, see your system’s Deployment Guide at dell.com/support/manuals.
When you add storage arrays by using this management method, specify only the host name or IP address of the host. After you add the specific host name or IP address, the host-agent software automatically detects any storage arrays that are connected to that host.
NOTE: Some operating systems can be used only as storage management stations. For more information about the operating system that you are using, see the MD PowerVault Support Matrix at dell.com/support/manuals.
For more information, see the online help topics.

Access Virtual Disk

Each RAID controller module in an MD Series storage array maintains a special virtual disk, called the access virtual disk. The host-agent software uses the access virtual disk to communicate management requests and event information between the storage management station and the RAID controller module in an in-band-managed storage array. The access virtual disk is not available for application data storage and cannot be removed without deleting the entire virtual disk, virtual disk group, or virtual disk pair. The default LUN is 31.

Storage Arrays

You must add the storage arrays to the MD Storage Manager before you can set up the storage array for optimal use.
NOTE: You can add storage arrays only in the EMW.
You can:
Automatically discover storage arrays.
Manually add storage arrays.
NOTE: Verify that your host or management station network configuration — including station IP address, subnet mask, and default gateway — is correct before adding a new storage array using the Automatic option.
NOTE: For Linux, set the default gateway so that broadcast packets are sent to 255.255.255.0. For Red Hat Enterprise Linux, if no gateway exists on the network, set the default gateway to the IP address of the NIC.
NOTE: The MD Storage Manager uses TCP/UDP port 2463 for communication to the MD storage array.
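A quick connectivity check against this port can rule out firewall problems before you add an array. A minimal sketch using Python's standard `socket` module; the controller address shown in the comment is hypothetical:

```python
# Sketch: verify that TCP port 2463 on a controller management port is
# reachable from the management station.
import socket

MD_MGMT_PORT = 2463  # port used by the MD Storage Manager (TCP/UDP)

def port_reachable(host: str, port: int = MD_MGMT_PORT, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection; return True on success, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical controller address):
# port_reachable("192.168.128.101")
```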

Automatic Discovery Of Storage Arrays

The Automatic Discovery process sends out a broadcast message across the local subnet and adds any storage array that responds to the message. The Automatic Discovery process finds both in-band and out-of-band storage arrays.
NOTE: The Automatic Discovery option and the Rescan Hosts option in the EMW provide automatic methods for discovering managed storage arrays.

Manual Addition Of A Storage Array

Use Manual addition if the storage array resides outside of the local subnet. This process requires specific identification information to manually add a storage array.
To add a storage array that uses out-of-band management, specify the host name or management port IP address of each controller in the storage array.
To add an in-band storage array, add the host through which the storage array is attached to the network.
NOTE: It can take several minutes for the MD Storage Manager to connect to the specified storage array.
To add a storage array manually:
1. In the EMW, select Edit → Add Storage Array.
2. Select the relevant management method:
– Out-of-band management — Enter a DNS/Network name, IPv4 address, or IPv6 address for the RAID Controller Module in the storage array.
– In-band management — Enter a name or a DNS/Network name, IPv4 address, or IPv6 address for the Host through which the storage array is attached to the network.
NOTE: When adding a storage array using in-band management with iSCSI, a session must first be established between the initiator on the host server and the storage array. For more information, see Using iSCSI.
NOTE: The host agent must be restarted before in-band management communication can be established. See Starting Or Restarting The Host Context Agent Software.
3. Click Add.
4. Use one of these methods to name a storage array:
– In the EMW, select the Setup tab, and select Name/Rename Storage Arrays.
– In the AMW, select the Setup tab, and select Rename Storage Array.
– In the EMW, right-click the icon corresponding to the array and select Rename.

Setting Up Your Storage Array

A list of initial setup tasks is displayed on the Setup tab in the AMW. Using the tasks outlined in the Initial Setup Tasks area ensures that the basic setup steps are completed.
Use the Initial Setup Tasks list the first time that you set up a storage array and perform the following tasks:
Locate the storage array — Find the physical location of the storage array on your network by turning
on the system identification indicator.
Give a new name to the storage array — Use a unique name that identifies each storage array.
Set a storage array password — Configure the storage array with a password to protect it from
unauthorized access. The MD Storage Manager prompts for the password when an attempt is made to change the storage array configuration, such as when a virtual disk is created or deleted.
Configure iSCSI host ports — Configure network parameters for each iSCSI host port automatically or
specify the configuration information for each iSCSI host port.
Configure the storage array — Create disk groups, virtual disks, and hot spare physical disks by using
the Automatic configuration method or the Manual configuration method. For more information, see the online help topics.
Map virtual disks — Map virtual disks to hosts or host groups.
Save configuration — Save the configuration parameters in a file that you can use to restore the
configuration, or reuse the configuration on another storage array. For more information, see the online help topics.
After you complete the basic steps for configuring the storage array, you can perform these optional tasks:
Manually define hosts — Define the hosts and the host port identifiers that are connected to the
storage array. Use this option only if the host is not automatically recognized and shown in the Host Mappings tab.
Configure Ethernet management ports — Configure the network parameters for the Ethernet
management ports on the RAID controller modules if you are managing the storage array by using the out-of-band management connections.
View and enable premium features — Your MD Storage Manager may include premium features. View
the premium features that are available and the premium features that are already started. You can start available premium features that are currently stopped.
Manage iSCSI settings — You can configure iSCSI settings for authentication, identification, and
discovery.

Locating Storage Arrays

You can use the Blink option to physically locate and identify a storage array. To locate the storage array:
1. Select the relevant storage array and do one of the following:
– In the EMW, right-click the appropriate storage array, and select Blink Storage Array.
– In the AMW, select the Setup tab, and click Blink Storage Array.
– In the AMW, select Hardware → Blink → Storage Array.
The LEDs on the physical disks in the storage array blink.
2. After locating the storage array, click OK. The LEDs stop blinking.
3. If the LEDs do not stop blinking, select Hardware → Blink → Stop All Indications.

Naming Or Renaming Storage Arrays

You can name, rename, and add comments to a storage array to facilitate identification of the storage array.
Follow these guidelines to name a storage array:
Each storage array must be assigned a unique alphanumeric name up to 30 characters long.
A name can consist of letters, numbers, and the special characters underscore (_), dash (–), and
pound sign (#). No other special characters are allowed.
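The naming rules above can be expressed as a simple check before you type a name into the dialog. A sketch using a regular expression; the sample names are hypothetical:

```python
# Sketch: validate a storage array name against the guidelines above:
# 1-30 characters; letters, numbers, underscore (_), dash (-), pound sign (#).
import re

NAME_RE = re.compile(r"^[A-Za-z0-9_#-]{1,30}$")

def is_valid_array_name(name: str) -> bool:
    """Return True if the name satisfies the storage array naming rules."""
    return bool(NAME_RE.match(name))

print(is_valid_array_name("MD3820f_rack-4#2"))  # True
print(is_valid_array_name("array name"))        # False: space not allowed
print(is_valid_array_name("x" * 31))            # False: longer than 30 characters
```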
To rename a selected storage array:
1. Perform one of these actions:
– In the AMW, select Setup → Rename Storage Array.
– In the EMW, on the Devices tab Tree view, select Edit → Rename.
– In the EMW, on the Devices tab Tree view, right-click the desired array icon and select Rename.
The Rename Storage Array dialog is displayed.
2. Type the new name of the storage array.
NOTE: Avoid arbitrary names or names that may lose meaning in the future.
3. Click OK. A message is displayed warning you about the implications of changing the storage array name.
4. Click Yes. The new storage array name is displayed in the EMW.
5. Repeat step 1 through step 4 to name or rename additional storage arrays.

Setting A Password

You can configure each storage array with a password to protect it from unauthorized access. The MD Storage Manager prompts for the password when an attempt is made to change the storage array configuration, such as when a virtual disk is created or deleted. View operations do not change the storage array configuration and do not require a password. You can create a new password or change an existing password.
To set a new password or change an existing password:
1. In the EMW, select the relevant storage array and open the AMW for that storage array. The AMW for the selected storage array is displayed.
2. In the AMW, select the Setup tab, and click Set a Storage Array Password. The Set Password dialog is displayed.
3. If you are resetting the password, type the Current password.
NOTE: If you are setting the password for the first time, leave the Current password blank.
4. Type the New password.
NOTE: It is recommended that you use a long password with at least 15 alphanumeric characters to increase security. For more information on secure passwords, see Password
Guidelines.
5. Re-type the new password in Confirm new password.
6. Click OK.
NOTE: You are not prompted for the password again when you change the storage array configuration during the current management session.
Password Guidelines
Use secure passwords for your storage array. A password should be easy for you to remember but
difficult for others to determine. Consider using numbers or special characters in the place of letters, such as a 1 in the place of the letter I, or the at sign (@) in the place of the letter 'a'.
For increased protection, use a long password with at least 15 alphanumeric characters. The
maximum password length is 30 characters.
Passwords are case sensitive.
NOTE: You can attempt to enter a password up to ten times before the storage array enters a lockout state. Before you can try to enter a password again, you must wait 10 minutes for the storage array to reset. To reset the password, press the password reset switch on your RAID controller module.
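The length and character guidelines above can be pre-checked before you set the password. This is an illustrative sketch only; it checks the stated guidelines, not real password strength:

```python
# Sketch: check a candidate password against the guidelines above
# (recommended: at least 15 alphanumeric characters; maximum length 30).
# Passwords are case sensitive, so no case normalization is applied.
def check_password(password: str) -> list[str]:
    """Return a list of guideline warnings (an empty list means no warnings)."""
    warnings = []
    if len(password) > 30:
        warnings.append("exceeds the 30-character maximum")
    alnum = sum(c.isalnum() for c in password)
    if alnum < 15:
        warnings.append("fewer than 15 alphanumeric characters")
    return warnings

print(check_password("Rack4#St0rage@dmin2014"))  # []
print(check_password("short1"))  # ['fewer than 15 alphanumeric characters']
```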

Adding Or Editing A Comment To An Existing Storage Array

A descriptive comment, with an applicable storage array name, is a helpful identification tool. You can add or edit a comment for a storage array in the EMW only.
To add or edit a comment:
1. In the EMW, select the Devices tab and select the relevant managed storage array.
2. Select Edit → Comment.
The Edit Comment dialog is displayed.
3. Type a comment.
NOTE: The comment must not exceed 60 characters.
4. Click OK. This option updates the comment in the Table view and saves it in your local storage management
station file system. The comment does not appear to administrators who are using other storage management stations.

Removing Storage Arrays

You can remove a storage array from the list of managed arrays if you no longer want to manage it from a specific storage management station. Removing a storage array does not affect the storage array or its data in any way. Removing a storage array only removes it from the list of storage arrays displayed in the Devices tab of the EMW.
To remove the storage array:
1. In the EMW, select the Devices tab and select the relevant managed storage array.
2. Select Edit → Remove → Storage Array.
You can also right-click on a storage array and select Remove → Storage Array. A message prompts you to confirm that the selected storage array is to be removed.
3. Click Yes. The storage array is removed from the list.

Enabling Premium Features

You can enable premium features on the storage array. To enable the premium features, you must obtain a feature key file specific to the premium feature that you want to enable from your storage supplier.
To enable premium features:
1. From the menu bar in the AMW, select Storage Array → Premium Features. The Premium Features and Feature Pack Information window is displayed.
2. Click Use Key File. The Select Feature Key File window opens, which lets you select the generated key file.
3. Navigate to the relevant folder, select the appropriate key file, and click OK. The Confirm Enable Premium Features dialog is displayed.
4. Click Yes. The required premium feature is enabled on your storage array.
5. Click Close. For more information, see the online help topics.

Displaying Failover Alert

You can change the failover alert delay for a storage array. The failover alert delay lets you delay the logging of a critical event if the multi-path driver transfers virtual disks to the non-preferred controller. If the multi-path driver transfers the virtual disks back to the preferred controller within the specified delay period, a critical event is not logged. If the transfer exceeds this delay period, then a virtual-disk-not-on-preferred-path alert is issued as a critical event. You can also use this option to minimize multiple alerts when more than one virtual disk fails over because of a system error, such as a failed host adapter. For more information, see the online help topics.
To configure a failover alert delay:
1. In the AMW, on the menu bar, select Storage Array → Change → Failover Alert Delay. The Failover Alert Delay window is displayed.
2. In Failover alert delay, enter a value between 0 and 60 minutes.
3. Click OK.
4. If you have set a password for the selected storage array, the Enter Password dialog is displayed.
Type the current password for the storage array.
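The delay behavior described above can be sketched as follows. This is a hypothetical helper for illustration only; the real check is performed by the storage array firmware:

```python
# Sketch of the failover alert delay logic: a critical event is logged only
# if a virtual disk stays on the non-preferred controller longer than the
# configured delay (which must be between 0 and 60 minutes).
def should_log_alert(minutes_on_non_preferred: float, delay_minutes: int) -> bool:
    """Return True if the failover exceeded the configured alert delay."""
    if not 0 <= delay_minutes <= 60:
        raise ValueError("failover alert delay must be between 0 and 60 minutes")
    return minutes_on_non_preferred > delay_minutes

print(should_log_alert(2, delay_minutes=5))   # False: moved back within the delay
print(should_log_alert(12, delay_minutes=5))  # True: delay period exceeded
```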

Changing The Cache Settings On The Storage Array

To change the storage array cache settings:
1. In the AMW, select Storage Array → Change → Cache Settings. The Change Cache Settings window is displayed.
2. In Start demand cache flushing, select or enter the percentage of unwritten data in the cache to trigger a cache flush.
3. Select the appropriate Cache block size. A smaller cache block size is a good choice for file-system use or database-application use. A larger cache block size is a good choice for applications that generate sequential I/O, such as multimedia.
4. If you have set a password for the selected storage array, the Enter Password dialog is displayed. Type the current password for the storage array and click OK.

Changing Expansion Enclosure ID Numbers

When an MD3060e Series expansion enclosure is connected to an MD Series storage array for the first time, an enclosure ID number is assigned and maintained by the expansion enclosure. This enclosure ID number is also shown in the MD Storage Manager and can be changed if required.
To change the enclosure ID numbers:
1. In the AMW, from the menu bar, select Hardware → Enclosure → Change → ID.
2. Select a new enclosure ID number from the Change Enclosure ID list.
The enclosure ID must be between 0 and 99 (inclusive).
3. To save the changed enclosure ID, click OK.

Changing The Enclosure Order

You can change the order of the RAID controller modules and the expansion enclosures to match the hardware configuration in your storage array. The enclosure order change remains in effect until it is modified again.
To change the enclosure order:
1. In the AMW, from the menu bar, select Hardware → Enclosure → Change → Hardware View Order.
2. From the enclosures list, select the enclosure you want to move and click either Up or Down to
move the enclosure to the new position.
3. Click OK.
4. If you have set a password for the selected storage array, the Enter Password dialog is displayed.
Type the current password for the storage array.
5. Click OK.

Configuring Alert Notifications

The MD Storage Manager can send an alert for any condition on the storage array that requires your attention. Alerts can be sent as e-mail messages or as Simple Network Management Protocol (SNMP) trap messages. You can configure alert notifications either for all the storage arrays or a single storage array.
To configure alert notifications:
1. For all storage arrays, in the EMW: a) Select the Setup tab.
b) Select Configure Alerts. c) Select All storage arrays. d) Click OK.
The Configure Alerts dialog is displayed.
2. For a single storage array: a) Select the Devices tab.
b) Select the relevant storage array, then select EditConfigure Alerts.
The Configure Alerts dialog is displayed.
3. Configure e-mail or SNMP alerts. For more information, see Configuring E-mail Alerts or Configuring SNMP Alerts.

Configuring E-mail Alerts

1. Open the Configure Alerts dialog by performing one of these actions in the EMW:
– On the Devices tab, select a node and then on the menu bar, select Edit → Configure Alerts. Go to step 3.
NOTE: This option enables you to set up alerts for all the storage arrays connected to the host.
– On the Setup tab, select Configure Alerts. Go to step 2.
2. Select one of the following radio buttons to specify an alert level:
All storage arrays — Select this option to send an e-mail alert about events on all storage arrays.
An individual storage array — Select this option to send an e-mail alert about events that occur
on only a specified storage array.
These results occur, depending on your selection:
– If you select All storage arrays, the Configure Alerts dialog is displayed.
– If you select An individual storage array, the Select Storage Array dialog is displayed. Select the storage array for which you want to receive e-mail alerts and click OK. The Configure Alerts dialog is displayed.
– If you do not know the location of the selected storage array, click Blink to turn on the LEDs of the storage array.
3. In the Configure Alerts dialog, select the Mail Server tab and do the following: a) Type the name of the Simple Mail Transfer Protocol (SMTP) mail server.
The SMTP mail server is the name of the mail server that forwards the e-mail alert to the configured e-mail addresses.
b) In Email sender address, type the e-mail address of the sender. Use a valid e-mail address.
The e-mail address of the sender (the network administrator) is displayed on each e-mail alert sent to the destination.
c) (Optional) To include the contact information of the sender in the e-mail alert, select Include contact information with the alerts, and type the contact information.
4. Select the Email tab to configure the e-mail destinations:
– Adding an e-mail address — In Email address, type the e-mail address, and click Add.
– Replacing an e-mail address — In the Configured email addresses area, select the e-mail address to be replaced, type the replacement e-mail address in Email address, and click Replace.
– Deleting an e-mail address — In the Configured email addresses area, select the e-mail address,
and click Delete.
– Validating an e-mail address — Type the e-mail address in Email address or select the e-mail
address in the Configured email addresses area, and click Test. A test e-mail is sent to the selected e-mail address. A dialog with the results of the test and any error is displayed.
The newly added e-mail address is displayed in the Configured e-mail addresses area.
5. For the selected e-mail address in the Configured e-mail addresses area, in the Information To Send list, select:
Event Only — The e-mail alert contains only the event information. By default, Event Only is
selected.
Event + Profile — The e-mail alert contains the event information and the storage array profile.
Event + Support — The e-mail alert contains the event information and a compressed file that contains complete support information for the storage array that has generated the alert.
6. For the selected e-mail address in the Configured e-mail addresses area, in the Frequency list, select:
Every event — Sends an e-mail alert whenever an event occurs. By default, Every event is
selected.
Every x hours — Sends an e-mail alert after the specified time interval if an event has occurred
during that time interval. You can select this option only if you have selected either Event +
Profile or Event + Support in the Information To Send list.
7. Click OK.
An alert icon is displayed next to each node in the Tree view where an alert is set.
8. If required, verify if the e-mail is sent successfully:
– Provide an SMTP mail server name and an e-mail sender address for the e-mail addresses to
work.
– Ensure that the e-mail addresses that you had previously configured appear in the Configured e-
mail addresses area.
– Use fully qualified e-mail addresses; for example, name@mycompany.com.
– Configure multiple e-mail addresses before you click OK.

Configuring SNMP Alerts

You can configure SNMP alerts that originate from:
The storage array
The event monitor
1. Open the Configure Alerts dialog by performing one of these actions in the EMW:
– On the Devices tab, select a node and then on the menu bar, select Edit → Configure Alerts. Go to step 3.
NOTE: This option enables you to set up alerts for all the storage arrays connected to the host.
– On the Setup tab, select Configure Alerts. Go to step 2.
2. Select one of the following options to specify an alert level:
All storage arrays — Select this option to send an alert notification about events on all storage
arrays.
An individual storage array — Select this option to send an alert notification about events that
occur in only a specified storage array.
These results occur, depending on your selection:
– If you selected All storage arrays, the Configure Alerts dialog is displayed.
– If you selected An individual storage array, the Select Storage Array dialog is displayed. Select the storage array for which you want to receive alert notifications and click OK. The Configure Alerts dialog is displayed.
NOTE: If you do not know the location of the selected storage array, click Blink to turn on the LEDs of the storage array.
3. To configure an SNMP alert originating from the event monitor, see Creating SNMP Alert
Notifications Originating from the Event Monitor.
4. To configure an SNMP alert originating from the storage array, see Creating SNMP Alert Notifications
Originating from the Storage Array.
Creating SNMP Alert Notifications (Originating from the Event Monitor)
The MD storage management software can notify you when the status of a storage array or one of its components changes. This is called an alert notification. You can receive alert notifications by three different methods: email, SNMP traps originating from the storage management station where the event monitor is installed, and SNMP traps originating from the storage array (if available). This topic describes how to create SNMP traps originating from the event monitor.
To configure an SNMP alert notification originating from the event monitor, you specify the community name and the trap destination. The community name is a string that identifies a known set of network management stations and is set by the network administrator. The trap destination is the IP address or the host name of a computer running an SNMP service. At a minimum, the trap destination is the network management station.
Keep these guidelines in mind when configuring an SNMP alert notification:
Host destinations for SNMP traps must be running an SNMP service so that the trap information can
be processed.
To set up alert notifications using SNMP traps, you must copy and compile a management
information base (MIB) file on the designated network management stations.
Global settings are not required for the SNMP trap messages. Trap messages sent to a network
management station or other SNMP servers are standard network traffic, and a system administrator or network administrator handles the security issues.
For more specific notifications, you can configure the alert destinations at the storage management
station, host, and storage array levels.
1. Do one of the following actions based on whether you want to configure alerts for a single storage array or for all storage arrays.
Single storage array – In the Enterprise Management Window (EMW), select the Devices tab. Right-click the storage array for which you want to send alerts, and then select Configure Alerts.
All storage arrays – In the EMW, select the Setup tab. Select Configure Alerts, and then select
the All storage arrays radio button, and then click OK.
The Configure Alerts dialog appears.
2. Select the SNMP - Event Monitor Origin Trap tab. Any SNMP addresses that you had previously configured appear in the Configured SNMP addresses
area.
3. In the Community name text box, type the community name. A community name can have a maximum of 20 characters.
4. In the Trap destination text box, type the trap destination, and click Add. You can enter a host name, IPv4 address, or IPv6 address.
5. (Optional) To verify that an SNMP alert is configured correctly, you can send a test message. In the Configured SNMP addresses area, select the SNMP destination that you want to test, and click Test.
A test message is sent to the SNMP address. A dialog appears with the results of the validation and any errors. The Test button is disabled if you have not selected a community name.
6. Click OK. An alert icon appears next to each node in the Tree view for which an alert is set.
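Before entering values in the procedure above, the community name and trap destination can be pre-validated against the stated constraints (a community name of at most 20 characters; a destination given as a host name, IPv4 address, or IPv6 address). A sketch using only the standard library; host names are resolved to IP addresses for display, as the storage management software does:

```python
# Sketch: pre-validate SNMP alert inputs from the steps above.
import ipaddress
import socket

def valid_community(name: str) -> bool:
    """A community name must be 1-20 characters long."""
    return 0 < len(name) <= 20

def resolve_trap_destination(dest: str) -> str:
    """Return the destination as an IP address string; resolve host names."""
    try:
        return str(ipaddress.ip_address(dest))  # already an IPv4/IPv6 literal
    except ValueError:
        return socket.gethostbyname(dest)       # resolve a host name

print(valid_community("public"))             # True
print(resolve_trap_destination("10.0.0.2"))  # 10.0.0.2
```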
Creating SNMP Alert Notifications (Originating from the Storage Array)
NOTE: The availability of SNMP alerts originating from the storage array varies depending on your RAID controller module model.
The MD storage management software can notify you when the status of a storage array or one of its components changes. This is called an alert notification. You can receive alert notifications by three different methods: email, SNMP traps originating from the storage management station where the event monitor is installed, and SNMP traps originating from the storage array (if available). This topic describes how to create SNMP traps originating from the storage array.
To configure an SNMP alert notification originating from the storage array, you specify the community name and the trap destination. The community name is a string that identifies a known set of network management stations and is set by the network administrator. The trap destination is the IP address or the host name of a computer running an SNMP service. At a minimum, the trap destination is the network management station. Keep these guidelines in mind when configuring SNMP alert notifications:
Host destinations for SNMP traps must be running an SNMP service so that the trap information can
be processed.
Global settings are not required for the SNMP trap messages. Trap messages sent to a network
management station or other SNMP servers are standard network traffic, and a system administrator or network administrator handles the security issues.
1. In the Enterprise Management Window (EMW), select the Devices tab.
2. Right-click the storage array for which you want to send alerts, and then select Configure Alerts.
3. Select the SNMP - Storage Array Origin Trap tab. The Configured communities table is populated with the currently configured community names, and the Configured SNMP addresses table is populated with the currently configured trap destinations.
NOTE: If the SNMP - Storage Array Origin Trap tab does not appear, this feature might not be available on your RAID controller module model.
4. (Optional) If you want to define the SNMP MIB-II variables that are specific to the storage array, perform this step.
You only need to enter this information once for each storage array. An icon appears next to the Configure SNMP MIB-II Variables button if any of the variables are currently set. The storage array returns this information in response to GetRequests.
– The Name field populates the variable sysName.
– The Location field populates the variable sysLocation.
– The Contact field populates the variable sysContact.
a) Click Configure SNMP MIB-II Variables. b) In the Name text box, the Location text box, and the Contact text box, enter the desired
information. You can enter only printable ASCII characters. Each text string can contain a maximum of 255
characters.
c) Click OK.
5. In the Trap Destination text field, enter the trap destination, and click Add. You can enter a host name, an IPv4 address, or an IPv6 address. If you enter a host name, it is
converted into an IP address for display in the Configured SNMP addresses table. A storage array can have a maximum of 10 trap destinations.
NOTE: This field is disabled if no community names are configured.
6. If you have more than one community name configured, in the Community Name column of the Configured SNMP addresses table, select a community name from the drop-down list.
7. Do you want to send a trap when an authentication failure occurs on the storage array?
Yes – Select the check box in the Send Authentication Failure Trap column of the Configured
SNMP addresses table. Selecting the check box sends an authentication failure trap to the trap destination whenever an SNMP request is rejected because of an unrecognized community name.
No – Clear the check box in the Send Authentication Failure Trap column of the Configured
SNMP addresses table.
8. (Optional) To verify that an SNMP alert is configured correctly, you can send a test message. In the Configured SNMP addresses area, select the SNMP destination that you want to test, and click Test. A test message is sent to the SNMP address. A dialog appears with the results of the validation and any errors. The Test button is disabled if you have not selected a community name.
9. Click OK. An alert icon appears next to each node in the Tree view for which an alert is set.
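The MIB-II value limits in step 4 (printable ASCII only, at most 255 characters) can be checked with a short sketch; the sample values are hypothetical:

```python
# Sketch: check MIB-II variable values (sysName, sysLocation, sysContact)
# against the limits in step 4: printable ASCII only, at most 255 characters.
def valid_mib2_value(value: str) -> bool:
    """Return True if the value fits the MIB-II variable constraints."""
    if len(value) > 255:
        return False
    return all(" " <= c <= "~" for c in value)  # printable ASCII range

print(valid_mib2_value("Rack 4, Lab B"))  # True
print(valid_mib2_value("caf\u00e9"))      # False: non-ASCII character
```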

Battery Settings

A smart battery backup unit (BBU) can perform a learn cycle. The smart BBU module includes the battery, a battery gas gauge, and a battery charger. The learn cycle calibrates the smart battery gas gauge so that it provides a measurement of the charge of the battery module. A learn cycle can only start when the battery is fully charged.
The learn cycle completes the following operations:
Discharges the battery to a predetermined threshold
Charges the battery back to full capacity
A learn cycle starts automatically when you install a new battery module. Learn cycles for batteries in both RAID controller modules in a duplex system occur simultaneously.
Learn cycles are scheduled to start automatically at regular intervals, at the same time and on the same day of the week. The interval between cycles is described in weeks.
Use the following guidelines to adjust the interval:
You can use the default interval.
You can run a learn cycle at any time.
You can set the learn cycle earlier than the currently scheduled time.
You cannot set the learn cycle to start more than seven days later than the currently scheduled time.
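The scheduling rules above amount to a simple calculation: the next learn cycle falls a whole number of weeks after the last one, and a manual adjustment may move it earlier but no more than seven days later. A sketch of that logic, with illustrative names and dates:

```python
from datetime import datetime, timedelta

def next_learn_cycle(last_cycle: datetime, interval_weeks: int) -> datetime:
    """Learn cycles recur at a fixed interval expressed in weeks."""
    return last_cycle + timedelta(weeks=interval_weeks)

def is_valid_adjustment(scheduled: datetime, requested: datetime) -> bool:
    """You may move the next cycle earlier, or later by at most 7 days."""
    return requested <= scheduled + timedelta(days=7)

scheduled = next_learn_cycle(datetime(2014, 2, 3, 2, 0), interval_weeks=8)
print(scheduled)                                                       # 2014-03-31 02:00:00
print(is_valid_adjustment(scheduled, scheduled - timedelta(days=30)))  # True
print(is_valid_adjustment(scheduled, scheduled + timedelta(days=8)))   # False
```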

Changing The Battery Settings

To change the battery settings:
1. In the AMW, from the menu bar, select Hardware → Enclosure → Change → Battery Settings. The Battery Settings dialog is displayed.
2. You can change these details about the battery learn cycle:
– Schedule day
– Schedule time
For more information, see the online help topics.

Setting The Storage Array RAID Controller Module Clocks

You can use the Synchronize Clocks option to synchronize the storage array RAID controller module clocks with the storage management station. This option makes sure that the event timestamps written by the RAID controller modules to the Event Log match the event timestamps written to host log files. The RAID controller modules remain available during synchronization.
To synchronize the RAID controller module clocks with the storage management station:
1. In the AMW, on the menu bar, select Hardware → RAID Controller Module → Synchronize Clocks.
2. If a password is set, in the Enter Password dialog, type the current password for the storage array, and click Synchronize.
The RAID controller module clocks are synchronized with the management station.

Using iSCSI

NOTE: The following sections are relevant only to MDxx0i storage arrays that use the iSCSI protocol.

Changing The iSCSI Target Authentication

To change the iSCSI target authentication:
1. In the AMW, select the Setup tab.
2. Select Manage iSCSI Settings.
The Manage iSCSI Settings window is displayed and by default, the Target Authentication tab is selected.
3. To change the authentication settings, select:
– None — If you do not require initiator authentication. If you select None, any initiator can access
the target.
– CHAP — To require initiators to authenticate using the Challenge Handshake Authentication Protocol (CHAP). Define the CHAP secret only if you want to use mutual CHAP authentication. If you select CHAP but no target CHAP secret is defined, an error message is displayed. See Creating CHAP Secrets.
4. To enter the CHAP secret, click CHAP secret. The Enter Target CHAP Secret dialog is displayed.
5. Enter the Target CHAP secret. The Target CHAP secret must be at least 12 characters and up to 57 characters.
6. Enter the exact target CHAP secret in Confirm target CHAP secret.
NOTE: If you do not want to create a CHAP secret, you can generate a random CHAP secret automatically. To generate a random CHAP secret, click Generate Random CHAP Secret.
7. Click OK.
NOTE: You can select both None and CHAP at the same time, for example, when one initiator does not use CHAP and another initiator uses only CHAP.

Entering Mutual Authentication Permissions

Mutual authentication or two-way authentication is a way for a client or a user to verify themselves to a host server, and for the host server to validate itself to the user. This validation is accomplished in such a way that both parties are sure of the other’s identity.
To add mutual authentication permissions:
1. In the AMW, select the Setup tab.
2. Select Manage iSCSI Settings.
The Manage iSCSI Settings window is displayed.
3. Select the Remote Initiator Configuration tab.
4. Select an initiator in the Select an Initiator area.
The initiator details are displayed.
5. Click CHAP Secret to enter the initiator CHAP permissions in the dialog that is displayed.
6. Click OK.
7. Click OK in the Manage iSCSI Settings window.
For more information, see the online help topics.

Creating CHAP Secrets

When you set up an authentication method, you can choose to create a CHAP secret. The CHAP secret is a password that is recognized by both the initiator and the target. If you use mutual authentication to configure the storage array, you must enter in the storage array the same CHAP secret that is defined in the host server's iSCSI initiator, and you must define a CHAP secret on the target (the storage array) that is then configured in every iSCSI initiator that connects to the target storage array. For more information on CHAP, see Understanding CHAP Authentication in the storage array's Deployment Guide.

Initiator CHAP Secret

The initiator CHAP secret is set on the host using the iSCSI initiator configuration program provided with the host operating system. If you are using the mutual authentication method, you must define the initiator CHAP secret when you set up the host. This must be the same CHAP secret that is defined for the target when defining mutual authentication settings.

Target CHAP Secret

If you are using CHAP secrets, you must define the CHAP secret for the target.
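Both secrets feed into the standard CHAP exchange defined in RFC 1994: the authenticating side sends a random challenge, and the other side proves knowledge of the shared secret by returning the MD5 hash of the identifier byte, the secret, and the challenge. A minimal sketch of that computation, with made-up values (this is the generic CHAP algorithm, not MD Storage Manager code):

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5 over the id byte, secret, and challenge."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# The target challenges the initiator; with mutual CHAP, the initiator also
# challenges the target using the target CHAP secret.
secret = b"example-secret-12chars"   # 12-57 characters, as described above
challenge = os.urandom(16)           # random challenge from the authenticator

resp_sent = chap_response(1, secret, challenge)
resp_expected = chap_response(1, secret, challenge)
print(resp_sent == resp_expected)    # True: both sides hold the same secret
```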

Valid Characters For CHAP Secrets

The CHAP secret must be between 12 and 57 characters. The CHAP secret supports characters with ASCII values of 32 to 126 decimal. See the following table for a list of valid ASCII characters.
Space ! " # $ % & ' ( ) * +
, - . / 0 1 2 3 4 5 6 7
8 9 : ; < = > ? @ A B C
D E F G H I J K L M N O
P Q R S T U V W X Y Z [
\ ] ^ _ ` a b c d e f g
h i j k l m n o p q r s
t u v w x y z { | } ~
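The length and character-set rules above reduce to a simple check: 12 to 57 characters, each with an ASCII value from 32 to 126 inclusive. A small sketch (not part of MD Storage Manager):

```python
def valid_chap_secret(secret: str) -> bool:
    """CHAP secrets: 12-57 characters, ASCII values 32-126 inclusive."""
    return 12 <= len(secret) <= 57 and all(32 <= ord(c) <= 126 for c in secret)

print(valid_chap_secret("correct horse battery"))  # True
print(valid_chap_secret("too-short"))              # False (under 12 characters)
print(valid_chap_secret("tab\tis-not-allowed!"))   # False (ASCII 9 is outside 32-126)
```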

Changing The iSCSI Target Identification

You cannot change the iSCSI target name, but you can associate an alias with the target for simpler identification. Aliases are useful because the iSCSI target names are not intuitive. Provide an iSCSI target alias that is meaningful and easy to remember.
To change the iSCSI target identification:
1. In the AMW, select the Setup tab.
2. Select Manage iSCSI Settings. The Manage iSCSI Settings window is displayed.
3. Select the Target Configuration tab.
4. Type the alias in iSCSI alias.
5. Click OK.
NOTE: Aliases can contain up to 30 characters. Aliases can include letters, numbers, and the special characters underscore (_), minus (-), and pound sign (#). No other special characters are permitted.
NOTE: Open-iSCSI (which is used by Red Hat Enterprise Linux 5 and SUSE Linux Enterprise Server 10 with SP 1) does not support using a target alias.
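The alias rules in the note above (up to 30 characters; letters, numbers, underscore, minus, and pound sign only) can be expressed as a regular expression. A hedged sketch, purely for illustration:

```python
import re

# Alias rules from the note above: at most 30 characters drawn from letters,
# digits, underscore (_), minus (-), and pound sign (#). Illustrative only.
ALIAS_RE = re.compile(r"^[A-Za-z0-9_#-]{1,30}$")

def valid_iscsi_alias(alias: str) -> bool:
    return bool(ALIAS_RE.match(alias))

print(valid_iscsi_alias("md38xx-array_#1"))  # True
print(valid_iscsi_alias("bad alias!"))       # False (space and ! not allowed)
print(valid_iscsi_alias("x" * 31))           # False (over 30 characters)
```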

Changing The iSCSI Target Discovery Settings

To change the iSCSI target discovery settings:
1. In the AMW, select the Setup tab.
2. Select Manage iSCSI Settings. The Manage iSCSI Settings window is displayed.
3. Select the Target Discovery tab.
4. Select Use iSNS to activate iSCSI target discovery.
5. To activate iSCSI target discovery, you can use one of the following methods:
– Select Obtain configuration automatically from DHCP server to automatically activate target
discovery for IPv4 settings using the Dynamic Host Configuration Protocol (DHCP). You can also refresh the DHCP.
– Select Specify Configuration, and type the IPv4 address to activate the target discovery.
– Type the iSNS server IP address in the IPv6 settings area to activate the target discovery.
NOTE: After you manually enter an IP address, you can also click Advanced to configure the customized TCP listening ports.
NOTE: If you do not want to allow discovery sessions that are not named, select Disallow un-named discovery sessions.
NOTE: Un-named discovery sessions are discovery sessions that are permitted to run without a target name. With an un-named discovery session, the target name or the target portal group tag is not available to enforce the iSCSI session identifier (ISID) rule.
6. Click OK.

Configuring The iSCSI Host Ports

The default method for configuring the iSCSI host ports, for IPv4 addressing, is DHCP. Always use this method unless your network does not have a DHCP server. It is advisable to assign static DHCP addresses to the iSCSI ports to ensure continuous connectivity. For IPv6 addressing, the default method is Stateless auto-configuration. Always use this method for IPv6.
To configure the iSCSI host ports:
1. In the AMW, select the Setup tab.
2. Select Configure iSCSI Host Ports. The Configure iSCSI Ports window is displayed.
3. In the iSCSI port list, select an appropriate RAID controller module and an iSCSI host port.
The connection status between the storage array and the host is displayed in the Status area when you select an iSCSI host port. The connection status is either connected or disconnected. Additionally, the media access control address (MAC) of the selected iSCSI host port is displayed in the MAC address area.
NOTE: For each iSCSI host port, you can use either IPv4 settings or IPv6 settings or both.
4. In the Configured Ethernet port speed list, select a network speed for the iSCSI host port. The network speed values in the Configured Ethernet port speed list depend on the maximum
speed that the network can support. Only the network speeds that are supported are displayed. All of the host ports on a single controller operate at the same speed. An error is displayed if different
speeds are selected for the host ports on the same controller.
5. To use the IPv4 settings for the iSCSI host port, select Enable IPv4 and select the IPv4 Settings tab.
6. To use the IPv6 settings for the iSCSI host port, select Enable IPv6 and select the IPv6 Settings tab.
7. To configure the IPv4 and IPv6 settings, select:
– Obtain configuration automatically from DHCP server to automatically configure the settings. This option is selected by default.
– Specify configuration to manually configure the settings.
NOTE: If you select the automatic configuration method, the configuration is obtained automatically using the DHCP for IPv4 settings. Similarly for IPv6 settings, the configuration is obtained automatically based on the MAC address and the IPv6 routers present on the subnetwork.
8. Click Advanced IPv4 Settings and Advanced IPv6 Settings to configure the Virtual Local Area Network (VLAN) support and Ethernet priority.
9. Click Advanced Port Settings to configure the TCP listening port settings and Jumbo frame settings.
10. To enable the Internet Control Message Protocol (ICMP), select Enable ICMP PING responses. The ICMP setting applies to all the iSCSI host ports in the storage array configured for IPv4
addressing.
NOTE: The ICMP is one of the core protocols of the Internet Protocol suite. The ICMP messages determine whether a host is reachable and how long it takes to get packets to and from that host.
11. Click OK.

Advanced iSCSI Host Port Settings

NOTE: Configuring the advanced iSCSI host ports settings is optional.
Use the advanced settings for the individual iSCSI host ports to specify the TCP frame size, the virtual LAN, and the network priority.
Virtual LAN (VLAN) — A method of creating independent logical networks within a physical network. Several VLANs can exist within a network. VLAN 1 is the default VLAN.
NOTE: For more information on creating and configuring a VLAN with MD Support Manager, in the AMW, click the Support tab, then click View Online Help.
Ethernet Priority — The network priority can be set from lowest to highest. Although network managers must determine these mappings, the IEEE has made broad recommendations:
0 — lowest priority (default).
1–4 — ranges from "loss eligible" traffic to controlled-load applications, such as streaming multimedia and business-critical traffic.
5–6 — delay-sensitive applications such as interactive video and voice.
7 — highest priority, reserved for network-critical traffic.
TCP Listening Port — The default Transmission Control Protocol (TCP) listening port is 3260.
Jumbo Frames — The maximum transmission unit (MTU). It can be set between 1501 and 9000 bytes per frame. If Jumbo Frames are disabled, the default MTU is 1500 bytes per frame.
NOTE: Changing any of these settings resets the iSCSI port, and I/O to any host accessing that port is interrupted. I/O resumes automatically after the port restarts and the host logs in again.
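The ranges in the settings above (Ethernet priority 0–7, a TCP listening port defaulting to 3260, and an MTU of 1501–9000 bytes when jumbo frames are enabled) can be summarized in one validation sketch. The function and the VLAN ID range of 1–4094 (the standard 802.1Q range, not stated in this guide) are assumptions for illustration:

```python
def valid_port_settings(vlan_id: int, priority: int, tcp_port: int,
                        jumbo: bool, mtu: int) -> bool:
    """Check advanced iSCSI host port settings against the ranges above."""
    if not 1 <= vlan_id <= 4094:      # assumed standard 802.1Q VLAN ID range
        return False
    if not 0 <= priority <= 7:        # IEEE priority levels, 0 (default) to 7
        return False
    if not 1 <= tcp_port <= 65535:    # default listening port is 3260
        return False
    if jumbo:
        return 1501 <= mtu <= 9000    # jumbo frame MTU range per the table
    return mtu == 1500                # default MTU with jumbo frames disabled

print(valid_port_settings(1, 0, 3260, jumbo=False, mtu=1500))  # True
print(valid_port_settings(1, 0, 3260, jumbo=True, mtu=9000))   # True
print(valid_port_settings(1, 8, 3260, jumbo=False, mtu=1500))  # False
```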

Viewing Or Ending An iSCSI Session

You may want to end an iSCSI session for the following reasons:
Unauthorized access — If an initiator that you consider unauthorized is logged on, you can end the iSCSI session. Ending the iSCSI session forces the initiator to log off the storage array. The initiator can log on again if the None authentication method is available.
System downtime — If you need to turn off a storage array and initiators are logged on, you can end the iSCSI session to log off the initiators from the storage array.
To view or end an iSCSI session:
1. In the AMW menu bar, select Storage Array → iSCSI → View/End Sessions.
2. Select the iSCSI session that you want to view in the Current sessions area.
The details are displayed in the Details area.
3. To save the entire iSCSI sessions topology as a text file, click Save As.
4. To end the session:
a) Select the session that you want to end, and then click End Session.
The End Session confirmation window is displayed.
b) Click Yes to confirm that you want to end the iSCSI session.
NOTE: If you end a session, any corresponding connections terminate the link between the host and the storage array, and the data on the storage array is no longer available.
NOTE: When a session is manually terminated using the MD Storage Manager, the iSCSI initiator software automatically attempts to re-establish the terminated connection to the storage array. This may cause an error message.

Viewing iSCSI Statistics And Setting Baseline Statistics

To view iSCSI statistics and set baseline statistics:
1. In the AMW menu bar, select Monitor → Health → iSCSI Statistics.
The View iSCSI Statistics window is displayed.
2. Select the iSCSI statistic type you want to view in the iSCSI Statistics Type area. You can select:
– Ethernet MAC statistics
– Ethernet TCP/IP statistics
– Target (protocol) statistics
– Local initiator (protocol) statistics
3. In the Options area, select:
– Raw statistics — To view the raw statistics. Raw statistics are all the statistics that have been
gathered since the RAID controller modules were started.
– Baseline statistics — To view the baseline statistics. Baseline statistics are point-in-time statistics
that have been gathered since you set the baseline time.
After you select the statistics type and either raw or baseline statistics, the details of the statistics appear in the statistics tables.
NOTE: You can click Save As to save the statistics that you are viewing in a text file.
4. To set the baseline for the statistics:
a) Select Baseline statistics.
b) Click Set Baseline.
c) Confirm that you want to set the baseline statistics in the dialog that is displayed.
The baseline time shows the latest time you set the baseline. The sampling interval is the difference in time from when you set the baseline until you launch the dialog or click Refresh.
NOTE: You must first set a baseline before you can compare baseline statistics.
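Baseline statistics are simply the raw counters offset by their values at the moment the baseline was set. A minimal model of that relationship, with made-up counter names:

```python
# Minimal model of raw vs. baseline statistics: the baseline snapshot is
# subtracted from the current raw counters. Counter names are illustrative.

def set_baseline(raw: dict) -> dict:
    """Record the raw counters at baseline time."""
    return dict(raw)

def baseline_stats(raw: dict, baseline: dict) -> dict:
    """Statistics gathered since the baseline was set."""
    return {name: raw[name] - baseline.get(name, 0) for name in raw}

raw = {"pdus_sent": 1000, "pdus_received": 950}
snap = set_baseline(raw)
raw = {"pdus_sent": 1600, "pdus_received": 1400}   # counters keep growing
print(baseline_stats(raw, snap))  # {'pdus_sent': 600, 'pdus_received': 450}
```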

Edit, Remove, Or Rename Host Topology

If you give access to the incorrect host or the incorrect host group, you can remove or edit the host topology. Follow the appropriate procedures given in the following table to correct the host topology.
Table 2. Host Topology Actions

Move a host or a host group:
1. Click the Host Mappings tab.
2. Select the host or host group that you want to move, and then select Host Mappings → Move.
3. Select a host group to move the host to, and click OK.

Manually delete a host or a host group:
1. Click the Host Mappings tab.
2. Select the item that you want to remove, and then select Host Mappings → Remove.

Rename a host or a host group:
1. Click the Host Mappings tab.
2. Select the item that you want to rename, and then select Host Mappings → Rename.
3. Type a new label for the host or host group, and click OK.
For more information about Host, Host Groups, and Host Topology, see About Your Host.

Event Monitor

An event monitor is provided with Dell PowerVault Modular Disk Storage Manager. The event monitor runs continuously in the background and monitors activity on the managed storage arrays. If the event monitor detects any critical problems, it can notify a host or remote system using e-mail, Simple Network Management Protocol (SNMP) trap messages, or both.
For the most timely and continuous notification of events, enable the event monitor on a management station that runs 24 hours a day. Enabling the event monitor on multiple systems or having a combination of an event monitor and MD Storage Manager active can result in duplicate events, but this does not indicate multiple failures on the array.
The Event Monitor is a background task that runs independently of the Enterprise Management Window (EMW).
To use the Event Monitor, perform one of these actions:
Set up alert destinations for the managed device that you want to monitor. A possible alert destination would be the Dell Management Console.
Replicate the alert settings from a particular managed device by copying the emwdata.bin file to every storage management station from which you want to receive alerts.
Each managed device shows a check mark that indicates that alerts have been set.

Enabling Or Disabling The Event Monitor

You can enable or disable the event monitor at any time. Disable the event monitor if you do not want the system to send alert notifications. If you are running the
event monitor on multiple systems, disabling the event monitor on all but one system prevents the sending of duplicate messages.
NOTE: It is recommended that you configure the event monitor to start by default on a management station that runs 24 hours a day.

Windows

To enable or disable the event monitor:
1. Open the Run dialog in Windows by pressing <Windows logo key><R>.
The Run dialog box is displayed.
2. In Open, type services.msc.
The Services window is displayed.
3. From the list of services, select Modular Disk Storage Manager Event Monitor.
4. Select Action → Properties.
5. To enable the event monitor, in the Service Status area, click Start.
6. To disable the event monitor, in the Service Status area, click Stop.

Linux

To enable the event monitor, at the command prompt, type SMmonitor start and press <Enter>. When the program startup begins, the following message is displayed: SMmonitor started.
To disable the event monitor, start a terminal emulation application (console or xterm) and, at the command prompt, type SMmonitor stop and press <Enter>. When the program shutdown is complete, the following message is displayed: Stopping Monitor process.

About Your Host

Configuring Host Access

Dell PowerVault Modular Disk Storage Manager (MD Storage Manager) consists of multiple modules. One of these modules is the Host Context Agent, which is installed as part of the MD Storage Manager installation and runs continuously in the background.
If the Host Context Agent is running on a host, that host and the host ports connected from it to the storage array are automatically detected by the MD Storage Manager. The host ports are displayed in the Host Mappings tab in the Array Management Window (AMW). The host must be manually added under the Default Host Group in the Host Mappings tab.
NOTE: On MD3800i, MD3820i, and MD3860i storage arrays that use the iSCSI protocol, the Host Context Agent is not dynamic and must be restarted after establishing iSCSI sessions to automatically detect them.
Use the Define Host Wizard to define the hosts that access the virtual disks in the storage array. Defining a host is one of the steps required to let the storage array know which hosts are attached to it and to allow access to the virtual disks. For more information on defining the hosts, see Defining A Host.
To enable the host to write to the storage array, you must map the host to the virtual disk. This mapping grants a host or a host group access to a particular virtual disk or to a number of virtual disks in a storage array. You can define the mappings on the Host Mappings tab in the AMW.
On the Summary tab in the AMW, the Host Mappings area indicates how many hosts are configured to access the storage array. Click Configured Hosts in the Host Mappings area to see the names of the hosts.
A collection of elements, such as default host groups, hosts, and host ports, are displayed as nodes in the object tree on the left pane of the Host Mappings tab.
The host topology is reconfigurable. You can perform the following tasks:
Create a host and assign an alias or user label.
Add or associate a new host port identifier to a particular host.
Change the host port identifier alias or user label.
Move or associate a host port identifier to a different host.
Replace a host port identifier with a new host port identifier.
Manually activate an inactive host port so that the port can gain access to host specific or host group specific LUN mappings.
Set the host port type to another type.
Move a host from one host group to another host group.
Remove a host group, a host, or a host port identifier.
Rename a host group, or a host.

Using The Host Mappings Tab

In the Host Mappings tab, you can:
Define hosts and hosts groups
Add mappings to the selected host groups
For more information, see the online help topics.

Defining A Host

You can use the Define Host Wizard in the AMW to define a host for a storage array. Either a known unassociated host port identifier or a new host port identifier can be added.
A user label must be specified before the host port identifier may be added (the Add button is disabled until one is entered).
To define a host:
1. In the AMW, select the Host Mappings tab.
2. Perform one of the actions:
– From the menu bar, select Host Mappings → Define → Host.
– Select the Setup tab, and click Manually Define Hosts.
– Select the Host Mappings tab. Right-click the root node (storage array name), Default Group node, or Host Group node in the object tree to which you want to add the host, and select Define → Host from the pop-up menu.
The Specify Host Name window is displayed.
3. In Host name, enter an alphanumeric name of up to 30 characters.
4. Select the relevant option in Do you plan to use the storage partitions in this storage array?, and click Next.
The Specify Host Port Identifiers window is displayed.
5. Select the relevant option to add a host port identifier to the host. You can select:
– Add by selecting a known unassociated host port identifier — In Known unassociated host port identifier, select the relevant host port identifier.
– Add by creating a new host port identifier — In New host port identifier, enter a 16-character name and an Alias of up to 30 characters for the host port identifier, and click Add.
NOTE: The host port identifier name must contain only hexadecimal characters, that is, the digits 0 through 9 and the letters A through F.
6. Click Add.
The host port identifier and the alias for the host port identifier is added to the host port identifier table.
7. Click Next.
The Specify Host Type window is displayed.
8. In Host type (operating system), select the relevant operating system for the host.
The Host Group Question window is displayed.
9. In the Host Group Question window, you can select:
– Yes — This host shares access to the same virtual disks with other hosts.
– No — This host does NOT share access to the same virtual disks with other hosts.
10. Click Next.
11. If you select:
– Yes — The Specify Host Group window is displayed.
– No — Go to step 13.
12. Enter the name of the host group or select an existing host group and click Next.
The Preview window is displayed.
13. Click Finish.
The Creation Successful window is displayed confirming that the new host is created.
14. To create another host, click Yes on the Creation Successful window.
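The 16-character host port identifier entered in step 5 is a hexadecimal string (a Fibre Channel WWPN, for example, is 16 hex digits, often written with colon separators). A quick sketch of the check this implies, assuming a WWPN-style identifier:

```python
def valid_host_port_identifier(name: str) -> bool:
    """16 hexadecimal characters (digits 0-9 and letters A-F)."""
    cleaned = name.replace(":", "")   # WWPNs are often written with colons
    return len(cleaned) == 16 and all(
        c in "0123456789abcdefABCDEF" for c in cleaned
    )

print(valid_host_port_identifier("21000024FF123456"))         # True
print(valid_host_port_identifier("21:00:00:24:ff:12:34:56"))  # True
print(valid_host_port_identifier("not-a-wwpn"))               # False
```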

Removing Host Access

To remove host access:
1. In the AMW, select the Host Mappings tab.
2. Select the host node from the object tree on the left pane.
3. Perform one of these actions:
– From the menu bar, select Host Mappings → Host → Remove.
– Right-click the host node, and select Remove from the pop-up menu.
The Remove confirmation dialog is displayed.
4. Type yes.
5. Click OK.

Managing Host Groups

A host group is a logical entity of two or more hosts that share access to specific virtual disks on the storage array. You create host groups using the MD Storage Manager.
All hosts in a host group must have the same host type (operating system). In addition, all hosts in the host group must have special software, such as clustering software, to manage virtual disk sharing and accessibility.
If a host is part of a cluster, every host in the cluster must be connected to the storage array, and every host in the cluster must be added to the host group.

Creating A Host Group

To create a host group:
1. In the AMW, select the Host Mappings tab.
2. In the object tree, select the storage array or the Default Group.
3. Perform one of the following actions:
– From the menu bar, select Host Mappings → Define → Host Group.
– Right-click the storage array or the Default Group, and select Define → Host Group from the pop-up menu.
The Define Host Group window is displayed.
4. Type the name of the new host group in Enter new host group name.
5. Select the appropriate hosts in the Select hosts to add area.
6. Click Add.
The new host is added in the Hosts in group area.
NOTE: To remove hosts, select the hosts in the Hosts in group area, and click Remove.
7. Click OK.

Adding A Host To A Host Group

You can add a host to an existing host group or a new host group using the Define Host Wizard. For more information, see Defining A Host.
You can also move a host to a different host group. For more information, see Moving A Host To A
Different Host Group.

Removing A Host From A Host Group

You can remove a host from the object tree on the Host Mappings tab of the AMW. For more information, see Removing A Host Group.

Moving A Host To A Different Host Group

To move a host to a different host group:
1. In the AMW, select the Host Mappings tab, select the host node in the object tree.
2. Perform one of these actions:
– From the menu bar, select Host Mappings → Host → Move.
– Right-click the host node, and select Move from the pop-up menu.
The Move Host dialog is displayed.
3. In the Select host group list, select the host group to which you want to move the host.
You can also move the host out of the host group and add it under the default group. The Move Host confirmation dialog is displayed.
4. Click Yes.
The host is moved to the selected host group with the following mappings:
– The host retains the specific virtual disk mappings assigned to it.
– The host inherits the virtual disk mappings assigned to the host group to which it is moved.
– The host loses the virtual disk mappings assigned to the host group from which it was moved.
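The three mapping rules above can be modeled with sets: the host keeps its own specific mappings, drops those inherited from the old group, and picks up those of the new group. An illustrative sketch (the function and virtual disk names are made up):

```python
def mappings_after_move(current: set, old_group: set, new_group: set) -> set:
    """Effective virtual disk mappings after moving a host between groups.

    The host loses mappings inherited from the old host group, keeps its
    host-specific mappings, and inherits the new host group's mappings.
    """
    return (current - old_group) | new_group

current = {"vd_boot", "vd_shared_old"}   # host-specific + inherited mappings
after = mappings_after_move(current, {"vd_shared_old"}, {"vd_shared_new"})
print(sorted(after))  # ['vd_boot', 'vd_shared_new']
```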

Removing A Host Group

To remove a host group:
1. In the AMW, select the Host Mappings tab, select the host group node in the object tree.
2. Perform one of these actions:
– From the menu bar, select Host Mappings → Host Group → Remove.
– Right-click the host group node, and select Remove from the pop-up menu.
The Remove dialog is displayed.
3. Click Yes.
The selected host group is removed.

Host Topology

Host topology is the organization of hosts, host groups, and host interfaces configured for a storage array. You can view the host topology in the Host Mappings tab of the AMW. For more information, see
Using The Host Mappings Tab.
The following tasks change the host topology:
Moving a host or a host connection
Renaming a host group, a host, or a host connection
Adding a host connection
Replacing a host connection
Changing a host type
The MD Storage Manager automatically detects these changes for any host running the host agent software.

Starting Or Stopping The Host Context Agent

The Host Context Agent discovers the host topology. The Host Context Agent starts and stops with the host. The topology discovered by the Host Context Agent can be viewed by clicking Configure Host Access (Automatic) in the Configure tab in the MD Storage Manager.
You must stop and restart the Host Context Agent to see changes to the host topology if:
A new storage array is attached to the host server.
A host is added while turning on power to the RAID controller modules.
To start or stop the Host Context Agent on Linux, enter the following commands at the prompt:
SMagent start
SMagent stop
You must stop and then restart SMagent after:
Moving a controller offline or replacing a controller.
Removing host-to-array connections from, or attaching host-to-array connections to, a Linux host server.
To start or stop the Host Context Agent on Windows:
1. Do one of the following:
– Click Start → Settings → Control Panel → Administrative Tools → Services.
– Click Start → Administrative Tools → Services.
2. From the list of services, select Modular Disk Storage Manager Agent.
3. If the Host Context Agent is running, click ActionStop, then wait approximately 5 seconds.
4. Click ActionStart.

I/O Data Path Protection

You can have multiple host-to-array connections for a host. Ensure that you select all the connections to the array when configuring host access to the storage array.
NOTE: See the Deployment Guide for more information on cabling configurations.
NOTE: For more information on configuring hosts, see About Your Host.
If a component such as a RAID controller module or a cable fails, or an error occurs on the data path to the preferred RAID controller module, virtual disk ownership is moved to the alternate, non-preferred RAID controller module for processing. This failure or error is called failover.
Drivers for multi-path frameworks such as Microsoft Multi-Path IO (MPIO) and Linux Device Mapper (DM) are installed on host systems that access the storage array and provide I/O path failover.
For more information on Linux DM, see Device Mapper Multipath for Linux. For more information on MPIO, see microsoft.com.
NOTE: You must have the multi-path driver installed on the hosts at all times, even in a configuration where there is only one path to the storage system, such as a single port cluster configuration.
During a failover, the virtual disk transfer is logged as a critical event, and an alert notification is sent automatically if you have configured alert destinations for the storage array.

Managing Host Port Identifiers

You can do the following to manage the host port identifiers that are added to the storage array:
Add — Add or associate a new host port identifier to a particular host.
Edit — Change the host port identifier alias or user label. You can move (associate) the host port identifier to a new host.
Replace — Replace a particular host port identifier with another host port identifier.
Remove — Remove the association between a particular host port identifier and the associated host.
To manage a host port identifier:
1. In the AMW, select the Host Mappings tab.
2. Perform one of these actions:
– Right-click the host in the object tree, and select Manage Host Port Identifiers in the pop-up menu.
– From the menu bar, select Host Mappings → Manage Host Port Identifiers.
The Manage Host Port Identifiers dialog is displayed.
3. To manage the host port identifiers in the Show host port identifiers associated with list:
– For a specific host, select the host from the list of hosts that are associated with the storage array. – For all hosts, select All hosts from the list of hosts that are associated with the storage array.
4. If you are adding a new host port identifier, go to step 5. If you are managing an existing host port
identifier, go to step 10.
5. Click Add.
The Add Host Port Identifier dialog is displayed.
6. Select the appropriate host interface type.
7. Select the method to add a host port identifier to the host. You can select:
– Add by selecting a known unassociated host port identifier — Select the appropriate host port identifier from the existing list of Known unassociated host port identifiers.
– Add by creating a new host port identifier — In New host port identifier, enter the name of the new host port identifier.
8. In Alias, enter an alphanumeric name of up to 30 characters.
9. In Associated with host, select the appropriate host.
The newly added host port identifier is added to the Host port identifier information area.
10. Select the host port identifier that you want to manage from the list of host port identifiers in the
Host port identifier information area.
11. Perform one of these actions for the selected host port identifier:
– To edit the host port identifier — Select the appropriate host port identifier and click Edit. The
Edit Host Port Identifier dialog is displayed. Update User label and Associated with host and
click Save.
– To replace the host port identifier — Select the appropriate host port identifier and click Replace.
The Replace Host Port Identifier dialog is displayed. Replace the current host port identifier with a known unassociated host port identifier or create a new host port identifier, update User label and click Replace.
– To remove the host port identifier — Select the appropriate host port identifier and click Edit. The
Remove Host Port Identifier dialog is displayed. Type yes and click OK.
For more information, see the online help topics.

7 Disk Groups, Standard Virtual Disks, And Thin Virtual Disks

Creating Disk Groups And Virtual Disks

Disk groups are created in the unconfigured capacity of a storage array, and virtual disks are created in the free capacity of a disk group or disk pool. The maximum number of physical disks supported in a disk group is 120 (180 with the premium feature activated). The hosts attached to the storage array read and write data to the virtual disks.
NOTE: Before you can create virtual disks, you must first organize the physical disks into disk groups and configure host access. Then you can create virtual disks within a disk group.
To create a virtual disk, use one of the following methods:
Create a new disk group from unconfigured capacity. First define the RAID level and free capacity (available storage space) for the disk group, and then define the parameters for the first virtual disk in the new disk group.
Create a new virtual disk in the free capacity of an existing disk group or disk pool. You only need to specify the parameters for the new virtual disk.
A disk group has a set amount of free capacity that is configured when the disk group is created. You can use that free capacity to subdivide the disk group into one or more virtual disks.
You can create disk groups and virtual disks using:
Automatic configuration — Provides the fastest method, but with limited configuration options
Manual configuration — Provides more configuration options
When creating a virtual disk, consider the uses for that virtual disk, and select an appropriate capacity for those uses. For example, if a disk group has a virtual disk that stores multimedia files (which tend to be large) and another virtual disk that stores text files (which tend to be small), the multimedia file virtual disk requires more capacity than the text file virtual disk.
A disk group should be organized according to its related tasks and subtasks. For example, if you create a disk group for the Accounting Department, you can create virtual disks that match the different types of accounting performed in the department: Accounts Receivable (AR), Accounts Payable (AP), internal billing, and so forth. In this scenario, the AR and AP virtual disks probably need more capacity than the internal billing virtual disk.
NOTE: In Linux, the host must be rebooted after deleting virtual disks to reset the /dev entries.
NOTE: Before you can use a virtual disk, you must register the disk with the host systems. See Host-
To-Virtual Disk Mapping.

Creating Disk Groups

NOTE: If you have not created disk groups for a storage array, the Disk Pool Automatic Configuration Wizard is displayed when you open the AMW. For more information on creating
storage space from disk pools, see Disk Pools.
NOTE: Thin-provisioned virtual disks can be created from disk pools. If you are not using disk pools, only standard virtual disks can be created. For more information, see Thin Virtual Disks.
You can create disk groups either using Automatic configuration or Manual configuration. To create disk groups:
1. To start the Create Disk Group Wizard, perform one of these actions:
– To create a disk group from unconfigured capacity in the storage array, in the Storage & Copy
Services tab, select a storage array and right-click the Total Unconfigured Capacity node, and select Create Disk Group from the pop-up menu.
– To create a disk group from unassigned physical disks in the storage array — On the Storage & Copy Services tab, select one or more unassigned physical disks of the same physical disk type, and from the menu bar, select Storage → Disk Group → Create.
– Select the Hardware tab and right-click the unassigned physical disks, and select Create Disk
Group from the pop-up menu.
– To create a secure disk group — On the Hardware tab, select one or more unassigned security capable physical disks of the same physical disk type, and from the menu bar, select Storage → Disk Group → Create.
The Introduction (Create Disk Group) window is displayed.
2. Click Next.
The Disk Group Name & Physical Disk Selection window is displayed.
3. Type a name of up to 30 characters for the disk group in Disk group name.
4. Select the appropriate Physical Disk selection choices and click Next.
You can make the following choices:
– Automatic
– Manual
5. For automatic configuration, the RAID Level and Capacity window is displayed:
a) Select the appropriate RAID level in Select RAID level. You can select RAID levels 0, 1/10, 5, and 6.
Depending on your RAID level selection, the physical disks available for the selected RAID level are displayed in Select capacity table.
b) In the Select Capacity table, select the relevant disk group capacity, and click Finish.
6. For manual configuration, the Manual Physical Disk Selection window is displayed:
a) Select the appropriate RAID level in Select RAID level. You can select RAID levels 0, 1/10, 5, and 6.
Depending on your RAID level selection, the physical disks available for the selected RAID level are displayed in Unselected physical disks table.
b) In the Unselected physical disks table, select the appropriate physical disks and click Add.
NOTE: You can select multiple physical disks at the same time by holding <Ctrl> or <Shift> and selecting additional physical disks.
c) To view the capacity of the new disk group, click Calculate Capacity.
d) Click Finish.
A message confirms that the disk group was created successfully and that you must create at least one virtual disk before you can use the capacity of the new disk group. For more information on creating virtual disks, see Creating Virtual Disks.

Locating A Disk Group

You can physically locate and identify all of the physical disks that comprise a selected disk group. An LED blinks on each physical disk in the disk group.
To locate a disk group:
1. In the AMW, select the Storage & Copy Services tab.
2. Right-click on a disk group and select Blink from the pop-up menu.
The LEDs for the selected disk group blink.
3. After locating the disk group, click OK.
The LEDs stop blinking.
4. If the LEDs for the disk group do not stop blinking, from the toolbar in the AMW, select Hardware → Blink → Stop All Indications.
If the LEDs successfully stop blinking, a confirmation message is displayed.
5. Click OK.

Creating Standard Virtual Disks

Keep these important guidelines in mind when you create a standard virtual disk:
Many hosts can have 256 logical unit numbers (LUNs) mapped per storage partition, but the number varies per operating system.
After you create one or more virtual disks and assign a mapping, you must register the virtual disk with the operating system. In addition, you must make sure that the host recognizes the mapping between the physical storage array name and the virtual disk name. Depending on the operating system, run the host-based utilities, hot_add and SMdevices.
If the storage array contains physical disks with different media types or different interface types, multiple Unconfigured Capacity nodes may be displayed in the Total Unconfigured Capacity pane of the Storage & Copy Services tab. Each physical disk type has an associated Unconfigured Capacity node if unassigned physical disks are available in the expansion enclosure.
You cannot create a disk group and subsequent virtual disk from different physical disk technology types. Each physical disk that comprises the disk group must be of the same physical disk type.
NOTE: Ensure that you create disk groups before creating virtual disks. If you chose an
Unconfigured Capacity node or unassigned physical disks to create a virtual disk, the Disk Group Required dialog is displayed. Click Yes and create a disk group by using the Create Disk Group Wizard. The Create Virtual Disk Wizard is displayed after you create the disk group.
To create standard virtual disks:
1. In the AMW, select the Storage & Copy Services tab.
2. Select a Free Capacity node from an existing disk group and do one of the following:
– From the menu bar, select Storage → Virtual Disk → Create → Virtual Disk.
– Right-click the Free Capacity node and select Create Virtual Disk from the pop-up menu.
The Create Virtual Disk: Specify Parameters window is displayed.
3. Select the appropriate capacity unit in Units and enter the capacity of the virtual disk in New virtual disk capacity.
4. In Virtual disk name, enter a virtual disk name of up to 30 characters.
5. In the Map to host list, select an appropriate host or select Map later.
6. In the Data Service (DS) Attributes area, you can select:
– Enable data assurance (DA) protection on the new virtual disk
– Use SSD cache
7. In the Virtual disk I/O characteristics type list, select the appropriate Virtual Disk I/O characteristics
type. You can select:
– File system (typical)
– Database
– Multimedia
– Custom
NOTE: If you select Custom, you must select an appropriate segment size.
8. Select Enable dynamic cache read prefetch.
For more information on virtual disk cache settings, see Changing The Virtual Disk Cache Settings.
NOTE: Enable dynamic cache read prefetch must be disabled if the virtual disk is used for database applications or applications with a large percentage of random reads.
9. From the Segment size list, select an appropriate segment size.
10. Click Finish.
The virtual disks are created.
NOTE: A message prompts you to confirm whether you want to create another virtual disk. Click Yes to create another virtual disk, or click No to exit the wizard.
NOTE: Thin virtual disks are supported on disk pools. For more information, see Thin Virtual
Disks.

Changing The Virtual Disk Modification Priority

You can specify the modification priority setting for a single virtual disk or multiple virtual disks on a storage array.
Guidelines to change the modification priority of a virtual disk:
If more than one virtual disk is selected, the modification priority defaults to the lowest priority. The current priority is shown only if a single virtual disk is selected.
Changing the modification priority by using this option modifies the priority for the selected virtual disks.
To change the virtual disk modification priority:
1. In the AMW, select the Storage & Copy Services tab.
2. Select a virtual disk.
3. In the menu bar, select Storage → Virtual Disk → Change → Modification Priority.
The Change Modification Priority window is displayed.
4. Select one or more virtual disks. Move the Select modification priority slider bar to the desired
priority.
NOTE: To select nonadjacent virtual disks, press <Ctrl> and click the appropriate virtual disks. To select adjacent virtual disks, press <Shift> and click the appropriate virtual disks. To select all of the available virtual disks, click Select All.
5. Click OK.
A message prompts you to confirm the change in the virtual disk modification priority.
6. Click Yes.
7. Click OK.

Changing The Virtual Disk Cache Settings

You can specify the cache memory settings for a single virtual disk or for multiple virtual disks in a storage array.
Guidelines to change cache settings for a virtual disk:
After opening the Change Cache Settings dialog, the system may display a window indicating that the RAID controller module has temporarily suspended caching operations. This action may occur when a new battery is charging, when a RAID controller module has been removed, or if a mismatch in cache sizes has been detected by the RAID controller module. After the condition has cleared, the cache properties selected in the dialog become active. If the selected cache properties do not become active, contact your Technical Support representative.
If you select more than one virtual disk, the cache settings default to no settings selected. The current cache settings appear only if you select a single virtual disk.
If you change the cache settings by using this option, the cache settings of all of the virtual disks that you selected are modified.
To change the virtual disk cache settings:
1. In the AMW, select the Storage & Copy Services tab and select a virtual disk.
2. In the menu bar, select Storage → Virtual Disk → Change → Cache Settings.
The Change Cache Settings window is displayed.
3. Select one or more virtual disks.
To select nonadjacent virtual disks, press <Ctrl> and click. To select adjacent virtual disks, press <Shift> and click. To select all of the available virtual disks, click Select All.
4. In the Cache Properties area, you can select:
– Enable read caching
– Enable write caching
* Enable write caching without batteries — to permit write caching to continue even if the
RAID controller module batteries are discharged completely, not fully charged, or are not present.
* Enable write caching with mirroring — to mirror cached data across two redundant RAID
controller modules that have the same cache size.
– Enable dynamic cache read prefetch
CAUTION: Possible loss of data—Selecting the Enable write caching without batteries option lets write caching continue even when the batteries are discharged completely or are not fully charged. Typically, write caching is turned off temporarily by the RAID controller module until the batteries are charged. If you select this option and do not have a universal power supply for protection, you could lose data. In addition, you could lose data if you do not have RAID controller module batteries and you select the Enable write caching without batteries option.
NOTE: When the Optional RAID controller module batteries option is enabled, the Enable write caching option does not appear. The Enable write caching without batteries option is still available, but it is not selected by default.
NOTE: Cache is automatically flushed after the Enable write caching check box is disabled.
5. Click OK.
A message prompts you to confirm the change in the virtual disk cache settings.
6. Click Yes.
7. Click OK.
The Change Virtual Disk Properties - Progress dialog is displayed.

Changing The Segment Size Of A Virtual Disk

You can change the segment size on a selected virtual disk. During this operation, I/O performance is affected, but your data remains available.
Follow these guidelines to proceed with changing the segment size:
You cannot cancel this operation after it starts.
Do not start this operation unless the disk group is in Optimal status.
The MD Storage Manager determines which segment size transitions are allowed. Segment sizes that are inappropriate transitions from the current segment size are unavailable on the menu. Allowed transitions are usually double or half of the current segment size. For example, if the current virtual disk segment size is 32 KB, a new virtual disk segment size of either 16 KB or 64 KB is allowed.
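The double-or-half transition rule can be illustrated with a short sketch. The helper below is purely hypothetical (it is not part of MD Storage Manager), and the list of supported segment sizes is an assumption for illustration:

```python
# Sketch: compute the segment sizes typically offered as transitions —
# double or half of the current size, restricted to a supported set.
# The supported set here is an assumption, not documented array limits.
SUPPORTED_KB = [8, 16, 32, 64, 128, 256, 512]

def allowed_segment_sizes(current_kb):
    """Return the segment sizes reachable in one change operation."""
    candidates = {current_kb // 2, current_kb * 2}
    return sorted(s for s in candidates if s in SUPPORTED_KB)

# A 32 KB virtual disk can move to 16 KB or 64 KB, as in the example above.
```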
NOTE: The operation to change the segment size is slower than other modification operations (for example, changing RAID levels or adding free capacity to a disk group). This slowness is the result of how the data is reorganized and the temporary internal backup procedures that occur during the operation.
The amount of time that a change segment size operation takes depends on:
The I/O load from the host
The modification priority of the virtual disk
The number of physical disks in the disk group
The number of physical disk ports
The processing power of the storage array RAID controller modules
If you want this operation to complete faster, you can change the modification priority to the highest level, although this may decrease system I/O performance.
To change the segment size of a virtual disk:
1. In the AMW, select the Storage & Copy Services tab and select a virtual disk.
2. From the menu bar, select Storage → Virtual Disk → Change → Segment Size.
3. Select the required segment size.
A message prompts you to confirm the selected segment size.
4. Click Yes.
The segment size modification operation begins. The virtual disk icon in the Details pane shows an Operation in Progress status while the operation is taking place.
NOTE: To view the progress or change the priority of the modification operation, select a virtual disk in the disk group, and from the menu bar, select Storage → Virtual Disk → Change → Modification Priority.

Changing The IO Type

You can specify the virtual disk I/O characteristics for the virtual disks that you are defining as part of the storage array configuration. The expected I/O characteristics of the virtual disk are used by the system to indicate an applicable default virtual disk segment size and dynamic cache read prefetch setting. See the online help topics for information on the Automatic Configuration Wizard.
NOTE: The dynamic cache read prefetch setting can be changed later by selecting Storage → Virtual Disk → Change → Cache Settings from the menu bar. You can change the segment size later by selecting Storage → Virtual Disk → Change → Segment Size from the menu bar.
The I/O characteristic types shown below are only presented during the create virtual disk process. When you choose one of the virtual disk I/O characteristics, the corresponding dynamic cache read prefetch setting and segment size that are typically well suited for the expected I/O patterns are populated in the Dynamic cache read prefetch field and the Segment size field.
To change the I/O type:
1. To enable read caching, select Enable read caching.
2. To enable dynamic cache read prefetch, select Enable dynamic cache read prefetch.
3. To enable write caching, select Enable write caching.
4. Select one of the following:
– Enable write caching with mirroring — Select this option to mirror cached data across two redundant RAID controller modules that have the same cache size.
– Enable write caching without batteries — Select this option to permit write caching to continue even if the RAID controller module batteries are discharged completely, not fully charged, or are not present.
NOTE: Cache is automatically flushed if you disable Enable write caching.
5. Click OK.
6. In the confirmation dialog, click Yes.
A progress dialog is displayed, which indicates the number of virtual disks being changed.

Thin Virtual Disks

When creating virtual disks from a disk pool, you have the option to create thin virtual disks instead of standard virtual disks. Thin virtual disks are created with physical (or preferred) and virtual capacity, allowing flexibility to meet increasing capacity requirements.
When you create standard virtual disks, you allocate all available storage based on an estimation of how much space you need for application data and performance. If you want to expand the size of a standard virtual disk in the future, you must add physical disks to your existing disk groups or disk pools. Thin volumes allow you to create large virtual disks with smaller physical storage allocations that can be increased as required.
NOTE: Thin virtual disks can only be created from an existing disk pool.

Advantages Of Thin Virtual Disks

Thin virtual disks, also known as thin provisioning, present a more logical storage view to hosts.
Thin virtual disks allow you to dynamically allocate storage to each virtual disk as data is written. Using thin provisioning helps to eliminate large amounts of unused physical capacity that often occurs when creating standard virtual disks.
However, in certain cases, standard virtual disks may provide a more suitable alternative compared to thin provisioning, such as in situations when:
you anticipate that storage consumption on a virtual disk is highly unpredictable or volatile
an application relying on a specific virtual disk is exceptionally mission critical

Physical Vs Virtual Capacity On A Thin Virtual Disk

When you configure a thin virtual disk, you can specify the following types of capacity:
physical (or preferred)
virtual
Virtual capacity is capacity that is reported to the host, while physical capacity is the amount of actual physical disk space allocated for data write operations. Generally, physical capacity is much smaller than virtual capacity.
Thin provisioning allows virtual disks to be created with a large virtual capacity but a relatively small physical capacity. This is beneficial for storage utilization and efficiency because it allows you to increase capacity as application needs change, without disrupting data throughput. You can also set a utilization warning threshold that causes MD Storage Manager to generate an alert when a specified percentage of physical capacity is reached.
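The utilization warning threshold described above amounts to a simple percentage check against allocated physical capacity. The following is a minimal sketch of that check; the function name and the 75% default are illustrative assumptions, not MD Storage Manager APIs:

```python
# Sketch of the utilization-threshold check: an alert is raised when
# consumed capacity reaches the configured percentage of the physical
# capacity allocated to the thin virtual disk.
def threshold_reached(consumed_gb, physical_gb, warn_percent=75):
    """True when consumed capacity is at or past the warning threshold."""
    return consumed_gb >= physical_gb * warn_percent / 100

# 30 GB consumed on a 40 GB repository with a 75% threshold triggers an alert.
```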
Changing Capacity On Existing Thin Virtual Disks
If the amount of space used by the host for read/write operations (sometimes called consumed capacity) exceeds the amount of physical capacity allocated on a standard virtual disk, the storage array cannot accommodate additional write requests until the physical capacity is increased. However, on a thin virtual disk, MD Storage Manager can automatically expand the physical capacity. You can also expand it manually by selecting Storage → Virtual Disk → Increase Repository Capacity. If you select the automatic expansion option, you can also set a maximum expansion capacity. The maximum expansion capacity
enables you to limit the automatic growth of a virtual disk to an amount less than the defined virtual capacity.
NOTE: Since less than full capacity is allocated when you create a thin virtual disk, insufficient free capacity may exist when certain operations are performed, such as snapshot images and snapshot virtual disks. If this occurs, an alert threshold warning is displayed.
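The automatic expansion behavior above can be sketched as a small function: when the repository fills, physical capacity grows in 4 GB increments, capped by the maximum expansion capacity. All names below are illustrative assumptions, not array firmware:

```python
# Sketch of automatic repository expansion: grow a full repository by one
# 4 GB increment, never past the configured maximum expansion capacity.
INCREMENT_GB = 4

def expand_if_needed(physical_gb, consumed_gb, max_expansion_gb):
    """Return the new physical capacity after one expansion check."""
    if consumed_gb < physical_gb:   # still room: no expansion needed
        return physical_gb
    return min(physical_gb + INCREMENT_GB, max_expansion_gb)

# A full 8 GB repository capped at 20 GB grows to 12 GB on the next check.
```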

Thin Virtual Disk Requirements And Limitations

The following table provides the minimum and maximum capacity requirements applicable to thin virtual disks.
Table 3. Minimum and Maximum Capacity Requirements
Capacity Type | Minimum | Maximum
Virtual capacity | 32 MB | 63 TB
Physical capacity | 4 GB | 64 TB
The following limitations apply to thin virtual disks:
The segment size of a thin virtual disk cannot be changed.
The pre-read consistency check for a thin virtual disk cannot be enabled.
A thin virtual disk cannot serve as the target virtual disk in a Virtual Disk Copy.
A thin virtual disk cannot be used in a Snapshot (Legacy) operation.
A thin virtual disk cannot be used in a Remote Replication (Legacy) operation.

Thin Volume Attributes

When you create a thin virtual disk from free capacity in an existing disk pool, you can manually set disk attributes or allow MD Storage Manager to assign default attributes. The following manual attributes are available:
Preferred Capacity — Sets the initial physical capacity of the virtual disk (MB, GB or TB). Preferred capacity in a disk pool is allocated in 4 GB increments. If you specify a capacity amount that is not a multiple of 4 GB, MD Storage Manager assigns a 4 GB multiple and assigns the remainder as unused. If space exists that is not a 4 GB multiple, you can use it to increase the size of the thin virtual disk. To increase the size of the thin virtual disk, select Storage → Virtual Disk → Increase Capacity.
Repository Expansion Policy — Select either Automatic or Manual to indicate whether MD Storage Manager automatically expands the physical capacity. If you select Automatic, enter a Maximum Expansion Capacity value that limits automatic capacity expansion. MD Storage Manager expands the preferred capacity in increments of 4 GB until it reaches the specified capacity. If you select Manual, automatic expansion does not occur and an alert is displayed when the Warning Threshold percentage is reached.
Warning Threshold — When consumed capacity reaches the specified percentage, MD Storage Manager sends an E-mail or SNMP alert.
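The 4 GB allocation rule for Preferred Capacity can be illustrated with a short sketch. The function below is hypothetical (not an MD Storage Manager API) and simply shows the rounding arithmetic:

```python
# Sketch of the 4 GB allocation rule for preferred capacity in a disk
# pool: a request that is not a multiple of 4 GB is satisfied with the
# next 4 GB multiple, and the remainder is allocated but unused until
# the virtual disk capacity is increased.
INCREMENT_GB = 4

def allocate_preferred(requested_gb):
    """Return (allocated_gb, unused_gb) for a preferred-capacity request."""
    allocated = -(-requested_gb // INCREMENT_GB) * INCREMENT_GB  # ceil to 4 GB
    return allocated, allocated - requested_gb

# Requesting 10 GB allocates 12 GB, leaving 2 GB unused.
```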

Thin Virtual Disk States

The following are the virtual disk states displayed in MD Storage Manager:
Optimal — Virtual disk is operating normally.
Full — Physical capacity of a thin virtual disk is full and no more host write requests can be processed.
Over Threshold — Physical capacity of a thin virtual disk is at or beyond the specified Warning Threshold percentage. The storage array status is shown as Needs Attention.
Failed — Virtual disk failed, and is no longer available for read or write operations. The storage array status is shown as Needs Attention.
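The four states above follow directly from consumed capacity, physical capacity, and the warning threshold. The sketch below derives them for illustration; the state strings mirror MD Storage Manager, but the logic is an assumption, not the array's actual implementation:

```python
# Sketch deriving a thin virtual disk state from capacity figures.
def thin_disk_state(consumed_gb, physical_gb, warn_percent, failed=False):
    """Return the MD Storage Manager state string for a thin virtual disk."""
    if failed:
        return "Failed"            # no longer available for read/write
    if consumed_gb >= physical_gb:
        return "Full"              # no more host writes can be processed
    if consumed_gb >= physical_gb * warn_percent / 100:
        return "Over Threshold"    # storage array shows Needs Attention
    return "Optimal"
```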

Comparison—Types Of Virtual Disks And Copy Services

The availability of copy services depends on the type of virtual disk that you are working with.
The following table shows the copy services features supported on each type of virtual disk.
Copy Services Feature | Standard Virtual Disk in a Disk Group | Standard Virtual Disk in a Disk Pool | Thin Virtual Disk
Snapshot (Legacy) | Supported | Not supported | Not supported
Snapshot image | Supported | Supported | Supported
Snapshot virtual disk | Supported | Supported | Supported
Rollback of snapshot | Supported | Supported | Supported
Delete virtual disk with snapshot images or snapshot virtual disks | Supported | Supported | Supported
Consistency group membership | Supported | Supported | Supported
Remote Replication (Legacy) | Supported | Not supported | Not supported
Remote Replication | Supported | Supported | Not supported
The source of a virtual disk copy can be either a standard virtual disk in a disk group, a standard virtual disk in a disk pool, or a thin virtual disk. The target of a virtual disk copy can only be a standard virtual disk in a disk group or a standard virtual disk in a disk pool, not a thin virtual disk. The following table summarizes the types of virtual disks you can use in a virtual disk copy.
Virtual Disk Copy Source | Virtual Disk Copy Target | Availability
Standard virtual disk | Standard virtual disk | Supported
Thin virtual disk | Standard virtual disk | Supported
Standard virtual disk | Thin virtual disk | Not supported
Thin virtual disk | Thin virtual disk | Not supported

Rollback On Thin Virtual Disks

Rollback operations are fully supported on thin virtual disks. A rollback operation restores the logical content of a thin virtual disk to match the selected snapshot image. There is no change to the consumed capacity of the thin virtual disk as a result of a rollback operation.

Initializing A Thin Virtual Disk

CAUTION: Possible loss of data – Initializing a thin virtual disk erases all data from the virtual disk. If you have questions, contact your Technical Support representative before performing this procedure.
When a thin virtual disk is created, it is automatically initialized. However, the MD Storage Manager Recovery Guru may advise that you manually initialize a thin virtual disk to recover from certain failure conditions. If you choose to reinitialize a thin virtual disk, you have several options:
Keep the same physical capacity — If you keep the same physical capacity, the virtual disk can keep its current repository virtual disk, which saves initialization time.
Change the physical capacity — If you change the physical capacity, a new repository virtual disk is created and you can optionally change the repository expansion policy and warning threshold.
Move the repository to a different disk pool.
Initializing a thin virtual disk erases all data from the virtual disk. However, host mappings, virtual capacity, repository expansion policy and security settings are preserved. Initialization also clears the block indices, which causes unwritten blocks to be read as if they are zero-filled. After initialization, the thin virtual disk appears to be completely empty.
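The block-index behavior described above (initialization clears the indices, so unwritten blocks read as zero-filled) is essentially a sparse map of written blocks. The following is a minimal data structure sketch of that idea; it is an illustration, not array firmware:

```python
# Sketch: a thin virtual disk tracks only blocks that have been written.
# Initializing clears that index, so every block reads back as zeros.
class ThinDiskSketch:
    def __init__(self):
        self.blocks = {}                 # block index -> written data

    def write(self, index, data):
        self.blocks[index] = data

    def read(self, index, block_size=4):
        # Unwritten (or cleared) blocks read as zero-filled.
        return self.blocks.get(index, b"\x00" * block_size)

    def initialize(self):
        self.blocks.clear()              # disk now appears completely empty
```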
The following types of virtual disks cannot be initialized:
Snapshot (Legacy) virtual disk
Base virtual disk of a Snapshot virtual disk
Primary virtual disk in a Remote Replication relationship
Secondary virtual disk in a Remote Replication relationship
Source virtual disk in a Virtual Disk Copy
Target virtual disk in a Virtual Disk Copy
Thin virtual disk that already has an initialization in progress
Thin virtual disk that is not in the Optimal state
Initializing A Thin Virtual Disk With The Same Physical Capacity
CAUTION: Initializing a thin virtual disk erases all data from the virtual disk.
You can create thin virtual disks only from disk pools, not from disk groups.
By initializing a thin virtual disk with the same physical capacity, the original repository is maintained but the contents of the thin virtual disk are deleted.
1. In the AMW, select the Storage & Copy Services tab.
2. Select the thin virtual disk that you want to initialize.
The thin virtual disks are listed under the Disk Pools node.
3. Select Storage → Virtual Disk → Advanced → Initialize.
The Initialize Thin Virtual Disk window is displayed.
4. Select Keep existing repository, and click Finish.
The Confirm Initialization of Thin Virtual Disk window is displayed.
5. Read the warning and confirm if you want to initialize the thin virtual disk.
6. Type yes, and click OK.
The thin virtual disk initializes.
Initializing A Thin Virtual Disk With A Different Physical Capacity
CAUTION: Initializing a thin virtual disk erases all data from the virtual disk.
You can create thin virtual disks only from disk pools, not from disk groups.
By initializing a thin virtual disk with a different physical capacity, a new repository virtual disk is created and the contents of the thin virtual disk are deleted.
1. In the AMW, select the Storage & Copy Services tab.
2. Select the thin virtual disk that you want to initialize.
The thin virtual disks are listed under the Disk Pools node.
3. Select Storage → Virtual Disk → Advanced → Initialize.
The Initialize Thin Virtual Disk window is displayed.
4. Select Use a different repository.
5. Based on whether you want to keep the current repository for future use, select or clear Delete
existing repository, and click Next.
6. Select one of the following:
– Yes — If there is more than one disk pool on your storage array
– No — If there is only one disk pool on your storage array
The Select Disk Pool window is displayed.
7. Select Keep existing disk pool, and click Next.
The Select Repository window is displayed.
8. Use the Preferred capacity box to indicate the initial physical capacity of the virtual disk and the
Units list to indicate the specific capacity units to use (MB, GB, or TB).
NOTE: Do not allocate all of the capacity to standard virtual disks — ensure that you keep storage capacity for copy services (snapshots (legacy), snapshot images, snapshot virtual disks, virtual disk copies, and remote replications).
NOTE: Regardless of the capacity specified, capacity in a disk pool is allocated in 4 GB increments. Any capacity that is not a multiple of 4 GB is allocated but not usable. To make sure that the entire capacity is usable, specify the capacity in 4 GB increments. If unusable capacity exists, the only way to regain it is to increase the capacity of the virtual disk.
Based on the value that you entered in the previous step, the Disk pool physical capacity candidates table is populated with matching repositories.
9. Select a repository from the table.
Existing repositories are placed at the top of the list.
NOTE: The benefit of reusing an existing repository is that you can avoid the initialization process that occurs when you create a new one.
10. If you want to change the repository expansion policy or warning threshold, click View advanced repository settings.
Repository expansion policy – Select either Automatic or Manual. When the consumed capacity gets close to the physical capacity, you can expand the physical capacity. The MD storage management software can automatically expand the physical capacity, or you can do it manually. If you select Automatic, you can also set a maximum expansion capacity. The maximum expansion capacity allows you to limit the virtual disk’s automatic growth below the virtual capacity. The value for the maximum expansion capacity must be a multiple of 4 GB.
Warning threshold – In the Send alert when repository capacity reaches field, enter a percentage. The MD Storage Manager sends an alert notification when the physical capacity reaches that percentage.
11. Click Finish.
The Confirm Initialization of Thin Virtual Disk window is displayed.
12. Read the warning and confirm that you want to initialize the thin virtual disk.
13. Type yes, and click OK.
The thin virtual disk initializes.
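The sizing rules in this procedure — 4 GB allocation increments, automatic expansion capped at a maximum, and a percentage-based warning threshold — can be sketched as follows. This is an illustration only; the function names, units, and the one-increment-at-a-time growth step are assumptions, not the MD Storage Manager implementation.

```python
def usable_capacity_gb(requested_gb: int) -> int:
    """Capacity in a disk pool is allocated in 4 GB increments; any
    remainder beyond the last full increment is allocated but not usable."""
    return (requested_gb // 4) * 4

def should_alert(consumed_gb: float, physical_gb: float, threshold_pct: int) -> bool:
    """Send an alert when consumed capacity reaches the warning-threshold
    percentage of the physical capacity."""
    return consumed_gb >= physical_gb * threshold_pct / 100

def auto_expand(physical_gb: int, max_expansion_gb: int, step_gb: int = 4) -> int:
    """Grow the repository by one increment, never past the maximum
    expansion capacity (which must be a multiple of 4 GB)."""
    if max_expansion_gb % 4 != 0:
        raise ValueError("maximum expansion capacity must be a multiple of 4 GB")
    return min(physical_gb + step_gb, max_expansion_gb)

print(usable_capacity_gb(10))      # 8: the remaining 2 GB is allocated but unusable
print(should_alert(85, 100, 85))   # True: the 85% threshold has been reached
print(auto_expand(98, 100))        # 100: growth is capped at the maximum
```

Specifying capacity as a multiple of 4 GB (for example 12 GB instead of 10 GB) avoids the unusable remainder entirely.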
Initializing A Thin Virtual Disk And Moving It To A Different Disk Pool
CAUTION: Initializing a thin virtual disk erases all data from the virtual disk.
NOTE: You can create thin virtual disks only from disk pools, not from disk groups.
1. In the AMW, select the Storage & Copy Services tab.
2. Select the thin virtual disk that you want to initialize.
The thin virtual disks are listed under the Disk Pools node.
3. Select Storage → Virtual Disk → Advanced → Initialize.
The Initialize Thin Virtual Disk window is displayed.
4. Based on whether you want to keep the current repository for future use, select or clear Delete
existing repository, and click Next. The Select Disk Pool window is displayed.
5. Select the Select a new disk pool radio button.
6. Select a new disk pool from the table, and click Next.
The Select Repository window is displayed.
7. Use the Preferred capacity box to indicate the initial physical capacity of the virtual disk and the Units list to indicate the specific capacity units to use (MB, GB, or TB).
NOTE: Do not allocate all of the capacity to standard virtual disks — ensure that you keep storage capacity for copy services (snapshots (legacy), snapshot images, snapshot virtual disks, virtual disk copies, and remote replications).
NOTE: Regardless of the capacity specified, capacity in a disk pool is allocated in 4 GB increments. Any capacity that is not a multiple of 4 GB is allocated but not usable. To make sure that the entire capacity is usable, specify the capacity in 4 GB increments. If unusable capacity exists, the only way to regain it is to increase the capacity of the virtual disk.
Based on the value that you entered in the previous step, the Disk pool physical capacity candidates table is populated with matching repositories.
8. Select a repository from the table.
Existing repositories are placed at the top of the list.
NOTE: The benefit of reusing an existing repository is that you can avoid the initialization process that occurs when you create a new one.
9. If you want to change the repository expansion policy or warning threshold, click View advanced repository settings.
Repository expansion policy – Select either Automatic or Manual. When the consumed capacity gets close to the physical capacity, you can expand the physical capacity. The MD Storage Manager can automatically expand the physical capacity, or you can do it manually. If you select Automatic, you can also set a maximum expansion capacity. The maximum expansion capacity allows you to limit the virtual disk’s automatic growth below the virtual capacity. The value for the maximum expansion capacity must be a multiple of 4 GB.
Warning threshold – In the Send alert when repository capacity reaches field, enter a percentage. The MD Storage Manager sends an alert notification when the physical capacity reaches that percentage.
10. Click Finish.
The Confirm Initialization of Thin Virtual Disk window is displayed.
11. Read the warning and confirm that you want to initialize the thin virtual disk.
12. Type yes, and click OK.
The thin virtual disk initializes.

Changing A Thin Virtual Disk To A Standard Virtual Disk

If you want to change a thin virtual disk to a standard virtual disk, use the Virtual Disk Copy operation to create a copy of the thin virtual disk. The target of a virtual disk copy must always be a standard virtual disk.

Choosing An Appropriate Physical Disk Type

You can create disk groups and virtual disks in the storage array. You must select the capacity that you want to allocate for the virtual disk from unconfigured capacity, free capacity, or an existing disk pool available in the storage array. Then you define basic and optional advanced parameters for the virtual disk.
With the advent of different physical disk technologies, it is now possible to mix physical disks with different media types and different interface types within a single storage array.

Physical Disk Security With Self Encrypting Disk

Self Encrypting Disk (SED) technology prevents unauthorized access to the data on a physical disk that is physically removed from the storage array. The storage array has a security key. Self encrypting disks provide access to data only through an array that has the correct security key.
The self encrypting disk or a security capable physical disk encrypts data during writes and decrypts data during reads. For more information, see the online help topics.
You can create a secure disk group from security capable physical disks. When you create a secure disk group from security capable physical disks, the physical disks in that disk group become security enabled. When a security capable physical disk has been security enabled, the physical disk requires the correct security key from a RAID controller module to read or write the data. All of the physical disks and RAID controller modules in a storage array share the same security key. The shared security key provides read and write access to the physical disks, while the physical disk encryption key on each physical disk is used
to encrypt the data. A security capable physical disk works like any other physical disk until it is security enabled.
Whenever the power is turned off and turned on again, all of the security enabled physical disks change to a security locked state. In this state, the data is inaccessible until the correct security key is provided by a RAID controller module.
You can view the self encrypting disk status of any physical disk in the storage array from the Physical Disk Properties dialog. The status information reports whether the physical disk is:
Security capable
Secure — Security enabled or disabled
Read/Write Accessible — Security locked or unlocked
You can view the self encrypting disk status of any disk group in the storage array. The status information reports whether the storage array is:
Security capable
Secure
The following table shows how to interpret the security status of a disk group.
Table 4. Interpretation of Security Status of Disk Group
Secure: Yes
  Security Capable - Yes: The disk group is composed of all SED physical disks and is in a Secure state.
  Security Capable - No: Not applicable. Only SED physical disks can be in a Secure state.
Secure: No
  Security Capable - Yes: The disk group is composed of all SED physical disks and is in a Non-Secure state.
  Security Capable - No: The disk group is not entirely composed of SED physical disks.
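The interpretation in Table 4 can be expressed as a small lookup; the sketch below is illustrative only, and the function and argument names are assumptions rather than anything exposed by MD Storage Manager.

```python
def disk_group_security_status(all_sed: bool, secure: bool) -> str:
    """Interpret a disk group's security status (per Table 4) from whether
    all of its physical disks are SEDs and whether the group is secured."""
    if secure:
        if all_sed:
            return "Secure"
        # Only SED physical disks can be in a Secure state.
        return "Not applicable"
    if all_sed:
        return "Non-Secure"
    return "Not entirely SED"

print(disk_group_security_status(all_sed=True, secure=True))    # Secure
print(disk_group_security_status(all_sed=False, secure=False))  # Not entirely SED
```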
The Physical Disk Security menu is displayed in the Storage Array menu. The Physical Disk Security menu has the following options:
Create Key
Change Key
Save Key
Validate Key
Import Key
Unlock Drives
NOTE: If you have not created a security key for the storage array, the Create Key option is active. If you have created a security key for the storage array, the Create Key option is inactive with a check mark to the left. The Change Key option, the Save Key option, and the Validate Key option are now active.
The Secure Physical Disks option is displayed in the Disk Group menu. The Secure Physical Disks option is active if these conditions are true:
The selected storage array is not security enabled but is comprised entirely of security capable physical disks.
The storage array contains no snapshot base virtual disks or snapshot repository virtual disks.
The disk group is in an Optimal state.
A security key is set up for the storage array.
NOTE: The Secure Physical Disks option is inactive if these conditions are not true.
The Secure Physical Disks option is inactive with a check mark to the left if the disk group is already security enabled.
The Create a secure disk group option is displayed in the Create Disk Group Wizard–Disk Group Name and Physical Disk Selection dialog. The Create a secure disk group option is active only when these conditions are met:
A security key is installed in the storage array.
At least one security capable physical disk is installed in the storage array.
All of the physical disks that you selected on the Hardware tab are security capable physical disks.
You can erase security enabled physical disks so that you can reuse the drives in another disk group or in another storage array. When you erase security enabled physical disks, ensure that the data cannot be read. When all of the physical disks that you have selected in the Physical Disk type pane are security enabled, and none of the selected physical disks is part of a disk group, the Secure Erase option is displayed in the Hardware menu.
The storage array password protects a storage array from potentially destructive operations by unauthorized users. The storage array password is independent from self encrypting disk, and should not be confused with the pass phrase that is used to protect copies of a security key. However, it is good practice to set a storage array password.

Creating A Security Key

When you create a security key, it is generated by and securely stored by the array. You cannot read or view the security key. A copy of the security key must be kept on some other storage medium for backup in case of system failure or for transfer to another storage array. A pass phrase that you provide is used to encrypt and decrypt the security key for storage on other media.
When you create a security key, you also provide information to create a security key identifier. Unlike the security key, you can read or view the security key identifier. The security key identifier is also stored on a physical disk or transportable media. The security key identifier is used to identify which key the storage array is using.
To create a security key:
1. In the AMW, from the menu bar, select Storage Array → Security → Physical Disk Security → Create Key.
2. Perform one of these actions:
– If the Create Security Key dialog is displayed, go to step 6.
– If the Storage Array Password Not Set or Storage Array Password Too Weak dialog is displayed, go to step 3.
3. Choose whether to set (or change) the storage array password at this time.
– Click Yes to set or change the storage array password. The Change Password dialog is displayed. Go to step 4.
– Click No to continue without setting or changing the storage array password. The Create Security Key dialog is displayed. Go to step 6.
4. In New password, enter a string for the storage array password. If you are creating the storage array
password for the first time, leave Current password blank. Follow these guidelines for cryptographic strength when you create the storage array password:
– The password should be between eight and 30 characters long.
– The password should contain at least one uppercase letter.
– The password should contain at least one lowercase letter.
– The password should contain at least one number.
– The password should contain at least one non-alphanumeric character, for example, < > @ +.
5. In Confirm new password, re-enter the exact string that you entered in New password.
6. In Security key identifier, enter a string that becomes part of the secure key identifier.
You can enter up to 189 alphanumeric characters without spaces, punctuation, or symbols. Additional characters are generated automatically and are appended to the end of the string that you enter. The generated characters help to ensure that the secure key identifier is unique.
7. Enter a path and file name to save the security key file by doing one of the following:
– Edit the default path by adding a file name to the end of the path.
– Click Browse to navigate to the required folder, then add a file name to the end of the path.
8. In Pass phrase dialog box, enter a string for the pass phrase.
The pass phrase must:
– be between eight and 32 characters long
– contain at least one uppercase letter
– contain at least one lowercase letter
– contain at least one number
– contain at least one non-alphanumeric character, for example, < > @ +
The pass phrase that you enter is masked.
NOTE: Create Key is active only if the pass phrase meets the criteria listed above.
9. In the Confirm pass phrase dialog box, re-enter the exact string that you entered in the Pass phrase
dialog box. Make a record of the pass phrase that you entered and the security key identifier that is associated
with the pass phrase. You need this information for later secure operations.
10. Click Create Key.
11. If the Invalid Text Entry dialog is displayed, select:
– Yes — There are errors in the strings that were entered. Read the error message in the dialog, and click OK. Go to step 6.
– No — There are no errors in the strings that were entered. Go to step 12.
12. Make a record of the security key identifier and the file name from the Create Security Key Complete
dialog, and click OK.
After you have created a security key, you can create secure disk groups from security capable physical disks. Creating a secure disk group makes the physical disks in the disk group security enabled. Security enabled physical disks enter Security Locked status whenever power is re-applied. They can be unlocked only by a RAID controller module that supplies the correct key during physical disk initialization. Otherwise, the physical disks remain locked, and the data is inaccessible. The Security Locked status prevents any unauthorized person from accessing data on a security enabled physical disk by physically removing the physical disk and installing the physical disk in another computer or storage array.
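The strength rules above — for the storage array password, the pass phrase, and the user-entered portion of the security key identifier — amount to simple character checks. The sketch below is an illustration of those rules only, not the MD Storage Manager's actual validation code; the function names are assumptions.

```python
def meets_strength_rules(secret: str, max_len: int) -> bool:
    """Eight to max_len characters, with at least one uppercase letter,
    one lowercase letter, one number, and one non-alphanumeric character.
    The storage array password allows up to 30 characters; the pass
    phrase allows up to 32."""
    return (
        8 <= len(secret) <= max_len
        and any(c.isupper() for c in secret)
        and any(c.islower() for c in secret)
        and any(c.isdigit() for c in secret)
        and any(not c.isalnum() for c in secret)
    )

def valid_key_identifier(text: str) -> bool:
    """The user-supplied portion of a security key identifier: up to 189
    alphanumeric characters with no spaces, punctuation, or symbols
    (the array appends generated characters afterward)."""
    return 0 < len(text) <= 189 and text.isalnum()

print(meets_strength_rules("Raid5+Pool", 30))   # True
print(meets_strength_rules("password", 30))     # False: no uppercase, number, or symbol
print(valid_key_identifier("MDarray01"))        # True
print(valid_key_identifier("MD array 01"))      # False: contains spaces
```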

Changing A Security Key

When you change a security key, a new security key is generated by the system. The new key replaces the previous key. You cannot view or read the key. However, a copy of the security key must be kept on some other storage medium for backup in case of system failure or for transfer to another storage array. A pass phrase that you provide encrypts and decrypts the security key for storage on other media. When you change a security key, you also provide information to create a security key identifier. Changing the security key does not destroy any data. You can change the security key at any time.
Before you change the security key, ensure that:
All virtual disks in the storage array are in Optimal status.
In storage arrays with two RAID controller modules, both are present and working normally.
To change the security key:
1. In the AMW menu bar, select Storage Array → Security → Physical Disk Security → Change Key.
The Confirm Change Security Key window is displayed.
2. Type yes in the text field, and click OK.
The Change Security Key window is displayed.
3. In Secure key identifier, enter a string that becomes part of the secure key identifier.
You may leave the text box blank, or enter up to 189 alphanumeric characters without white space, punctuation, or symbols. Additional characters are generated automatically.
4. Edit the default path by adding a file name to the end of the path or click Browse, navigate to the
required folder, and enter the name of the file.
5. In Pass phrase, enter a string for the pass phrase.
The pass phrase must meet the following criteria:
– It must be between eight and 32 characters long.
– It must contain at least one uppercase letter.
– It must contain at least one lowercase letter.
– It must contain at least one number.
– It must contain at least one non-alphanumeric character (for example, < > @ +).
The pass phrase that you enter is masked.
6. In Confirm pass phrase, re-enter the exact string you entered in Pass phrase.
Make a record of the pass phrase you entered and the security key identifier it is associated with. You need this information for later secure operations.
7. Click Change Key.
8. Make a record of the security key identifier and the file name from the Change Security Key
Complete dialog, and click OK.

Saving A Security Key

You save an externally storable copy of the security key when the security key is first created and each time it is changed. You can create additional storable copies at any time. To save a new copy of the security key, you must provide a pass phrase. The pass phrase you choose does not need to match the pass phrase used when the security key was created or last changed. The pass phrase is applied to the particular copy of the security key you are saving.
To save the security key for the storage array:
1. In the AMW menu bar, select Storage Array → Security → Physical Disk Security → Save Key.
The Save Security Key File - Enter Pass Phrase window is displayed.
2. Edit the default path by adding a file name to the end of the path or click Browse, navigate to the
required folder and enter the name of the file.
3. In Pass phrase, enter a string for the pass phrase.
The pass phrase must meet the following criteria:
– It must be between eight and 32 characters long.
– It must contain at least one uppercase letter.
– It must contain at least one lowercase letter.
– It must contain at least one number.
– It must contain at least one non-alphanumeric character (for example, < > @ +).
The pass phrase that you enter is masked.
4. In Confirm pass phrase, re-enter the exact string you entered in Pass phrase.
Make a record of the pass phrase you entered. You need it for later secure operations.
5. Click Save.
6. Make a record of the security key identifier and the file name from the Save Security Key Complete
dialog, and click OK.

Validate Security Key

A file in which a security key is stored is validated through the Validate Security Key dialog. To transfer, archive, or back up the security key, the RAID controller module firmware encrypts (or wraps) the security key and stores it in a file. You must provide a pass phrase and identify the corresponding file to decrypt the file and recover the security key.
Data can be read from a security enabled physical disk only if a RAID controller module in the storage array provides the correct security key. If security enabled physical disks are moved from one storage array to another, the appropriate security key must also be imported to the new storage array. Otherwise, the data on the security enabled physical disks that were moved is inaccessible.
See the online help topics for more information on validating the security key.

Unlocking Secure Physical Disks

You can export a security enabled disk group to move the associated physical disks to a different storage array. After you install those physical disks in the new storage array, you must unlock the physical disks before data can be read from or written to the physical disks. To unlock the physical disks, you must supply the security key from the original storage array. The security key on the new storage array is different and cannot unlock the physical disks.
You must supply the security key from a security key file that was saved on the original storage array. You must provide the pass phrase that was used to encrypt the security key file to extract the security key from this file.
For more information, see the online help topics.

Erasing Secure Physical Disks

In the AMW, when you select a security enabled physical disk that is not part of a disk group, the Secure Erase menu item is enabled on the Physical Disk menu. You can use the secure erase procedure to re-provision a physical disk. You can use the Secure Erase option if you want to remove all of the data on the physical disk and reset the physical disk security attributes.
CAUTION: Possible loss of data access—The Secure Erase option removes all of the data that is currently on the physical disk. This action cannot be undone.
Before you complete this option, make sure that the physical disk that you have selected is the correct physical disk. You cannot recover any of the data that is currently on the physical disk.
After you complete the secure erase procedure, the physical disk is available for use in another disk group or in another storage array. See the online help topics for more information on the secure erase procedure.

Configuring Hot Spare Physical Disks

Guidelines to configure hot spare physical disks:
CAUTION: If a hot spare physical disk does not have Optimal status, follow the Recovery Guru procedures to correct the problem before you try to unassign the physical disk. You cannot assign a hot spare physical disk if it is in use (taking over for a failed physical disk).
You can use only unassigned physical disks with Optimal status as hot spare physical disks.
You can unassign only hot spare physical disks with Optimal, or Standby status. You cannot unassign a hot spare physical disk that has the In Use status. A hot spare physical disk has the In Use status when it is in the process of taking over for a failed physical disk.
Hot spare physical disks must be of the same media type and interface type as the physical disks that they are protecting.
If there are secure disk groups and security capable disk groups in the storage array, the hot spare physical disk must match the security capability of the disk group.
Hot spare physical disks must have capacities equal to or larger than the used capacity on the physical disks that they are protecting.
The availability of enclosure loss protection for a disk group depends on the location of the physical disks that comprise the disk group. To make sure that enclosure loss protection is not affected, you must replace a failed physical disk to initiate the copyback process. See Enclosure Loss Protection.
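The guidelines above amount to an eligibility check on a candidate spare. The sketch below restates them in code for clarity; the class and field names are illustrative assumptions, not part of any Dell API.

```python
from dataclasses import dataclass

@dataclass
class Disk:
    media_type: str        # for example "HDD" or "SSD"
    interface: str         # for example "SAS"
    capacity_gb: int
    status: str            # for example "Optimal"
    assigned: bool
    security_capable: bool

def eligible_hot_spare(spare: Disk, protected: Disk,
                       used_capacity_gb: int, group_is_secure: bool) -> bool:
    """Apply the hot spare guidelines: the spare must be unassigned and in
    Optimal status, match the media type and interface type of the disks
    it protects, have capacity at least equal to the used capacity, and
    be security capable if the disk group is secure."""
    return (
        not spare.assigned
        and spare.status == "Optimal"
        and spare.media_type == protected.media_type
        and spare.interface == protected.interface
        and spare.capacity_gb >= used_capacity_gb
        and (spare.security_capable or not group_is_secure)
    )
```

For example, an unassigned 900 GB SAS HDD in Optimal status can protect a 600 GB SAS HDD in a secure disk group only if the spare is also security capable.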
To assign or unassign hot spare physical disks:
1. In the AMW, select the Hardware tab.
2. Select one or more unassigned physical disks.
3. Perform one of these actions:
– From the menu bar, select Hardware → Hot Spare Coverage.
– Right-click the physical disk and select Hot Spare Coverage from the pop-up menu.
The Hot Spare Physical Disk Options window is displayed.
4. Select the appropriate option:
View/change current hot spare coverage — to review hot spare coverage and to assign or
unassign hot spare physical disks, if necessary. See step 5.
Automatically assign physical disks — to create hot spare physical disks automatically for the
best hot spare coverage using available physical disks.
Manually assign individual physical disks — to create hot spare physical disks out of the selected
physical disks on the Hardware tab.
Manually unassign individual physical disks — to unassign the selected hot spare physical disks
on the Hardware tab. See step 12.
NOTE: This option is available only if you select a hot spare physical disk that is already assigned.
5. To assign hot spares, in the Hot Spare Coverage window, select a disk group in the Hot spare coverage area.
6. Review the information about the hot spare coverage in the Details area.
7. Click Assign.
The Assign Hot Spare window is displayed.
8. Select the relevant physical disks in the Unassigned physical disks area as hot spares for the selected disk group, and click OK.
9. To unassign hot spares, in the Hot Spare Coverage window, select physical disks in the Hot spare physical disks area.
10. Review the information about the hot spare coverage in the Details area.
11. Click Unassign.
A message prompts you to confirm the operation.
12. Type yes and click OK.

Hot Spares And Rebuild

A valuable strategy to protect data is to assign available physical disks in the storage array as hot spares. A hot spare adds another level of fault tolerance to the storage array.
A hot spare is an idle, powered-on, stand-by physical disk ready for immediate use in case of disk failure. If a hot spare is defined in an enclosure in which a redundant virtual disk experiences a physical disk failure, a rebuild of the degraded virtual disk is automatically initiated by the RAID controller modules. If no hot spares are defined, the rebuild process is initiated by the RAID controller modules when a replacement physical disk is inserted into the storage array.

Global Hot Spares

The MD Series storage arrays support global hot spares. A global hot spare can replace a failed physical disk in any virtual disk with a redundant RAID level as long as the capacity of the hot spare is equal to or larger than the size of the configured capacity on the physical disk it replaces, including its metadata.

Hot Spare Operation

When a physical disk fails, the virtual disk automatically rebuilds using an available hot spare. When a replacement physical disk is installed, data from the hot spare is copied back to the replacement physical disk. This function is called copy back. By default, the RAID controller module automatically configures the number and type of hot spares based on the number and capacity of physical disks in your system.
A hot spare may have the following states:
A standby hot spare is a physical disk that has been assigned as a hot spare and is available to take over for any failed physical disk.
An in-use hot spare is a physical disk that has been assigned as a hot spare and is currently replacing a failed physical disk.
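The two states above, together with the rebuild and copy back behavior described earlier, can be sketched as a small state machine. This is an illustration only; the class and method names are assumptions, not controller firmware behavior.

```python
class HotSpare:
    """Minimal sketch of the hot spare lifecycle: Standby until a
    protected physical disk fails, In Use while substituting for it,
    and back to Standby after copy back completes."""

    def __init__(self):
        self.state = "Standby"

    def take_over_failed_disk(self):
        # The RAID controller module rebuilds the degraded virtual disk
        # onto the spare.
        self.state = "In Use"

    def copy_back_complete(self):
        # Data has been copied back to the replacement physical disk;
        # the spare is available again.
        self.state = "Standby"

spare = HotSpare()
spare.take_over_failed_disk()
print(spare.state)    # In Use
spare.copy_back_complete()
print(spare.state)    # Standby
```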

Hot Spare Drive Protection

You can use a hot spare physical disk for additional data protection from physical disk failures that occur in a RAID Level 1 or RAID Level 5 disk group. If the hot spare physical disk is available when a physical disk fails, the RAID controller module uses redundancy data to reconstruct the data from the failed physical disk to the hot spare physical disk. When you have physically replaced the failed physical disk, a copyback operation occurs from the hot spare physical disk to the replaced physical disk. If there are secure disk groups and security capable disk groups in the storage array, the hot spare physical disk must match the security capability of the disk group. For example, a non-security capable physical disk cannot be used as a hot spare for a secure disk group.
NOTE: For a security capable disk group, security capable hot spare physical disks are preferred. If security capable physical disks are not available, non-security capable physical disks may be used as hot spare physical disks. To ensure that the disk group is retained as security capable, the non-security capable hot spare physical disk must be replaced with a security capable physical disk.
If you select a security capable physical disk as hot spare for a non-secure disk group, a dialog box is displayed indicating that a security capable physical disk is being used as a hot spare for a non-secure disk group.
The availability of enclosure loss protection for a disk group depends on the location of the physical disks that comprise the disk group. The enclosure loss protection might be lost because of a failed physical disk and location of the hot spare physical disk. To make sure that enclosure loss protection is not affected, you must replace a failed physical disk to initiate the copyback process.
The virtual disk remains online and accessible while you are replacing the failed physical disk, because the hot spare physical disk is automatically substituted for the failed physical disk.

Enclosure Loss Protection

Enclosure loss protection is an attribute of a disk group. Enclosure loss protection guarantees accessibility to the data on the virtual disks in a disk group if a total loss of communication occurs with a single expansion enclosure. An example of total loss of communication may be loss of power to the expansion enclosure or failure of both RAID controller modules.
CAUTION: Enclosure loss protection is not guaranteed if a physical disk has already failed in the disk group. In this situation, losing access to an expansion enclosure and consequently another physical disk in the disk group causes a double physical disk failure and loss of data.
Enclosure loss protection is achieved when you create a disk group where all of the physical disks that comprise the disk group are located in different expansion enclosures. This distinction depends on the RAID level. If you choose to create a disk group by using the Automatic method, the software attempts to choose physical disks that provide enclosure loss protection. If you choose to create a disk group by using the Manual method, you must use the criteria specified below.
RAID Level - Criteria for Enclosure Loss Protection
RAID level 5 or RAID level 6: Ensure that all the physical disks in the disk group are located in different expansion enclosures.
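The enclosure-placement rule for RAID level 5 and RAID level 6 can be checked as follows. This is a hedged sketch of the criterion stated above; the function name and enclosure identifiers are illustrative assumptions.

```python
def has_enclosure_loss_protection(enclosure_ids: list) -> bool:
    """For a RAID level 5 or RAID level 6 disk group, every physical disk
    in the group must be located in a different expansion enclosure."""
    return len(set(enclosure_ids)) == len(enclosure_ids)

print(has_enclosure_loss_protection(["enc0", "enc1", "enc2"]))  # True
print(has_enclosure_loss_protection(["enc0", "enc0", "enc1"]))  # False: two disks share enc0
```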