Dell E03J, E04J User Manual

Dell PowerVault MD3200
and MD3220 Storage Arrays
Owner's Manual
Regulatory Model: E03J Series and E04J Series
Regulatory Type: E03J001 and E04J001
Notes, Cautions, and Warnings
NOTE: A NOTE indicates important information that helps you make better use of your computer.
CAUTION: A CAUTION indicates potential damage to hardware or loss of data if instructions are not followed.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
____________________
© 2013 Dell Inc.
Trademarks used in this text: Dell™, the DELL logo, PowerEdge™, PowerVault™, and OpenManage™ are trademarks of Dell Inc. Intel® is a registered trademark of Intel Corporation in the U.S. and other countries. Microsoft®, Windows®, Windows Server®, MS-DOS®, and Internet Explorer® are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. Red Hat® and Red Hat Enterprise Linux® are registered trademarks of Red Hat, Inc. in the United States and other countries. SUSE® is a registered trademark of Novell, Inc. in the United States and other countries.
Regulatory Model: E03J Series and E04J Series
Regulatory Type: E03J001 and E04J001
2013 - 06 Rev. A02
Contents

1 Introduction . . . 19
    About This Document . . . 19
    Inside the Box of the Dell PowerVault MD3200 Series Storage Array . . . 20
        MD3200 Series Storage Array . . . 20
        Dell PowerVault Modular Disk Storage Manager . . . 20
    Other Information You May Need . . . 20

2 Planning: About Your Storage Array . . . 23
    Overview . . . 23
    Hardware Features . . . 24
        Front-Panel Features and Indicators . . . 24
        Back-Panel Features and Indicators . . . 27
        Hard-Drive Indicator Patterns . . . 28
        Power Supply and Cooling Fan Features . . . 29
        Power Indicator Codes and Features . . . 30

3 Planning: RAID Controller Modules . . . 31
    RAID Controller Modules . . . 31
    RAID Controller Module Connectors and Features . . . 32
    RAID Controller Module—Additional Features . . . 34
        Battery Backup Unit . . . 34
        Storage Array Thermal Shutdown . . . 35
        System Password Reset . . . 35
    Cache Functions and Features . . . 36
        Cache Mirroring . . . 36
        Write-Back Cache . . . 36
        Write-Through Cache . . . 36

4 Planning: MD3200 Series Storage Array Terms and Concepts . . . 37
    Physical Disks, Virtual Disks, and Disk Groups . . . 37
        Physical Disks . . . 38
        Physical Disk States . . . 38
        Self-Monitoring Analysis and Reporting Technology . . . 39
        Virtual Disks and Disk Groups . . . 39
        Virtual Disk States . . . 40
    RAID Levels . . . 40
        RAID Level Usage . . . 41
        Segment Size . . . 43
    Virtual Disk Operations . . . 43
        Virtual Disk Initialization . . . 43
        Background Initialization . . . 43
        Foreground Initialization . . . 43
        Consistency Check . . . 44
        Media Verification . . . 44
        Cycle Time . . . 44
        Virtual Disk Operations Limit . . . 45
    Disk Group Operations . . . 45
        RAID Level Migration . . . 45
        Segment Size Migration . . . 45
        Virtual Disk Capacity Expansion . . . 46
        Disk Group Expansion . . . 46
        Disk Group Defragmentation . . . 47
        Disk Group Operations Limit . . . 47
    RAID Background Operations Priority . . . 47
    Virtual Disk Migration and Disk Roaming . . . 48
        Disk Migration . . . 48
        Disk Roaming . . . 50
    Advanced Features . . . 50
        Host Server-to-Virtual Disk Mapping . . . 51
        Host Types . . . 51
        Snapshot Virtual Disks . . . 51
        Snapshot Repository Virtual Disk . . . 52
        Virtual Disk Copy . . . 53
        Virtual Disk Recovery . . . 54
        Using Snapshot and Disk Copy Together . . . 54
    Multi-Path Software . . . 54
        Preferred and Alternate Controllers and Paths . . . 55
        Virtual Disk Ownership . . . 55
        Load Balancing . . . 56
    Monitoring MD3200 Series System Performance . . . 57

5 Configuration: Overview . . . 61
    User Interface . . . 61
        Enterprise Management Window . . . 62
        Array Management Window . . . 63

6 Configuration: About Your Storage Array . . . 65
    Out-of-Band and In-Band Management . . . 65
    Storage Arrays . . . 66
        Adding Storage Arrays . . . 66
        Setting Up Your Storage Array . . . 68
        Locating Storage Arrays . . . 69
        Naming or Renaming Storage Arrays . . . 70
        Setting a Password . . . 71
        Viewing Storage Array Connections . . . 72
        Adding/Editing a Comment to an Existing Storage Array . . . 73
        Removing Storage Arrays . . . 73
        Enabling Premium Features . . . 74
        Failover Alert Display . . . 74
        Changing the Cache Settings on the Storage Array . . . 75
        Changing Expansion Enclosure ID Number . . . 75
        Changing the Enclosure Order in the Physical Pane . . . 75
    Configuring Alert Notifications . . . 76
        Configuring E-mail Alerts . . . 77
        Configuring SNMP Alerts . . . 79
    Battery Settings . . . 81
    Setting the Storage Array RAID Controller Module Clocks . . . 82

7 Configuration: Event Monitor . . . 83
    Enabling or Disabling the Event Monitor . . . 84
        Windows . . . 84
        Linux . . . 84

8 Configuration: About Your Host . . . 85
    Configuring Host Access . . . 85
    Using the Mappings Tab . . . 86
        Defining a Host . . . 86
        Removing Host Access . . . 88
    Managing Host Groups . . . 88
        Creating a Host Group . . . 88
        Moving a Host to a Different Host Group . . . 89
        Removing a Host Group . . . 90
        Host Topology . . . 90
        Starting or Stopping the Host Context Agent . . . 91
    I/O Data Path Protection . . . 92
    Managing Host Port Identifiers . . . 93

9 Configuration: Disk Groups and Virtual Disks . . . 95
    Creating Disk Groups and Virtual Disks . . . 95
        Creating Disk Groups . . . 96
        Locating a Disk Group . . . 98
        Creating Virtual Disks . . . 98
        Changing the Virtual Disk Modification Priority . . . 100
        Changing the Virtual Disk Cache Settings . . . 101
        Changing the Segment Size of a Virtual Disk . . . 103
        Changing the I/O Type . . . 104
    Choosing an Appropriate Physical Disk Type . . . 105
    Physical Disk Security with Self Encrypting Disk . . . 106
        Creating a Security Key . . . 108
        Changing a Security Key . . . 110
        Saving a Security Key . . . 111
        Validate Security Key . . . 112
        Unlocking Secure Physical Disks . . . 113
        Erasing Secure Physical Disks . . . 113
    Configuring Hot Spare Physical Disks . . . 114
        Hot Spares and Rebuild . . . 115
        Global Hot Spares . . . 116
        Hot Spare Operation . . . 116
        Hot Spare Drive Protection . . . 116
    Enclosure Loss Protection . . . 117
    Host-to-Virtual Disk Mapping . . . 119
        Creating Host-to-Virtual Disk Mappings . . . 119
        Modifying and Removing Host-to-Virtual Disk Mapping . . . 121
        Changing Controller Ownership of the Virtual Disk . . . 122
        Removing Host-to-Virtual Disk Mapping . . . 122
        Changing the RAID Controller Module Ownership of a Disk Group . . . 123
        Changing the RAID Level of a Disk Group . . . 124
        Removing a Host-to-Virtual Disk Mapping Using Linux DMMP . . . 124
    Restricted Mappings . . . 126
    Changing the RAID Controller Module Ownership of a Virtual Disk or a Disk Group . . . 127
    Changing the RAID Level of a Disk Group . . . 129
    Storage Partitioning . . . 130
    Disk Group and Virtual Disk Expansion . . . 131
        Disk Group Expansion . . . 131
        Virtual Disk Expansion . . . 132
        Using Free Capacity . . . 132
        Using Unconfigured Capacity . . . 133
    Disk Group Migration . . . 133
        Export Disk Group . . . 133
        Exporting a Disk Group . . . 134
        Import Disk Group . . . 134
        Importing a Disk Group . . . 134
    Storage Array Media Scan . . . 136
        Changing Media Scan Settings . . . 136
        Suspending the Media Scan . . . 137

10 Configuration: Premium Feature—Snapshot Virtual Disks . . . 139
    Scheduling a Snapshot Virtual Disk . . . 140
        Common Reasons for Scheduling a Snapshot Virtual Disk . . . 140
        Guidelines for Creating Snapshot Schedules . . . 141
        Enabling and Disabling Snapshot Schedules . . . 142
    Creating a Snapshot Virtual Disk Using the Simple Path . . . 142
        About the Simple Path . . . 143
        Preparing Host Servers to Create the Snapshot Using the Simple Path . . . 143
    Creating a Snapshot Virtual Disk Using the Advanced Path . . . 146
        About the Advanced Path . . . 146
        Preparing Host Servers to Create the Snapshot Using the Advanced Path . . . 147
        Creating the Snapshot Using the Advanced Path . . . 149
        Specifying Snapshot Virtual Disk Names . . . 151
        Snapshot Repository Capacity . . . 152
    Disabling a Snapshot Virtual Disk . . . 155
        Preparing Host Servers to Re-create a Snapshot Virtual Disk . . . 156
    Re-creating Snapshot Virtual Disks . . . 157
    Snapshot Rollback . . . 157
        Rules and Guidelines for Performing a Snapshot Rollback . . . 158
        Protecting Against a Failed Snapshot Rollback . . . 159
        Previous Versions of the MD Storage Manager . . . 159
        Starting a Snapshot Rollback . . . 159
        Resuming a Snapshot Rollback . . . 160
        Canceling a Snapshot Rollback . . . 161

11 Configuration: Premium Feature—Virtual Disk Copy . . . 163
    Types of Virtual Disk Copies . . . 164
        Offline Copy . . . 164
        Online Copy . . . 165
    Creating a Virtual Disk Copy for an MSCS Shared Disk . . . 166
    Virtual Disk Read/Write Permissions . . . 166
    Virtual Disk Copy Restrictions . . . 167
    Creating a Virtual Disk Copy . . . 168
        Before You Begin . . . 168
        Virtual Disk Copy and Modification Operations . . . 169
        Create Copy Wizard . . . 169
        Failed Virtual Disk Copy . . . 169
    Preferred RAID Controller Module Ownership . . . 170
        Failed RAID Controller Module . . . 170
    Copy Manager . . . 170
    Copying the Virtual Disk . . . 171
    Storage Array Performance During Virtual Disk Copy . . . 172
    Setting Copy Priority . . . 173
    Stopping a Virtual Disk Copy . . . 173
    Recopying a Virtual Disk . . . 174
        Preparing Host Servers to Recopy a Virtual Disk . . . 174
        Re-Copying a Virtual Disk . . . 175
    Removing Copy Pairs . . . 176

12 Configuration: Premium Feature—Upgrading to High-Performance-Tier . . . 177

13 Configuration: Device Mapper Multipath for Linux . . . 179
    Overview . . . 179
    Using DM Multipathing Devices . . . 180
        Prerequisite Steps . . . 180
        Device Mapper Configuration Steps . . . 181
    Limitations and Known Issues . . . 187
    Troubleshooting . . . 188

14 Management: Firmware Downloads . . . 189
    Downloading RAID Controller and NVSRAM Packages . . . 189
        Downloading Both RAID Controller and NVSRAM Firmware . . . 190
        Downloading Only NVSRAM Firmware . . . 192
    Downloading Physical Disk Firmware . . . 195
    Downloading MD1200 Series Expansion Module EMM Firmware . . . 197
    Self-Monitoring Analysis and Reporting Technology (SMART) . . . 198
    Media Errors and Unreadable Sectors . . . 198

15 Management: Installing Array Components . . . 201
    Recommended Tools . . . 201
    Front Bezel (Optional) . . . 202
        Removing the Front Bezel . . . 202
        Installing the Front Bezel . . . 202
    Hard Drives . . . 203
        Removing a Hard-Drive Blank . . . 203
        Installing a Hard-Drive Blank . . . 204
        Removing a Hard Drive . . . 204
        Installing a Hard Drive . . . 206
        Removing a Hard Drive From a Hard-Drive Carrier . . . 207
        Installing a Hard Drive Into a Hard-Drive Carrier . . . 209
    RAID Controller Module . . . 209
        Removing a RAID Controller Module Blank . . . 209
        Installing a RAID Controller Module Blank . . . 210
        Removing a RAID Controller Module . . . 211
        Installing a RAID Controller Module . . . 212
        Opening the RAID Controller Module . . . 212
        Closing the RAID Controller Module . . . 213
    RAID Controller Module Backup Battery Unit . . . 214
        Removing the RAID Controller Module Backup Battery Unit . . . 214
        Installing the RAID Controller Module Backup Battery Unit . . . 215
    Power Supply/Cooling Fan Module . . . 216
        Removing a Power Supply/Cooling Fan Module . . . 216
        Installing a Power Supply/Cooling Fan Module . . . 217
    Control Panel . . . 218
        Removing the Control Panel . . . 218
        Installing the Control Panel . . . 220
    Backplane . . . 221
        Removing the Backplane . . . 221
        Installing the Backplane . . . 224

16 Management: Firmware Inventory . . . 225
    Viewing the Firmware Inventory . . . 225

17 Management: System Interfaces . . . 227
    Microsoft Services . . . 227
        Virtual Disk Service . . . 227
        Volume Shadow-Copy Service . . . 227

18 Troubleshooting: Your Storage Array Software . . . 229
    Start-Up Routine . . . 229
    Device Health Conditions . . . 229
    Storage Array Support Data . . . 232
        Automatically Collect the Support Bundle Data . . . 233
    Collecting the Physical Disk Data . . . 234
    Event Log . . . 234
    Recovery Guru . . . 235
    Storage Array Profile . . . 236
    Viewing the Logical Associations . . . 238
    Viewing the Physical Associations . . . 238
    Finding Nodes . . . 239
    Using Go To . . . 240
    Recovering From an Unresponsive Storage Array Condition . . . 241
    Locating a Physical Disk . . . 244
    Locating an Expansion Enclosure . . . 245
    Capturing the State Information . . . 246
    SMrepassist Utility . . . 247
    Unidentified Devices . . . 248
    Recovering From an Unidentified Storage Array . . . 248
    Starting or Restarting the Host Context Agent Software . . . 251

19 Troubleshooting: Your Array . . . 253
    Safety First—For You and Your Array . . . 253
    Troubleshooting Storage Array Startup Failure . . . 253
    Troubleshooting Loss of Communication . . . 253
    Troubleshooting External Connections . . . 253
    Troubleshooting Power Supply/Cooling Fan Module . . . 254
    Troubleshooting Array Cooling Problems . . . 255
    Troubleshooting Expansion Enclosure Management Modules . . . 255
    Troubleshooting RAID Controller Modules . . . 256
    Troubleshooting Hard Drives . . . 258
    Troubleshooting Array and Expansion Enclosure Connections . . . 259
    Troubleshooting a Wet Storage Array . . . 260
    Troubleshooting a Damaged Array . . . 261
    Troubleshooting RAID Controller Modules . . . 261
        Conditions . . . 261
        Invalid Storage Array . . . 262
        ECC Errors . . . 262
        PCI Errors . . . 262
        Critical Conditions . . . 262
        Noncritical Conditions . . . 263

20 Getting Help . . . 265
    Locating Your System Service Tag . . . 265
    Contacting Dell . . . 265
    Documentation Feedback . . . 266

Index . . . 267
1 Introduction

NOTE: Unless specified, MD3200 Series represents Dell PowerVault MD3200 and
Dell PowerVault MD3220 Storage Arrays.
WARNING: See the Safety, Environmental, and Regulatory Information document
for important safety information before following any procedures listed in this document.

About This Document

This guide familiarizes you with the functions of the Dell PowerVault MD3200 Series storage array. The guide is organized according to the tasks that you must complete after receiving your MD3200 Series storage array. The tasks are:
•Planning—Information about the storage array and its features.
•Configuration—Tasks that must be completed to ensure that your storage array performs optimally.
•Management—Tasks that may be performed to ensure that the storage array components are up to date and performing properly, including removal and installation of storage array components.
•Troubleshooting—Tasks that you must complete to resolve problems that may occur with the storage array.
Additional information on these and other topics can be found in the Dell PowerVault MD3200 and MD3220 Storage Array Deployment Guide at dell.com/support/manuals.
Introduction 19

Inside the Box of the Dell PowerVault MD3200 Series Storage Array

Your MD3200 Series product package includes:
•Dell PowerVault MD3200 Series storage array
•SAS cables
•Power cables (2)
•Front bezel (optional)
•Mounting rails (2) (optional)
•MD3200 Series resource media
•Rack Installation Instructions
•Getting Started With Your System (provides an overview of enclosure features, setting up your enclosure, and technical specifications)

MD3200 Series Storage Array

The Dell PowerVault MD3200 Series is a 2U rack-mounted external redundant array of independent disks (RAID) storage array capable of accommodating up to twelve 3.5" or twenty-four 2.5" 6.0-Gbps Serial-Attached SCSI (SAS) disks. The MD3200 Series storage arrays can be daisy-chained with MD1200 Series expansion enclosures, providing access to a maximum of 120 disks (or 192 disks with Premium Feature activation) in the entire storage array. Connectivity between the storage array and the host server is provided by a Dell 6.0-Gbps SAS Host Bus Adapter (SAS 6Gb HBA).

Dell PowerVault Modular Disk Storage Manager

Dell PowerVault Modular Disk Storage Manager (MDSM) is a graphical user interface (GUI) application used to configure and manage one or more MD3200 Series storage arrays. The MDSM software is located on the MD3200 Series resource media.

Other Information You May Need

WARNING: See the safety and regulatory information that shipped with your system. Warranty information may be included within this document or as a separate document.

•The Getting Started Guide provides an overview of setting up and cabling your storage array.
•The Deployment Guide provides installation and configuration instructions for both software and hardware.
•The Storage Manager CLI Guide provides information about using the command line interface (CLI).
•The resource media contains all system management tools.
•The Systems Support Matrix provides information on supported software and hardware for MD systems. The document is available at dell.com/support/manuals.
•The Dell PowerEdge Cluster Documentation is available at dell.com/support/manuals.
•Release notes or readme files are included to provide last-minute updates to the enclosure or documentation, or advanced technical reference material intended for experienced users or technicians.
•This document, as well as the Dell PowerVault MD1200 Series Installation Guide, is available at dell.com/support/manuals for users who incorporate MD1200 expansion enclosures.
•The Rack Installation Instructions included with your rack solution describe how to install your enclosure into a rack.

NOTE: Always check for updates on dell.com/support/manuals and read the updates first because they often supersede information in other documents.
2 Planning: About Your Storage Array

Overview

The MD3200 Series storage array is designed for high availability, offering redundant access to data storage. It supports both single and dual RAID controller configurations.
The Dell PowerVault MD3200 Series storage array provides 6.0-Gbps SAS connectivity to the host server and enables access for up to eight non-redundant servers or four redundant servers.
The MD3200 Series storage array includes the following components:
•RAID controller module(s)
•PSU/fan modules
•Disk drives (also called physical disk drives in this document)
•A front bezel (optional)
•A system enclosure, into which the other components are plugged

Hardware Features

Front-Panel Features and Indicators

Figure 2-1. Front-Panel Features and Indicators—Dell PowerVault MD3200
Figure 2-2. Front-Panel Features and Indicators—Dell PowerVault MD3220
Figure 2-3. Front-Bezel Features and Indicators

1 Enclosure status LED: The enclosure status LED lights when the enclosure power is on. It lights blue during normal operation and blinks blue when a host server is identifying the enclosure or when the system identification button is pressed. It lights amber as the enclosure boots or is reset, and blinks amber when the enclosure is either in a fault state or the hosts are not using the preferred path to a virtual disk.
2 Power LED: The power LED lights green when at least one power supply is supplying power to the enclosure.
3 Split mode LED: This LED must be unlit, as the split mode function is not supported by the MD3200 Series storage arrays.
4 System identification button: The system identification button on the front control panel can be used to locate a particular enclosure within a rack. When the button is pushed, the system status indicators on the control panel and the RAID controller module(s) blink blue until the button is pushed again.
5 Hard drives: MD3200—Up to twelve 3.5" SAS hot-swappable hard drives. MD3220—Up to twenty-four 2.5" SAS hot-swappable hard drives.
6 Enclosure mode switch: The function of this switch is not applicable to your storage array. However, if MD1200 Series expansion enclosures are daisy-chained to the storage array, the enclosure mode switches of the MD1200 Series expansion enclosures must be set to the Unified-Mode position.
NOTE: This switch must be set before turning on the MD1200 Series expansion enclosure. Changing the switch setting after the expansion enclosure is turned on has no effect on the enclosure configuration until the expansion enclosure goes through a complete power cycle.

Back-Panel Features and Indicators

Figure 2-4. Back-Panel Features and Indicators—Dell PowerVault MD3200 Series

1 600 W power supply/cooling fan module
2 RAID Controller Module 0
3 RAID Controller Module 1
4 600 W power supply/cooling fan module

Hard-Drive Indicator Patterns

Figure 2-5. Hard-Drive Indicators

1 Hard-drive activity indicator (green)
2 Hard-drive status indicator (green and amber)

Drive-status indicator patterns and conditions:
•Off: The physical disk is not yet discovered by the host server, is spun down for removal, or is not supported by the RAID controller module or is not in the physical disk slot. NOTE: The drive status indicator remains off until all hard drives are initialized after system power is turned on. Drives are not ready for insertion or removal during this time.
•Steady green: Physical disk is online.
•Green flashing (on 250 ms, off 250 ms): Physical disk is being identified.
•Green flashing (on 400 ms, off 100 ms): Physical disk is rebuilding.
•Amber flashing (on 150 ms, off 150 ms): Physical disk failed.
•Flashing green, amber, and off (green on 500 ms, amber on 500 ms, off 1000 ms): Physical disk failure predicted (SMART).
•Flashing green, amber, and off (green 3 seconds, amber 3 seconds, off 3 seconds): Physical disk rebuild aborted.
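For administrators who keep notes or small scripts for interpreting these blink patterns, the timing table above can be captured as a simple lookup. This is an illustrative sketch only: the pattern keys and the drive_status helper are our own naming, not part of MD Storage Manager or any Dell tool, and the off-interval of the mixed green/amber patterns is omitted for brevity.

```python
# Illustrative lookup of the drive-status LED table above.
# Keys are (color, on_ms, off_ms); a steady light uses on_ms=None, off_ms=None.
PATTERNS = {
    ("green", None, None): "online",
    ("green", 250, 250): "being identified",
    ("green", 400, 100): "rebuilding",
    ("amber", 150, 150): "failed",
    ("green/amber", 500, 500): "failure predicted (SMART)",
    ("green/amber", 3000, 3000): "rebuild aborted",
}

def drive_status(color, on_ms=None, off_ms=None):
    """Return the condition for an observed blink pattern, per the table above."""
    return PATTERNS.get((color, on_ms, off_ms), "unknown/off")
```

For example, drive_status("green", 400, 100) maps the 400 ms on/100 ms off green flash to the rebuilding condition.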

Power Supply and Cooling Fan Features

The MD3200 Series storage array includes two integrated, hot-swappable power supply/cooling fan modules. Both modules must be installed to ensure proper cooling. The system requires at least one of the cooling fans to function to avoid overheating.
A power supply/cooling fan module can be replaced without powering down the system. For information on removing and installing the modules, see "Power Supply/Cooling Fan Module" on page 216.
CAUTION: A power supply/cooling fan module can be removed from a powered-on enclosure for a maximum period of 5 minutes. Beyond that time, the system may automatically shut down to prevent damage.

Power Indicator Codes and Features

Figure 2-6. Power Indicator Codes and Features

1 DC power LED: Lights green when the DC output voltage is within the limit. If this LED is off, the DC output voltage is not within the limit.
2 Power supply/cooling fan fault LED: Lights amber when the DC output voltage is not within the limit or a fault with the fan is detected. If this LED is off, no fault condition is present.
3 AC power LED: Lights green when the AC input voltage is within the limit. If this LED is off, either there is no power or the AC input voltage is not within the limit.
4 Power connector: Connect the external power supply to this connector.
5 Power switches (2): The power switch controls the power supply output to the enclosure.
3 Planning: RAID Controller Modules

RAID Controller Modules

The RAID controller modules provide high-performance, advanced virtual disk configuration, and fault-tolerant disk subsystem management. Each RAID controller module contains 2 GB or 4 GB of mirrored cache for high availability and is protected by a battery-powered cache offload mechanism.
NOTE: The 4 GB mirrored cache is an optional feature.
RAID controller modules provide the following data path and enclosure management functions:
•Monitoring and controlling enclosure environment elements (temperature, fans, power supplies, and enclosure LEDs)
•Controlling access to the physical disks
•Communicating enclosure attributes and states to the host server and management station
Each RAID controller module has multiple SAS IN ports for host access. The ports provide redundant host connections and support a high-availability storage environment. Various configurations can be utilized, in both single controller (simplex) and dual controller (duplex) modes, to connect the storage enclosure to hosts depending on specific redundancy needs.
For information on cabling, see the MD3200 and MD3220 Series storage array’s Deployment Guide at dell.com/support/manuals.
RAID Controller Module Connectors and Features
Figure 3-1. MD3200 Series SAS RAID Controller Module
Item Component Function
1 SAS OUT port—Provides SAS connection for cabling to a downchain expansion enclosure.
2 SAS IN port 0—Provides host-to-controller SAS connection.
3 SAS IN port 1—Provides host-to-controller SAS connection.
4 SAS IN port 2—Provides host-to-controller SAS connection.
5 SAS IN port 3—Provides host-to-controller SAS connection.
6 MAC Address label—Provides MAC addresses of the management port.
7 Debug port—Dell support only.
8 SAS OUT port link/fault LED—Lights green when all four links are connected. Lights amber when one to three links are disconnected. Off when all links in the port are disconnected or the cable is disconnected.
9 Controller power LED—Lights green when controller power is on. Off when the controller is not powered.
10 SAS IN 0 port link/fault LED—Lights green when all four links are connected. Lights amber when one to three links are disconnected. Off when all links in the port are disconnected or the cable is disconnected.
11 Controller fault LED—Lights amber when a controller fault is detected. Off when the controller is operating normally.
12 System identification LED—Blinks blue when the system identification push-button on the enclosure front panel is pressed.
13 SAS IN 1 port link/fault LED—Lights green when all four links are connected. Lights amber when one to three links are disconnected. Off when all links in the port are disconnected or the cable is disconnected.
14 Cache active or cache offload LED—Lights green when on-board controller memory contains data. If AC power fails, this LED changes to indicate Cache Offload status. If the password reset function has successfully changed the password, this LED flashes on and off briefly.
15 Battery fault LED—Lights amber when the battery backup unit or battery has failed. Off when the battery backup unit is operating normally.
16 Password Reset switch—Activating this switch deletes the password.
17 SAS IN 2 port link/fault LED—Lights green when all four links are connected. Lights amber when one to three links are disconnected. Off when all links in the port are disconnected or the cable is disconnected.
18 SAS IN 3 port link/fault LED—Lights green when all four links are connected. Lights amber when one to three links are disconnected. Off when all links in the port are disconnected or the cable is disconnected.
19 Management port speed LED—Lights green when the Ethernet connection is operating at 1000 Mbps. Lights amber when the Ethernet connection is operating at 100 Mbps. Off when the Ethernet connection is operating at 10 Mbps or is not active.
20 Management port activity LED—Lights green when the Ethernet connection is active. Off when the Ethernet connection is not active.
21 Management port Ethernet connector—Provides a 100/1000 Mbps Ethernet connection for out-of-band management of the enclosure.
RAID Controller Module—Additional Features

Battery Backup Unit
Each RAID controller contains a two-cell Lithium ion nanopolymer battery backup unit (BBU). It provides power to the RAID controller module in the event of a power outage. For information on removing and installing the BBU, see "RAID Controller Module Backup Battery Unit" on page 214.
NOTE: For virtual disks, the RAID controller firmware changes the data cache
setting based on the state of the battery. If the battery is missing or does not have sufficient charge, the controller flushes the cache and sets the write cache attribute to Write Through for all virtual disks. When the battery is replaced, Write Back is re-enabled.
Storage Array Thermal Shutdown
The system automatically shuts down when system temperature exceeds the safe threshold. The battery backup unit protects against data loss by providing power to offload cache to non-volatile memory in the event of power loss. It is not necessary to shut down any MD1200 Series expansion enclosures attached to the storage array when thermal shutdown occurs.
Temperature threshold values determine the temperature at which shutdown occurs. These thresholds cannot be changed.
Table 3-1. Shutdown Threshold Types
• Nominal failure threshold—A critical event is set.
• Maximum failure threshold—Shutdown of the system power supplies occurs within 3 minutes.
• Shutdown threshold—Shutdown of the system power supplies occurs within 5 seconds.
System Password Reset
The storage array password can be reset if it is forgotten. To reset the password, push and hold down the password reset switch for at least 5 seconds. The password is deleted. See Figure 3-1 to locate the password reset switch.
The RAID controller module allows you to change the password. For more information about setting your password, see "Setting a Password" on page 71.
NOTE: The reset switch can be accessed by using a small object such as the tip of
a pen.
Cache Functions and Features

Cache Mirroring
The cache mirroring function copies accepted host-write data from the primary controller to the partner controller. This action ensures that host-write data is safely mirrored to the partner controller before successful completion status is returned to the host. If a controller fails, the surviving controller safely retains all mirrored data. Cache mirroring is enabled by default.
Write-Back Cache
In Write-back Cache, write operations result in a completion signal being sent to the host operating system as soon as the cache receives the data to be written. The target physical disk receives the data at a more appropriate time in order to increase controller performance. In dual-active controller configurations with Write-back Caching enabled, the write data is always mirrored to the cache of the second controller before completion status is issued to the host initiator. Write-Back Cache is enabled by default unless cache mirroring is disabled.
Write-Through Cache
In write-through cache, data is written to the physical disk before completion status is returned to the host operating system. Write-through cache is considered more robust than write-back cache, since a power failure is less likely to cause loss of data. The RAID controller automatically switches to write-through if cache mirroring is disabled or if the battery is missing or has a fault condition.
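The difference between the two policies is when completion status is returned to the host, and when the controller falls back from write-back to write-through. The following minimal sketch (hypothetical code, not Dell firmware; all class and method names are my own) illustrates that behavior as described above:

```python
# Hypothetical sketch of the two cache write policies described above.
# Per the manual, write-back requires a healthy battery and cache
# mirroring; otherwise the controller reverts to write-through.

class Controller:
    def __init__(self, policy="write-back", battery_ok=True, mirroring=True):
        if policy == "write-back" and (not battery_ok or not mirroring):
            policy = "write-through"      # automatic fallback
        self.policy = policy
        self.cache, self.disk = [], []

    def write(self, block):
        if self.policy == "write-back":
            self.cache.append(block)      # data lands in (mirrored) cache...
            return "complete"             # ...completion returned immediately
        self.disk.append(block)           # write-through: commit to disk first
        return "complete"

    def flush(self):
        """Destage cached writes to disk at a convenient time."""
        self.disk.extend(self.cache)
        self.cache.clear()

ctrl = Controller(policy="write-back", battery_ok=False)
print(ctrl.policy)  # write-through (fallback because battery is not healthy)
```

The sketch also shows why write-through is considered more robust: data reaches the physical disk before the host sees a completion, so a power failure cannot strand acknowledged writes in volatile cache.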
4 Planning: MD3200 Series Storage Array Terms and Concepts
This chapter explains the terms and concepts used for configuration and operation of the MD3200 Series storage arrays.
Physical Disks, Virtual Disks, and Disk Groups
Physical disks in your storage array provide the physical storage capacity for your data. Before you can begin writing data to the storage array, you must configure the physical storage capacity into logical components, called disk groups and virtual disks.
A disk group is a set of physical disks upon which multiple virtual disks are created. The maximum number of physical disks supported in a disk group is 120 disks (or 192 drives with Premium Feature activation) for RAID 0, RAID 1, and RAID 10, and 30 drives for RAID 5 and RAID 6. You can create disk groups from unconfigured capacity on your storage array.
A virtual disk is a partition in a disk group that is made up of contiguous data segments of the physical disks in the disk group. A virtual disk consists of data segments from all physical disks in the disk group. Virtual disks and disk groups are set up according to how you plan to organize your data. For example, you may have one virtual disk for inventory, a second virtual disk for financial and tax information, and so on.
All virtual disks in a disk group support the same RAID level. The storage array supports up to 255 virtual disks (minimum size of 10 MB each) that can be assigned to host servers. Each virtual disk is assigned a Logical Unit Number (LUN) that is recognized by the host operating system.
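The hierarchy described above (physical disks grouped into a disk group, virtual disks carved from the group, each assigned a LUN) can be modeled in a few lines. This is an illustrative sketch with invented names, not the MDSM API; the 255-virtual-disk and 10 MB limits are the values stated above:

```python
# Illustrative model of the manual's hierarchy:
# physical disks -> disk group -> virtual disks -> LUNs.

MAX_VIRTUAL_DISKS = 255   # array-wide limit stated in the manual
MIN_VD_SIZE_MB = 10       # minimum virtual disk size stated in the manual

class DiskGroup:
    def __init__(self, physical_disks, raid_level):
        self.physical_disks = physical_disks
        self.raid_level = raid_level      # all VDs in a group share this level
        self.virtual_disks = []

class StorageArray:
    def __init__(self):
        self.next_lun = 0
        self.vd_count = 0

    def create_virtual_disk(self, group, size_mb):
        if size_mb < MIN_VD_SIZE_MB:
            raise ValueError("virtual disk must be at least 10 MB")
        if self.vd_count >= MAX_VIRTUAL_DISKS:
            raise ValueError("array supports at most 255 virtual disks")
        vd = {"lun": self.next_lun, "size_mb": size_mb,
              "raid_level": group.raid_level}
        group.virtual_disks.append(vd)
        self.next_lun += 1
        self.vd_count += 1
        return vd

array = StorageArray()
group = DiskGroup(physical_disks=list(range(4)), raid_level=5)
vd = array.create_virtual_disk(group, 1024)
print(vd["lun"])  # 0
```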
Physical Disks
Only Dell supported 6.0-Gbps SAS physical disks are supported in the storage array. If the storage array detects unsupported physical disks, it marks the disk as unsupported and the physical disk becomes unavailable for all operations.
NOTE: The MD3200 Series storage array must contain at least two physical disks for proper operation. This is necessary because the physical disks are used to store configuration information.
Physical Disk States
Table 4-1 describes the various states of the physical disk, which are recognized by the storage array and reported in the MDSM application.
Table 4-1. RAID Controller Physical Disk States
• Optimal (Assigned)—The physical disk in the indicated slot is configured as part of a disk group. LED: steady green.
• Optimal (Unassigned)—The physical disk in the indicated slot is unused and available to be configured. LED: steady green.
• Optimal (Hot Spare standby)—The physical disk in the indicated slot is configured as a hot spare. LED: steady green.
• Optimal (Hot Spare in use)—The physical disk in the indicated slot is in use as a hot spare within a disk group. LED: steady green.
• Failed (Assigned, Unassigned, Hot Spare in use, or Hot Spare standby)—The physical disk in the indicated slot has failed because of an unrecoverable error, an incorrect drive type or drive size, or its operational state being set to failed. LED: amber flashing (150 ms).
• Replaced (Assigned)—The physical disk in the indicated slot has been replaced and is ready to be, or is actively being, configured into a disk group. LED: green flashing (on 400 ms, off 100 ms).
Table 4-1. RAID Controller Physical Disk States (continued)
• Pending Failure (Assigned, Unassigned, Hot Spare in use, or Hot Spare standby)—A Self-Monitoring Analysis and Reporting Technology (SMART) error is detected on the physical disk in the indicated slot. LED: green flashing (500 ms), amber (500 ms), off (1000 ms).
• Offline (Not applicable)—The physical disk has either been spun down or had a rebuild aborted by user request. LED: green flashing (3000 ms), amber (3000 ms), off (3000 ms).
• Identify (Assigned, Unassigned, Hot Spare in use, or Hot Spare standby)—The physical disk is being identified. LED: green flashing (250 ms).
• N/A—The indicated slot is empty, or the array cannot detect the physical disk.
If a disk drive rebuild fails because of a source drive failure or because the drive is too small, the MDSM reports a failure of the physical disk even though the LED state on the drive indicates the rebuild was aborted (green for 3 seconds, amber for 3 seconds, then off for 3 seconds).
Self-Monitoring Analysis and Reporting Technology
SMART monitors the internal performance of all physical disk components to detect faults indicating the potential for physical disk failure. SMART uses this information to report whether failure is imminent so that a physical disk can be replaced before failure occurs. The storage array monitors all attached drives and notifies you when a predicted failure is reported by a physical disk.
Virtual Disks and Disk Groups
When configuring a storage array, you must:
1 Organize the physical disks into disk groups.
2 Create virtual disks within these disk groups.
3 Provide host server access.
4 Create mappings to associate the virtual disks with the host servers.
NOTE: Host server access must be created before mapping virtual disks.
Disk groups are always created in the unconfigured capacity of a storage array. Unconfigured capacity is the available physical disk space not already assigned in the storage array.
Virtual disks are created within the free capacity of a disk group. Free capacity is the space in a disk group that has not been assigned to a virtual disk.
Virtual Disk States
The storage array recognizes the following virtual disk states.
Table 4-2. RAID Controller Virtual Disk States
• Optimal—The virtual disk contains physical disks that are all online.
• Degraded—A virtual disk with a redundant RAID level contains an inaccessible physical disk. The system can still work properly, but performance may be affected and additional disk failures may result in data loss.
• Offline—A virtual disk with one or more member disks in an inaccessible (failed, missing, or offline) state. Data on the virtual disk is no longer accessible.
• Force online—The storage array forces a virtual disk that is in an Offline state to an Optimal state. If all the member physical disks are not available, the storage array forces the virtual disk to a Degraded state. The storage array can force a virtual disk to an Online state only when a sufficient number of physical disks are available to support the virtual disk.
RAID Levels
RAID levels determine the way in which data is written to physical disks. Different RAID levels provide different levels of accessibility, redundancy, and capacity.
Using multiple physical disks has the following advantages over using a single physical disk:
• Placing data on multiple physical disks (striping) allows input/output (I/O) operations to occur simultaneously and improves performance.
• Storing redundant data on multiple physical disks using mirroring or parity supports reconstruction of lost data if an error occurs, even if that error is the failure of a physical disk.
Each RAID level provides different performance and protection. You must select a RAID level based on the type of application, access, fault tolerance, and data you are storing.
The storage array supports RAID levels 0, 1, 5, 6, and 10. The maximum number of physical disks that can be used in a disk group depends on the RAID level:
• 192 for RAID levels 0, 1, and 10
• 30 for RAID levels 5 and 6
RAID Level Usage
To ensure best performance, you must select an optimal RAID level when you create a system physical disk. The optimal RAID level for your disk array depends on:
• Number of physical disks in the disk array
• Capacity of the physical disks in the disk array
• Need for redundant access to the data (fault tolerance)
• Disk performance requirements
RAID 0
RAID 0 uses disk striping to provide high data throughput, especially for large files in an environment that requires no data redundancy. RAID 0 breaks the data down into segments and writes each segment to a separate physical disk. I/O performance is greatly improved by spreading the I/O load across many physical disks. Although it offers the best performance of any RAID level, RAID 0 lacks data redundancy. Select this option only for non-critical data, because failure of one physical disk results in the loss of all data. Examples of RAID 0 applications include video editing, image editing, prepress applications, or any application requiring high bandwidth.
RAID 1
RAID 1 uses disk mirroring so that data written to one physical disk is simultaneously written to another physical disk. This RAID level offers fast performance and the best data availability, but also the highest disk overhead. RAID 1 is recommended for small databases or other applications that do not require large capacity, for example accounting, payroll, or financial applications. RAID 1 provides full data redundancy.
RAID 5
RAID 5 uses parity and striping data across all physical disks (distributed parity) to provide high data throughput and data redundancy, especially for small random access. This is a versatile RAID level and is suited for multi-user environments where typical I/O size is small and there is a high proportion of read activity, such as file, application, database, web, e-mail, news, and intranet servers.
RAID 6
RAID 6 is similar to RAID 5 but provides an additional parity disk for better redundancy. This is the most versatile RAID level and is suited for multi-user environments where typical I/O size is small and there is a high proportion of read activity. RAID 6 is recommended when large physical disks are used, or when a large number of physical disks are used in a disk group.
RAID 10
RAID 10, a combination of RAID 1 and RAID 0, uses disk striping across mirrored disks. It provides high data throughput and complete data redundancy. Utilizing an even number of physical disks (four or more) creates a RAID level 10 disk group and/or virtual disk. Because RAID levels 1 and 10 use disk mirroring, half of the capacity of the physical disks is utilized for mirroring, leaving the remaining half for actual storage. RAID 10 is automatically used when a RAID level of 1 is chosen with four or more physical disks. RAID 10 works well for medium-sized databases or any environment that requires high performance, fault tolerance, and moderate-to-medium capacity.
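The capacity trade-offs among these levels follow simple arithmetic: striping uses all disks, mirroring halves capacity, and parity costs one (RAID 5) or two (RAID 6) disks' worth of space. The sketch below is an illustration of that arithmetic assuming equal-size disks, not a Dell sizing tool:

```python
# Rough usable-capacity arithmetic for the RAID levels the array supports,
# assuming all disks in the group are the same size.

def usable_capacity(raid_level, n_disks, disk_gb):
    if raid_level == 0:                   # striping only, no redundancy
        return n_disks * disk_gb
    if raid_level in (1, 10):             # mirroring halves capacity
        return n_disks * disk_gb // 2
    if raid_level == 5:                   # one disk's worth of parity
        return (n_disks - 1) * disk_gb
    if raid_level == 6:                   # two disks' worth of parity
        return (n_disks - 2) * disk_gb
    raise ValueError("unsupported RAID level")

for level, n in [(0, 4), (10, 4), (5, 4), (6, 6)]:
    print(f"RAID {level}, {n} x 1000 GB ->", usable_capacity(level, n, 1000), "GB usable")
```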
Segment Size
Disk striping enables data to be written across multiple physical disks. Disk striping enhances performance because striped disks are accessed simultaneously.
The segment size or stripe element size specifies the size of data in a stripe written to a single disk. The MD3200 Series array supports stripe element sizes of 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, and 512 KB. The default stripe element size is 128 KB.
Stripe width, or depth, refers to the number of disks involved in an array where striping is implemented. For example, a four-disk group with disk striping has a stripe width of four.
NOTE: Although disk striping delivers excellent performance, striping alone does
not provide data redundancy.
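The relationship between segment size and stripe width determines which disk services a given logical offset. The following sketch (my own illustration of generic striping arithmetic, not controller firmware) maps a logical offset to a disk and an on-disk offset using the segment sizes listed above:

```python
# Sketch of how striping maps a logical offset to a disk, given the
# segment (stripe element) size and the stripe width (number of disks).

VALID_SEGMENT_KB = (8, 16, 32, 64, 128, 256, 512)  # sizes the array supports

def locate(lba_kb, segment_kb, stripe_width):
    """Return (disk_index, offset_kb_on_disk) for a logical offset in KB."""
    assert segment_kb in VALID_SEGMENT_KB
    segment = lba_kb // segment_kb        # which segment overall
    stripe = segment // stripe_width      # which stripe (row) it falls in
    disk = segment % stripe_width         # which disk in the group
    offset = stripe * segment_kb + lba_kb % segment_kb
    return disk, offset

# With the default 128 KB segment on a four-disk group, logical offset
# 384 KB falls on the fourth disk (index 3), at offset 0 on that disk.
print(locate(384, 128, 4))  # (3, 0)
```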
Virtual Disk Operations

Virtual Disk Initialization
Every virtual disk must be initialized. Initialization can be done in the foreground or the background. A maximum of four virtual disks can be initialized concurrently on each RAID controller module.
Background Initialization
The storage array executes a background initialization when the virtual disk is created to establish parity, while allowing full host server access to the virtual disks. Background initialization does not run on RAID 0 virtual disks. The background initialization rate is controlled by MDSM. To change the rate of background initialization, you must stop any existing background initialization. The rate change is implemented when the background initialization restarts automatically.
Foreground Initialization
The storage array supports foreground initialization for virtual disks. All access to the virtual disk is blocked during foreground initialization. During foreground initialization, zeros (0x00) are written to every sector of the virtual disk. The virtual disk is available after foreground initialization is completed.
Consistency Check
A consistency check verifies the correctness of data in a redundant array (RAID levels 1, 5, 6, and 10). For example, in a system with parity, checking consistency involves computing the data on one physical disk and comparing the results to the contents of the parity physical disk.
A consistency check is similar to a background initialization. The difference is that background initialization cannot be started or stopped manually, while consistency check can.
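For parity RAID levels, the comparison described above amounts to recomputing parity from the data strips and checking it against the stored parity strip. A toy XOR-parity version (an illustration of the principle; real controllers do this in firmware against on-disk data) looks like this:

```python
# Toy XOR-parity consistency check, illustrating the comparison the
# manual describes for parity RAID levels.

from functools import reduce

def parity_of(data_strips):
    """XOR corresponding bytes of each data strip to compute parity."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data_strips))

def consistent(data_strips, parity_strip):
    """A stripe is consistent if recomputed parity matches stored parity."""
    return parity_of(data_strips) == parity_strip

strips = [b"\x0f\xf0", b"\xff\x00", b"\x55\xaa"]
parity = parity_of(strips)
print(consistent(strips, parity))        # True
print(consistent(strips, b"\x00\x00"))   # False
```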
NOTE: It is recommended that you run data consistency checks on a redundant array at least once a month. This allows detection and automatic replacement of unreadable sectors. Finding an unreadable sector during a rebuild of a failed physical disk is a serious problem, because the system does not have the redundancy to recover the data.
Media Verification
Another background task performed by the storage array is media verification of all configured physical disks in a disk group. The storage array uses the Read operation to perform verification on the space configured in virtual disks and the space reserved for the metadata.
Cycle Time
The media verification operation runs only on selected disk groups, independent of other disk groups. Cycle time is the time taken to complete verification of the metadata region of the disk group and all virtual disks in the disk group for which media verification is configured. The next cycle for a disk group starts automatically when the current cycle completes. You can set the cycle time for a media verification operation between 1 and 30 days. The storage controller throttles the media verification I/O accesses to disks based on the cycle time.
The storage array tracks the cycle for each disk group independent of other disk groups on the controller and creates a checkpoint. If the media verification operation on a disk group is preempted or blocked by another operation on the disk group, the storage array resumes after the current cycle. If the media verification process on a disk group is stopped due to a RAID controller module restart, the storage array resumes the process from the last checkpoint.
Virtual Disk Operations Limit
The maximum number of active, concurrent virtual disk processes per RAID controller module installed in the storage array is four. This limit is applied to the following virtual disk processes:
• Background initialization
• Foreground initialization
• Consistency check
• Rebuild
• Copy back
If a redundant RAID controller module fails with existing virtual disk processes, the processes on the failed controller are transferred to the peer controller. A transferred process is placed in a suspended state if there are four active processes on the peer controller. The suspended processes are resumed on the peer controller when the number of active processes falls below four.
Disk Group Operations

RAID Level Migration
You can migrate from one RAID level to another depending on your requirements. For example, fault-tolerant characteristics can be added to a stripe set (RAID 0) by converting it to a RAID 5 set. MDSM provides information about RAID attributes to assist you in selecting the appropriate RAID level. You can perform a RAID level migration while the system is still running and without rebooting, which maintains data availability.
Segment Size Migration
Segment size refers to the amount of data (in KB) that the storage array writes on a single physical disk in a virtual disk before writing data on the next physical disk. Valid values for the segment size are 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, and 512 KB.
Dynamic segment size migration enables the segment size of a given virtual disk to be changed. A default segment size is set when the virtual disk is created, based on such factors as the RAID level and expected usage. You can change the default value (128 KB) if segment size usage does not match your needs.
When considering a segment-size change, two scenarios illustrate different approaches to the limitations:
• If I/O activity stretches beyond the segment size, you can increase it to reduce the number of disks required for a single I/O. Using a single physical disk for a single request frees disks to service other requests, especially when you have multiple users accessing a database or storage environment.
• If you use the virtual disk in a single-user, large I/O environment (such as for multimedia application storage), performance can be optimized when a single I/O request is serviced with a single data stripe (the segment size multiplied by the number of physical disks in the disk group used for data storage). In this case, multiple disks are used for the same request, but each disk is only accessed once.
Virtual Disk Capacity Expansion
When you configure a virtual disk, you select a capacity based on the amount of data you expect to store. However, you may need to increase the virtual disk capacity for a standard virtual disk by adding free capacity to the disk group. This creates more unused space for new virtual disks or to expand existing virtual disks.
Disk Group Expansion
Because the storage array supports hot pluggable physical disks, you can add two physical disks at a time for each disk group while the storage array remains online. Data remains accessible on virtual disk groups, virtual disks, and physical disks throughout the operation. The data and increased unused free space are dynamically redistributed across the disk group. RAID characteristics are also reapplied to the disk group as a whole.
Disk Group Defragmentation
Defragmenting consolidates the free capacity in the disk group into one contiguous area. Defragmentation does not change the way in which the data is stored on the virtual disks.
Disk Group Operations Limit
The maximum number of active, concurrent disk group processes per installed RAID controller module is one. This limit is applied to the following disk group processes:
• Virtual disk RAID level migration
• Segment size migration
• Virtual disk capacity expansion
• Disk group expansion
• Disk group defragmentation
If a redundant RAID controller module fails with an existing disk group process, the process on the failed controller is transferred to the peer controller. A transferred process is placed in a suspended state if there is an active disk group process on the peer controller. The suspended processes are resumed when the active process on the peer controller completes or is stopped.
NOTE: If you try to start a disk group process on a controller that does not have an
existing active process, the start attempt fails if the first virtual disk in the disk group is owned by the other controller and there is an active process on the other controller.
RAID Background Operations Priority
The storage array supports a common configurable priority for the following RAID operations:
• Background initialization
• Rebuild
• Copy back
• Virtual disk capacity expansion
• RAID level migration
• Segment size migration
• Disk group expansion
• Disk group defragmentation
The priority of each of these operations can be changed to address performance requirements of the environment in which the operations are to be executed.
NOTE: Setting a high priority level impacts storage array performance. It is not
advisable to set priority levels at the maximum level. Priority must also be assessed in terms of impact to host server access and time to complete an operation. For example, the longer a rebuild of a degraded virtual disk takes, the greater the risk for secondary disk failure.
Virtual Disk Migration and Disk Roaming
Virtual disk migration is moving a virtual disk or a hot spare from one array to another by detaching the physical disks and re-attaching them to the new array. Disk roaming is moving a physical disk from one slot to another on the same array.
Disk Migration
You can move virtual disks from one array to another without taking the target array offline. However, the disk group being migrated must be offline before you perform disk migration. If the disk group is not offline prior to migration, the source array holding the physical and virtual disks within the disk group marks them as missing. However, the disk groups themselves migrate to the target array.
An array can import a virtual disk only if it is in an optimal state. You can move virtual disks that are part of a disk group only if all members of the disk group are being migrated. The virtual disks automatically become available after the target array has finished importing all the disks in the disk group.
When you migrate a physical disk or a disk group from one MD3200 array to another, the MD3200 array you migrate to recognizes any data structures and/or metadata you had in place on the migrating MD3200 array. However, if you are migrating from any device other than an MD3200 Series storage array, the MD3200 Series storage array does not recognize the migrating metadata and that data is lost. In this case, the MD3200 Series storage array initializes the physical disks and marks them as unconfigured capacity.
NOTE: Only disk groups and associated virtual disks with all member physical disks present can be migrated from one storage array to another. It is recommended that you only migrate disk groups that have all their associated member virtual disks in an optimal state.
NOTE: The number of physical disks and virtual disks that a storage array supports
limits the scope of the migration.
Use either of the following methods to move disk groups and virtual disks:
• Hot virtual disk migration—Disk migration with the destination storage array power turned on.
• Cold virtual disk migration—Disk migration with the destination storage array power turned off.
NOTE: To ensure that the migrating disk groups and virtual disks are correctly
recognized when the target storage array has an existing physical disk, use hot virtual disk migration.
When attempting virtual disk migration, follow these recommendations:
• Moving physical disks to the destination array for migration—When inserting drives into the destination storage array during hot virtual disk migration, wait for the inserted physical disk to be displayed in MDSM before inserting the next physical disk.
WARNING: Without the delay between drive insertions, the storage array can
become unstable and manageability is temporarily lost.
• Migrating virtual disks from multiple storage arrays into a single storage array—When migrating virtual disks from multiple or different storage arrays into a single destination storage array, move all of the physical disks from the same storage array as a set into the new destination storage array. Ensure that all of the physical disks from a storage array are migrated to the destination storage array before starting migration from the next storage array.
NOTE: If the drive modules are not moved as a set to the destination storage
array, the newly relocated disk groups may not be accessible.
• Migrating virtual disks to a storage array with no existing physical disks—Turn off the destination storage array when migrating disk groups or a complete set of physical disks from a storage array to another storage array that has no existing physical disks. After the destination storage array is turned on and has successfully recognized the newly migrated physical disks, migration operations can continue.
NOTE: Disk groups from multiple storage arrays must not be migrated at the
same time to a storage array that has no existing physical disks.
• Enabling premium features before migration—Before migrating disk groups and virtual disks, enable the required premium features on the destination storage array. If a disk group is migrated from an MD3200 storage array that has a premium feature enabled and the destination array does not have this feature enabled, an Out of Compliance error message may be generated.
Disk Roaming
You can move physical disks within an array. The RAID controller module automatically recognizes the relocated physical disks and logically places them in the proper virtual disks that are part of the disk group. Disk roaming is permitted when the RAID controller module is either online or powered off.
NOTE: The disk group must be exported before moving the physical disks.
Advanced Features
The RAID enclosure supports several advanced features:
• Virtual Disk Snapshots
• Virtual Disk Copy
• High Performance Tier
NOTE: Virtual Disk Snapshot, Virtual Disk Copy, and High Performance Tier are
premium features that must be activated separately. If you have purchased these features, an activation card is supplied that contains instructions for enabling this functionality.

Host Server-to-Virtual Disk Mapping

The host server attached to a storage array accesses various virtual disks on the storage array through its host ports. Specific virtual disk-to-LUN mappings to an individual host server can be defined. In addition, the host server can be part of a host group that shares access to one or more virtual disks.
You can manually configure a host server-to-virtual disk mapping. When you configure host server-to-virtual disk mapping, consider these guidelines:
• You can define one host server-to-virtual disk mapping for each virtual disk in the storage array.
• Host server-to-virtual disk mappings are shared between RAID controller modules in the storage array.
• A unique LUN must be used by a host group or host server to access a virtual disk.
• Not every operating system has the same number of LUNs available for use.
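The mapping guidelines above can be sketched in code. This is an illustrative model only, assuming a simple per-host mapping table; the class and method names are hypothetical and are not part of MDSM or any Dell API:

```python
# Illustrative sketch of the LUN-mapping rules described above.
# All names here are hypothetical, not MDSM or firmware APIs.

class MappingError(Exception):
    pass

class HostMappings:
    """Tracks virtual disk-to-LUN mappings for one host or host group."""

    def __init__(self, max_luns=256):
        # Not every operating system supports the same number of LUNs.
        self.max_luns = max_luns
        self.lun_to_disk = {}
        self.mapped_disks = set()

    def map_virtual_disk(self, virtual_disk, lun):
        # A unique LUN must be used to access each virtual disk.
        if lun in self.lun_to_disk:
            raise MappingError(f"LUN {lun} is already in use by "
                               f"'{self.lun_to_disk[lun]}'")
        # One host server-to-virtual disk mapping per virtual disk.
        if virtual_disk in self.mapped_disks:
            raise MappingError(f"'{virtual_disk}' already has a mapping")
        if not 0 <= lun < self.max_luns:
            raise MappingError(f"LUN {lun} exceeds the host OS limit")
        self.lun_to_disk[lun] = virtual_disk
        self.mapped_disks.add(virtual_disk)
```

Attempting to reuse a LUN or to map the same virtual disk twice raises `MappingError`, mirroring the uniqueness rules stated above.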

Host Types

A host server is a server that accesses a storage array. Host servers are mapped to the virtual disks. Host servers have the following attributes:
• Host name—A name that uniquely identifies the host server.
• Host group (used in Cluster solutions only)—Two or more host servers associated together to share access to the same virtual disks. This host group is a logical entity you can create in MDSM. All host servers in a host group must be running the same operating system.
• Host type—The operating system running on the host server.

Snapshot Virtual Disks

A snapshot is a point-in-time image of a virtual disk. The snapshot provides an image of the virtual disk at the time the snapshot was created. You create a snapshot so that an application (for example, a backup application) can access the snapshot and read the data while the source virtual disk remains
online and user-accessible. When the backup is completed, the snapshot virtual disk is no longer needed. You can create up to four snapshots per virtual disk.
Snapshots are used to recover previous versions of files that have changed since the snapshot was taken. Snapshots are implemented using a copy-on-write algorithm, which makes a backup copy of the original data the instant it is about to be modified. Data on a virtual disk is copied to the snapshot repository before it is modified. Snapshots can be created instantaneously, or can be scheduled, and take up less overhead than a full physical copy process.
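The copy-on-write scheme described above can be illustrated with a short sketch. This is a conceptual model only, assuming block-level granularity and an in-memory repository; it is not MDSM or firmware code:

```python
# Conceptual sketch of the copy-on-write snapshot scheme described above.
# Names and data structures are illustrative, not actual array internals.

class SnapshotVirtualDisk:
    """Point-in-time image of a source disk using copy-on-write."""

    def __init__(self, source):
        self.source = source          # list of blocks (the live virtual disk)
        self.repository = {}          # block index -> original block contents

    def write_source(self, index, data):
        # Copy the original block to the snapshot repository before it is
        # modified, so the snapshot still sees the point-in-time contents.
        if index not in self.repository:
            self.repository[index] = self.source[index]
        self.source[index] = data

    def read_snapshot(self, index):
        # Unmodified blocks are read straight from the source disk, so the
        # repository only ever holds blocks that changed after the snapshot.
        return self.repository.get(index, self.source[index])
```

Because only modified blocks are copied, the repository stays far smaller than a full physical copy of the source virtual disk, which is why snapshots carry less overhead.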

Snapshot Repository Virtual Disk

When you create a snapshot virtual disk, a snapshot repository virtual disk is automatically created. A snapshot repository is a virtual disk created in the storage array as a resource for a snapshot virtual disk. A snapshot repository virtual disk contains snapshot virtual disk metadata and copy-on-write data for a particular snapshot virtual disk. The repository supports one snapshot only.
You cannot select a snapshot repository virtual disk as a source virtual disk or as a target virtual disk in a virtual disk copy. If you select a snapshot source virtual disk as the target virtual disk of a virtual disk copy, you must disable all snapshot virtual disks associated with the source virtual disk.
CAUTION: Before using the Snapshot Virtual Disks Premium Feature in a
Windows Clustered configuration, you must map the snapshot virtual disk to the cluster node that owns the source virtual disk. This ensures that the cluster nodes correctly recognize the snapshot virtual disk.
CAUTION: Mapping the snapshot virtual disk to the node that does not own the
source virtual disk before the snapshot enabling process is completed can result in the operating system incorrectly identifying the snapshot virtual disk. This can result in data loss or an inaccessible snapshot.
For more information on mapping the snapshot virtual disk to the secondary node, see the Dell PowerVault MD3200 and MD3220 Storage Arrays With Microsoft Windows Server Failover Clusters on dell.com/support/manuals.

Virtual Disk Copy

Virtual disk copy is a premium feature that enables you to:
• Back up data
• Copy data from disk groups that use smaller-capacity physical disks to disk groups that use greater-capacity physical disks
• Restore snapshot virtual disk data to the source virtual disk
Virtual disk copy generates a full copy of data from the source virtual disk to the target virtual disk in a storage array and can be performed either online or offline.
Source Virtual Disk
When you create a virtual disk copy, a copy pair consisting of a source virtual disk and a target virtual disk is created on the same storage array. When a virtual disk copy is started, data from the source virtual disk is copied completely to the target virtual disk.
Target Virtual Disk
When you start a virtual disk copy, the target virtual disk maintains a copy of the data from the source virtual disk. You can choose whether to use an existing virtual disk or create a new virtual disk as the target virtual disk. If you choose an existing virtual disk as the target, all data on the target is overwritten. A target virtual disk can be a standard virtual disk or the source virtual disk of a failed or disabled snapshot virtual disk.
NOTE: The target virtual disk capacity must be equal to or greater than the source
virtual disk capacity.
When you begin the disk copy process, you must define the rate at which the copy is completed. Giving the copy process top priority slightly impacts I/O performance, while giving it the lowest priority makes the copy process take longer to complete. You can modify the copy priority while the disk copy is in progress.
For more information, see the online help topics.
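The copy-pair rules above (target capacity at least equal to the source, and no snapshot repository disks in either role) can be sketched as a simple validation step. This is a hypothetical illustration, not an MDSM API; the dictionary layout is an assumption:

```python
# Hypothetical sketch of the copy-pair rules described above; the field
# names ('capacity_gb', 'is_repository') are illustrative assumptions.

def validate_copy_pair(source, target):
    """Each disk is a dict with 'name', 'capacity_gb', and 'is_repository'."""
    # Snapshot repository virtual disks cannot be a source or a target.
    if source["is_repository"] or target["is_repository"]:
        return "snapshot repository disks cannot take part in a copy"
    # The target capacity must be equal to or greater than the source.
    if target["capacity_gb"] < source["capacity_gb"]:
        return "target capacity must be equal to or greater than the source"
    return "ok"
```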

Virtual Disk Recovery

You can use the Edit host server-to-virtual disk mappings feature to recover data from the backup virtual disk. This functionality enables you to unmap the original source virtual disk from its host server, then map the backup virtual disk to the same host server.
Ensure that you record the LUN used to provide access to the source virtual disk. You need this information when you define a host server-to-virtual disk mapping for the target (backup) virtual disk. Also, be sure to stop all I/O activity to the source virtual disk before beginning the virtual disk recovery procedure.

Using Snapshot and Disk Copy Together

You can use the Snapshot Virtual Disk and Virtual Disk Copy premium features together to back up data on the same storage array, or to restore the data on the snapshot virtual disk to its original source virtual disk.
You can copy data from a virtual disk by:
• Taking a point-in-time snapshot of the data (online)
• Copying the data to another virtual disk using a virtual disk copy (offline)
You can select a snapshot virtual disk as the source virtual disk for a virtual disk copy. This configuration is one of the best ways you can apply the snapshot virtual disk feature, since it enables complete backups without any impact on the storage array I/O.
You cannot use a snapshot repository virtual disk as a source virtual disk or as a target virtual disk in a virtual disk copy. If you select the source virtual disk as the target virtual disk of a virtual disk copy, you must disable all snapshot virtual disks associated with the source virtual disk.

Multi-Path Software

Multi-path software (also referred to as the failover driver) is software resident on the host server that manages the redundant data paths between the host server and the storage array.
The multi-path software identifies the existence of multiple paths to a virtual disk and establishes a preferred path to that disk. If any component in the preferred path fails, the multi-path software automatically re-routes I/O requests to the alternate path so that the storage array continues to operate without interruption.
NOTE: Multi-path software is available on the MD3200 Series resource media.

Preferred and Alternate Controllers and Paths

A preferred controller is a RAID controller module designated as the owner of a virtual disk or disk group. The preferred controller is automatically selected by MDSM when a virtual disk is created. You can change the preferred RAID controller module owner of a virtual disk after it is created. If a host is connected to only one RAID controller module, the preferred owner must be manually assigned to the RAID controller module that the host can access.
Ownership of a virtual disk is moved from the preferred controller to the secondary controller (also called the alternate controller) when the preferred controller is:
• Physically removed
• Updating firmware
• Involved in an event that caused failover to the alternate controller
Paths used by the preferred RAID controller module to access either the disks or the host server are called the preferred paths; redundant paths are called the alternate paths. If a failure causes the preferred path to become inaccessible, the storage array automatically uses the alternate path to access data. When this occurs, the enclosure status LED blinks amber.
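The ownership-transfer conditions above can be summarized in a small model. This is an illustrative sketch of the behavior the manual describes, not actual firmware logic; the event names are assumptions:

```python
# Illustrative model of preferred/alternate controller ownership failover
# as described above; event names are hypothetical, not firmware states.

class VirtualDiskOwnership:
    def __init__(self, preferred="controller_0", alternate="controller_1"):
        self.preferred = preferred
        self.alternate = alternate
        self.current_owner = preferred

    def handle_event(self, event):
        # Physical removal, a firmware update, or a failover event on the
        # preferred controller moves ownership to the alternate controller.
        if event in ("removed", "updating_firmware", "failover"):
            self.current_owner = self.alternate
        return self.current_owner
```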

Virtual Disk Ownership

MDSM can be used to automatically build and view virtual disks. It uses optimal settings to stripe the disk group. Virtual disks are assigned to alternating RAID controller modules when they are created. This default assignment provides a simple means of load balancing the workload of the RAID controller modules.
Ownership can later be modified to balance workload according to actual usage. If virtual disk ownership is not manually balanced, it is possible for one controller to have the majority of the work, while the other controller is idle. Limit the number of virtual disks in a disk group. If multiple virtual disks are in a disk group, consider:
• The impact each virtual disk has on other virtual disks in the same disk group.
• The patterns of usage for each virtual disk.
• Different virtual disks have higher usage at different times of day.

Load Balancing

A load balance policy is used to determine which path is used to process I/O. Multiple options for setting the load balance policies let you optimize I/O performance when mixed host interfaces are configured.
You can choose one of these load balance policies to optimize I/O performance:
• Round robin with subset—The round robin with subset I/O load balance policy routes I/O requests, in rotation, to each available data path to the RAID controller module that owns the virtual disks. This policy treats all paths to the RAID controller module that owns the virtual disk equally for I/O activity. Paths to the secondary RAID controller module are ignored until ownership changes. The basic assumption for the round-robin policy is that the data paths are equal. With mixed host support, the data paths may have different bandwidths or different data transfer speeds.
• Least queue depth with subset—The least queue depth with subset policy is also known as the least I/Os or least requests policy. This policy routes the next I/O request to a data path that has the least outstanding I/O requests queued. For this policy, an I/O request is simply a command in the queue. The type of command or the number of blocks that are associated with the command are not considered. The least queue depth with subset policy treats large block requests and small block requests equally. The data path selected is one of the paths in the path group of the RAID controller module that owns the virtual disk.
56 Planning: MD3200 Series Storage Array Terms and Concepts
Page 57
• Least path weight with subset (Windows operating systems only)—The least path weight with subset policy assigns a weight factor to each data path to a virtual disk. An I/O request is routed to the path with the lowest weight value to the RAID controller module that owns the virtual disk. If more than one data path to the virtual disk has the same weight value, the round robin with subset policy is used to route I/O requests between the paths with the same weight value.
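The contrast between the first two policies can be sketched in a few lines. This is a conceptual illustration only; real failover drivers track path state internally, and the path dictionaries here are assumptions:

```python
# Sketch of two of the path-selection policies described above. The path
# records and queue depths are illustrative, not real driver state.
import itertools

def round_robin_with_subset(paths, owner):
    """Yield owning-controller paths in rotation; alternate paths ignored."""
    subset = [p for p in paths if p["controller"] == owner]
    return itertools.cycle(p["name"] for p in subset)

def least_queue_depth_with_subset(paths, owner):
    """Pick the owning-controller path with the fewest queued I/O requests."""
    subset = [p for p in paths if p["controller"] == owner]
    return min(subset, key=lambda p: p["queued_ios"])["name"]
```

Both functions restrict selection to the subset of paths belonging to the owning RAID controller module; they differ only in how a path is chosen within that subset.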

Monitoring MD3200 Series System Performance

You can use the Performance Monitor to select virtual disks and RAID controller modules to monitor or to change the polling interval. Keep these guidelines in mind when using the Performance Monitor:
• The Performance Monitor does not dynamically update its display if any configuration changes occur while the window is open. You must close the Performance Monitor window and reopen it for the changes to be displayed.
• Using the Performance Monitor to retrieve performance data can affect the normal storage array performance depending on the polling interval that you set.
• If the storage array you are monitoring begins in or transitions to an unresponsive state, an informational dialog is displayed. The dialog informs you that the Performance Monitor cannot poll the storage array for performance data.
To monitor the performance of the arrays:
1 Open MDSM and select the appropriate storage array.
2 Open the Array Management Window (AMW) for the selected storage array.
3 In the AMW, select Storage Array→Monitor Performance.
4 Click Settings.
a Select the items that you want to monitor. You can monitor:
• RAID controller modules
• Virtual disks
• Storage array totals
NOTE: By default, all items are selected.
b In Polling interval, select how often you want to update the performance statistics.
NOTE: For an accurate elapsed time, do not use the Set RAID Controller Module Clocks option while using the Performance Monitor.
Each time the polling interval elapses, the Performance Monitor queries the storage array again and updates the statistics in the table.
5 Click Start.
Values are displayed for the selected storage arrays in the Performance Monitor data table. The table is updated at the interval specified in the Polling Interval setting.
6 Click Update to force an immediate poll of the storage array.
7 Click Stop to stop monitoring the storage array.
8 Click Save As on the Performance Monitor main dialog to save the currently displayed performance statistics.
9 Select an appropriate directory.
10 Type a file name in the File name text box.
NOTE: The .perf extension is the default.
11 Select a file type from the Files of type list.
• Use the Report format (ASCII text) file type if you want to save the data to a report form for viewing or printing.
• Use the Comma Delimited Format file type if you want to save the data in a form that can be imported into a commercial spreadsheet application for further analysis. Most leading commercial spreadsheet applications recognize a comma delimiter. These applications use the delimiter to import the data into spreadsheet cells.
12 Click Save.
The Performance Monitor data provides information about how your storage array is performing. The data is presented in eight columns, which are described in the following table. Use this data to make performance tuning decisions for your storage array.
Table 4-3. Performance Monitor Table Description
• Devices—Controller, virtual disk, or storage array total.
• Total IOs—Cumulative I/Os per second since the last start time.
• Read Percentage—Percentage of cumulative I/Os that are reads.
• Cache Hit Percentage—Percentage of cumulative I/Os that are satisfied from cache.
• Current KB/second—Snapshot of the throughput value per second (1 KB = 1024 bytes).
• Maximum KB/second—Maximum recorded throughput value since the last start time.
• Current IO/second—Snapshot of I/Os per second (IOP = input/output operation per second, or one completed I/O transaction).
• Maximum IO/second—Maximum recorded IOPs since the last start time.
For more information, see the online help topics.
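Statistics saved in the Comma Delimited Format can be post-processed outside a spreadsheet as well. The sketch below assumes the exported columns follow the headings in Table 4-3; the actual layout of a .perf file may differ, so treat this as an illustration rather than a parser for the real format:

```python
# Sketch of post-processing a Comma Delimited Format export. The column
# headings are assumed to match Table 4-3; real .perf files may differ.
import csv
import io

def max_throughput_by_device(csv_text):
    """Return {device name: maximum KB/second} from exported statistics."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["Devices"]: float(row["Maximum KB/second"]) for row in reader}
```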
5

Configuration: Overview

The Dell PowerVault Modular Disk Storage Manager (MDSM) online help contains information on how to use the MDSM application to perform the configuration and management tasks described in this document. You can access the online help by selecting Help, located at the top right corner of the MDSM interface. For information on installing MDSM, see the MD3200 and MD3220 Storage Arrays Deployment Guide at dell.com/support/manuals.
NOTE: MDSM supports MD3000 and MD3200 storage arrays and can automatically
detect both these storage arrays.

User Interface

The Storage Manager screen is divided into two primary windows:
• Enterprise Management Window (EMW)—The EMW provides high-level management of the storage arrays. You can launch the Array Management Window from the EMW.
• Array Management Window (AMW)—The AMW provides management functions for a single storage array. You can launch more than one AMW at the same time to manage different storage arrays.
The EMW and the AMW consist of the following:
• The title bar at the top of the window shows the name of the application.
• Beneath the title bar is the menu bar. You can select menu options from the menu bar to perform tasks on a storage array.
• Beneath the menu bar is the toolbar. You can select options in the toolbar to perform tasks on a storage array.
• Beneath the toolbar are the tabs. Tabs are used to group the tasks that you can perform on a storage array.
• Beneath the tabs is the status bar. The status bar shows status messages and status icons related to the storage array.
NOTE: By default, the toolbar and the status bar are not displayed. To view the toolbar or the status bar, select View→Toolbar or View→Status Bar, respectively.

Enterprise Management Window

The EMW provides high-level management of storage arrays. When you start MDSM, the EMW is displayed. The EMW has these tabs:
• Devices tab—Provides information about the storage arrays.
• Setup tab—Presents the initial setup tasks that guide you through adding storage arrays and configuring alerts.
The Devices tab has a Tree view on the left side of the window that shows discovered storage arrays, unidentified storage arrays, and the status conditions for the storage arrays. Discovered storage arrays are managed by MDSM. Unidentified storage arrays are available to MDSM but not configured for management. The right side of the Devices tab has a Table view that shows detailed information for each storage array.
In the EMW, you can:
• Discover hosts and managed storage arrays on the local sub-network.
• Manually add and remove hosts and storage arrays.
• Locate the storage arrays.
• Name or rename discovered storage arrays.
• Add storage array comments to the Table view.
• Sort rows in the Table view according to different criteria.
• Store your EMW view preferences and configuration data in local configuration files. The next time you open the EMW, data from the local configuration files is used to show a customized view and preferences.
• Monitor the status of managed storage arrays and indicate status using appropriate icons.
• Add or remove management connections.
• Configure alert notifications for all selected storage arrays through e-mail or SNMP traps.
• Report critical events to the configured alert destinations.
• Launch the AMW for a selected storage array.
• Run a script to perform batch management tasks on specific storage arrays.
• Import the operating system theme settings into the MDSM.
• Upgrade firmware on multiple storage arrays concurrently.
Inheriting the System Settings
Use the Inherit System Settings option to import the operating system theme settings into the MDSM. Importing system theme settings affects features like font type, font size, color, and contrast in the MDSM.
1 Open the Inherit System Settings window in one of these ways:
• Select Tools→Inherit System Settings.
• Select the Setup tab and click Inherit System Settings.
2 Select Inherit system settings for color and font.
3 Click OK.

Array Management Window

You can launch the AMW from the EMW. The AMW provides management functions for a single storage array. You can have multiple AMWs open simultaneously to manage different storage arrays.
To launch the AMW:
1 In the EMW, on the Devices tab, double-click on the relevant storage array.
The context menu for the selected storage is displayed.
2 In the context menu, select Manage Storage Array.
The AMW for the selected storage is displayed.
The AMW has the following tabs:
• Summary tab—You can view the following information about the storage array:
  – Status
  – Hardware components
  – Capacity
  – Hosts and Mappings
  – Storage partitions
  – Disk groups and virtual disks
• Logical tab—You can view the organization of the storage array by virtual disks, disk groups, free capacity nodes, and any unconfigured capacity for the storage array.
• Physical tab—You can view the organization of the storage array by RAID controller modules, physical disks, and other hardware components.
• Mappings tab—You can define the hosts, host groups, and host ports. You can change the mappings to grant virtual disk access to host groups and hosts, and create storage partitions.
• Setup tab—You can complete the initial setup tasks to configure the storage array.
• Support tab—You can complete common support tasks like downloading RAID controller module firmware, viewing the online help, and so on.
In the AMW, you can:
• Provide storage array options, for example, renaming a storage array, changing a password, or enabling a background media scan.
• Provide the ability to configure virtual disks from the storage array capacity, define hosts and host groups, and grant host or host group access to sets of virtual disks called storage partitions.
• Monitor the health of storage array components and report detailed status using applicable icons.
• Provide applicable recovery procedures for a failed logical component or a failed hardware component.
• Present a view of the Event Log for the storage array.
• Present profile information about hardware components, such as RAID controller modules and physical disks.
• Provide RAID controller module management options, such as changing ownership of virtual disks or placing a RAID controller module online or offline.
• Provide physical disk management options, such as assignment of hot spares and locating the physical disk.
• Monitor storage array performance.
6

Configuration: About Your Storage Array

Out-of-Band and In-Band Management

You can manage a storage array in two ways:
• Out-of-band management
• In-band management
Out-of-Band Management
Using the out-of-band management method, data is separate from commands and events. Data travels through the host-to-controller interface, while commands and events travel through the management port Ethernet cables.
This management method lets you configure the maximum number of virtual disks that are supported by your operating system and host adapters. A maximum of eight storage management stations can concurrently monitor an out-of-band managed storage array. This limit does not apply to systems that manage the storage array through the in-band management method.
When you use out-of-band management, you must set the network configuration for each RAID controller module’s management Ethernet port. This includes the Internet Protocol (IP) address, subnetwork mask (subnet mask), and gateway. If you are using a Dynamic Host Configuration Protocol (DHCP) server, you can enable automatic network configuration, but if you are not using a DHCP server, you must enter the network configuration manually.
NOTE: RAID controller module network configurations can be assigned using a
DHCP server (the default setting). However, if a DHCP server is not available for 150 seconds, the RAID controller modules assign static IP addresses. The addresses assigned are 192.168.128.101 for controller 0 and 192.168.128.102 for controller 1.
In-Band Management
Using in-band management, commands, events, and data travel through the host-to-controller interface. Unlike out-of-band management, commands and events are mixed with data.
NOTE: For detailed information on setting up in-band and out-of-band management,
see the Deployment Guide.
When you add storage arrays by using this management method, you need to specify only the host name or IP address of the host. After you add the specific host name or IP address, the host-agent software automatically detects any storage arrays that are connected to that host.
CAUTION: Some operating systems can be used only as storage management stations. For more information about the operating system that you are using, see the MD PowerVault Support Matrix at dell.com/support/manuals.
For more information, see the PowerVault Modular Disk Storage Manager online help topics.
Access Virtual Disk
Each RAID controller module in an MD3200 Series storage array maintains a special virtual disk, called the access virtual disk. The host-agent software uses the access virtual disk to communicate management requests and event information between the storage management station and the RAID controller module in an in-band–managed storage array. The access virtual disk is not available for application data storage. The default LUN is 31.

Storage Arrays

You must add the storage arrays to MDSM before you can set up the storage array for optimal use.

Adding Storage Arrays

You can add storage arrays only in the EMW. You can:
• Automatically discover storage arrays
• Manually add storage arrays
NOTE: Verify that your host or management station network configuration—
including station IP address, subnet mask, and default gateway—is correct before adding a new storage array using the Automatic option.
66 Configuration: About Your Storage Array
Page 67
NOTE: For Linux, set the default gateway so that broadcast packets are sent to 255.255.255.0. For Red Hat Enterprise Linux, if no gateway exists on the network, set the default gateway to the IP address of the NIC.
NOTE: MDSM uses TCP/UDP port 2463 for communication to the MD Storage
Array.
Automatic Discovery of Storage Arrays
The Automatic Discovery process sends out a broadcast message across the local subnetwork (subnet) and adds any storage array that responds to the message. The Automatic Discovery process finds both in-band and out-of-band storage arrays.
NOTE: The Automatic Discovery option and the Re-scan Hosts option in the
Enterprise Management Window provide automatic methods to discover managed storage arrays.
Manual Addition of a Storage Array
Use Manual Addition if the storage array resides outside of the local subnet. This process requires specific identification information to manually add a storage array.
To add a storage array that uses out-of-band management, specify the host name or management port IP address of each controller in the storage array.
Before using this option, verify that the applicable network configuration tasks are performed.
To add an in-band storage array, add the host through which the storage array is attached to the network.
NOTE: It can take several minutes for MDSM to connect to the specified storage
array.
To add a storage array manually:
1 Select Edit→Add Storage Array.
2 Select the relevant management method:
• Out-of-band management—Enter a host name or an IP address for the RAID controller modules in the storage array.
• In-band management—Enter a name or an IP address for the host through which the storage array is attached to the network.
NOTE: The host agent must be restarted before in-band management communication can be established. See "Starting or Restarting the Host Context Agent Software" on page 251.
3 Click Add.
4 Use one of these methods to name a storage array:
• In the EMW, select the Setup tab, and select Name/Rename Storage Arrays.
• In the AMW, select the Setup tab, and select Rename Storage Array.
• In the EMW, right-click the icon corresponding to the array and select Rename.

Setting Up Your Storage Array

A list of initial setup tasks is displayed on the Setup tab in the AMW. The list of initial setup tasks shows you how to set up a storage array. Using the steps outlined in the Initial Setup Tasks area ensures that the basic setup steps are completed properly.
Use the Initial Setup Tasks list the first time that you set up a storage array to perform these tasks:
• Locate the storage array—Find the physical location of the storage array on your network by turning on the unit identify LEDs. The storage array can be identified with a label.
• Give a new name to the storage array—Use a unique name that identifies each storage array.
• Set a storage array password—Configure the storage array with a password to protect it from unauthorized access. MDSM prompts for the password when an attempt is made to change the storage array configuration, such as when a virtual disk is created or deleted.
• Configure the storage array—Create disk groups, virtual disks, and hot spare physical disks by using the Automatic configuration method or the Manual configuration method. For more information, see the PowerVault Modular Disk Storage Manager online help topics.
• Map virtual disks—Map virtual disks to hosts or host groups.
• Save configuration—Save the configuration parameters in a file that you can use to restore the configuration, or reuse the configuration on another storage array. For more information, see the PowerVault Modular Disk Storage Manager online help topics.
After you complete the basic steps for configuring the storage array, you can perform these optional tasks:
• Manually define hosts—Define the hosts and the host port identifiers that are connected to the storage array. Use this option only if the host is not automatically recognized and shown in the Mappings tab.
• Configure Ethernet management ports—Configure the network parameters for the Ethernet management ports on the RAID controller modules if you are managing the storage array by using the out-of-band management connections.
• View and enable premium features—Your MDSM may include premium features. View the premium features that are available and the premium features that are already started. You can start available premium features that are currently stopped.

Locating Storage Arrays

You can use the Blink option to physically locate and identify a storage array.
NOTE: If the LEDs from the Blink Storage Array operation do not stop blinking,
select Stop All Indications to stop the process manually.
To locate the storage array:
1 Select the relevant storage array and:
• In the EMW, right-click the appropriate storage array, and select Blink Storage Array.
• In the AMW, select the Setup tab, click Blink Storage Array.
• In the AMW, select Storage Array→Blink→Storage Array.
The LEDs blink on the physical disks in the storage array.
2 After locating the storage array, click OK. The LEDs stop blinking.
3 If the LEDs do not stop blinking, select Storage Array→Blink→Stop All Indications.
A confirmation message is displayed.
4 Click OK.

Naming or Renaming Storage Arrays

Each storage array must be assigned a unique alphanumeric name of up to 30 characters. A name can consist of letters, numbers, and the special characters underscore (_), dash (-), and pound sign (#). No other special characters are allowed.
To rename a selected storage array:
1 Perform one of these actions:
• In the AMW, select the Setup tab, select Rename Storage Array.
• In the EMW, select the Devices tab Tree view, select Edit→Rename.
• In the EMW, select the Devices tab Table view, select Edit→Rename.
• In the EMW, select the Devices tab Tree view, right-click the desired array icon and select Rename.
The Name/Rename Storage Arrays dialog is displayed.
2 Select the relevant storage array from the Select storage array table.
If you do not know the name or physical location of the storage array, click Blink. After locating the storage array, click OK to turn off the LEDs.
The name of the storage array is displayed in Storage array name.
3 In Storage array name, type the new name of the storage array. If applicable, add a comment for the storage array in Additional comment.
4 Click Apply.
A message is displayed warning you about the implications of changing the storage array name.
5 Click Yes.
The new storage array name is displayed in the Select storage array table.
6 Repeat step 2 through step 4 to name or rename additional storage arrays.
NOTE: Avoid arbitrary names or names that may lose meaning in the future.
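The naming rules above can be expressed as a simple validation check. The sketch below is illustrative only; the function name and pattern are our own and are not part of MDSM:

```python
import re

# Allowed: letters, digits, underscore (_), dash (-), and pound sign (#),
# up to 30 characters. Hypothetical helper, not part of MDSM.
NAME_PATTERN = re.compile(r"^[A-Za-z0-9_#-]{1,30}$")

def is_valid_array_name(name: str) -> bool:
    """Return True if name satisfies the storage array naming rules above."""
    return bool(NAME_PATTERN.match(name))

print(is_valid_array_name("MD3200_rack-02#1"))  # allowed characters, under 30
print(is_valid_array_name("sales array"))       # space is not an allowed character
```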

Setting a Password

You can configure each storage array with a password to protect it from unauthorized access. MDSM prompts for the password when an attempt is made to change the storage array configuration, such as when a virtual disk is created or deleted. View operations do not change the storage array configuration and do not require a password. You can create a new password or change an existing password.
NOTE: It is recommended that you use a long password with at least 15
alphanumeric characters to increase security.
To set a new password or change an existing password:
1 Select the relevant storage array and navigate to the AMW for that storage array. See "Array Management Window" on page 63.
The AMW for the selected storage array is displayed.
2 In the AMW, perform one of these actions:
• Select the storage array in the Logical pane, and then select Storage Array→Set Password.
• Select the Setup tab, and then click Set a Storage Array Password.
• Select the Logical tab, right-click and select Set Password.
The Set Password dialog is displayed.
3 If you are resetting the password, type the Current password.
NOTE: If you are setting the password for the first time, leave the Current password blank.
4 Type the New password.
5 Re-type the new password in Confirm new password.
NOTE: The password in Confirm new password and New password must be exactly the same.
6 Click OK.
NOTE: You are not prompted for a password when you attempt to change the storage array configuration in the current management session.
Password Guidelines
Follow these guidelines when you create a password:
• Use secure passwords for your storage array. A password must be easy for you to remember but difficult for others to determine. Consider using numbers or special characters in the place of letters, such as a 1 in the place of the letter I, or the at sign (@) in the place of the letter a.
• For increased protection, use a long password with at least 15 alphanumeric characters. The maximum password length is 30 characters.
• Passwords are case sensitive.
NOTE: You can attempt to enter a password up to ten times before the storage array enters a lockout state. Before you can try to enter a password again, you must wait 10 minutes for the storage array to reset. To reset the password, press the password reset switch on your RAID controller module, see Figure 3-1.
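The guidelines above translate into a simple check. This sketch is ours, not MDSM code; the 30-character maximum is a hard limit, while the 15-character minimum is only a recommendation:

```python
def check_password(password: str) -> list[str]:
    """Return a list of problems with a proposed storage array password
    (hypothetical helper based on the guidelines above)."""
    problems = []
    if len(password) > 30:
        problems.append("exceeds the 30-character maximum")
    if sum(ch.isalnum() for ch in password) < 15:
        problems.append("fewer than the recommended 15 alphanumeric characters")
    return problems

print(check_password("Md3200#Storage@Array1"))  # meets both guidelines
print(check_password("Short1"))                 # below the recommended length
```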

Viewing Storage Array Connections

You can use the View Connections option to view the expansion enclosures connected to the RAID controller module.
To view the storage array connections:
1 From the toolbar in the AMW, select Storage Array→View→Connections.
The <Storage Array>: Connections dialog is displayed.
2 Click the column name to sort the connections according to your preference.
3 Click Close.
If you receive an error message for a port, you can use this dialog to identify the components on the port that may have caused the error. By isolating these components, you prevent accidentally disconnecting components that are still in operation, which could cause an interruption in data flow.

Adding/Editing a Comment to an Existing Storage Array

A descriptive comment, with an applicable storage array name, is a helpful identification tool. You can add or edit a comment for a storage array in the EMW only.
To add or edit a comment:
1 In the EMW, select the Devices tab and select the relevant managed storage array.
2 Select Edit→Comment.
The Edit Comment dialog is displayed.
3 Type a comment of up to 60 characters.
4 Click OK.
This option updates the comment in the Table view and saves it in your local storage management station file system. The comment is not displayed to administrators who are using other storage management stations.

Removing Storage Arrays

You can remove a storage array from the list of managed arrays if you no longer want to manage it from a specific storage management station. Removing a storage array does not affect the storage array or its data in any way. Removing a storage array simply removes it from the list of storage arrays that are displayed in the drop-down list in the Array Selector. If a storage array is accidentally removed, it can be added again. See "Adding Storage Arrays" on page 66.
You can remove the storage array only from the EMW. To remove the storage array:
1 In the EMW, select the Devices tab and select the relevant managed storage array.
2 Select Edit→Remove→Storage Array.
A message prompts you for a confirmation for the removal of the selected storage array.
3 To remove the storage array, click Yes.

Enabling Premium Features

You can enable premium features on the storage array. To enable the premium features, you must obtain a feature key file specific to the premium feature that you want to enable from your storage supplier.
To enable premium features:
1 From the toolbar in the AMW, select Storage Array→Premium Features.
The Premium Features and Feature Pack Information window is displayed.
2 Select the relevant premium feature, and click Enable.
The Select Feature Key File dialog is displayed.
3 Navigate to the relevant folder, select the appropriate key file, and click OK.
4 Click Close.
For more information, see the PowerVault Modular Disk Storage Manager online help topics.

Failover Alert Display

You can change the failover alert delay for a storage array. The failover alert delay lets you delay the logging of a critical event if the multi-path driver transfers virtual disks to the non-preferred controller. If the multi-path driver transfers the virtual disks back to the preferred controller within the specified delay period, a critical event is not logged. If the transfer exceeds this delay period, then a virtual disk-not-on-preferred-path alert is issued as a critical event. You can also use this option to minimize multiple alerts when more than one virtual disk fails over because of a system error, such as a failed host adapter.
For more information, see the PowerVault Modular Disk Storage Manager online help topics.
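The delay behavior described above can be modeled in a few lines. This is an illustrative sketch of the logic only, not MDSM code; the function and parameter names are our own:

```python
def should_log_critical(delay_minutes: float, minutes_on_non_preferred: float) -> bool:
    """Model of the failover alert delay: a virtual-disk-not-on-preferred-path
    critical event is logged only if the virtual disk remains on the
    non-preferred controller longer than the configured delay period.
    (Illustrative sketch, not MDSM code.)"""
    return minutes_on_non_preferred > delay_minutes

# Transferred back within the delay period: no critical event is logged.
print(should_log_critical(delay_minutes=5, minutes_on_non_preferred=2))
# Transfer exceeds the delay period: the alert is issued as a critical event.
print(should_log_critical(delay_minutes=5, minutes_on_non_preferred=8))
```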

Changing the Cache Settings on the Storage Array

To change the storage array cache settings:
1 In the AMW, select Storage Array→Change→Cache Settings.
The Change Cache Settings window is displayed.
2 Select or enter the percentage of unwritten data in the cache to trigger a cache flush in Start flushing.
3 Select or enter the percentage of unwritten data in the cache to stop a cache flush in progress in Stop flushing.
4 Select the appropriate Cache block size.
A smaller cache size is a good choice for file-system use or database-application use. A larger cache size is a good choice for applications that generate sequential I/O, such as multimedia.
5 In the Enter Password dialog, type the current password for the storage array, and click OK.

Changing Expansion Enclosure ID Number

When an MD1200 Series expansion enclosure is connected to an MD3200 Series storage array for the first time, an enclosure ID number is assigned and maintained by the expansion enclosure. This enclosure ID number is also shown in the MDSM.
To change the enclosure ID numbers:
1 In the AMW, select the storage array, and select Storage Array→Change→Enclosure ID.
2 Select a new enclosure ID number from the Change Enclosure ID list.
The enclosure ID must be between 0 and 99 (inclusive).
3 To save the changed enclosure ID, click Change.

Changing the Enclosure Order in the Physical Pane

You can change the order of the RAID controller modules and the expansion enclosures in the Physical pane to match the hardware configuration in your storage array. The Physical pane that initially is displayed is a default view that may not match your storage array. The enclosure order change remains in effect until it is modified again.
To change the enclosure order in the Physical pane:
1 In the AMW, select Storage Array→Change→Enclosure Order.
2 From the enclosures list, select the enclosure you want to move and click either Up or Down to move the enclosure to the new position.
3 Click OK.
If you have set a password for the selected storage array, the Enter Password dialog is displayed.
4 Type the current password for the storage array.
5 Click OK.

Configuring Alert Notifications

MDSM can send an alert for any condition on the storage array that requires your attention. Alerts can be sent as e-mail messages or as Simple Network Management Protocol (SNMP) trap messages.
You can configure alert notifications either for all the storage arrays or a single storage array.
To configure alert notifications for all storage arrays:
1 In the EMW, select the Setup tab.
2 Select Configure Alerts.
The Configure Alerts dialog is displayed.
3 Select All storage arrays.
4 Click OK.
The Configure Alerts dialog is displayed. To configure e-mail alerts, see "Configuring E-mail Alerts" on page 77. To configure SNMP alerts, see "Configuring SNMP Alerts" on page 79.
To configure alert notifications for a single storage array:
1 In the EMW, select the Devices tab.
2 Select the relevant storage array, then select Edit→Configure Alerts.
The Configure Alerts dialog is displayed. To configure e-mail alerts, see "Configuring E-mail Alerts" on page 77. To configure SNMP alerts, see "Configuring SNMP Alerts" on page 79.

Configuring E-mail Alerts

For more information on configuring alert notifications, see "Configuring Alert Notifications" on page 76.
To configure e-mail alerts:
1 Open the Configure Alerts dialog by performing one of these actions:
• In the Tree view or the Table view on the Devices tab in the EMW, select a node, and then select Edit→Configure Alerts. Go to step 3.
• In the Setup tab in the EMW, select Configure Alerts. Go to step 2.
2 Select one of the following radio buttons to specify an alert level:
• All storage arrays—Select this option to send an alert e-mail about events on all storage arrays.
• An individual storage array—Select this option to send an alert e-mail about events that occur on only a specified storage array.
These results occur, depending on your selection:
• If you selected All storage arrays, the Configure Alerts dialog is displayed.
• If you selected An individual storage array, the Select Storage Array dialog is displayed. Select the storage array for which you want to receive e-mail alerts and click OK. The Configure Alerts dialog is displayed.
If you do not know which storage array to select, click Blink to turn on the LEDs of the storage array.
3 In the Configure Alerts dialog, select the Mail Server tab.
4 In Mail server, type the name of the Simple Mail Transfer Protocol (SMTP) mail server.
The SMTP mail server is the name of the mail server that forwards the alert e-mails to the configured e-mail addresses.
5 In Email sender address, type the valid sender e-mail address.
The e-mail address of the sender (the network administrator) is displayed on each e-mail alert sent to the destination.
6 To include the contact information of the sender in the e-mail alert, select Include contact information with the alerts, and type the contact information.
NOTE: Including the contact information in the e-mail alert is optional.
7 Select the Email tab to configure the e-mail destinations.
• Adding an e-mail address—In E-mail address, type the e-mail address, and click Add.
• Replacing an e-mail address—In the Configured E-mail addresses area, select the e-mail address to be replaced, type the replacement e-mail address in E-mail address, and click Replace.
• Deleting an e-mail address—In the Configured E-mail addresses area, select the e-mail address, and click Delete.
• Validating an e-mail address—Type the e-mail address in E-mail address or select the e-mail address in the Configured E-mail addresses area, and click Test. A test e-mail is sent to the selected e-mail address. A dialog with the results of the test and any error is displayed.
8 For the selected e-mail address, in Information To Send, select:
• Event Only—The alert e-mail contains only the event information. This alert type is the default.
• Event + Profile—The alert e-mail contains the event information and the storage array profile.
• Event + Support—The alert e-mail contains the event information and a compressed file that contains complete support information for the storage array that has generated the alert.
9 For the selected e-mail address, in Frequency, select:
• Every event—Sends an alert e-mail whenever an event occurs. This is the default option.
• Every x hours—Sends an alert e-mail after the specified time interval if an event occurred during that time interval. You can select this option only if you have selected either Event + Profile or Event + Support in the Information To Send drop-down list.
10 Click OK.
An alert icon is displayed next to each node in the Tree view where an alert is set.
To ensure that the e-mail is sent successfully:
• You must provide an SMTP mail server name and an e-mail sender address for the e-mail addresses to work.
• The e-mail addresses that you had previously configured are displayed in the Configured e-mail addresses area.
• You must use fully qualified e-mail addresses, for example, name@mycompany.com.
• You can configure multiple e-mail addresses before you click OK.
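A fully qualified address has the form name@domain.tld. The loose shape check below is only an illustration of that requirement; the helper is ours, and MDSM performs its own validation when you click Test:

```python
import re

# Loose shape check for a fully qualified address (user@domain.tld).
# Hypothetical helper, not part of MDSM.
FQ_ADDRESS = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_fully_qualified(address: str) -> bool:
    return bool(FQ_ADDRESS.match(address))

print(is_fully_qualified("name@mycompany.com"))  # fully qualified
print(is_fully_qualified("name@mailhost"))       # unqualified host name
```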

Configuring SNMP Alerts

To add a management console to the list of addresses configured to receive SNMP alerts:
1 Open the Configure Alerts dialog by performing one of these actions:
• In the Tree view or the Table view on the Devices tab in the EMW, select a node, and select Edit→Configure Alerts. Go to step 3.
• In the Setup tab in the EMW, select Configure Alerts. Go to step 2.
2 Select one of the following radio buttons to specify an alert level:
• All storage arrays—Select this option to send an alert notification about events on all storage arrays.
• An individual storage array—Select this option to send an alert notification about events that occur in only a specified storage array.
These results occur, depending on your selection:
• If you selected All storage arrays, the Configure Alerts dialog is displayed.
• If you selected An individual storage array, the Select Storage Array dialog is displayed. Select the storage array for which you want to receive alert notifications and click OK. The Configure Alerts dialog is displayed.
NOTE: If you do not know which storage array to select, click Blink to turn on the LEDs of the storage array.
3 Select the SNMP tab to configure the SNMP alert destinations.
• Adding an SNMP address—In Community name, type the community name. In Trap destination, type the trap destination, and click Add.
NOTE: The community name is an American Standard Code for Information Interchange (ASCII) string that identifies a known set of network management stations and is set by the network administrator. The default community name is the string "public". The trap destination is the IP address or the host name of a computer running an SNMP management application. An example of an SNMP-enabled management application is the Dell Management Console. For more information on the Dell Management Console, see dell.com.
• Replacing an SNMP address—Select the SNMP address in the Configured SNMP addresses area, type the replacement community name in Community name and the trap destination in Trap destination, and click Replace.
• Deleting an SNMP address—Select the SNMP address in the Configured SNMP addresses area, and click Delete.
• Validating an SNMP address—Select the SNMP address in the Configured SNMP addresses area, and click Test. A test message is sent to the SNMP address. A message box with the results of the validation and any error information is displayed.
4 Click OK.
An alert icon is displayed next to each node in the Tree view for which an alert is set.
Follow these guidelines for SNMP alerts:
• Any SNMP addresses that you had previously configured are displayed in the Configured SNMP addresses area.
• The SNMP community name is determined by the system administrator and configured within a management application, such as the Dell Management Console. More information about the Dell Management Console is available at dell.com.
• You can configure multiple SNMP addresses before you click OK.

Battery Settings

A smart battery backup unit (BBU) can perform a learn cycle. The smart BBU module includes the battery, a battery gas gauge, and a battery charger. The learn cycle calibrates the smart battery gas gauge so that it provides a measurement of the charge of the battery module. A learn cycle can only start when the battery is fully charged.
The learn cycle completes the following operations:
• Discharges the battery to a predetermined threshold
• Charges the battery back to full capacity
A learn cycle starts automatically when you install a new battery module. Learn cycles for batteries in both RAID controller modules in a duplex system occur simultaneously.
Learn cycles are scheduled to start automatically at regular intervals, at the same time and on the same day of the week. The interval between cycles is described in weeks.
Use the following guidelines to adjust the interval:
• You can use the default interval.
• You can run a learn cycle at any time.
• You can set the learn cycle earlier than the currently scheduled time.
• You cannot set the learn cycle to start more than seven days later than the currently scheduled time.
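The scheduling rule above (any earlier time is allowed, but no more than seven days later) can be sketched as follows. The helper and the sample times are hypothetical, not MDSM code:

```python
from datetime import datetime, timedelta

def is_allowed_start(scheduled: datetime, proposed: datetime) -> bool:
    """Return True if a proposed learn cycle start time obeys the rule above:
    any earlier time is allowed, but the start may not be moved more than
    seven days past the currently scheduled time. (Hypothetical helper.)"""
    return proposed <= scheduled + timedelta(days=7)

scheduled = datetime(2013, 6, 3, 2, 0)
print(is_allowed_start(scheduled, scheduled - timedelta(days=2)))   # earlier: allowed
print(is_allowed_start(scheduled, scheduled + timedelta(days=10)))  # too far out: not allowed
```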
To change the battery settings:
1 In the AMW, select Storage Array→Change→Battery Settings.
The Battery Settings dialog is displayed.
2 In Battery location, select a battery.
3 Check these details about the battery:
• Battery status
• Battery age
• Days until replacement
For more information, see the PowerVault Modular Disk Storage Manager online help topics.

Setting the Storage Array RAID Controller Module Clocks

You can use the Synchronize RAID Controller Module Clocks option to synchronize the storage array RAID controller module clocks with the storage management station. This option makes sure that the event timestamps written by the RAID controller modules to the Event Log match the event timestamps written to host log files. The RAID controller modules remain available during synchronization.
To synchronize the RAID controller module clocks with the storage management station:
1 In the AMW, select Storage Array→Synchronize RAID Controller Module Clocks.
2 If a password is set, in the Enter Password dialog, type the current password for the storage array, and click Synchronize.
The RAID controller module clocks are synchronized with the storage management station.

7 Configuration: Event Monitor

An event monitor is provided with Dell PowerVault Modular Disk Storage Manager (MDSM). The event monitor runs continuously in the background and monitors activity on the managed storage arrays. If the event monitor detects any critical problems, it can notify a host or remote system using e-mail, Simple Network Management Protocol (SNMP) trap messages, or both.
For the most timely and continuous notification of events, enable the event monitor on a management station that runs 24 hours a day. Enabling the event monitor on multiple systems or having a combination of an event monitor and MDSM active can result in duplicate events, but this does not indicate multiple failures on the array.
To use the Event Monitor:
• Set up alert destinations for the managed device that you want to monitor. A possible alert destination would be the Dell Management Console. More information about the Dell Management Console can be found at dell.com.
• Replicate the alert settings from a particular managed device by copying the emwdata.bin file to every storage management station from which you want to receive alerts.
Each managed device shows a check mark that indicates that alerts are set.

Enabling or Disabling the Event Monitor

You can enable or disable the event monitor at any time. Disable the event monitor if you do not want the system to send alert
notifications. If you are running the event monitor on multiple systems, disabling the event monitor on all but one system prevents the sending of duplicate messages.
NOTE: It is recommended that you configure the event monitor to start by default
on a management station that runs 24 hours a day.

Windows

To enable or disable the event monitor:
1 Click Start→Settings→Control Panel→Administrative Tools→Services.
or
Click Start→Administrative Tools→Services.
2 From the list of services, select Modular Disk Storage Manager Event Monitor.
3 Select Action→Properties.
4 To enable the event monitor, in the Service Status area, click Start.
5 To disable the event monitor, in the Service Status area, click Stop.

Linux

To enable the event monitor, at the command prompt, type SMmonitor start and press <Enter>. When the program startup begins, the system displays the following message:
SMmonitor started.
To disable the event monitor, start a terminal emulation application (console or xterm) and at the command prompt, type SMmonitor stop and press <Enter>. When the program shutdown is complete, the following message is displayed:
Stopping Monitor process.

8 Configuration: About Your Host

Configuring Host Access

Modular Disk Storage Manager (MDSM) software consists of multiple modules. One of these modules is the Host Context Agent. The host context agent is installed as part of the MDSM installation and runs continuously in the background.
If the host context agent is running on a host, the host and the host ports connected from it to the storage array are automatically detected by MDSM and are displayed on the Mappings tab in the Array Management Window (AMW). For more information on the Mappings tab, see "Using the Mappings Tab" on page 86.
If the hosts are not detected automatically, you can use the Define Host Wizard to define the hosts that access the virtual disks in the storage array. Defining a host is one of the steps required to let the storage array know which hosts are attached to it and to allow access to the virtual disks. For more information on defining the hosts, see "Defining a Host" on page 86.
To enable the host to write to the storage array, you must map the host to the virtual disk. This mapping grants a host or a host group access to a particular virtual disk or to a number of virtual disks in a storage array. You can define the mappings on the Mappings tab in the AMW.
On the Summary tab in the AMW, the Hosts & Mappings area indicates how many hosts are configured to access the storage array. You can click Configured Hosts in the Hosts & Mappings area to see the names of the hosts.
A collection of elements, such as default host groups, hosts, and host ports, are displayed as nodes in the Topology pane of the Mappings tab in the AMW.
The host topology is reconfigurable. You can perform the following tasks:
Create a host and assign an alias or user label.
Add or associate a new host port identifier to a particular host.
Change the host port identifier alias or user label.
Move or associate a host port identifier to a different host.
Replace a host port identifier with a new host port identifier.
Manually activate an inactive host port so that the port can gain access to host specific or host group specific LUN mappings.
Change the host port type to another type.
Move a host from one host group to another host group.
Remove a host group, a host, or a host port identifier.
Rename a host group or a host.

Using the Mappings Tab

In the Mappings tab, you can:
Define hosts and host groups
Add mappings to the selected host groups
For more information, see the PowerVault Modular Disk Storage Manager online help topics.

Defining a Host

You can use the Define Host Wizard in the AMW to define a host for a storage array. Either a known unassociated host port identifier or a new host port identifier can be added.
NOTE: A user label must be specified before the host port identifier may be added
(the add button is disabled until one is entered).
To define a host:
1 In the AMW, select the Mappings tab and select the appropriate storage array.
2 Perform one of the actions:
• Select Mappings→Define→Host.
• Select the Setup tab, and click Manually Define Hosts.
• Select the Mappings tab. Right-click the root node (storage array name), Default Group node, or Host Group node in the Topology pane to which you want to add the host, and select Define Host from the pop-up menu.
The Specify Host Name window is displayed.
3 In Host name, enter an up to 30 character alphanumeric name.
4 Select the relevant option in Do you plan to use the storage partitions in this storage array? and click Next.
The Specify Host Port Identifiers window is displayed.
5 Select the relevant option to add a host port identifier to the host. You can select:
• Add by selecting a known unassociated host port identifier—In Known unassociated host port identifiers, select the relevant host port identifier.
• Add by creating a new host port identifier—In New host port identifier, enter a 16 hexadecimal character name and an up to 30 character Alias for the host port identifier, and click Add.
NOTE: The host port identifier name is in hexadecimal and must contain the letters A through F and numbers 0 through 9.
6 Click Next.
The Specify Host Type window is displayed.
7 In Host type, select the relevant operating system for the host.
The Host Group Question window is displayed.
8 In this window, you can select:
• Yes—this host shares access to the same virtual disks with other hosts.
• No—this host does NOT share access to the same virtual disks with other hosts.
9 Click Next.
If you select Yes, the Specify Host Group window is displayed. If you select No, see step 11.
10 Enter the name of the host group or select an existing host group and click Next.
The Preview window is displayed.
11 Click Finish.
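The NOTE above about host port identifiers can be expressed as a quick format check. The sketch below is ours, not MDSM code, and the sample identifier is an arbitrary illustrative value:

```python
import re

# A host port identifier is a 16-character hexadecimal string.
# Lowercase is accepted here for convenience; this is a hypothetical
# helper, not part of MDSM.
PORT_ID = re.compile(r"^[0-9A-Fa-f]{16}$")

def is_valid_port_identifier(identifier: str) -> bool:
    return bool(PORT_ID.match(identifier))

print(is_valid_port_identifier("20010001C9F2D4B1"))  # 16 hex characters
print(is_valid_port_identifier("20010001C9F2"))      # too short
```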

Removing Host Access

To remove host access:
1 In the AMW, select the Mappings tab, select the host node in the Topology pane.
2 Perform one of these actions:
• Select Mappings→Remove.
• Right-click the host node and select Remove from the pop-up menu.
The Remove confirmation dialog is displayed.
3 Type yes.
4 Click OK.
For more information, see the PowerVault Modular Disk Storage Manager online help topics.

Managing Host Groups

A host group is a logical entity of two or more hosts that share access to specific virtual disks on the storage array. You create host groups with MDSM.
All hosts in a host group must have the same host type (operating system). In addition, all hosts in the host group must have special software, such as clustering software, to manage virtual disk sharing and accessibility.
If a host is part of a cluster, every host in the cluster must be connected to the storage array, and every host in the cluster must be added to the host group.

Creating a Host Group

To create host groups:
1 In the AMW, select the Mappings tab.
2 In the Topology pane, select the storage array or the Default Group.
3 Perform one of the following actions:
• Select Mappings→Define→Host Group.
• Right-click the storage array or the Default Group, and select Define Host Group from the pop-up menu.
4 Type the name of the new host group in Enter new host group name.
5 Select the appropriate hosts in the Select hosts to add area.
6 Click Add.
NOTE: To remove hosts, select the hosts in the Hosts in group area, and click Remove.
7 Click OK.
The host group is added to the storage array.
Adding a Host to a Host Group
You can add a host to an existing host group or a new host group using the Define Host Wizard. For more information, see "Defining a Host" on page 86.
You can also move a host to a different host group. For more information, see "Moving a Host to a Different Host Group" on page 89.
Removing a Host From a Host Group
You can remove a host from the Topology pane on the Mappings tab of the Array Management Window. For more information, see "Removing a Host Group" on page 90.

Moving a Host to a Different Host Group

To move a host to a different host group:
1 In the AMW, select the Mappings tab, select the host node in the Topology pane.
2 Perform one of these actions:
• Select Mappings→Move.
• Right-click the host node, and select Move from the pop-up menu.
The Move Host dialog is displayed.
3 In Select host group, select the host group to which you want to move the host.
The Move Host Confirmation dialog is displayed.
4 Click Yes.
The host is moved to the selected host group with the following mappings:
• The host retains the specific virtual disk mappings assigned to it.
• The host inherits the virtual disk mappings assigned to the host group to which it is moved.
• The host loses the virtual disk mappings assigned to the host group from which it was moved.

Removing a Host Group

To remove a host group:
1 In the AMW, select the Mappings tab, select the host node in the Topology pane.
2 Perform one of these actions:
• Select Mappings→Remove.
• Right-click the host node, and select Remove from the pop-up menu.
The Remove dialog is displayed.
3 Click Yes.
The selected host group is removed.
For more information, see the PowerVault Modular Disk Storage Manager online help topics.

Host Topology

Host topology is the organization of hosts, host groups, and host interfaces configured for a storage array. You can view the host topology in the Mappings tab of the AMW. For more information, see "Using the Mappings Tab" on page 86.
The following tasks change the host topology:
• Moving a host or a host connection
• Renaming a host group, a host, or a host connection
• Adding a host connection
• Replacing a host connection
• Changing a host type
MDSM automatically detects these changes for any host running the host agent software.

Starting or Stopping the Host Context Agent

The host context agent discovers the host topology and starts and stops with the host. The topology discovered by the host context agent can be viewed by clicking Configure Host Access (Automatic) in the Configure tab in the MDSM.
You must stop and restart the host context agent to see the changes to the host topology if:
A new storage array is attached to the host server.
A host is added while turning on power to the RAID controller modules.
Linux
To start or stop the host context agent, enter the following command at the prompt:
SMagent start
or
SMagent stop
You must stop and then restart SMagent after:
• Moving a controller offline or replacing a controller.
• Removing host-to-array connections from, or attaching host-to-array connections to, a Linux host server.
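The Linux stop-and-restart sequence can be wrapped in a small script. This is a sketch, not part of the product documentation: it assumes the SMagent command installed by the MDSM host software is on the PATH, and it reuses the five-second pause recommended for the Windows service below.

```shell
#!/bin/sh
# Sketch: restart the host context agent so it rediscovers the host topology.
# SMAGENT is an assumption -- override it if SMagent is not on the PATH.
SMAGENT="${SMAGENT:-SMagent}"

restart_smagent() {
    "$SMAGENT" stop      # stop the agent
    sleep 5              # allow the agent to shut down
    "$SMAGENT" start     # restart; topology is rediscovered on startup
}
```

For example, run `restart_smagent` after replacing a controller or changing host-to-array cabling.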
Windows
To start or stop the host context agent:
1 Click Start→Settings→Control Panel→Administrative Tools→Services. Alternatively, click Start→Administrative Tools→Services.
2 From the list of services, select Modular Disk Storage Manager Agent.
3 If the host context agent is running, click Action→Stop, then wait approximately 5 seconds.
4 Click Action→Start.

I/O Data Path Protection

You can have multiple host-to-array connections for a host. Ensure that you select all the connections to the array when configuring host access to the storage array.
NOTE: See the Deployment Guide for more information on cabling configurations.
NOTE: For maximum redundancy, you must select all host connections to the array
when manually defining host topology. For example, a host may have two host connections listed when manually configuring host access. For this host, you would select the two host connections listed in the Available hosts section and add them to the Selected hosts section using the Add button.
NOTE: For more information on configuring hosts, see "Configuration: About Your Host" on page 85.
If a component such as a RAID controller module or a cable fails, or an error occurs on the data path to the preferred RAID controller module, the virtual disk ownership is moved to the alternate, non-preferred RAID controller module for processing. This feature is called failover.
Drivers for multi-path frameworks such as Microsoft Multi-Path IO (MPIO) and Linux Device Mapper (DM) are installed on host systems that access the storage array and provide I/O path failover.
For more information on Linux DM, see "Configuration: Device Mapper Multipath for Linux" on page 179. For more information on MPIO, see microsoft.com.
NOTE: You must have the multi-path driver installed on the hosts at all times, even
in a configuration where there is only one path to the storage system, such as a single port cluster configuration.
During a failover, the virtual disk transfer is logged as a critical event, and an alert notification is sent automatically if you have configured alert destinations for the storage array.
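On a Linux host that uses Device Mapper, one way to confirm that the failover driver actually sees redundant paths is with the standard multipath tools. The following is a sketch that goes beyond this manual: `multipath` and `multipathd` belong to the device-mapper-multipath package, usually require root privileges, and their output format varies by distribution.

```shell
#!/bin/sh
# Sketch: show each virtual disk's paths as seen by DM multipath.
# MULTIPATH/MULTIPATHD defaults are assumptions; override to stub or relocate.
show_paths() {
    "${MULTIPATH:-multipath}" -ll            # maps, path groups, path states
    "${MULTIPATHD:-multipathd}" show paths   # daemon's per-path view
}
```

Each virtual disk should list one path per RAID controller module; a single remaining path indicates lost redundancy.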

Managing Host Port Identifiers

You can manage the host port identifiers that are added to the storage array. You can:
• Add—Add or associate a new host port identifier to a particular host.
• Edit—Change the host port identifier alias or user label. You can move (associate) the host port identifier to a new host.
• Replace—Replace a particular host port identifier with another host port identifier.
• Remove—Remove the association between a particular host port identifier and the associated host.
To manage a host port identifier:
1 Perform one of these actions:
   • Right-click the host in the Topology pane, and select Manage Host Port Identifiers in the pop-up menu.
   • From the menu bar, select Mappings→Manage Host Port Identifiers.
   The Manage Host Port Identifiers dialog is displayed. You can choose to manage the host port identifiers for a specific host or all of the host port identifiers for all of the hosts in Show host port identifiers associated with.
2 If you want to manage the host port identifiers for a specific host, select the host from the list of hosts that are associated with the storage array. If you want to manage the host port identifiers for all hosts, select All hosts from the list of hosts that are associated with the storage array.
3 If you are adding a new host port identifier, go to step 4. If you are managing an existing host port identifier, go to step 8.
4 Click Add.
   The Add Host Port Identifier dialog is displayed.
5 Select the method to add a host port identifier to the host. You can select:
   • Add by selecting a known unassociated host port identifier—Select the appropriate host port identifier from the existing list of Known unassociated host port identifiers.
   • Add by creating a new host port identifier—In New host port identifier, enter the name of the new host port identifier.
6 In User label, enter an alphanumeric name of up to 30 characters.
7 In Associated with host, select the appropriate host or host group.
8 Select the host port identifier that you would like to manage from the list of host port identifiers in the Host port identifier information area.
9 Perform one of these actions for the selected host port identifier:
   • To edit the host port identifier—Select the appropriate host port identifier and click Edit. The Edit Host Port Identifier dialog is displayed; update User label and click Save.
   • To replace the host port identifier—Select the appropriate host port identifier and click Replace. The Replace Host Port Identifier dialog is displayed; replace the current host port identifier with a known unassociated host port identifier or create a new host port identifier, update User label, and click Replace.
   • To remove the host port identifier—Select the appropriate host port identifier and click Remove. The Remove Host Port Identifier dialog is displayed; type yes and click OK.
For more information, see the PowerVault Modular Disk Storage Manager online help topics.
9 Configuration: Disk Groups and Virtual Disks

Creating Disk Groups and Virtual Disks

Disk groups are created in the unconfigured capacity of a storage array, and virtual disks are created in the free capacity of a disk group. The maximum number of physical disks supported in a disk group is 30. The hosts attached to the storage array read and write data to the virtual disks.
NOTE: Before you can create virtual disks, you must first organize the physical
disks into disk groups and configure host access. Then you can create virtual disks within a disk group.
To create a virtual disk, use one of the following methods:
Create a new disk group from unconfigured capacity. You first define the RAID level and free capacity (available storage space) for the disk group, and then you define the parameters for the first virtual disk in the new disk group.
Create a new virtual disk in the free capacity of an existing disk group. You only need to specify the parameters for the new virtual disk.
A disk group has a set amount of free capacity that is configured when the disk group is created. You can use that free capacity to subdivide the disk group into one or more virtual disks.
You can create disk groups and virtual disks using:
Automatic configuration—Provides the fastest method, but with limited configuration options.
Manual configuration—Provides more configuration options.
When creating a virtual disk, consider the uses for that virtual disk, and select an appropriate capacity for those uses. For example, if a disk group has a virtual disk that stores multimedia files (which tend to be large) and another virtual disk that stores text files (which tend to be small), the multimedia file virtual disk requires more capacity than the text file virtual disk.
A disk group must be organized according to its related tasks and subtasks. For example, if you create a disk group for the Accounting Department, you can create virtual disks that match the different types of accounting transactions performed in the department: Accounts Receivable (AR), Accounts Payable (AP), internal billing, and so forth. In this scenario, the AR and AP virtual disks probably need more capacity than the internal billing virtual disk.
NOTE: In Linux, the host must be rebooted after deleting virtual disks to reset the
/dev entries.
NOTE: Before you can use a virtual disk, you must register the disk with the host systems. See "Host-to-Virtual Disk Mapping" on page 119.

Creating Disk Groups

You can create disk groups using either Automatic configuration or Manual configuration.
To create disk groups using automatic or manual configuration:
1 To start the Create Disk Group Wizard, perform one of these actions:
   • To create a disk group from unconfigured capacity in the storage array—On the Logical tab, select an Unconfigured Capacity node, and select Disk Group→Create. Alternatively, you can right-click the Unconfigured Capacity node, and select Create Disk Group from the pop-up menu.
   • To create a disk group from unassigned physical disks in the storage array—On the Physical tab, select one or more unassigned physical disks of the same physical disk type, and select Disk Group→Create. Alternatively, you can right-click the unassigned physical disks, and select Create Disk Group from the pop-up menu.
   • To create a secure disk group—On the Physical tab, select one or more unassigned security capable physical disks of the same physical disk type, and select Disk Group→Create. Alternatively, you can right-click the unassigned security capable physical disks, and select Create Disk Group from the pop-up menu.
   The Introduction (Create Disk Group) window is displayed.
2 Click Next.
   The Disk Group Name and Physical Disk Selection window is displayed.
3 Type the name of the disk group (up to 30 characters) in Disk group name.
4 Select the appropriate Physical Disk selection choices. You can select:
   • Automatic, see step 6
   • Manual, see step 9
5 Click Next.
6 For Automatic configuration, the RAID Level and Capacity window is displayed.
7 Select the appropriate RAID level in Select RAID level. You can select RAID levels 0, 1/10, 6, and 5. Depending on your RAID level selection, the physical disks available for the selected RAID level are displayed in the Select capacity table.
8 In the Select Capacity table, select the relevant disk group capacity, and click Finish.
9 For Manual configuration, the Manual Physical Disk Selection window is displayed.
10 Select the appropriate RAID level in Select RAID level. You can select RAID levels 0, 1/10, 6, and 5. Depending on your RAID level selection, the physical disks available for the selected RAID level are displayed in the Unselected physical disks table.
11 In the Unselected physical disks table, select the appropriate physical disks and click Add.
   NOTE: You can select multiple physical disks at the same time by holding <Ctrl> or <Shift> and selecting additional physical disks.
12 To view the capacity of the new disk group, click Calculate Capacity.
13 Click Finish.
   A message prompts you that the disk group is successfully created and that you must create at least one virtual disk before you can use the capacity of the new disk group. For more information on creating virtual disks, see "Creating Virtual Disks" on page 98.

Locating a Disk Group

You can physically locate and identify all of the physical disks that comprise a selected disk group. An LED blinks on each physical disk in the disk group.
To locate a disk group:
1 In the AMW, select the Logical tab.
2 Select the appropriate disk group, and from the toolbar select Disk Group→Blink.
   The LEDs for the selected disk group blink.
3 After locating the disk group, click OK.
   The LEDs stop blinking.
4 If the LEDs for the disk group do not stop blinking, from the toolbar in the AMW, select Storage Array→Blink→Stop All Indications.
   If the LEDs successfully stop blinking, a confirmation message is displayed.
5 Click OK.

Creating Virtual Disks

Keep these important guidelines in mind when you create a virtual disk:
• Many hosts can have 256 logical unit numbers (LUNs) mapped per storage partition, but the number varies per operating system.
• After you create one or more virtual disks and assign a mapping, you must register the virtual disk with the operating system. In addition, you must make sure that the host recognizes the mapping between the physical storage array name and the virtual disk name. Depending on the operating system, run the host-based utilities hot_add and SMdevices.
• If the storage array contains physical disks with different media types or different interface types, multiple Unconfigured Capacity nodes may be displayed in the Logical pane of the Logical tab. Each physical disk type has an associated Unconfigured Capacity node if unassigned physical disks are available in the expansion enclosure.
• You cannot create a disk group and subsequent virtual disk from different physical disk technology types. Each physical disk that comprises the disk group must be of the same physical disk type.
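On a Linux host, the hot_add and SMdevices utilities mentioned above can be combined into a quick post-mapping check. A sketch, assuming both utilities (installed with the MDSM host software) are on the PATH:

```shell
#!/bin/sh
# Sketch: rescan for newly mapped virtual disks, then list what the host sees.
# HOT_ADD/SMDEVICES defaults are assumptions; override if installed elsewhere.
rescan_and_list() {
    "${HOT_ADD:-hot_add}"       # rescan so the OS registers new virtual disks
    "${SMDEVICES:-SMdevices}"   # map storage array virtual disks to /dev entries
}
```

Run `rescan_and_list` after assigning a new host-to-virtual disk mapping, then confirm that each expected virtual disk appears in the SMdevices output.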
NOTE: Ensure that you create disk groups before creating virtual disks.
To create virtual disks:
1 Choose one of these methods to start the Create Virtual Disk Wizard:
   • To create a virtual disk from unconfigured capacity in the storage array—On the Logical tab, select an Unconfigured Capacity node, and select Virtual Disk→Create. Alternatively, you can right-click the Unconfigured Capacity node, and select Create Virtual Disk from the pop-up menu.
   • To create a virtual disk from free capacity on a disk group—On the Logical tab, select a Free Capacity node, and select Virtual Disk→Create. Alternatively, you can right-click the Free Capacity node, and select Create Virtual Disk from the pop-up menu.
   • To create a virtual disk from unassigned physical disks in the storage array—On the Physical tab, select one or more unassigned physical disks of the same physical disk type, and select Virtual Disk→Create. Alternatively, you can right-click the unassigned physical disks, and select Create Virtual Disk from the pop-up menu.
   • To create a secure virtual disk—On the Physical tab, select one or more unassigned security capable physical disks of the same physical disk type, and select Virtual Disk→Create. Alternatively, you can right-click the unassigned security capable physical disks, and select Create Virtual Disk from the pop-up menu.
   If you chose an Unconfigured Capacity node or unassigned physical disks to create a virtual disk, the Disk Group Required dialog is displayed. Click Yes and create a disk group by using the Create Disk Group Wizard. The Create Virtual Disk Wizard is displayed after you create the disk group.
   If you chose a Free Capacity node, the Introduction (Create Virtual Disk) window is displayed.
2 Click Next.
   The Specify Capacity/Name window is displayed.
3 Select the appropriate unit for memory in Units, and enter the capacity of the virtual disk in New virtual disk capacity.
4 Enter a name of up to 30 characters for the virtual disk in Virtual disk name.
5 In Advanced virtual disk parameters, you can select:
   • Use recommended settings
   • Customize settings
6 Click Next.
7 In the Customize Advanced Virtual Disk Parameters window, select the appropriate Virtual Disk I/O characteristics type. You can select:
   • File system (typical)
   • Database
   • Multimedia
   • Custom
   NOTE: If you select Custom, you must select an appropriate segment size.
8 Select the appropriate Preferred RAID controller module ownership, and click Next.
   The Specify Virtual Disk-to-LUN Mapping window is displayed.
9 In the Specify Virtual Disk-to-LUN Mapping window, you can select:
   • Default Mapping and the appropriate Host type
   • Map later Using Mappings View
10 Click Finish.
   The virtual disks are created.

Changing the Virtual Disk Modification Priority

You can specify the modification priority setting for a single virtual disk or multiple virtual disks on a storage array.
Guidelines to change the modification priority of a virtual disk:
If more than one virtual disk is selected, the modification priority defaults to the lowest priority. The current priority is shown only if a single virtual disk is selected.
Changing the modification priority by using this option modifies the priority for the selected virtual disks.