Trademarks used in this text: Dell™, the DELL logo, PowerEdge™, PowerVault™, and OpenManage™ are trademarks of Dell Inc. Intel® is a registered trademark of Intel Corporation in the U.S. and other countries. Microsoft®, Windows®, Windows Server®, MS-DOS®, and Internet Explorer® are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. Red Hat® and Red Hat Enterprise Linux® are registered trademarks of Red Hat, Inc. in the United States and other countries. SUSE® is a registered trademark of Novell, Inc. in the United States and other countries.
Regulatory Model: E03J Series and E04J Series
Regulatory Type: E03J001 and E04J001
2013 - 06    Rev. A02
NOTE: Unless specified, MD3200i Series represents Dell PowerVault MD3200i and
Dell PowerVault MD3220i storage arrays.
WARNING: See the Safety, Environmental, and Regulatory Information document
for important safety information before following any procedures listed in this
document.
About This Document
This guide familiarizes you with the functions of the Dell PowerVault
MD3200i Series storage array. The guide is organized according to the tasks
that you must complete after receiving your MD3200i Series storage array.
The tasks are:
Planning—Information about the storage array and its features.
Configuration—Tasks that must be completed to ensure that your storage
array performs optimally.
Management—Tasks that may be performed to ensure that the storage array
components are up to date and performing properly, including removal and
installation of storage array components.
Troubleshooting—Tasks that you must complete to resolve problems that
may occur with the storage array.
Additional information on these and other topics can be found in the Dell PowerVault MD3200i and MD3220i Storage Array Deployment Guide at
dell.com/support/manuals.
Inside the Box of the Dell PowerVault MD3200i Series Storage Array
Your MD3200i Series product package includes:
• Dell PowerVault MD3200i Series storage array
• Power cables
• Front bezel (optional)
• Mounting rails (2) (optional)
• MD3200i Series resource media
• Rack Installation Instructions
• Getting Started With Your System (provides an overview of enclosure features, setting up your enclosure, and technical specifications)
MD3200i Series Storage Array
The Dell PowerVault MD3200i Series is a 2U rack-mounted external redundant array of independent disks (RAID) storage array capable of accommodating up to twelve 3.5" or twenty-four 2.5" 6.0-Gbps Serial-Attached SCSI (SAS) disks. The MD3200i Series storage arrays can be daisy-chained with MD1200 Series expansion enclosures, providing access to a maximum of 120 disks (or 192 disks with Premium Feature activation) in the entire storage system. Connectivity between the storage array and the host server is provided by a standard Ethernet connection.
Dell PowerVault Modular Disk Storage Manager
Dell PowerVault Modular Disk Storage Manager (MDSM) is a graphical user
interface (GUI) application, used to configure and manage one or more
MD3200i Series Storage Arrays. The MDSM software is located on the
MD3200i Series resource media.
Dell PowerVault Modular Disk Configuration Utility
Dell PowerVault Modular Disk Configuration Utility (MDCU) is an iSCSI
configuration wizard that can be used in conjunction with MDSM to simplify
the configuration of iSCSI connections. The MDCU software is located on
the MD3200i Series resource media.
Other Information You May Need
WARNING: See the safety and regulatory information that shipped with your
system. Warranty information may be included within this document or as a
separate document.
• The Getting Started Guide provides an overview of setting up and cabling your storage array.
• The Deployment Guide provides installation and configuration instructions for both software and hardware.
• The Storage Manager CLI Guide provides information about using the command line interface (CLI).
• The Resource media contains all system management tools.
• The Systems Support Matrix provides information on supported software and hardware for MD systems. The document is available at dell.com/support/manuals.
• The Dell PowerEdge Cluster Documentation is available at dell.com/support/manuals.
• Release notes or readme files are included to provide last-minute updates to the enclosure or documentation or advanced technical reference material intended for experienced users or technicians.
• This document as well as the Dell PowerVault MD1200 Series Installation Guide is available at dell.com/support/manuals for users who incorporate MD1200 expansion enclosures.
• The Rack Installation Instructions included with your rack solution describes how to install your enclosure into a rack.
NOTE: Always check for updates on dell.com/support/manuals and read the updates first because they often supersede information in other documents.
2 Planning: About Your Storage Array
Overview
The MD3200i Series storage array is designed for high availability, offering redundant access to data storage. It supports both single and dual RAID controller configurations.
The Dell PowerVault MD3200i storage array provides 1.0-Gbps 1000BASE-T connectivity to the host server and enables access for up to 32 physical hosts.
The MD3200i Series storage array includes:
•RAID controller module(s)
•PSU/Fan modules
•Disk drives (also called physical disk drives in this document)
•A front bezel (optional)
• A system enclosure, into which the other components are plugged
Hardware Features
Front-Panel Features and Indicators
Figure 2-1. Front-Panel Features and Indicators—Dell PowerVault MD3200i
Figure 2-2. Front-Panel Features and Indicators—Dell PowerVault MD3220i
Figure 2-3. Front-Bezel Features and Indicators
Item / Indicator, Button, or Connector / Description
1 Enclosure status LED: The enclosure status LED lights when the enclosure power is on. Lights blue during normal operation. Blinks blue when a host server is identifying the enclosure or when the system identification button is pressed. Lights amber as the enclosure boots or is reset. Blinks amber when the enclosure is either in a fault state or the hosts are not using the preferred path to a virtual disk.
2 Power LED: The power LED lights green when at least one power supply is supplying power to the enclosure.
3 Split mode LED: This LED must be unlit, as the split mode function is not supported by the MD3200i Series storage arrays.
4 System identification button: The system identification button on the front control panel can be used to locate a particular enclosure within a rack. When the button is pushed, the system status indicators on the control panel and the RAID controller module(s) blink blue until the button is pushed again.
5 Hard drives: MD3200i—Up to twelve 3.5" SAS hot-swappable hard drives. MD3220i—Up to twenty-four 2.5" SAS hot-swappable hard drives.
6 Enclosure mode switch: The function of this switch is not applicable to your storage array. However, if MD1200 Series expansion enclosures are daisy-chained to the storage array, the enclosure mode switches of the MD1200 Series expansion enclosures must be set to the Unified-Mode position.
NOTE: This switch must be set before turning on the MD1200 Series expansion enclosure. Changing the switch setting after the expansion enclosure is turned on has no effect on the enclosure configuration until the expansion enclosure goes through a complete power cycle.
Back Panel Features and Indicators
Figure 2-4. Back-Panel Features and Indicators—Dell PowerVault MD3200i Series
1 600 W power supply/cooling fan module
2 RAID Controller Module 0
3 RAID Controller Module 1
4 600 W power supply/cooling fan module
Hard-Drive Indicator Patterns
Figure 2-5. Hard Drive Indicators
1 Hard-drive activity indicator (green)
2 Hard-drive status indicator (green and amber)
Drive-Status Indicator Pattern / Condition
Off: The physical disk is not yet discovered by the host server, is spun down for removal, is not supported for the RAID controller module, or is not in the physical disk slot.
NOTE: The drive status indicator remains off until all hard drives are initialized after system power is turned on. Drives are not ready for insertion or removal during this time.
Steady green: Physical disk is online.
Green flashing (On 250 ms, Off 250 ms): Physical disk is being identified.
Green flashing (On 400 ms, Off 100 ms): Physical disk is rebuilding.
Amber flashing (On 150 ms, Off 150 ms): Physical disk has failed.
Flashing green, amber, and off (green On 500 ms, amber On 500 ms, Off 1000 ms): Physical disk failure predicted (SMART).
Flashing green, amber, and off (green 3 seconds, amber 3 seconds, off 3 seconds): Physical disk rebuild aborted.
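The indicator patterns above are simply a fixed mapping from blink behavior to drive condition, so a monitoring script could summarize them in a lookup table. The sketch below is illustrative only; the pattern names are paraphrases of this manual's table, not identifiers from any Dell software:

```python
# Illustrative summary of the drive-status LED patterns described above.
# Pattern keys are informal paraphrases, not a Dell API.
DRIVE_STATUS_PATTERNS = {
    "off": "not yet discovered, spun down, unsupported, or slot empty",
    "steady green": "physical disk is online",
    "green flashing 250 ms": "physical disk is being identified",
    "green 400 ms / off 100 ms": "physical disk is rebuilding",
    "amber flashing 150 ms": "physical disk has failed",
    "green 500 ms / amber 500 ms / off 1000 ms": "failure predicted (SMART)",
    "green 3 s / amber 3 s / off 3 s": "rebuild aborted",
}

def describe(pattern: str) -> str:
    """Return the drive condition for a given indicator pattern."""
    return DRIVE_STATUS_PATTERNS.get(pattern, "unknown pattern")
```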
Power Supply and Cooling Fan Features
The MD3200i Series storage array includes two integrated, hot-swappable
power supply/cooling fan modules. Both modules must be installed to ensure
proper cooling. The system requires at least one of the cooling fans to
function to avoid overheating.
A power supply/cooling fan module can be replaced without powering down
the system. For information on removing and installing the modules, see
"Power Supply/Cooling Fan Module" on page 225.
CAUTION: A power supply/cooling fan module can be removed from a powered-
on system for a maximum period of 5 minutes. Beyond that time, the system may
automatically shut down to prevent damage.
Power Indicator Codes and Features
Figure 2-6. Power Indicator Codes and Features
Item / LED Type / Description
1 DC power: The LED lights green when the DC output voltage is within the limit. If this LED is off, the DC output voltage is not within the limit.
2 Power supply/cooling fan fault: The LED lights amber when the DC output voltage is not within the limit or a fault with the fan is detected. If this LED is off, no fault condition is present.
3 AC power: The LED lights green when the AC input voltage is within the limit. If this LED is off, either there is no power or the AC input voltage is not within the limit.
4 Power connector: Connect the external power supply to this connector.
5 Power switches (2): The power switch controls the power supply output to the enclosure.
3 Planning: RAID Controller Modules
RAID Controller Modules
The RAID controller modules provide high-performance, advanced virtual disk configuration, and fault-tolerant disk subsystem management. Each RAID controller module contains 2 GB or 4 GB of mirrored cache for high availability and is protected by a battery-powered cache offload mechanism.
NOTE: The 4 GB mirrored cache is an optional feature.
RAID controller modules provide the following data path and enclosure management functions:
• Monitoring and controlling enclosure environment elements (temperature, fans, power supplies, and enclosure LEDs)
• Controlling access to the physical disks
• Communicating enclosure attributes and states to the host server and management station
Each RAID controller module has multiple iSCSI IN ports for host access. The ports provide redundant host connections and support a high-availability storage environment. Various configurations can be utilized, in both single-controller (simplex) and dual-controller (duplex) modes, to connect the storage enclosure to hosts depending on specific redundancy needs.
For information on cabling, see the MD3200i and MD3220i Series storage array's Deployment Guide at dell.com/support/manuals.
RAID Controller Module Connectors and Features
Figure 3-1. MD3200i Series iSCSI RAID Controller Module
Item / Component / Function
1 SAS OUT port: Provides SAS connection for cabling to an expansion enclosure.
2 iSCSI IN port 0: Provides host-to-controller iSCSI 1 Gbps Ethernet connection.
3 iSCSI IN port 1: Provides host-to-controller iSCSI 1 Gbps Ethernet connection.
4 iSCSI IN port 2: Provides host-to-controller iSCSI 1 Gbps Ethernet connection.
5 iSCSI IN port 3: Provides host-to-controller iSCSI 1 Gbps Ethernet connection.
6 Management port Ethernet connector: Provides a 100/1000 Mbps Ethernet connection for out-of-band management of the enclosure.
7 Debug port: Dell support only.
8 SAS OUT port link/fault LED: Lights green when all four links are connected. Lights amber when one to three links are disconnected. Off when all links in the port are disconnected or the cable is disconnected.
9 iSCSI IN port link LED: Lights green when an Ethernet connection at 1 Gbps is established. Off when there is no link.
10 Controller power LED: Lights green when the controller is turned on. Off when the controller is not turned on.
11 iSCSI IN port activity LED: Lights green when there is no activity on the 1 Gbps connection. Blinks green when there is activity on the 1 Gbps connection. Off when the 1 Gbps link is down.
12 Controller fault LED: Lights amber when a controller fault is detected. Off when the controller is operating normally.
13 System identification LED: Blinks blue when the system identification push-button on the enclosure front panel is pressed.
14 Cache active or cache offload LED: Lights green when the on-board controller memory contains data. If AC power fails, this LED changes to indicate Cache Offload status. If the password reset function has successfully changed the password, this LED flashes on and off briefly.
15 Battery fault LED: Lights amber when the battery backup unit or battery has failed. Off when the battery backup unit is operating normally.
16 Password reset switch: Activating this switch deletes the password.
17 MAC address label: Provides MAC addresses of the iSCSI host ports and the management port.
18 Management port speed LED: Lights green when the Ethernet connection is operating at 1000 Mbps. Lights amber when the Ethernet connection is operating at 100 Mbps. Off when the Ethernet connection is operating at 10 Mbps or is not active.
19 Management port activity LED: Lights green when the Ethernet connection is active. Off when the Ethernet connection is not active.
RAID Controller Module—Additional Features
Battery Backup Unit
Each RAID controller contains a two-cell lithium-ion nanopolymer battery
backup unit (BBU). It provides power to the RAID controller module in the
event of a power outage. For information on removing and installing the BBU,
see "RAID Controller Module Backup Battery Unit" on page 224.
NOTE: For virtual disks, the RAID controller firmware changes the data cache
setting based on the state of the battery. If the battery is missing or does not have
sufficient charge, the controller flushes the cache and sets the write cache
attribute to Write Through for all virtual disks. When the battery is replaced, Write
Back is re-enabled.
Storage Array Thermal Shutdown
The system automatically shuts down when system temperature exceeds the
safe threshold. The battery backup unit protects against data loss by providing power to offload the cache to non-volatile memory in the event of power loss.
It is not necessary to shut down any MD1200 Series expansion enclosures
attached to the storage array when thermal shutdown occurs.
Temperature threshold values determine the temperature at which shutdown
occurs. These thresholds cannot be changed.
Table 3-1. Shutdown Threshold Type
Threshold / Event When Temperature Is Exceeded
Nominal failure threshold: A critical event is set.
Maximum failure threshold: Shutdown of the system power supplies occurs within 3 minutes.
Shutdown threshold: Shutdown of the system power supplies occurs within 5 seconds.
System Password Reset
The storage array system password can be reset if it is forgotten. To reset the
password, push and hold down the password reset switch for at least 5
seconds. The password is deleted. See Figure 3-1 to locate the password reset
switch.
The RAID controller module allows you to change the password. For more
information about setting your password, see "Setting a Password" on page 71.
NOTE: The reset switch can be accessed by using a small object such as the tip of
a pen.
Cache Functions and Features
Cache Mirroring
The cache mirroring function copies accepted host-write data from the primary controller to the partner controller. This action ensures that host-write data is safely mirrored to the partner controller before successful completion status is returned to the host. If a controller fails, the surviving controller safely retains
all mirrored data. Cache mirroring is enabled by default.
Write-Back Cache
In Write-back Cache, write operations result in a completion signal being
sent to the host operating system as soon as the cache receives the data to be
written. The target physical disk receives the data at a more appropriate time
in order to increase controller performance. In dual-active controller
configurations with Write-back Caching enabled, the write data is always
mirrored to the cache of the second controller before completion status is
issued to the host initiator. Write-Back Cache is enabled by default unless
cache mirroring is disabled.
Write-Through Cache
In write-through cache, data is written to the physical disk before completion
status is returned to the host operating system. Write-through cache is
considered more robust than write-back cache, since a power failure is less
likely to cause loss of data. The RAID controller automatically switches to
write-through if cache mirroring is disabled or if the battery is missing or has a
fault condition.
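The rule that ties these two cache modes together (write-back requires cache mirroring and a healthy battery, otherwise the controller falls back to write-through) can be sketched as a small decision function. This is an illustrative model of the behavior described above, not the firmware's actual logic:

```python
def write_cache_mode(cache_mirroring: bool, battery_ok: bool) -> str:
    """Illustrative model of the controller's cache-policy rule:
    write-back is used only when cache mirroring is enabled and the
    battery is present and sufficiently charged; in every other case
    the controller switches to write-through."""
    if cache_mirroring and battery_ok:
        return "write-back"
    return "write-through"
```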
4 Planning: MD3200i Series Storage Array Terms and Concepts
This chapter explains terms and concepts used for configuration and
operation of MD3200i Series storage arrays.
Physical Disks, Virtual Disks, and Disk Groups
Physical disks in your storage array provide the physical storage capacity for
your data. Before you can begin writing data to the storage array, you must
configure the physical storage capacity into logical components, called disk
groups and virtual disks.
A disk group is a set of physical disks upon which multiple virtual disks are
created. The maximum number of physical disks supported in a disk group is
120 disks (or 192 disks with Premium Feature activation) for RAID 0, RAID 1,
and RAID 10, and 30 drives for RAID 5 and RAID 6. You can create disk
groups from unconfigured capacity on your storage array.
A virtual disk is a partition in a disk group that is made up of contiguous data
segments of the physical disks in the disk group. A virtual disk consists of data
segments from all physical disks in the disk group. Virtual disks and disk
groups are set up according to how you plan to organize your data. For
example, you may have one virtual disk for inventory, a second virtual disk for
financial and tax information, and so on.
All virtual disks in a disk group support the same RAID level. The storage
array supports up to 255 virtual disks (minimum size of 10 MB each) that can
be assigned to host servers. Each virtual disk is assigned a Logical Unit
Number (LUN) that is recognized by the host operating system.
Physical Disks
Only Dell-supported 6.0-Gbps SAS physical disks can be used in the storage array. If the storage array detects unsupported physical disks, it marks the disk
as unsupported and the physical disk becomes unavailable for all operations.
NOTE: The MD3200i Series storage enclosure must contain at least two physical
disks for proper operation. This is necessary because the physical disks are used to
store configuration information.
Physical Disk States
Table 4-1 describes the various states of the physical disk, which are
recognized by the storage array and reported in the MDSM application.
Table 4-1. RAID Controller Physical Disk States
Status / Mode / Description / Physical Disk Status LED
Optimal / Assigned: The physical disk in the indicated slot is configured as part of a disk group. LED: Steady green.
Optimal / Unassigned: The physical disk in the indicated slot is unused and available to be configured. LED: Steady green.
Optimal / Hot Spare standby: The physical disk in the indicated slot is configured as a hot spare. LED: Steady green.
Optimal / Hot Spare in use: The physical disk in the indicated slot is in use as a hot spare within a disk group. LED: Steady green.
Failed / Assigned, Unassigned, Hot Spare in use, or Hot Spare standby: The physical disk in the indicated slot has failed because of an unrecoverable error, an incorrect drive type or drive size, or its operational state being set to failed. LED: Amber flashing (150 ms).
Replaced / Assigned: The physical disk in the indicated slot is replaced and is ready to be, or is actively being, configured into a disk group. LED: Green flashing (On 400 ms, Off 100 ms).
Pending Failure / Assigned, Unassigned, Hot Spare in use, or Hot Spare standby: A Self-Monitoring Analysis and Reporting Technology (SMART) error is detected on the physical disk in the indicated slot. LED: Green flashing (500 ms), amber (500 ms), and off (1000 ms).
Offline / Not applicable: The physical disk has either been spun down or had a rebuild aborted by user request. LED: Green flashing (3000 ms), amber (3000 ms), and off (3000 ms).
Identify / Assigned, Unassigned, Hot Spare in use, or Hot Spare standby: The physical disk is being identified. LED: Green flashing (250 ms).
N/A / N/A: The indicated slot is empty, or the array cannot detect the physical disk.
If a disk drive rebuild fails because of a source drive failure or because the
drive is too small, the MDSM reports a failure of the physical disk even
though the LED state on the drive indicates the rebuild was aborted (green
for 3 seconds, amber for 3 seconds, then off for 3 seconds).
Self-Monitoring Analysis and Reporting Technology
SMART monitors the internal performance of all physical disk components to
detect faults indicating the potential for physical disk failure. SMART uses
this information to report whether failure is imminent so that a physical disk
can be replaced before failure occurs. The storage array monitors all attached
drives and notifies you when a predicted failure is reported by a physical disk.
Virtual Disks and Disk Groups
When configuring a storage array, you must:
1 Organize the physical disks into disk groups.
2 Create virtual disks within these disk groups.
3 Provide host server access.
4 Create mappings to associate the virtual disks with the host servers.
NOTE: Host server access must be created before mapping virtual disks.
Disk groups are always created in the unconfigured capacity of a storage array. Unconfigured capacity is the available physical disk space not already assigned in the storage array.
Virtual disks are created within the free capacity of a disk group. Free capacity is the space in a disk group that has not been assigned to a virtual disk.
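The relationship between a disk group's capacity, its virtual disks, and its remaining free capacity can be illustrated with a toy model. The class and names below are invented for this sketch and do not correspond to MDSM or any Dell API:

```python
class DiskGroup:
    """Toy model: virtual disks are carved out of a disk group's capacity."""

    def __init__(self, capacity_gb: float):
        self.capacity_gb = capacity_gb
        self.virtual_disks = []  # list of (name, size_gb) pairs

    @property
    def free_capacity_gb(self) -> float:
        """Free capacity = group capacity minus space already assigned."""
        return self.capacity_gb - sum(size for _, size in self.virtual_disks)

    def create_virtual_disk(self, name: str, size_gb: float) -> None:
        """A new virtual disk must fit within the group's free capacity."""
        if size_gb > self.free_capacity_gb:
            raise ValueError("not enough free capacity in the disk group")
        self.virtual_disks.append((name, size_gb))

# Example: one virtual disk for inventory, a second for finance.
group = DiskGroup(capacity_gb=1200)
group.create_virtual_disk("inventory", 400)
group.create_virtual_disk("finance", 500)
```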
Virtual Disk States
The storage array recognizes the following virtual disk states.
Table 4-2. RAID Controller Virtual Disk States
State / Description
Optimal: The virtual disk contains physical disks that are all online.
Degraded: The virtual disk with a redundant RAID level contains an inaccessible physical disk. The system can still work properly, but performance may be affected and additional disk failures may result in data loss.
Offline: A virtual disk with one or more member disks in an inaccessible (failed, missing, or offline) state. Data on the virtual disk is no longer accessible.
Force online: The storage array forces a virtual disk that is in an Offline state to an Optimal state. If all the member physical disks are not available, the storage array forces the virtual disk to a Degraded state. The storage array can force a virtual disk to an Online state only when a sufficient number of physical disks are available to support the virtual disk.
RAID Levels
RAID levels determine the way in which data is written to physical disks.
Different RAID levels provide different levels of accessibility, redundancy, and
capacity.
Using multiple physical disks has the following advantages over using a single
physical disk:
• Placing data on multiple physical disks (striping) allows input/output (I/O) operations to occur simultaneously, improving performance.
• Storing redundant data on multiple physical disks using mirroring or parity
supports reconstruction of lost data if an error occurs, even if that error is
the failure of a physical disk.
Each RAID level provides different performance and protection. You must
select a RAID level based on the type of application, access, fault tolerance,
and data you are storing.
The storage array supports RAID levels 0, 1, 5, 6, and 10. The maximum number of physical disks that can be used in a disk group depends on the RAID level:
• 192 for RAID levels 0, 1, and 10
• 30 for RAID levels 5 and 6
RAID Level Usage
To ensure best performance, you must select an optimal RAID level when you
create a system physical disk. The optimal RAID level for your disk array
depends on:
•Number of physical disks in the disk array
•Capacity of the physical disks in the disk array
•Need for redundant access to the data (fault tolerance)
•Disk performance requirements
RAID 0
RAID 0 uses disk striping to provide high data throughput, especially for large
files in an environment that requires no data redundancy. RAID 0 breaks the
data down into segments and writes each segment to a separate physical disk.
I/O performance is greatly improved by spreading the I/O load across many
physical disks. Although it offers the best performance of any RAID level,
RAID 0 lacks data redundancy. Select this option only for non-critical data,
because failure of one physical disk results in the loss of all data. Examples of
RAID 0 applications include video editing, image editing, prepress
applications, or any application requiring high bandwidth.
RAID 1
RAID 1 uses disk mirroring so that data written to one physical disk is
simultaneously written to another physical disk. This RAID level offers fast
performance, the best data availability, and the highest disk overhead. RAID 1
is recommended for small databases or other applications that do not require
large capacity. RAID 1 provides full data redundancy; examples include accounting, payroll, and financial applications.
RAID 5
RAID 5 uses parity and striping data across all physical disks (distributed
parity) to provide high data throughput and data redundancy, especially for
small random access. This is a versatile RAID level and is suited for multi-user
environments where typical I/O size is small and there is a high proportion of
read activity, such as file, application, database, web, e-mail, news, and
intranet servers.
RAID 6
RAID 6 is similar to RAID 5 but provides an additional parity disk for better
redundancy. This is the most versatile RAID level and is suited for multi-user environments where typical I/O size is small and there is a high proportion of read activity. RAID 6 is recommended when large physical disks are used or when a large number of physical disks are in a disk group.
RAID 10
RAID 10, a combination of RAID 1 and RAID 0, uses disk striping across
mirrored disks. It provides high data throughput and complete data
redundancy. Utilizing an even number of physical disks (four or more) creates
a RAID level 10 disk group and/or virtual disk. Because RAID levels 1 and 10
use disk mirroring, half of the capacity of the physical disks is utilized for
mirroring. This leaves the remaining half of the physical disk capacity for
actual storage. RAID 10 is automatically used when a RAID level of 1 is
chosen with four or more physical disks. RAID 10 works well for medium-sized databases or any environment that requires high performance, fault tolerance, and moderate-to-medium capacity.
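The capacity trade-offs described in the sections above can be summarized with simple arithmetic: mirroring halves usable capacity, RAID 5 gives up one disk's worth to parity, and RAID 6 gives up two. The sketch below is a rough illustration of these rules of thumb; it ignores metadata overhead and is not taken from any Dell documentation:

```python
def usable_disks(raid_level: int, n_disks: int) -> int:
    """Disks' worth of usable capacity out of n_disks, per the usual
    rules of thumb: RAID 0 uses all disks; RAID 1/10 mirror, so half
    the capacity is usable; RAID 5 loses one disk's worth to parity;
    RAID 6 loses two disks' worth to its double parity."""
    if raid_level == 0:
        return n_disks
    if raid_level in (1, 10):
        return n_disks // 2
    if raid_level == 5:
        return n_disks - 1
    if raid_level == 6:
        return n_disks - 2
    raise ValueError("unsupported RAID level")
```

For example, a four-disk RAID 10 group yields two disks' worth of storage, while the same four disks in RAID 5 would yield three.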
Segment Size
Disk striping enables data to be written across multiple physical disks. Disk
striping enhances performance because striped disks are accessed
simultaneously.
The segment size or stripe element size specifies the size of data in a stripe
written to a single disk. The storage array supports stripe element sizes of 8
KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, and 512 KB. The default stripe
element size is 128 KB.
Stripe width, or depth, refers to the number of disks involved in an array
where striping is implemented. For example, a four-disk group with disk
striping has a stripe width of four.
NOTE: Although disk striping delivers excellent performance, striping alone does
not provide data redundancy.
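Given a stripe element size and a stripe width, the disk that holds a given logical offset follows from simple modular arithmetic. The sketch below is illustrative only and is not the controller's actual block mapping; the default 128 KB element size and a width of four are taken from the text above:

```python
def locate(offset_kb: int, segment_kb: int = 128, width: int = 4):
    """Map a logical offset (in KB) to (disk index, offset on that disk
    in KB) for a simple striped layout: consecutive stripe elements
    rotate across the disks in the group."""
    segment_index = offset_kb // segment_kb   # which stripe element overall
    disk = segment_index % width              # elements rotate across disks
    stripe = segment_index // width           # full stripes already written
    disk_offset = stripe * segment_kb + (offset_kb % segment_kb)
    return disk, disk_offset
```

With the defaults, logical offset 0 lands on disk 0, offset 128 KB on disk 1, and offset 512 KB wraps back to disk 0 at its second stripe element.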
Virtual Disk Operations
Virtual Disk Initialization
Every virtual disk must be initialized. Initialization can be done in the
foreground or the background. A maximum of four virtual disks can be
initialized concurrently on each RAID controller module.
Background Initialization
The storage array executes a background initialization when the virtual disk is
created to establish parity, while allowing full host server access to the virtual
disks. Background initialization does not run on RAID 0 virtual disks. The
background initialization rate is controlled by MDSM. To change the rate of
background initialization, you must stop any existing background
initialization. The rate change is implemented when the background
initialization restarts automatically.
Foreground Initialization
The storage array supports foreground initialization for virtual disks. All
access to the virtual disk is blocked during foreground initialization. During
foreground initialization, zeros (0x00) are written to every sector of the virtual
disk. The virtual disk is available after foreground initialization is completed.
Consistency Check
A consistency check verifies the correctness of data in a redundant array
(RAID levels 1, 5, 6, and 10). For example, in a system with parity, checking
consistency involves computing the data on one physical disk and comparing
the results to the contents of the parity physical disk.
A consistency check is similar to a background initialization. The difference is
that background initialization cannot be started or stopped manually, while
consistency check can.
NOTE: Dell recommends that you run data consistency checks on a redundant
array at least once a month. This allows detection and automatic replacement of
unreadable sectors. Finding an unreadable sector during a rebuild of a failed
physical disk is a serious problem, because the system does not have the
redundancy to recover the data.
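For a parity-based level, the check described above amounts to recomputing parity from the data disks and comparing it with the stored parity. A minimal byte-level XOR sketch (illustrative only; real controllers operate on full stripes with their own layouts):

```python
from functools import reduce

def xor_parity(blocks: list[bytes]) -> bytes:
    """Compute byte-wise XOR parity across equal-sized data blocks,
    as in a RAID 5-style parity calculation."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def consistent(data_blocks: list[bytes], parity_block: bytes) -> bool:
    """A stripe is consistent if recomputed parity matches stored parity."""
    return xor_parity(data_blocks) == parity_block

# Two tiny data blocks and their parity.
data = [b"\x0f\x00", b"\xf0\x01"]
parity = xor_parity(data)
```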
Media Verification
Another background task performed by the storage array is media verification
of all configured physical disks in a disk group. The storage array uses the
Read operation to perform verification on the space configured in virtual
disks and the space reserved for the metadata.
Cycle Time
The media verification operation runs only on selected disk groups,
independent of other disk groups. Cycle time is the time taken to complete
verification of the metadata region of the disk group and all virtual disks in
the disk group for which media verification is configured. The next cycle for a
disk group starts automatically when the current cycle completes. You can set
the cycle time for a media verification operation between 1 and 30 days. The
storage controller throttles the media verification I/O accesses to disks based
on the cycle time.
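The throttling relationship implied here is simple arithmetic: one full verification pass must fit inside the configured cycle time. A hypothetical sketch (the disk group size is an assumed example, not a documented figure):

```python
def verify_rate_bytes_per_sec(disk_group_bytes, cycle_days):
    """Lower bound on the media-verification I/O rate: the controller must
    throttle verification so one complete pass finishes within the cycle time."""
    return disk_group_bytes / (cycle_days * 86400)

# An assumed 12 TB disk group verified over the maximum 30-day cycle
# needs a sustained rate of roughly 4.6 MB/s of background reads.
rate = verify_rate_bytes_per_sec(12e12, 30)
```

Shorter cycle times therefore mean proportionally more background read traffic competing with host I/O.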
The storage array tracks the cycle for each disk group independent of other
disk groups on the controller and creates a checkpoint. If the media
verification operation on a disk group is preempted or blocked by another
operation on the disk group, the storage array resumes after the current cycle.
If the media verification process on a disk group is stopped due to a RAID
controller module restart, the storage array resumes the process from the last
checkpoint.
Virtual Disk Operations Limit
The maximum number of active, concurrent virtual disk processes per RAID
controller module installed in the storage array is four. This limit is applied to
the following virtual disk processes:
•Background initialization
•Foreground initialization
•Consistency check
•Rebuild
•Copy back
If a redundant RAID controller module fails with existing virtual disk
processes, the processes on the failed controller are transferred to the peer
controller. A transferred process is placed in a suspended state if there are four
active processes on the peer controller. The suspended processes are resumed
on the peer controller when the number of active processes falls below four.
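The failover behavior described above can be sketched as a small queueing rule. The process names are illustrative only; the four-process limit is the documented value.

```python
MAX_ACTIVE = 4  # documented per-RAID-controller-module limit on virtual disk processes

def transfer_processes(failed, peer):
    """Move virtual disk processes from a failed controller to its peer,
    suspending any that would exceed the peer's four-active-process limit."""
    active = list(peer)
    suspended = []
    for proc in failed:
        if len(active) < MAX_ACTIVE:
            active.append(proc)
        else:
            suspended.append(proc)  # resumed when the active count falls below 4
    return active, suspended

active, suspended = transfer_processes(
    failed=["rebuild-vd3", "copyback-vd7"],
    peer=["background-init-vd1", "consistency-check-vd2", "rebuild-vd5"],
)
assert len(active) == 4 and suspended == ["copyback-vd7"]
```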
Disk Group Operations
RAID Level Migration
You can migrate from one RAID level to another depending on your
requirements. For example, fault-tolerant characteristics can be added to a
stripe set (RAID 0) by converting it to a RAID 5 set. MDSM provides
information about RAID attributes to assist you in selecting the appropriate
RAID level. You can perform a RAID level migration while the system is still
running and without rebooting, which maintains data availability.
Segment Size Migration
Segment size refers to the amount of data (in KB) that the storage array
writes on a single physical disk in a virtual disk before writing data on the next
physical disk. Valid values for the segment size are 8 KB, 16 KB, 32 KB, 64 KB,
128 KB, 256 KB, and 512 KB.
Dynamic segment size migration enables the segment size of a given virtual
disk to be changed. A default segment size is set when the virtual disk is
created, based on such factors as the RAID level and expected usage. You can
change the default value if segment size usage does not match your needs.
When considering a segment-size change, two scenarios illustrate different
approaches to the limitations:
•If I/O activity stretches beyond the segment size, you can increase it to
reduce the number of disks required for a single I/O. Using a single physical
disk for a single request frees disks to service other requests, especially
when you have multiple users accessing a database or storage environment.
•If you use the virtual disk in a single-user, large I/O environment (such as
for multimedia application storage), performance can be optimized when
a single I/O request is serviced with a single data stripe (the segment size
multiplied by the number of physical disks in the disk group used for data
storage). In this case, multiple disks are used for the same request, but
each disk is only accessed once.
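The two scenarios above come down to how a logical offset maps onto segments and stripes. A minimal sketch of that mapping, assuming a simple linear striped layout (disk counts and offsets are illustrative):

```python
def locate(byte_offset, segment_kb, data_disks):
    """Map a logical byte offset to (stripe number, disk index, offset
    within the segment) for a simple striped layout."""
    seg = segment_kb * 1024
    segment_index, offset_in_seg = divmod(byte_offset, seg)
    stripe, disk = divmod(segment_index, data_disks)
    return stripe, disk, offset_in_seg

# 128 KB segments across 4 data disks give a 512 KB data stripe.
assert locate(0, 128, 4) == (0, 0, 0)
assert locate(300 * 1024, 128, 4) == (0, 2, 44 * 1024)  # lands on the 3rd disk
assert locate(512 * 1024, 128, 4) == (1, 0, 0)          # first byte of stripe 1
```

A request no larger than one segment touches a single disk; a request equal to the full data stripe, aligned to a stripe boundary, touches every disk exactly once.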
Virtual Disk Capacity Expansion
When you configure a virtual disk, you select a capacity based on the amount
of data you expect to store. However, you may need to increase the virtual disk
capacity for a standard virtual disk by adding free capacity to the disk group.
This creates more unused space for new virtual disks or to expand existing
virtual disks.
Disk Group Expansion
Because the storage array supports hot pluggable physical disks, you can add
two physical disks at a time for each disk group while the storage array
remains online. Data remains accessible on virtual disk groups, virtual disks,
and physical disks throughout the operation. The data and increased unused
free space are dynamically redistributed across the disk group. RAID
characteristics are also reapplied to the disk group as a whole.
Disk Group Defragmentation
Defragmenting consolidates the free capacity in the disk group into one
contiguous area. Defragmentation does not change the way in which the data
is stored on the virtual disks.
Disk Group Operations Limit
The maximum number of active, concurrent disk group processes per
installed RAID controller module is one. This limit is applied to the following
disk group processes:
•Virtual disk RAID level migration
•Segment size migration
•Virtual disk capacity expansion
•Disk group expansion
•Disk group defragmentation
If a redundant RAID controller module fails with an existing disk group
process, the process on the failed controller is transferred to the peer
controller. A transferred process is placed in a suspended state if there is an
active disk group process on the peer controller. The suspended processes are
resumed when the active process on the peer controller completes or is
stopped.
NOTE: If you try to start a disk group process on a controller that does not have an
existing active process, the start attempt fails if the first virtual disk in the disk group
is owned by the other controller and there is an active process on the other
controller.
RAID Background Operations Priority
The storage array supports a common configurable priority for the following
RAID operations:
•Background initialization
•Rebuild
•Copy back
•Virtual disk capacity expansion
•RAID level migration
•Segment size migration
•Disk group expansion
•Disk group defragmentation
The priority of each of these operations can be changed to address
performance requirements of the environment in which the operations are to
be executed.
NOTE: Setting a high priority level impacts storage array performance. It is not
advisable to set priority levels at the maximum level. Priority must also be assessed
in terms of impact to host server access and time to complete an operation. For
example, the longer a rebuild of a degraded virtual disk takes, the greater the risk
for secondary disk failure.
Virtual Disk Migration and Disk Roaming
Virtual disk migration is moving a virtual disk or a hot spare from one array to
another by detaching the physical disks and re-attaching them to the new
array. Disk roaming is moving a physical disk from one slot to another on the
same array.
Disk Migration
You can move virtual disks from one array to another without taking the
target array offline. However, the disk group being migrated must be offline
before you perform disk migration. If the disk group is not offline prior to
migration, the source array holding the physical and virtual disks within the
disk group marks them as missing. However, the disk groups themselves
migrate to the target array.
An array can import a virtual disk only if it is in an optimal state. You can
move virtual disks that are part of a disk group only if all members of the disk
group are being migrated. The virtual disks automatically become available
after the target array has finished importing all the disks in the disk group.
When you migrate a physical disk or a disk group from one MD3200i array to
another, the MD3200i array you migrate to recognizes any data structures
and/or metadata you had in place on the migrating MD3200i array. However,
if you are migrating from any device other than an MD3200i Series storage
array, the MD3200i array does not recognize the migrating metadata and that
data is lost. In this case, the MD3200i array initializes the physical disks and
marks them as unconfigured capacity.
NOTE: Only disk groups and associated virtual disks with all member physical disks
present can be migrated from one storage array to another. It is recommended that
you only migrate disk groups that have all their associated member virtual disks in
an optimal state.
NOTE: The number of physical disks and virtual disks that a storage array supports
limits the scope of the migration.
Use either of the following methods to move disk groups and virtual disks:
•Hot virtual disk migration—Disk migration with the destination storage
array power turned on.
•Cold virtual disk migration—Disk migration with the destination storage
array power turned off.
NOTE: To ensure that the migrating disk groups and virtual disks are correctly
recognized when the target storage array has an existing physical disk, use hot
virtual disk migration.
When attempting virtual disk migration, follow these recommendations:
•Moving physical disks to the destination array for migration—When
inserting drives into the destination storage array during hot virtual disk
migration, wait for the inserted physical disk to be displayed in MDSM
before inserting the next physical disk.
WARNING: Without the delay between drive insertions, the storage array can
become unstable and manageability is temporarily lost.
•Migrating virtual disks from multiple storage arrays into a single storage
array—When migrating virtual disks from multiple or different storage
arrays into a single destination storage array, move all of the physical disks
from the same storage array as a set into the new destination storage array.
Ensure that all of the physical disks from a storage array are migrated to
the destination storage array before starting migration from the next
storage array.
NOTE: If the drive modules are not moved as a set to the destination storage
array, the newly relocated disk groups may not be accessible.
•Migrating virtual disks to a storage array with no existing physical disks—
Turn off the destination storage array when migrating disk groups or a
complete set of physical disks from one storage array to another storage
array that has no existing physical disks. After the destination storage
array is turned on and has successfully recognized the newly migrated
physical disks, migration operations can continue.
NOTE: Disk groups from multiple storage arrays must not be migrated at the
same time to a storage array that has no existing physical disks. Use cold
virtual disk migration for the disk groups from one storage array.
•Enabling premium features before migration—Before migrating disk
groups and virtual disks, enable the required premium features on the
destination storage array. If a disk group is migrated from an MD3200i
storage array that has a premium feature enabled and the destination array
does not have this feature enabled, an Out of Compliance error message
may be generated.
Disk Roaming
You can move physical disks within an array. The RAID controller module
automatically recognizes the relocated physical disks and logically places
them in the proper virtual disks that are part of the disk group. Disk roaming
is permitted when the RAID controller module is either online or powered
off.
NOTE: The disk group must be exported before moving the physical disks.
Advanced Features
The RAID enclosure supports several advanced features:
•Virtual Disk Snapshots
•Virtual Disk Copy
•High Performance Tier
NOTE: Virtual Disk Snapshot, Virtual Disk Copy, and High Performance Tier are
premium features that must be activated separately. If you have purchased these
features, an activation card is supplied that contains instructions for enabling this
functionality.
Host Server-to-Virtual Disk Mapping
The host server attached to a storage array accesses various virtual disks on
the storage array through its host ports. Specific virtual disk-to-LUN
mappings to an individual host server can be defined. In addition, the host
server can be part of a host group that shares access to one or more virtual
disks.
You can manually configure a host server-to-virtual disk mapping. When you
configure host server-to-virtual disk mapping, consider these guidelines:
•You can define one host server-to-virtual disk mapping for each virtual disk
in the storage array.
•Host server-to-virtual disk mappings are shared between RAID controller
modules in the storage array.
•A unique LUN must be used by a host group or host server to access a
virtual disk.
•Not every operating system has the same number of LUNs available for
use.
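These guidelines amount to a uniqueness constraint: each host server or host group reaches a given virtual disk through its own LUN. A hypothetical validator sketch (the host and LUN values are illustrative, not from the product):

```python
def add_mapping(mappings, host_or_group, lun, virtual_disk):
    """Register a host-server-to-virtual-disk mapping, enforcing that each
    host (or host group) uses a unique LUN for each virtual disk it accesses."""
    used = mappings.setdefault(host_or_group, {})
    if lun in used:
        raise ValueError(f"LUN {lun} already in use by {host_or_group}")
    used[lun] = virtual_disk

mappings = {}
add_mapping(mappings, "host-a", 0, "vd-data")
add_mapping(mappings, "host-a", 1, "vd-logs")
try:
    add_mapping(mappings, "host-a", 1, "vd-scratch")  # duplicate LUN rejected
except ValueError:
    pass
```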
Host Types
A host server is a server that accesses a storage array. Host servers are mapped
to the virtual disks and use one or more iSCSI initiator ports. Host servers
have the following attributes:
•Host name—A name that uniquely identifies the host server.
•Host group (used in Cluster solutions only)—Two or more host servers
associated together to share access to the same virtual disks.
This host group is a logical entity you can create in MDSM. All host servers
in a host group must be running the same operating system.
•Host type—The operating system running on the host server.
Snapshot Virtual Disks
A snapshot is a point-in-time image of a virtual disk. The snapshot provides
an image of the virtual disk at the time the snapshot was created. You create a
snapshot so that an application (for example, a backup application) can
access the snapshot and read the data while the source virtual disk remains
online and user-accessible. When the backup is completed, the snapshot
virtual disk is no longer needed. You can create up to four snapshots per
virtual disk.
Snapshots are used to recover previous versions of files that have changed
since the snapshot was taken. Snapshots are implemented using a
copy-on-write algorithm: before data on the source virtual disk is modified,
the original data is copied to the snapshot repository. Snapshots can be
created instantaneously or can be scheduled, and they incur less overhead
than a full physical copy process.
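The copy-on-write behavior can be sketched in a few lines. This is a conceptual model only: an in-memory dict stands in for the snapshot repository virtual disk, and sector contents are illustrative.

```python
class Snapshot:
    """Point-in-time image of a virtual disk using copy-on-write."""

    def __init__(self, source):
        self.source = source      # live virtual disk: sector -> data
        self.repository = {}      # original data preserved before modification

    def write(self, sector, data):
        # Preserve the original contents once, before the first overwrite.
        if sector not in self.repository:
            self.repository[sector] = self.source[sector]
        self.source[sector] = data

    def read(self, sector):
        # Snapshot view: the preserved copy if one exists, else the live disk.
        return self.repository.get(sector, self.source[sector])

disk = {0: b"alpha", 1: b"beta"}
snap = Snapshot(disk)
snap.write(0, b"ALPHA")             # the live disk changes...
assert snap.read(0) == b"alpha"     # ...but the snapshot still sees the original
assert disk[0] == b"ALPHA"
```

Because only modified sectors are copied, the snapshot is created instantly and the repository stays far smaller than a full physical copy.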
Snapshot Repository Virtual Disk
When you create a snapshot virtual disk, it automatically creates a snapshot
repository virtual disk. A snapshot repository is a virtual disk created in the
storage array as a resource for a snapshot virtual disk. A snapshot repository
virtual disk contains snapshot virtual disk metadata and copy-on-write data
for a particular snapshot virtual disk. The repository supports one snapshot
only.
You cannot select a snapshot repository virtual disk as a source virtual disk or
as a target virtual disk in a virtual disk copy. If you select a Snapshot source
virtual disk as the target virtual disk of a virtual disk copy, you must disable all
snapshot virtual disks associated with the source virtual disk.
CAUTION: Before using the Snapshot Virtual Disks Premium Feature in a
Windows Clustered configuration, you must map the snapshot virtual disk to the
cluster node that owns the source virtual disk. This ensures that the cluster nodes
correctly recognize the snapshot virtual disk.
CAUTION: Mapping the snapshot virtual disk to the node that does not own the
source virtual disk before the snapshot enabling process is completed can result
in the operating system misidentifying the snapshot virtual disk. This can result in
data loss or an inaccessible snapshot.
For more information on mapping the snapshot virtual disk to the secondary
node, see the Dell PowerVault MD3200i and MD3220i Storage Arrays With
Microsoft Windows Server Failover Clusters document at
dell.com/support/manuals.
Virtual Disk Copy
Virtual disk copy is a premium feature to:
•Back up data
•Copy data from disk groups that use smaller-capacity physical disks to disk
groups that use greater-capacity physical disks
•Restore snapshot virtual disk data to the source virtual disk
Virtual disk copy generates a full copy of data from the source virtual disk to
the target virtual disk in a storage array and can be performed either online or
offline.
Source Virtual Disk
When you create a virtual disk copy, a copy pair consisting of a source virtual
disk and a target virtual disk is created on the same storage array. When a
virtual disk copy is started, data from the source virtual disk is copied
completely to the target virtual disk.
Target Virtual Disk
When you start a virtual disk copy, the target virtual disk maintains a copy of
the data from the source virtual disk. You can choose whether to use an
existing virtual disk or create a new virtual disk as the target virtual disk. If
you choose an existing virtual disk as the target, all data on the target is
overwritten. A target virtual disk can be a standard virtual disk or the source
virtual disk of a failed or disabled snapshot virtual disk.
NOTE: The target virtual disk capacity must be equal to or greater than the source
virtual disk capacity.
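The capacity rule in the note, together with the overwrite warning above, can be expressed as a simple pre-flight check. This is a hypothetical sketch; the function and its return value are illustrative, not part of MDSM.

```python
def validate_copy_pair(source_gb, target_gb, target_has_data):
    """Pre-flight checks for a virtual disk copy pair: the target must be at
    least as large as the source, and existing target data is overwritten."""
    if target_gb < source_gb:
        raise ValueError("target capacity must be >= source capacity")
    return ["existing data on the target will be overwritten"] if target_has_data else []

assert validate_copy_pair(500, 750, target_has_data=False) == []
try:
    validate_copy_pair(500, 250, target_has_data=False)  # undersized target rejected
except ValueError:
    pass
```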
When you begin the disk copy process, you must define the rate at which the
copy is completed. Giving the copy process top priority slightly impacts I/O
performance, while giving it lowest priority makes the copy process longer to
complete. You can modify the copy priority while the disk copy is in progress.
For more information, see the online help topics.
Virtual Disk Recovery
You can use the Edit host server-to-virtual disk mappings feature to recover
data from the backup virtual disk. This functionality enables you to unmap
the original source virtual disk from its host server, then map the backup
virtual disk to the same host server.
Ensure that you record the LUN used to provide access to the source virtual
disk. You need this information when you define a host server-to-virtual disk
mapping for the target (backup) virtual disk. Also, be sure to stop all I/O
activity to the source virtual disk before beginning the virtual disk recovery
procedure.
Using Snapshot and Disk Copy Together
You can use the Snapshot Virtual Disk and Virtual Disk Copy premium
features together to back up data on the same storage array, or to restore the
data on the snapshot virtual disk to its original source virtual disk.
You can copy data from a virtual disk by:
•Taking a point-in-time snapshot of the data (online)
•Copying the data to another virtual disk using a virtual disk copy (offline)
You can select a snapshot virtual disk as the source virtual disk for a virtual
disk copy. This configuration is one of the best ways you can apply the
snapshot virtual disk feature, since it enables complete backups without any
impact on the storage array I/O.
You cannot use a snapshot repository virtual disk as a source virtual disk or as
a target virtual disk in a virtual disk copy. If you select the source virtual disk
as the target virtual disk of a virtual disk copy, you must disable all snapshot
virtual disks associated with the source virtual disk.
Multi-Path Software
Multi-path software (also referred to as the failover driver) is software
resident on the host server that provides management of the redundant data
path between the host server and the storage array. For the multi-path
software to correctly manage a redundant path, the configuration must have
redundant iSCSI connections and cabling.
The multi-path software identifies the existence of multiple paths to a virtual
disk and establishes a preferred path to that disk. If any component in the
preferred path fails, the multi-path software automatically re-routes I/O
requests to the alternate path so that the storage array continues to operate
without interruption.
NOTE: Multi-path software is available on the MD3200i Series resource media.
Preferred and Alternate Controllers and Paths
A preferred controller is a RAID controller module designated as the owner of
a virtual disk or disk group. The preferred controller is automatically selected
by MDSM when a virtual disk is created. You can change the preferred RAID
controller module owner of a virtual disk after it is created. If a host is
connected to only one RAID controller module, the preferred owner must be
manually assigned to the RAID controller module that the host can access.
Ownership of a virtual disk is moved from the preferred controller to the
secondary controller (also called the alternate controller) when the preferred
controller is:
•Physically removed
•Updating firmware
•Involved in an event that caused failover to the alternate controller
Paths used by the preferred RAID controller module to access either the disks
or the host server are called the preferred paths; redundant paths are called
the alternate paths. If a failure causes the preferred path to become
inaccessible, the storage array automatically uses the alternate path to access
data. When this occurs, the enclosure status LED blinks amber.
Virtual Disk Ownership
MDSM can be used to automatically build and view virtual disks. It uses
optimal settings to stripe the disk group. Virtual disks are assigned to
alternating RAID controller modules when they are created. This default
assignment provides a simple means for load balancing the workload of the
RAID controller modules.
Ownership can later be modified to balance workload according to actual
usage. If virtual disk ownership is not manually balanced, it is possible for one
controller to have the majority of the work, while the other controller is idle.
Limit the number of virtual disks in a disk group. If multiple virtual disks are
in a disk group, consider:
•The impact each virtual disk has on other virtual disks in the same disk
group.
•The patterns of usage for each virtual disk.
•Different virtual disks have higher usage at different times of day.
Load Balancing
A load balance policy is used to determine which path is used to process I/O.
Multiple options for setting the load balance policies let you optimize I/O
performance when mixed host interfaces are configured.
You can choose one of these load balance policies to optimize I/O
performance:
•Round robin with subset—The round robin with subset I/O load balance
policy routes I/O requests, in rotation, to each available data path to the
RAID controller module that owns the virtual disks. This policy treats all
paths to the RAID controller module that owns the virtual disk equally for
I/O activity. Paths to the secondary RAID controller module are ignored
until ownership changes. The basic assumption for the round-robin policy
is that the data paths are equal. With mixed host support, the data paths
may have different bandwidths or different data transfer speeds.
•Least queue depth with subset—The least queue depth with subset policy
is also known as the least I/Os or least requests policy. This policy routes
the next I/O request to a data path that has the least outstanding I/O
requests queued. For this policy, an I/O request is simply a command in
the queue. The type of command or the number of blocks that are
associated with the command are not considered. The least queue depth
with subset policy treats large block requests and small block requests
equally. The data path selected is one of the paths in the path group of the
RAID controller module that owns the virtual disk.
•Least path weight with subset (Windows operating systems only)—The
least path weight with subset policy assigns a weight factor to each data
path to the virtual disk. An I/O request is routed to the path with the
lowest weight value to the RAID controller module that owns the virtual
disk. If more than one path to the virtual disk has the same weight value,
the round robin with subset policy is used to route I/O requests between
the paths with the same weight value.
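The two most common subset policies can be contrasted with a short sketch. In both cases only the owning controller's paths (the subset) are candidates; the path names and queue depths are illustrative assumptions.

```python
from itertools import cycle

def round_robin_with_subset(owned_paths):
    """Rotate I/O requests across the paths to the owning RAID controller
    module, treating every path as equal."""
    return cycle(owned_paths)

def least_queue_depth_with_subset(owned_paths, queue_depths):
    """Send the next I/O down the owning controller's path with the fewest
    outstanding requests, ignoring request type and size."""
    return min(owned_paths, key=lambda p: queue_depths[p])

paths = ["ctrl0-port0", "ctrl0-port1"]   # subset: paths to the owning controller only
rr = round_robin_with_subset(paths)
assert [next(rr) for _ in range(3)] == ["ctrl0-port0", "ctrl0-port1", "ctrl0-port0"]
assert least_queue_depth_with_subset(paths, {"ctrl0-port0": 5, "ctrl0-port1": 2}) == "ctrl0-port1"
```

Least queue depth adapts when one path is slower or busier, which is why it tends to suit mixed-bandwidth configurations better than plain round robin.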
Monitoring MD3200i Series System Performance
You can use the Performance Monitor to select virtual disks and RAID
controller modules to monitor or to change the polling interval.
Keep these guidelines in mind when using the Performance Monitor:
•The Performance Monitor does not dynamically update its display if any
configuration changes occur while the window is open. You must close the
Performance Monitor window and reopen it for the changes to be
displayed.
•Using the Performance Monitor to retrieve performance data can affect
the normal storage array performance depending on the polling interval
that you set.
•If the storage array you are monitoring begins in or transitions to an
unresponsive state, an informational dialog is displayed. The dialog
informs you that the Performance Monitor cannot poll the storage array
for performance data.
To monitor the performance of the arrays:
1 Open MDSM and select the appropriate storage array.
2 Open the Array Management Window (AMW) for the selected storage
array.
3 In the AMW, select Storage Array→Monitor Performance.
4 Click Settings.
a Select the items that you want to monitor. You can monitor:
•RAID controller modules
•Virtual disks
•Storage array totals
NOTE: By default, all items are selected.
b In Polling interval, select how often you want to update the
performance statistics.
NOTE: For an accurate elapsed time, do not use the Set RAID Controller
Module Clocks option while using the Performance Monitor.
Each time the polling interval elapses, the Performance Monitor queries
the storage array again and updates the statistics in the table.
5 Click Start.
Values are displayed for the selected storage arrays in the Performance
Monitor data table. The table is updated at the interval specified in the
Polling Interval setting.
6 Click Update to force an immediate poll of the storage array.
7 Click Stop to stop monitoring the storage array.
8 Click Save As on the Performance Monitor main dialog to save the
currently displayed performance statistics.
9 Select an appropriate directory.
10 Type a file name in the File name text box.
NOTE: The .perf extension is the default.
11 Select a file type from the Files of type list.
•Use the Report format (ASCII text) file type if you want to save the
data to a report form for viewing or printing.
•Use the Comma Delimited Format file type if you want to save the
data in a form that can be imported into a commercial spreadsheet
application for further analysis. Most leading commercial spreadsheet
applications recognize a comma delimiter. These applications use the
delimiter to import the data into spreadsheet cells.
12 Click Save.
The Performance Monitor data provides information about how your storage
array is performing. The data is presented in eight columns, which are
described in this table. Use this data to make performance tuning decisions
for your storage array.
Table 4-3. Performance Monitor Table Description
•Devices—Controller, virtual disk, or storage array total.
•Total IOs—Cumulative I/Os per second from the last start time.
•Read Percentage—Percentage of cumulative I/Os that are reads.
•Cache Hit Percentage—Percentage of cumulative I/Os that are served from cache.
•Current KB/second—Snapshot of the throughput value per second (1 KB = 1024 bytes).
•Maximum KB/second—Maximum recorded throughput value from the last start time.
•Current IO/second—Snapshot of I/Os per second (IOP = input/output operation per second, or one completed I/O transaction).
•Maximum IO/second—Maximum recorded IOP value from the last start time.
For more information, see the online help topics.
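A comma-delimited export saved in step 11 can be post-processed with standard tooling. The column names below follow Table 4-3, but the exact file layout and the sample values are assumptions for illustration.

```python
import csv
import io

# Minimal sample in the comma-delimited export layout assumed above.
sample = """Devices,Total IOs,Read Percentage,Current KB/second
Virtual Disk vd0,10500,62.5,2048
Storage Array Totals,20400,58.1,4096
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# Find the busiest individual device, skipping the totals row at the end.
busiest = max(rows[:-1], key=lambda r: float(r["Total IOs"]))
assert busiest["Devices"] == "Virtual Disk vd0"
```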
5
Configuration: Overview
Dell PowerVault Modular Disk Storage Manager (MDSM) online help
contains information on how to use the MDSM application to perform the
configuration and management tasks described in this document. You can
access online help by clicking Help located at the top right corner of the
MDSM interface. For information on installing MDSM, see the MD3200i and
MD3220i Storage Array Deployment Guide at dell.com/support/manuals.
NOTE: MDSM supports MD3000i and MD3200i storage arrays and can
automatically detect both these storage arrays.
User Interface
The Storage Manager screen is divided into two primary windows:
•Enterprise Management Window (EMW)—The EMW provides high-level
management of the storage arrays. You can launch the Array
Management Window from the EMW.
•Array Management Window (AMW)—The AMW provides management
functions for a single storage array. You can launch more than one AMW at
the same time to manage different storage arrays.
The EMW and the AMW consist of the following:
•The title bar at the top of the window shows the name of the application.
•Beneath the title bar, is the menu bar. You can select menu options from
the menu bar to perform tasks on a storage array.
•Beneath the menu bar, is the toolbar. You can select options in the toolbar
to perform tasks on a storage array.
•Beneath the toolbar, are the tabs. Tabs are used to group the tasks that you
can perform on a storage array.
•Beneath the tabs, is the status bar. The status bar shows status messages
and status icons related to the storage array.
NOTE: By default, the toolbar and status bar are not displayed. To view the toolbar
or the status bar, select View→Toolbar or View→Status Bar, respectively.
Enterprise Management Window
The EMW provides high-level management of storage arrays. When you start
MDSM, the EMW is displayed. The EMW has these tabs:
•Devices tab—Provides information about the storage arrays.
•Setup tab—Presents the initial setup tasks that guide you through adding
storage arrays and configuring alerts.
The Devices tab has a Tree view on the left side of the window that shows
discovered storage arrays, unidentified storage arrays, and the status
conditions for the storage arrays. Discovered storage arrays are managed by
MDSM. Unidentified storage arrays are available to MDSM but not
configured for management. The right side of the Devices tab has a Table
view that shows detailed information for each storage array.
In the EMW, you can:
•Discover hosts and managed storage arrays on the local sub-network.
•Manually add and remove hosts and storage arrays.
•Locate the storage arrays.
•Name or rename discovered storage arrays.
•Add storage array comments to the Table view.
•Sort rows in the Table view according to different criteria.
•Store your EMW view preferences and configuration data in local
configuration files. The next time you open the EMW, data from the local
configuration files is used to show a customized view and preferences.
•Monitor the status of managed storage arrays and indicate status using
appropriate icons.
•Add or remove management connections.
•Configure alert notifications for all selected storage arrays through e-mail
or SNMP traps.
•Report critical events to the configured alert destinations.
•Launch the AMW for a selected storage array.
•Run a script to perform batch management tasks on specific storage arrays.
•Import the operating system theme settings into the MDSM.
•Upgrade firmware on multiple storage arrays concurrently.
Inheriting the System Settings
Use the Inherit System Settings option to import the operating system theme
settings into the MDSM. Importing system theme settings affects features
like font type, font size, color, and contrast in the MDSM.
1 Open the Inherit System Settings window in one of these ways:
   •Select Tools→Inherit System Settings.
   •Select the Setup tab and click Inherit System Settings.
2 Select Inherit system settings for color and font.
3 Click OK.
Array Management Window
You can launch the AMW from the EMW. The AMW provides management
functions for a single storage array. You can have multiple AMWs open
simultaneously to manage different storage arrays.
To launch the AMW:
1 In the EMW, on the Devices tab, double-click the relevant storage array.
   The context menu for the selected storage array is displayed.
2 In the context menu, select Manage Storage Array.
   The AMW for the selected storage array is displayed.
The AMW has the following tabs:
•Summary tab—You can view the following information about the storage
array:
•Status
•Hardware components
•Capacity
•Hosts and Mappings
•Storage partitions
•Disk groups and virtual disks
•Logical tab—You can view the organization of the storage array by virtual
disks, disk groups, free capacity nodes, and any unconfigured capacity for
the storage array.
•Physical tab—You can view the organization of the storage array by RAID
controller modules, physical disks, and other hardware components.
•Mappings tab—You can define the hosts, host groups, and host ports. You
can change the mappings to grant virtual disk access to host groups and
hosts and create storage partitions.
•Setup tab—You can complete the initial setup tasks to configure the
storage array.
•Support tab—You can complete common support tasks like downloading
RAID controller module firmware, viewing the online help, and so on.
In the AMW, you can:
•Provide storage array options, for example, renaming a storage array,
changing a password, or enabling a background media scan.
•Provide the ability to configure virtual disks from the storage array
capacity, define hosts and host groups, and grant host or host group access
to sets of virtual disks called storage partitions.
•Monitor the health of storage array components and report detailed status
using applicable icons.
•Provide applicable recovery procedures for a failed logical component or a
failed hardware component.
•Present a view of the Event Log for the storage array.
•Present profile information about hardware components, such as RAID
controller modules and physical disks.
•Provide RAID controller module management options, such as changing
ownership of virtual disks or placing a RAID controller module online or
offline.
•Provide physical disk management options, such as assignment of hot
spares and locating the physical disk.
•Monitor storage array performance.
6 Configuration: About Your Storage Array
Out-of-Band and In-Band Management
You can manage a storage array in two ways:
•Out-of-band management
•In-band management
Out-of-Band Management
Using the out-of-band management method, data is separate from
commands and events. Data travels through the host-to-controller interface,
while commands and events travel through the management port Ethernet
cables.
This management method lets you configure the maximum number of
virtual disks that are supported by your operating system and host adapters. A
maximum of eight storage management stations can concurrently monitor an
out-of-band managed storage array. This limit does not apply to systems that
manage the storage array through the in-band management method.
When you use out-of-band management, you must set the network
configuration for each RAID controller module’s management Ethernet port.
This includes the Internet Protocol (IP) address, subnetwork mask (subnet
mask), and gateway. If you are using a Dynamic Host Configuration Protocol
(DHCP) server, you can enable automatic network configuration, but if you
are not using a DHCP server, you must enter the network configuration
manually.
NOTE: RAID controller module network configurations can be assigned using a
DHCP server (the default setting). However, if a DHCP server is not available for 150
seconds, the RAID controller modules assign static IP addresses. The addresses
assigned are 192.168.128.101 for controller 0 and 192.168.128.102 for controller 1.
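If DHCP was unavailable and the controllers fell back to the default static addresses above, you can probe them from the management station before opening MDSM. The sketch below is an illustrative reachability check, assuming TCP port 2463 (the MDSM management port noted later in this chapter); it is not part of MDSM itself.

```python
import socket

# Default static management IPs the controllers assign when no DHCP
# server responds within 150 seconds (per the note above).
DEFAULT_CONTROLLER_IPS = ["192.168.128.101", "192.168.128.102"]
MDSM_PORT = 2463  # MDSM management port; adjust if your setup differs

def controller_reachable(ip, port=MDSM_PORT, timeout=2.0):
    """Return True if a TCP connection to the controller succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

for ip in DEFAULT_CONTROLLER_IPS:
    print(ip, "reachable" if controller_reachable(ip, timeout=0.5) else "unreachable")
```

If both addresses are unreachable, verify the management station is on the 192.168.128.0/24 subnet before manually adding the array.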
In-Band Management
Using in-band management, commands, events, and data travel through the
host-to-controller interface. Unlike out-of-band management, commands and
events are mixed with data.
NOTE: For detailed information on setting up in-band and out-of-band management,
see the Deployment Guide.
When you add storage arrays by using this management method, you need to
specify only the host name or IP address of the host. After you add the
specific host name or IP address, the host-agent software automatically
detects any storage arrays that are connected to that host.
CAUTION: Some operating systems can be used only as storage management
stations. For more information about the operating system that you are using, see
the MD PowerVault Support Matrix at dell.com/support/manuals.
For more information, see the PowerVault Modular Disk Storage Manager
online help topics.
Access Virtual Disk
Each RAID controller module in an MD3200i Series storage array maintains a
special virtual disk, called the access virtual disk. The host-agent software uses
the access virtual disk to communicate management requests and event
information between the storage management station and the RAID
controller module in an in-band–managed storage array. The access virtual
disk is not available for application data storage. The default LUN is 31.
Storage Arrays
You must add the storage arrays to MDSM before you can set up the storage
array for optimal use.
Adding Storage Arrays
You can add storage arrays only in the EMW. You can:
•Automatically discover storage arrays
•Manually add storage arrays
NOTE: Verify that your host or management station network configuration—
including station IP address, subnet mask, and default gateway—is correct before
adding a new storage array using the Automatic option.
NOTE: For Linux, set the default gateway so that broadcast packets are sent to
255.255.255.0. For Red Hat Enterprise Linux, if no gateway exists on the network, set
the default gateway to the IP address of the NIC.
NOTE: MDSM uses TCP/UDP port 2463 for communication to the MD Storage
Array.
Automatic Discovery of Storage Arrays
The Automatic Discovery process sends out a broadcast message across the
local subnetwork (subnet) and adds any storage array that responds to the
message. The Automatic Discovery process finds both in-band and out-of-band storage arrays.
NOTE: The Automatic Discovery option and the Re-scan Hosts option in the
Enterprise Management Window provide automatic methods to discover managed
storage arrays.
Manual Addition of a Storage Array
Use Manual Addition if the storage array resides outside of the local subnet.
This process requires specific identification information to manually add a
storage array.
To add a storage array that uses out-of-band management, specify the host
name or management port IP address of each controller in the storage array.
Before using this option, verify that the applicable network configuration
tasks are performed.
To add an in-band storage array, add the host through which the storage array
is attached to the network.
NOTE: It can take several minutes for MDSM to connect to the specified storage
array.
To add a storage array manually:
1 Select Edit→Add Storage Array.
2 Select the relevant management method:
   •Out-of-band management—Enter a host name or an IP address for the RAID controller modules in the storage array.
   •In-band management—Enter a name or an IP address for the host through which the storage array is attached to the network.
NOTE: When adding a storage array using in-band management with iSCSI, a
session must first be established between the initiator on the host server and
the storage array. For more information, see "Configuration: Using iSCSI" on
page 83.
NOTE: The host agent must be restarted before in-band management
communication can be established. See "Starting or Restarting the Host
Context Agent Software" on page 260.
3 Click Add.
4 Use one of these methods to name a storage array:
   •In the EMW, select the Setup tab, and select Name/Rename Storage Arrays.
   •In the AMW, select the Setup tab, and select Rename Storage Array.
   •In the EMW, right-click the icon corresponding to the array and select Rename.
Setting Up Your Storage Array
A list of initial setup tasks is displayed on the Setup tab in the AMW. The list
of initial setup tasks shows you how to set up a storage array. Using the steps
outlined in the Initial Setup Tasks area ensures that the basic setup steps are
completed properly.
Use the Initial Setup Tasks list the first time that you set up a storage array to
perform these tasks:
•Locate the storage array—Find the physical location of the storage array on
your network by turning on the unit identify LEDs. The storage array can
be identified with a label.
•Give a new name to the storage array—Use a unique name that identifies
each storage array.
•Set a storage array password—Configure the storage array with a password
to protect it from unauthorized access. MDSM prompts for the password
when an attempt is made to change the storage array configuration, such
as when a virtual disk is created or deleted.
•Configure iSCSI host ports—Configure network parameters for each
iSCSI host port automatically or specify the configuration information for
each iSCSI host port.
•Configure the storage array—Create disk groups, virtual disks, and hot
spare physical disks by using the Automatic configuration method or the
Manual configuration method. For more information, see the PowerVault
Modular Disk Storage Manager online help topics.
•Map virtual disks—Map virtual disks to hosts or host groups.
•Save configuration—Save the configuration parameters in a file that you
can use to restore the configuration, or reuse the configuration on another
storage array. For more information, see the PowerVault Modular Disk
Storage Manager online help topics.
After you complete the basic steps for configuring the storage array, you can
perform these optional tasks:
•Manually define hosts—Define the hosts and the host port identifiers that
are connected to the storage array. Use this option only if the host is not
automatically recognized and shown in the Mappings tab.
•Configure Ethernet management ports—Configure the network
parameters for the Ethernet management ports on the RAID controller
modules if you are managing the storage array by using the out-of-band
management connections.
•View and enable premium features—Your MDSM may include premium
features. View the premium features that are available and the premium
features that are already started. You can start available premium features
that are currently stopped.
•Manage iSCSI settings—You can configure iSCSI settings for
authentication, identification, and discovery.
Locating Storage Arrays
You can use the Blink option to physically locate and identify a storage array.
NOTE: If the LEDs from the Blink Storage Array operation do not stop blinking,
select Stop All Indications to stop the process manually.
To locate the storage array:
1 Select the relevant storage array and:
   •In the EMW, right-click the appropriate storage array, and select Blink Storage Array.
   •In the AMW, select the Setup tab, and click Blink Storage Array.
   •In the AMW, select Storage Array→Blink→Storage Array.
   The LEDs blink on the physical disks in the storage array.
2 After locating the storage array, click OK.
   The LEDs stop blinking.
3 If the LEDs do not stop blinking, select Storage Array→Blink→Stop All Indications.
   A confirmation message is displayed.
4 Click OK.
Naming or Renaming Storage Arrays
Each storage array must be assigned a unique name of up to 30 alphanumeric
characters. A name can consist of letters, numbers, and the special
characters underscore (_), dash (–), and pound sign (#). No other special
characters are allowed.
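The naming rule above can be captured in a short validator. This is an illustrative sketch of the stated constraints (1 to 30 characters; letters, numbers, underscore, dash, and pound sign only), not an MDSM API; the function name is ours.

```python
import re

# Rule from the manual: up to 30 characters drawn from letters, numbers,
# underscore (_), dash (-), and pound sign (#). No other characters.
NAME_RE = re.compile(r"^[A-Za-z0-9_\-#]{1,30}$")

def is_valid_array_name(name):
    """Return True if the name satisfies the stated naming rules."""
    return bool(NAME_RE.match(name))

print(is_valid_array_name("MD3200i_rack-4#2"))  # True
print(is_valid_array_name("too!risky"))         # False: '!' not allowed
print(is_valid_array_name("x" * 31))            # False: longer than 30
```

Checking candidate names this way before the Name/Rename dialog avoids a round trip to the array for names MDSM would reject anyway.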
To rename a selected storage array:
1 Perform one of these actions:
   •In the AMW Setup tab, select Rename Storage Array.
   •In the EMW Devices tab Tree view, select Edit→Rename.
   •In the EMW Devices tab Table view, select Edit→Rename.
   •In the EMW Devices tab Tree view, right-click on the desired array icon and select Rename.
   The Name/Rename Storage Arrays dialog is displayed.
2 Select the relevant storage array from the Select storage array table.
   If you do not know the name or physical location of the storage array, click Blink. After locating the storage array, click OK to turn off the LEDs.
   The name of the storage array is displayed in Storage array name.
3 In Storage array name, type the new name of the storage array. If applicable, add a comment for the storage array in Additional comment.
4 Click Apply.
   A message is displayed warning you about the implications of changing the storage array name.
5 Click Yes.
   The new storage array name is displayed in the Select storage array table.
6 Repeat step 2 through step 4 to name or rename additional storage arrays.
NOTE: Avoid arbitrary names or names that may lose meaning in the future.
Setting a Password
You can configure each storage array with a password to protect it from
unauthorized access. MDSM prompts for the password when an attempt is
made to change the storage array configuration, such as when a virtual disk is
created or deleted. View operations do not change the storage array
configuration and do not require a password. You can create a new password
or change an existing password.
NOTE: It is recommended that you use a long password with at least 15
alphanumeric characters to increase security.
To set a new password or change an existing password:
1 Select the relevant storage array and navigate to the AMW for that storage array. See "Array Management Window" on page 63.
   The AMW for the selected storage array is displayed.
2 In the AMW, perform one of these actions:
   •Select the storage array in the Logical pane, and then select Storage Array→Set Password.
   •Select the Setup tab, and then click Set a Storage Array Password.
   •Select the Logical tab, right-click and select Set Password.
   The Set Password dialog is displayed.
3 If you are resetting the password, type the Current password.
   NOTE: If you are setting the password for the first time, leave the Current password blank.
4 Type the New password.
5 Re-type the new password in Confirm new password.
   NOTE: The password in Confirm new password and New password must be exactly the same.
6 Click OK.
NOTE: You are not prompted for a password when you attempt to change the
storage array configuration in the current management session.
Password Guidelines
Follow these guidelines when you create a password:
•Use secure passwords for your storage array. A password must be easy for
you to remember but difficult for others to determine. Consider using
numbers or special characters in the place of letters, such as a 1 in the
place of the letter I, or the at sign (@) in the place of the letter a.
•For increased protection, use a long password with at least 15
alphanumeric characters. The maximum password length is 30 characters.
•Passwords are case sensitive.
NOTE: You can attempt to enter a password up to ten times before the storage
array enters a lockout state. Before you can try to enter a password again, you must
wait 10 minutes for the storage array to reset. To reset the password, press the
password reset switch on your RAID controller module, see Figure 3-1.
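The guidelines above reduce to a few checkable rules: a 30-character maximum, a recommended minimum of 15 alphanumeric characters, and case sensitivity. The sketch below applies them locally before you type a password into the Set Password dialog; the function and its messages are our own illustration, not part of MDSM.

```python
# Illustrative check of the password guidelines: 30-character maximum,
# at least 15 characters recommended, numbers or special characters
# encouraged. Passwords are case sensitive, so never fold case when
# comparing them.
def check_array_password(password):
    """Return a list of guideline violations (empty list = acceptable)."""
    problems = []
    if len(password) > 30:
        problems.append("longer than the 30-character maximum")
    if len(password) < 15:
        problems.append("shorter than the recommended 15 characters")
    if not any(c.isdigit() for c in password):
        # Advisory only: the manual suggests digits/specials, e.g. '1' for 'I'.
        problems.append("consider including numbers or special characters")
    return problems

print(check_array_password("Md3200i#R@ck4-Secure1"))  # []
print(check_array_password("short"))
```

Because the array locks out after ten failed attempts, validating locally first is cheap insurance against a 10-minute wait.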
Viewing Storage Array Connections
You can use the View Connections option to view the expansion enclosures
connected to the RAID controller module.
To view the storage array connections:
1 From the toolbar in the AMW, select Storage Array→View→Connections.
   The <Storage Array>: Connections dialog is displayed.
2 Click the column name to sort the connections according to your preference.
3 Click Close.
If you receive an error message for a port, you can use this dialog to identify
the components on the port that may have caused the error. By isolating these
components, you prevent accidentally disconnecting components that are
still in operation, which could cause an interruption in data flow.
Adding/Editing a Comment to an Existing Storage Array
A descriptive comment, with an applicable storage array name, is a helpful
identification tool. You can add or edit a comment for a storage array in the
EMW only.
To add or edit a comment:
1 In the EMW, select the Devices tab and select the relevant managed storage array.
2 Select Edit→Comment.
   The Edit Comment dialog is displayed.
3 Type a comment of up to 60 characters.
4 Click OK.
This option updates the comment in the Table view and saves it in your local
storage management station file system. The comment is not displayed to
administrators who are using other storage management stations.
Removing Storage Arrays
You can remove a storage array from the list of managed arrays if you no
longer want to manage it from a specific storage management station.
Removing a storage array does not affect the storage array or its data in any
way. Removing a storage array simply removes it from the list of storage arrays
that are displayed in the drop-down list in the Array Selector. If a storage array
is accidentally removed, it can be added again. See "Adding Storage Arrays"
on page 66.
You can remove the storage array only from the EMW.
To remove the storage array:
1 In the EMW, select the Devices tab and select the relevant managed storage array.
2 Select Edit→Remove→Storage Array.
   A message prompts you for a confirmation for the removal of the selected storage array.
3 To remove the storage array, click Yes.
Enabling Premium Features
You can enable premium features on the storage array. To enable the premium
features, you must obtain a feature key file specific to the premium feature
that you want to enable from your storage supplier.
To enable premium features:
1 From the toolbar in the AMW, select Storage Array→Premium Features.
   The Premium Features and Feature Pack Information window is displayed.
2 Select the relevant premium feature, and click Enable.
   The Select Feature Key File dialog is displayed.
3 Navigate to the relevant folder, select the appropriate key file, and click OK.
4 Click Close.
For more information, see the PowerVault Modular Disk Storage Manager online help topics.
Failover Alert Display
You can change the failover alert delay for a storage array. The failover alert
delay lets you delay the logging of a critical event if the multi-path driver
transfers virtual disks to the non-preferred controller. If the multi-path driver
transfers the virtual disks back to the preferred controller within the specified
delay period, a critical event is not logged. If the transfer exceeds this delay
period, then a virtual disk-not-on-preferred-path alert is issued as a critical
event. You can also use this option to minimize multiple alerts when more
than one virtual disk fails over because of a system error, such as a failed host
adapter.
For more information, see the PowerVault Modular Disk Storage Manager online help topics.
Changing the Cache Settings on the Storage Array
To change the storage array cache settings:
1 In the AMW, select Storage Array→Change→Cache Settings.
   The Change Cache Settings window is displayed.
2 Select or enter the percentage of unwritten data in the cache to trigger a cache flush in Start flushing.
3 Select or enter the percentage of unwritten data in the cache to stop a cache flush in progress in Stop flushing.
4 Select the appropriate Cache block size.
   A smaller cache size is a good choice for file-system use or database-application use. A larger cache size is a good choice for applications that generate sequential I/O, such as multimedia.
5 In the Enter Password dialog, type the current password for the storage array, and click OK.
Changing Expansion Enclosure ID Numbers
When an MD1200 Series expansion enclosure is connected to an MD3200i
Series storage array for the first time, an enclosure ID number is assigned and
maintained by the expansion enclosure. This enclosure ID number is also
shown in the MDSM.
To change the enclosure ID numbers:
1 In the AMW, select the storage array, and select Storage Array→Change→Enclosure ID.
2 Select a new enclosure ID number from the Change Enclosure ID list.
   The enclosure ID must be between 0 and 99 (inclusive).
3 To save the changed enclosure ID, click Change.
Changing the Enclosure Order in the Physical Pane
You can change the order of the RAID controller modules and the expansion
enclosures in the Physical pane to match the hardware configuration in your
storage array. The Physical pane that initially is displayed is a default view
that may not match your storage array. The enclosure order change remains in
effect until it is modified again.
To change the enclosure order in the Physical pane:
1 In the AMW, select Storage Array→Change→Enclosure Order.
2 From the enclosures list, select the enclosure you want to move and click either Up or Down to move the enclosure to the new position.
3 Click OK.
   If you have set a password for the selected storage array, the Enter Password dialog is displayed.
4 Type the current password for the storage array.
5 Click OK.
Configuring Alert Notifications
MDSM can send an alert for any condition on the storage array that requires
your attention. Alerts can be sent as e-mail messages or as Simple Network
Management Protocol (SNMP) trap messages.
You can configure alert notifications for all storage arrays or for a single
storage array.
To configure alert notifications for all storage arrays:
1 In the EMW, select the Setup tab.
2 Select Configure Alerts.
   The Configure Alerts dialog is displayed.
3 Select All storage arrays.
4 Click OK.
   The Configure Alerts dialog is displayed. To configure e-mail alerts, see "Configuring E-mail Alerts" on page 76. To configure SNMP alerts, see "Configuring SNMP Alerts" on page 79.
To configure alert notifications for a single storage array:
1 In the EMW, select the Devices tab.
2 Select the relevant storage array, then select Edit→Configure Alerts.
   The Configure Alerts dialog is displayed. To configure e-mail alerts, see "Configuring E-mail Alerts" on page 76. To configure SNMP alerts, see "Configuring SNMP Alerts" on page 79.
Configuring E-mail Alerts
For more information on configuring alert notifications, see "Configuring
Alert Notifications" on page 76.
To configure e-mail alerts:
1 Open the Configure Alerts dialog by performing one of these actions:
   •In the Tree view or the Table view on the Devices tab in the EMW, select a node, and then select Edit→Configure Alerts. Go to step 3.
   •In the Setup tab in the EMW, select Configure Alerts. Go to step 2.
2 Select one of the following radio buttons to specify an alert level:
   •All storage arrays—Select this option to send an alert e-mail about events on all storage arrays.
   •An individual storage array—Select this option to send an alert e-mail about events that occur on only a specified storage array.
   These results occur, depending on your selection:
   •If you selected all storage arrays, the Configure Alerts dialog is displayed.
   •If you selected an individual storage array, the Select Storage Array dialog is displayed. Select the storage array for which you want to receive e-mail alerts and click OK. The Configure Alerts dialog is displayed.
   •If you do not know which storage array to select, click Blink to turn on the LEDs of the storage array.
3 In the Configure Alerts dialog, select the Mail Server tab.
4 In Mail server, type the name of the Simple Mail Transfer Protocol (SMTP) mail server.
   The SMTP mail server is the name of the mail server that forwards the alert e-mails to the configured e-mail addresses.
5 In Email sender address, type the valid sender e-mail address.
   The e-mail address of the sender (the network administrator) is displayed on each e-mail alert sent to the destination.
6 To include the contact information of the sender in the e-mail alert, select Include contact information with the alerts, and type the contact information.
   NOTE: Including the contact information in the e-mail alert is optional.
7 Select the E-mail tab to configure the e-mail destinations.
   •Adding an e-mail address—In Email address, type the e-mail address, and click Add.
   •Replacing an e-mail address—In the Configured email addresses area, select the e-mail address to be replaced, type the replacement e-mail address in Email address, and click Replace.
   •Deleting an e-mail address—In the Configured email addresses area, select the e-mail address, and click Delete.
   •Validating an e-mail address—Type the e-mail address in Email address or select the e-mail address in the Configured email addresses area, and click Test. A test e-mail is sent to the selected e-mail address.
   A dialog with the results of the test and any error is displayed.
8 For the selected e-mail address, in Information To Send, select:
   •Event Only—The alert e-mail contains only the event information. This alert type is the default.
   •Event + Profile—The alert e-mail contains the event information and the storage array profile.
   •Event + Support—The alert e-mail contains the event information and a compressed file that contains complete support information for the storage array that has generated the alert.
9 For the selected e-mail address, in Frequency, select:
   •Every event—Sends an alert e-mail whenever an event occurs. This is the default option.
   •Every x hours—Sends an alert e-mail after the specified time interval if an event occurred during that time interval. You can select this option only if you have selected either Event + Profile or Event + Support in the Information To Send drop-down list.
10 Click OK.
   An alert icon is displayed next to each node in the Tree view where an alert is set.
To ensure that the e-mail is sent successfully:
•You must provide an SMTP mail server name and an e-mail sender address for the e-mail addresses to work.
•The e-mail addresses that you had previously configured are displayed in the Configured email addresses area.
•You must use fully qualified e-mail addresses, for example, name@mycompany.com.
•You can configure multiple e-mail addresses before you click OK.
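An alert e-mail of the kind MDSM sends amounts to an SMTP server name, a sender address, and fully qualified destination addresses, exactly the fields the dialog collects. The sketch below builds a comparable message with Python's standard library; the server name, addresses, and message format are placeholders, and MDSM's actual message layout may differ.

```python
import smtplib
from email.message import EmailMessage

def build_alert_email(sender, recipient, array_name, event_text):
    """Build an alert-style message (illustrative; not MDSM's format)."""
    msg = EmailMessage()
    msg["From"] = sender            # the configured sender address
    msg["To"] = recipient           # must be fully qualified, e.g. name@mycompany.com
    msg["Subject"] = f"Storage array alert: {array_name}"
    msg.set_content(event_text)
    return msg

def send_alert(msg, smtp_server="mail.mycompany.com"):
    """Hand the message to the SMTP server that forwards alert e-mails."""
    with smtplib.SMTP(smtp_server) as conn:
        conn.send_message(msg)

msg = build_alert_email("admin@mycompany.com", "oncall@mycompany.com",
                        "MD3200i-rack4", "Virtual disk not on preferred path")
print(msg["Subject"])  # Storage array alert: MD3200i-rack4
```

Sending a message like this manually through your SMTP server is a quick way to confirm the relay accepts mail from the management station before relying on the Test button.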
Configuring SNMP Alerts
To add a management console to the list of addresses configured to receive
SNMP alerts:
1 Open the Configure Alerts dialog by performing one of these actions:
   •In the Tree view or the Table view on the Devices tab in the EMW, select a node, and select Edit→Configure Alerts. Go to step 3.
   •In the Setup tab in the EMW, select Configure Alerts. Go to step 2.
2 Select one of the following radio buttons to specify an alert level:
   •All storage arrays—Select this option to send an alert notification about events on all storage arrays.
   •An individual storage array—Select this option to send an alert notification about events that occur in only a specified storage array.
   These results occur, depending on your selection:
   •If you selected All storage arrays, the Configure Alerts dialog is displayed.
   •If you selected An individual storage array, the Select Storage Array dialog is displayed. Select the storage array for which you want to receive alert notifications and click OK. The Configure Alerts dialog is displayed.
   NOTE: If you do not know which storage array to select, click Blink to turn on the LEDs of the storage array.
3 Select the SNMP tab to configure the SNMP alert destinations.
   •Adding an SNMP address—In Community name, type the community name. In Trap destination, type the trap destination, and click Add.
   NOTE: The community name is an American Standard Code for Information Interchange (ASCII) string that identifies a known set of network management stations and is set by the network administrator. The default community name is the string "public". The trap destination is the IP address or the host name of a computer running an SNMP management application. An example of an SNMP-enabled management application is the Dell Management Console. For more information on the Dell Management Console, see dell.com/support/manuals.
   •Replacing an SNMP address—Select the SNMP address in the Configured SNMP addresses area, type the replacement community name in Community name and the trap destination in Trap destination, and click Replace.
   •Deleting an SNMP address—Select the SNMP address in the Configured SNMP addresses area, and click Delete.
   •Validating an SNMP address—Select the SNMP address in the Configured SNMP addresses area, and click Test. A test message is sent to the SNMP address. A message box with the results of the validation and any error information is displayed.
4 Click OK.
   An alert icon is displayed next to each node in the Tree view for which an alert is set.
Follow these guidelines for SNMP alerts:
•Any SNMP addresses that you had previously configured are displayed in the Configured SNMP addresses area.
•The SNMP Community Name is determined by the system administrator and configured within the management application, such as the Dell Management Console. More information about the Dell Management Console is available at dell.com/support/manuals.
•You can configure multiple SNMP addresses before you click OK.
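The trap destination is simply a host listening for SNMP trap datagrams on UDP port 162. If no management application is installed yet, a bare-bones listener can confirm that the Test button's message actually reaches the station. This sketch only reports that a datagram arrived; it does not parse SNMP, and the port parameter exists because binding to 162 normally requires elevated privileges.

```python
import socket

def wait_for_trap(port=162, timeout=10.0):
    """Return (payload, sender) for the first UDP datagram received.

    Not an SNMP parser; just evidence that a trap-like datagram reached
    this host. Use an unprivileged port (e.g. 1620) for testing.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.bind(("", port))
        return sock.recvfrom(4096)

# Example: run wait_for_trap(port=1620), then click Test in MDSM with
# this station's IP as the trap destination (and matching port, if your
# tooling allows one).
```

A timeout raises `socket.timeout`, which is the signal that the trap never arrived, so check firewalls and the configured destination address.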
Battery Settings
A smart battery backup unit (BBU) can perform a learn cycle. The smart BBU
module includes the battery, a battery gas gauge, and a battery charger. The
learn cycle calibrates the smart battery gas gauge so that it provides a
measurement of the charge of the battery module. A learn cycle can only start
when the battery is fully charged.
The learn cycle completes the following operations:
•Discharges the battery to a predetermined threshold
•Charges the battery back to full capacity
A learn cycle starts automatically when you install a new battery module.
Learn cycles for batteries in both RAID controller modules in a duplex system
occur simultaneously.
Learn cycles are scheduled to start automatically at regular intervals, at the
same time and on the same day of the week. The interval between cycles is
described in weeks.
Use the following guidelines to adjust the interval:
•You can use the default interval.
•You can run a learn cycle at any time.
•You can set the learn cycle earlier than the currently scheduled time.
•You cannot set the learn cycle to start more than seven days later than the
currently scheduled time.
To change the battery settings, perform these steps:
1 In the AMW, select Storage Array→Change→Battery Settings.
   The Battery Settings dialog is displayed.
2 In Battery location, select a battery.
3 Check these details about the battery:
   •Battery status
   •Battery age
   •Days until replacement
For more information, see the PowerVault Modular Disk Storage Manager online help topics.
Setting the Storage Array RAID Controller Module Clocks
You can use the Synchronize RAID Controller Module Clocks option to
synchronize the storage array RAID controller module clocks with the storage
management station. This option makes sure that the event timestamps
written by the RAID controller modules to the Event Log match the event
timestamps written to host log files. The RAID controller modules remain
available during synchronization.
To synchronize the RAID controller module clocks with the storage
management station:
1 In the AMW, select Storage Array→ Synchronize RAID Controller
Module Clocks.
2 If a password is set, in the Enter Password dialog, type the current
password for the storage array, and click Synchronize.
The RAID controller module clocks are synchronized with the storage
management station.
Configuration: Using iSCSI
Changing the iSCSI Target Authentication
1 In the AMW, select the Setup tab.
2 Select Manage iSCSI Settings.
The Manage iSCSI Settings window is displayed and by default, the
Target Authentication tab is selected. To change the authentication
settings, select:
•None—If you do not require initiator authentication. If you select
None, any initiator can access the target.
•CHAP—To enable an initiator that tries to authenticate the target
using Challenge Handshake Authentication Protocol (CHAP). Define
the CHAP secret only if you want to use mutual CHAP
authentication. If you select CHAP, but no CHAP target secret is
defined, an error message is displayed. See "Creating CHAP Secrets"
on page 84.
3 To enter the CHAP secret, click CHAP secret.
The Enter Target CHAP Secret dialog is displayed.
4 Enter the Target CHAP secret.
NOTE: The Target CHAP secret must be between 12 and 57 characters.
NOTE: If you do not want to create a CHAP secret, you can generate a
random CHAP secret automatically. To generate a random CHAP secret, click
Generate Random CHAP Secret.
5 Enter the exact same target CHAP secret in Confirm target CHAP secret.
6 Click OK.
NOTE: You can select None and CHAP at the same time, for example,
when one initiator may not have CHAP and the other initiator has only CHAP
selected.
Entering Mutual Authentication Permissions
Mutual authentication, or two-way authentication, enables a client or a user
to verify itself to a host server, and the host server to validate itself to
the user. This validation is accomplished in such a way that both parties are
sure of the other’s identity.
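The two-way validation described above can be sketched with the standard CHAP response computation from RFC 1994, where each side challenges the other and checks MD5 over the identifier, the shared secret, and the challenge. This is a toy illustration, not the storage array's implementation; the secrets shown are invented:

```python
import hashlib
import os

# Toy sketch of the CHAP exchange used in each direction of mutual
# authentication: the verifier sends a challenge, and the prover answers
# with MD5(identifier + secret + challenge), per RFC 1994.
def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def verify(identifier: int, secret: bytes, challenge: bytes, response: bytes) -> bool:
    return chap_response(identifier, secret, challenge) == response

# One direction: the target verifies the initiator's secret.
challenge = os.urandom(16)
resp = chap_response(1, b"initiator-secret-12ch", challenge)
assert verify(1, b"initiator-secret-12ch", challenge, resp)

# Mutual: the initiator also challenges the target, using the target secret.
challenge2 = os.urandom(16)
resp2 = chap_response(2, b"target-secret-0123456", challenge2)
assert verify(2, b"target-secret-0123456", challenge2, resp2)
```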
To add mutual authentication permissions:
1 In the AMW, select the Setup tab.
2 Select Manage iSCSI Settings.
The Manage iSCSI Settings window is displayed.
3 Select the Mutual Authentication tab.
4 Select an initiator in the Select an Initiator area.
The initiator details are displayed.
5 Click CHAP Secret to enter the initiator CHAP permissions in the dialog
that is displayed.
6 Click OK.
7 Click OK in the Manage iSCSI Settings window.
For more information, see the PowerVault Modular Disk Storage Manager online help topics.
Creating CHAP Secrets
When you set up an authentication method, you can choose to create a
CHAP secret. The CHAP secret is a password that is recognized by the
initiator and the target. If you are configuring the storage array for mutual
authentication, you must enter the same CHAP secret that is defined in the
host server's iSCSI initiator, and you must define a CHAP secret on the target
(the storage array) and configure it in every iSCSI initiator that
connects to the target storage array. For more information on CHAP, see
“Understanding CHAP Authentication” in the Deployment Guide.
Initiator CHAP Secret
The initiator CHAP secret is set on the host using the iSCSI initiator
configuration program provided with the host operating system. If you are
using the mutual authentication method, you must define the initiator
CHAP secret when you set up the host. This must be the same CHAP secret
that is defined for the target when defining mutual authentication settings.
Target CHAP Secret
If you are using CHAP secrets, you must define the CHAP secret for the
target.
Valid Characters for CHAP Secrets
The CHAP secret must be between 12 and 57 characters. The CHAP secret
supports characters with ASCII values of 32 to 126 decimal. See Table 7-1 for
a list of valid ASCII characters.
Table 7-1. Valid ASCII Characters for CHAP Secrets
Space ! " # $ % & ' ( ) * +
, - . / 0 1 2 3 4 5 6 7
8 9 : ; < = > ? @ A B C
D E F G H I J K L M N O
P Q R S T U V W X Y Z [
\ ] ^ _ ` a b c d e f g
h i j k l m n o p q r s
t u v w x y z { | } ~
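The validity rules above (12 to 57 characters, each with an ASCII value from 32 to 126) can be checked, and a random secret generated, with a short sketch. The helper names are illustrative, not part of the MDSM API:

```python
import random
import string

# Sketch of the CHAP secret rules: 12-57 characters, each with an ASCII
# value from 32 (space) through 126 (~).
def is_valid_chap_secret(secret: str) -> bool:
    return 12 <= len(secret) <= 57 and all(32 <= ord(c) <= 126 for c in secret)

def random_chap_secret(length: int = 16) -> str:
    # Mimics the "Generate Random CHAP Secret" button using printable ASCII.
    alphabet = [chr(c) for c in range(32, 127)]
    return "".join(random.choice(alphabet) for _ in range(length))

print(is_valid_chap_secret("short"))                # False: fewer than 12 characters
print(is_valid_chap_secret("a proper secret #42"))  # True
print(is_valid_chap_secret(random_chap_secret()))   # True
```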
Changing the iSCSI Target Identification
You cannot change the iSCSI target name, but you can associate an alias with
the target for simpler identification. Aliases are useful because the iSCSI
target names are not intuitive. Provide an iSCSI target alias that is
meaningful and easy to remember.
To change the iSCSI target identification:
1 In the AMW, select the Setup tab.
2 Select Manage iSCSI Settings.
The Manage iSCSI Settings window is displayed.
3 Select the Target Identification tab.
4 Type the alias in iSCSI target alias.
5 Click OK.
NOTE: Aliases can contain up to 30 characters. Aliases can include letters,
numbers, and the special characters underscore (_), minus (-), and pound sign (#).
No other special characters are permitted.
NOTE: Open iSCSI (which is used by Red Hat Enterprise Linux 5 and SUSE Linux
Enterprise Server 10 with SP1) does not support using target aliases.
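The alias rules in the notes above (up to 30 characters; letters, numbers, underscore, minus, and pound sign only) can be captured in a one-line check. The function name is illustrative:

```python
import re

# Sketch of the iSCSI target alias rules: 1-30 characters, limited to
# letters, numbers, underscore (_), minus (-), and pound sign (#).
ALIAS_RE = re.compile(r"^[A-Za-z0-9_#-]{1,30}$")

def is_valid_target_alias(alias: str) -> bool:
    return bool(ALIAS_RE.match(alias))

print(is_valid_target_alias("md3200i-rack4_array#2"))  # True
print(is_valid_target_alias("bad alias!"))             # False: space and '!'
print(is_valid_target_alias("x" * 31))                 # False: longer than 30
```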
Changing the iSCSI Target Discovery Settings
To change the iSCSI target discovery settings:
1 In the AMW, select the Setup tab.
2 Select Manage iSCSI Settings.
The Manage iSCSI Settings window is displayed.
3 Select the Target Discovery tab.
4 Select Use iSNS to activate iSCSI target discovery.
To activate iSCSI target discovery, you can use one of the following
methods:
•Select Obtain configuration automatically from DHCP server to
automatically activate target discovery for IPv4 settings using the
Dynamic Host Configuration Protocol (DHCP). You can also refresh
the DHCP.
•Select Specify Configuration, and type the IPv4 address to activate
the target discovery.
•Type the iSNS server IP address in the IPv6 settings area to activate
the target discovery.
After you manually enter an IP address, you can also click Advanced to
configure the customized TCP listening ports.
If you do not want to allow discovery sessions that are not named, select
Disallow un-named discovery sessions.
NOTE: Un-named discovery sessions are discovery sessions that are
permitted to run without a target name. With an un-named discovery session,
the target name or the target portal group tag is not available to enforce the
iSCSI session identifier (ISID) rule.
5 Click OK.
Configuring the iSCSI Host Ports
The default method for configuring the iSCSI host ports, for IPv4 addressing,
is DHCP. Always use this method unless your network does not have a DHCP
server. It is advisable to assign static DHCP addresses to the iSCSI ports to
ensure continuous connectivity. For IPv6 addressing, the default method is
stateless auto-configuration. Always use this method for IPv6.
To configure the iSCSI host ports:
1 In the AMW, select the Setup tab.
2 Select Configure iSCSI Host Ports.
The Configure iSCSI Host Ports window is displayed.
3 In the iSCSI host port list, select an appropriate RAID controller module
and an iSCSI host port.
The connection status between the storage array and the host is displayed
in the Status area when you select an iSCSI host port. The connection
status is either connected or disconnected. Additionally, the media access
control (MAC) address of the selected iSCSI host port is displayed in the
MAC address area.
NOTE: For each iSCSI host port, you can use either IPv4 settings, IPv6
settings, or both.
4 To use the IPv4 settings for the iSCSI host port, select Enable IPv4 and
select the IPv4 Settings tab.
5 To use the IPv6 settings for the iSCSI host port, select Enable IPv6 and
select the IPv6 Settings tab.
6 To configure the IPv4 and IPv6 settings:
•To automatically configure the settings, select Obtain configuration
automatically. This option is selected by default.
•To manually configure the settings, select Specify configuration.
NOTE: If you select the automatic configuration method, the configuration is
obtained automatically using the DHCP for IPv4 settings. Similarly for IPv6
settings, the configuration is obtained automatically based on the MAC
address and the IPv6 routers present on the subnetwork.
You can click Advanced IPv4 Settings and Advanced IPv6 Settings to
configure the Virtual Local Area Network (VLAN) support and Ethernet
priority. You can also click Advanced Host Port Settings to configure
the TCP listening port and Jumbo frame settings.
7 To enable the Internet Control Message Protocol (ICMP), select Enable
ICMP PING responses.
The ICMP setting applies to all the iSCSI host ports in the storage array
configured for IPv4 addressing.
NOTE: The ICMP is one of the core protocols of the Internet Protocol suite.
The ICMP messages determine whether a host is reachable and how long it
takes to get packets to and from that host.
8 Click OK.
Advanced iSCSI Host Ports Settings
NOTE: Configuring the advanced iSCSI host ports settings is optional.
Use the advanced settings for the individual iSCSI host ports to specify the
TCP frame size, the virtual LAN, and the network priority.
Table 7-2. Advanced iSCSI Host Port Settings

Virtual LAN (VLAN)—A method of creating independent logical networks within a
physical network. Several VLANs can exist within a network.
VLAN 1 is the default VLAN.
For more information on creating and configuring a VLAN
with MD Support Manager, in the AMW, click the Support
tab, then click View Online Help.

Ethernet Priority—The network priority can be set from lowest to highest.
Although network managers must determine these mappings,
the IEEE has made broad recommendations:
• 0—lowest priority (default).
• 1 to 4—ranges from “loss eligible” traffic to controlled-load
applications, such as streaming multimedia and business-critical traffic.
• 5 and 6—delay-sensitive applications such as interactive
video and voice.
• 7—highest priority reserved for network-critical traffic (do
not use with the MD3200i).

TCP Listening Port—The default Transmission Control Protocol (TCP) listening
port is 3260.

Jumbo Frames—The maximum transmission units (MTUs). It can be set
between 1501 and 9000 bytes per frame. If the Jumbo Frames
are disabled, the default MTU is 1500 bytes per frame.
NOTE: Changing any of these settings resets the iSCSI port. I/O to any host
accessing that port is interrupted. I/O access is restored automatically after the
port restarts and the host logs in again.
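The ranges in Table 7-2 can be checked with a short sketch before applying a change. The function and parameter names are illustrative, not part of the MDSM interface:

```python
# Illustrative validator for the Table 7-2 ranges: Ethernet priority 7 is
# reserved (not for use with the MD3200i), and the jumbo frame MTU must
# fall between 1501 and 9000 bytes when jumbo frames are enabled.
def validate_advanced_port_settings(vlan_id=1, ethernet_priority=0,
                                    tcp_port=3260, jumbo_frames=False, mtu=1500):
    if not 0 <= ethernet_priority <= 6:
        raise ValueError("priority 7 is reserved; do not use with the MD3200i")
    if jumbo_frames:
        if not 1501 <= mtu <= 9000:
            raise ValueError("jumbo frame MTU must be 1501-9000 bytes")
    elif mtu != 1500:
        raise ValueError("with jumbo frames disabled the MTU is 1500 bytes")
    return {"vlan": vlan_id, "priority": ethernet_priority,
            "port": tcp_port, "mtu": mtu}

print(validate_advanced_port_settings(jumbo_frames=True, mtu=9000)["mtu"])  # 9000
```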
Viewing or Ending an iSCSI Session
You may want to end an iSCSI session for the following reasons:
•Unauthorized access—If an initiator that should not have access is
logged on, you can end the iSCSI session. Ending the iSCSI
session forces the initiator to log off the storage array. The initiator can log
on again if the None authentication method is available.
•System downtime—If you need to turn off a storage array and initiators are
logged on, you can end the iSCSI session to log off the initiators from the
storage array.
To view or end an iSCSI session:
1 In the AMW toolbar, select Storage Array→ iSCSI→ End Sessions.
2 Select the iSCSI session that you want to view in the Current sessions
area.
The details are shown below in the Details area. You can click Save As to
save the entire iSCSI sessions topology as a text file.
3 To end the session:
a Select the session that you want to end, and then click End Session.
The End Session confirmation window is displayed.
b Type yes to confirm that you want to end the iSCSI session.
c Click OK.
NOTE: If you end a session, any corresponding connections terminate the link
between the host and the storage array, and the data on the storage array is no
longer available.
NOTE: When a session is manually terminated using the MDSM, the iSCSI initiator
software automatically attempts to re-establish the terminated connection to the
storage array. This may cause an error message.
Viewing iSCSI Statistics and Setting Baseline Statistics
To view iSCSI statistics and set baseline statistics:
1 In the AMW toolbar, select Storage Array→ iSCSI→ Statistics.
The View iSCSI Statistics window is displayed.
2 Select the iSCSI statistic type you want to view in the iSCSI Statistics
Type area. You can select:
•Ethernet MAC statistics
•Ethernet TCP/IP statistics
•Target (protocol) statistics
3 In the Options area, select:
•Raw statistics—To view the raw statistics. Raw statistics are all the
statistics that are gathered since the RAID controller modules were
powered on.
•Baseline statistics—To view the baseline statistics. Baseline statistics
are point-in-time statistics that are gathered since you set the baseline
time.
After you select the statistics type and either raw or baseline statistics, the
details of the statistics are displayed in the statistics tables.
NOTE: You can click Save As to save the statistics that you are viewing in a
text file.
4 To set the baseline for the statistics:
a Select Baseline statistics.
b Click Set Baseline.
c Confirm that you want to set the baseline statistics in the dialog that
is displayed.
The baseline time shows the latest time you set the baseline. The sampling
interval is the difference in time from when you set the baseline until you
launch the dialog or click Refresh.
NOTE: You must first set a baseline before you can compare baseline
statistics.
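The relationship between raw and baseline statistics described above can be sketched as simple counter arithmetic: setting a baseline records a point-in-time snapshot of the raw counters, and the baseline view reports the raw counters minus that snapshot. The counter names below are invented for illustration:

```python
# Sketch of baseline statistics: a snapshot of the raw counters is taken at
# baseline time, and the baseline view is the raw counters minus it.
def set_baseline(raw_counters: dict) -> dict:
    return dict(raw_counters)  # snapshot at baseline time

def baseline_view(raw_counters: dict, baseline: dict) -> dict:
    return {name: raw_counters[name] - baseline.get(name, 0)
            for name in raw_counters}

raw = {"mac_tx_frames": 10_000, "tcp_retransmits": 42}
baseline = set_baseline(raw)
raw = {"mac_tx_frames": 10_750, "tcp_retransmits": 45}  # counters keep rising
print(baseline_view(raw, baseline))  # {'mac_tx_frames': 750, 'tcp_retransmits': 3}
```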
Edit, Remove, or Rename Host Topology
If you give access to the wrong host or the wrong host group, you can remove
or edit the host topology. To correct the host topology:
Table 7-3. Host Topology Actions

Move a host or a host group:
1 Click the Mappings tab.
2 Select the host that you want to move, and then click Mappings→ Move.
3 Select a host group to move the host to and click OK.

Manually delete the host or the host group:
1 Click the Mappings tab.
2 Select the item that you want to remove and select Mappings→ Remove.

Rename the host or the host group:
1 Click the Mappings tab.
2 Select the item that you want to rename and select Mappings→ Rename.
3 Type a new label for the host and click OK.

For more information about hosts, host groups, and host topology, see
"Configuration: About Your Host" on page 95.
8 Configuration: Event Monitor
An event monitor is provided with Dell PowerVault Modular Disk Storage
Manager (MDSM). The event monitor runs continuously in the background
and monitors activity on the managed storage arrays. If the event monitor
detects any critical problems, it can notify a host or remote system using email, Simple Network Management Protocol (SNMP) trap messages, or both.
For the most timely and continuous notification of events, enable the event
monitor on a management station that runs 24 hours a day. Enabling the
event monitor on multiple systems or having a combination of an event
monitor and MDSM active can result in duplicate events, but this does not
indicate multiple failures on the array.
To use the Event Monitor:
•Set up alert destinations for the managed device that you want to monitor.
A possible alert destination would be the Dell Management Console. More
information about the Dell Management Console can be found at
dell.com/support.
•Replicate the alert settings from a particular managed device by copying
the emwdata.bin file to every storage management station from which you
want to receive alerts.
Each managed device shows a check mark that indicates that alerts are set.
Enabling or Disabling the Event Monitor
You can enable or disable the event monitor at any time.
Disable the event monitor if you do not want the system to send alert
notifications. If you are running the event monitor on multiple systems,
disabling the event monitor on all but one system prevents the sending of
duplicate messages.
NOTE: It is recommended that you configure the event monitor to start by default
on a management station that runs 24 hours a day.
Windows
To enable or disable the event monitor:
1 Click Start→ Administrative Tools→ Services.
or
Click Start→ Settings→ Control Panel→ Administrative Tools→
Services.
2 From the list of services, select Modular Disk Storage Manager Event
Monitor.
3 Select Action→ Properties.
4 To enable the event monitor, in the Service Status area, click Start.
5 To disable the event monitor, in the Service Status area, click Stop.
Linux
To enable the event monitor, at the command prompt, type SMmonitor
start and press <Enter>. When the program startup begins, the system
displays the following message:
SMmonitor started.
To disable the event monitor, start a terminal emulation application (console
or xterm) and at the command prompt, type SMmonitor stop and press
<Enter>. When the program shutdown is complete, the following message
is displayed:
Stopping Monitor process.
9 Configuration: About Your Host
Configuring Host Access
Modular Disk Storage Manager (MDSM) software is comprised of multiple
modules. One of these modules is the Host Context Agent. The host context
agent is installed as part of the MDSM installation and runs continuously in
the background.
If the host context agent is running on a host, the host and the host ports
connected from it to the storage array are automatically detected by MDSM
and are displayed on the Mappings tab in the Array Management Window
(AMW). For more information on the Mappings tab, see "Using the
Mappings Tab" on page 96.
NOTE: The host context agent is not dynamic and must be restarted after
establishing iSCSI sessions for MD3200i Series storage arrays to automatically
detect them.
If the hosts are not detected automatically, you can use the Define Host
Wizard to define the hosts that access the virtual disks in the storage array.
Defining a host is one of the steps required to let the storage array know
which hosts are attached to it and to allow access to the virtual disks. For
more information on defining the hosts, see "Defining a Host" on page 96.
To enable the host to write to the storage array, you must map the host to the
virtual disk. This mapping grants a host or a host group access to a particular
virtual disk or to a number of virtual disks in a storage array. You can define
the mappings on the Mappings tab in the AMW.
On the Summary tab in the AMW, the Hosts & Mappings area indicates how
many hosts are configured to access the storage array. You can click
Configured Hosts in the Hosts & Mappings area to see the names of the
hosts.
A collection of elements, such as default host groups, hosts, and host ports,
are displayed as nodes in the Topology pane of the Mappings tab in the
AMW.
The host topology is reconfigurable. You can perform the following tasks:
•Create a host and assign an alias or user label.
•Add or associate a new host port identifier to a particular host.
•Change the host port identifier alias or user label.
•Move or associate a host port identifier to a different host.
•Replace a host port identifier with a new host port identifier.
•Manually activate an inactive host port so that the port can gain access to
host specific or host group specific LUN mappings.
•Change the host port type to another type.
•Move a host from one host group to another host group.
•Remove a host group, a host, or a host port identifier.
•Rename a host group or a host.
Using the Mappings Tab
In the Mappings tab, you can:
•Define hosts and hosts groups
•Add mappings to the selected host groups
For more information, see the PowerVault Modular Disk Storage Manager online help topics.
Defining a Host
You can use the Define Host Wizard in the AMW to define a host for a
storage array. Either a known unassociated host port identifier or a new host
port identifier can be added.
NOTE: A user label must be specified before the host port identifier may be added
(the add button is disabled until one is entered).
To define a host:
1 In the AMW, select the Mappings tab and select the appropriate storage
array.
2 Perform one of the actions:
•Select Mappings→ Define→ Host.
•Select the Setup tab, and click Manually Define Hosts.
•Select the Mappings tab. Right-click the root node (storage array
name), Default Group node, or Host Group node in the Topology
pane to which you want to add the host, and select Define Host
from the pop-up menu.
The Specify Host Name window is displayed.
3 In Host name, enter an up to 30 character alphanumeric name.
4 Select the relevant option in Do you plan to use the storage partitions in
this storage array? and click Next.
The Specify Host Port Identifiers window is displayed.
5 Select the relevant option to add a host port identifier to the host. You can
select:
•Add by selecting a known unassociated host port identifier—In
Known unassociated host port identifiers, select the relevant host
port identifier.
•Add by creating a new host port identifier—In New host port
identifier, enter a 16 hexadecimal character name and an up to 30
character Alias for the host port identifier and click Add.
NOTE: The host port identifier name is in hexadecimal and must contain the letters A
through F and numbers 0 through 9.
6 Click Next.
The Specify Host Type window is displayed.
7 In Host type, select the relevant operating system for the host.
The Host Group Question window is displayed.
8 In this window, you can select:
•Yes—this host shares access to the same virtual disks with other hosts.
•No—this host does NOT share access to the same virtual disks with
other hosts.
9 Click Next.
If you select Yes, the Specify Host Group window is displayed. If you
select No, see step 11.
10 Enter the name of the host group or select an existing host group and click
Next.
The Preview window is displayed.
11 Click Finish.
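The wizard's input rules above (a host name of up to 30 alphanumeric characters; a host port identifier of 16 hexadecimal characters) can be sketched as small checks. The helper names are hypothetical, and accepting lowercase hex digits is an assumption here, since the manual lists A through F:

```python
import re

# Illustrative validators for the Define Host Wizard inputs: an
# alphanumeric host name of 1-30 characters, and a host port identifier of
# exactly 16 hexadecimal characters.
def is_valid_host_name(name: str) -> bool:
    return 1 <= len(name) <= 30 and name.isalnum()

def is_valid_host_port_identifier(identifier: str) -> bool:
    return bool(re.fullmatch(r"[0-9A-Fa-f]{16}", identifier))

print(is_valid_host_name("DbServer01"))                   # True
print(is_valid_host_name("name with spaces"))             # False
print(is_valid_host_port_identifier("20010203AABBCCDD"))  # True
print(is_valid_host_port_identifier("XYZ"))               # False
```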
Removing Host Access
To remove host access:
1 In the AMW, select the Mappings tab, select the host node in the
Topology pane.
2 Perform one of these actions:
•Select Mappings→ Remove.
•Right-click the host node and select Remove from the pop-up menu.
The Remove confirmation dialog is displayed.
3 Type yes.
4 Click OK.
For more information, see the PowerVault Modular Disk Storage Manager online help topics.
Managing Host Groups
A host group is a logical entity of two or more hosts that share access to
specific virtual disks on the storage array. You create host groups with MDSM.
All hosts in a host group must have the same host type (operating system). In
addition, all hosts in the host group must have special software, such as
clustering software, to manage virtual disk sharing and accessibility.
If a host is part of a cluster, every host in the cluster must be connected to the
storage array, and every host in the cluster must be added to the host group.
Creating a Host Group
To create host groups:
1 In the AMW, select the Mappings tab.
2 In the Topology pane, select the storage array or the Default Group.
3 Perform one of the following actions:
•Select Mappings→ Define→ Host Group.
•Right-click the storage array or the Default Group, and select
Define Host Group from the pop-up menu.
4 Type the name of the new host group in Enter new host group name.
5 Select the appropriate hosts in the Select hosts to add area.
6 Click Add.
NOTE: To remove hosts, select the hosts in the Hosts in group area, and click
Remove.
7 Click OK.
The host group is added to the storage array.
Adding a Host to a Host Group
You can add a host to an existing host group or a new host group using the
Define Host Wizard. For more information, see "Defining a Host" on
page 96.
You can also move a host to a different host group. For more information, see
"Moving a Host to a Different Host Group" on page 99.
Removing a Host From a Host Group
You can remove a host from the Topology pane on the Mappings tab of the
Array Management Window. For more information, see "Removing a Host
Group" on page 100.
Moving a Host to a Different Host Group
To move a host to a different host group:
1 In the AMW, select the Mappings tab, select the host node in the
Topology pane.
2 Perform one of these actions:
•Select Mappings→ Move.
•Right-click the host node, and select Move from the pop-up menu.
The Move Host dialog is displayed.
3 In Select host group, select the host group to which you want to move
the host.
The Move Host Confirmation dialog is displayed.
4 Click Yes.
The host is moved to the selected host group with the following mappings:
•The host retains the specific virtual disk mappings assigned to it.
•The host inherits the virtual disk mappings assigned to the host group
to which it is moved.
•The host loses the virtual disk mappings assigned to the host group
from which it was moved.
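The three mapping rules above can be expressed as simple set operations: the host's effective mappings after a move are its own host-specific mappings plus the new group's mappings, with the old group's mappings dropping away. The virtual disk names are invented for illustration:

```python
# Sketch of the mapping rules after moving a host between host groups: the
# host keeps its host-specific mappings, gains the new group's mappings,
# and loses the old group's mappings.
def mappings_after_move(host_specific: set, old_group: set, new_group: set) -> set:
    return set(host_specific) | set(new_group)

host_specific = {"vd_boot"}
old_group = {"vd_cluster_a"}
new_group = {"vd_cluster_b", "vd_quorum"}
result = mappings_after_move(host_specific, old_group, new_group)
print(sorted(result))  # ['vd_boot', 'vd_cluster_b', 'vd_quorum']
```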
Removing a Host Group
To remove a host group:
1 In the AMW, select the Mappings tab, select the host node in the
Topology pane.
2 Perform one of these actions:
•Select Mappings→ Remove.
•Right-click the host node, and select Remove from the pop-up menu.
The Remove dialog is displayed.
3 Click Yes.
The selected host group is removed.
For more information, see the PowerVault Modular Disk Storage Manager online help topics.
Host Topology
Host topology is the organization of hosts, host groups, and host interfaces
configured for a storage array. You can view the host topology in the
Mappings tab of the AMW. For more information, see "Using the Mappings
Tab" on page 96.
The following tasks change the host topology:
•Moving a host or a host connection
•Renaming a host group, a host, or a host connection
•Adding a host connection
•Replacing a host connection