CAUTION: See the Safety, Environmental, and Regulatory Information document for important
safety information before following any procedures listed in this document.
The following MD Series systems are supported by the latest version of Dell PowerVault Modular Disk
Storage Manager (MDSM):
•2U MD Series systems:
– Dell PowerVault MD 3400/3420
– Dell PowerVault MD 3800i/3820i
– Dell PowerVault MD 3800f/3820f
•4U (dense) MD Series systems:
– Dell PowerVault MD 3460
– Dell PowerVault MD 3860i
– Dell PowerVault MD 3860f
NOTE: Your Dell MD Series storage array supports two expansion enclosures (180 physical disks)
after you install the Additional Physical Disk Support Premium Feature Key. To order the
Additional Physical Disk Support Premium Feature Key, contact Dell Support.
1
Dell PowerVault Modular Disk Storage Manager
Dell PowerVault Modular Disk Storage Manager (MD Storage Manager) is a graphical user interface (GUI)
application used to configure and manage one or more MD Series storage arrays. The MD Storage
Manager software is located on the MD Series resource DVD.
User Interface
The Storage Manager screen is divided into two primary windows:
•Enterprise Management Window (EMW) — The EMW provides high-level management of multiple
storage arrays. You can launch the Array Management Windows for the storage arrays from the EMW.
•Array Management Window (AMW) — The AMW provides management functions for a single storage
array.
The EMW and the AMW consist of the following:
•The title bar at the top of the window — Shows the name of the application.
•The menu bar, beneath the title bar — You can select menu options from the menu bar to perform
tasks on a storage array.
•The toolbar, beneath the menu bar — You can select options in the toolbar to perform tasks on a
storage array.
NOTE: The toolbar is available only in the EMW.
•The tabs, beneath the toolbar — Tabs are used to group the tasks that you can perform on a storage
array.
•The status bar, beneath the tabs — The status bar shows status messages and status icons related to
the storage array.
NOTE: By default, the toolbar and status bar are not displayed. To view the toolbar or the status
bar, select View → Toolbar or View → Status Bar.
Enterprise Management Window
The EMW provides high-level management of storage arrays. When you start the MD Storage Manager,
the EMW is displayed. The EMW has the following tabs:
•Devices tab — Provides information about discovered storage arrays.
•Setup tab — Presents the initial setup tasks that guide you through adding storage arrays and
configuring alerts.
The Devices tab has a Tree view on the left side of the window that shows discovered storage arrays,
unidentified storage arrays, and the status conditions for the storage arrays. Discovered storage arrays are
managed by the MD Storage Manager. Unidentified storage arrays are available to the MD Storage
Manager but not configured for management. The right side of the Devices tab has a Table view that
shows detailed information for the selected storage array.
In the EMW, you can:
•Discover hosts and managed storage arrays on the local sub-network.
•Manually add and remove hosts and storage arrays.
•Blink or locate the storage arrays.
•Name or rename discovered storage arrays.
•Add comments for a storage array in the Table view.
•Schedule or automatically save a copy of the support data when the client monitor process detects an
event.
•Store your EMW view preferences and configuration data in local configuration files. The next time
you open the EMW, data from the local configuration files is used to show customized view and
preferences.
•Monitor the status of managed storage arrays and indicate status using appropriate icons.
•Add or remove management connections.
•Configure alert notifications for all selected storage arrays through e-mail or SNMP traps.
•Report critical events to the configured alert destinations.
•Launch the AMW for a selected storage array.
•Run a script to perform batch management tasks on specific storage arrays (see the sketch after this list).
•Import the operating system theme settings into the MD Storage Manager.
•Upgrade firmware on multiple storage arrays concurrently.
•Obtain information about the firmware inventory including the version of the RAID controller
modules, physical disks, and the enclosure management modules (EMMs) in the storage array.
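Batch management tasks such as the scripted operations mentioned in the list above are normally driven through the SMcli command-line utility described in the CLI Guide. The following minimal Python sketch shows one way a batch run across several arrays could be wrapped. The management IP addresses, the SMcli location on your management station, and the script command are assumptions for illustration only; see the CLI Guide for the exact syntax supported by your array.

# Illustrative only: run the same SMcli script command against several arrays.
# SMcli must be installed and on the PATH of the management station (an assumption),
# and the IP addresses below are hypothetical.
import subprocess

ARRAYS = ["192.168.10.101", "192.168.10.102"]   # hypothetical management IPs
COMMAND = "show storageArray healthStatus;"      # example script command

for ip in ARRAYS:
    # General form: SMcli <controller address> -c "<script commands>"
    result = subprocess.run(
        ["SMcli", ip, "-c", COMMAND],
        capture_output=True, text=True
    )
    print(f"--- {ip} ---")
    print(result.stdout)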
Inheriting The System Settings
Use the Inherit System Settings option to import the operating system theme settings into the MD
Storage Manager. Importing system theme settings affects the font type, font size, color, and contrast in
the MD Storage Manager.
1.From the EMW, open the Inherit System Settings window in one of these ways:
– Select Tools → Inherit System Settings.
– Select the Setup tab, and under Accessibility, click Inherit System Settings.
2.Select Inherit system settings for color and font.
3.Click OK.
Array Management Window
You can launch the AMW from the EMW. The AMW provides management functions for a single storage
array. You can have multiple AMWs open simultaneously to manage different storage arrays.
In the AMW, you can:
•Select storage array options — For example, renaming a storage array, changing a password, or
enabling a background media scan.
•Configure virtual disks and disk pools from the storage array capacity, define hosts and host groups,
and grant host or host group access to sets of virtual disks called storage partitions.
•Monitor the health of storage array components and report detailed status using applicable icons.
•Perform recovery procedures for a failed logical component or a failed hardware component.
•View the Event Log for a storage array.
•View profile information about hardware components, such as RAID controller modules and physical
disks.
•Manage RAID controller modules — For example, changing ownership of virtual disks or placing a
RAID controller module online or offline.
•Manage physical disks — For example, assignment of hot spares and locating the physical disk.
•Monitor storage array performance.
To launch the AMW:
1.In the EMW, on the Devices tab, right-click on the relevant storage array.
The context menu for the selected storage array is displayed.
2.In the context menu, select Manage Storage Array.
The AMW for the selected storage array is displayed.
NOTE: You can also launch the AMW by:
– Double-clicking on a storage array displayed in the Devices tab of the EMW.
– Selecting a storage array displayed in the Devices tab of the EMW, and then selecting Tools
→ Manage Storage Array.
The AMW has the following tabs:
•Summary tab — You can view the following information about the storage array:
– Status
– Hardware
– Storage and copy services
– Hosts and mappings
– Information on storage capacity
– Premium features
•Performance tab — You can track a storage array’s key performance data and identify performance
bottlenecks in your system. You can monitor the system performance in the following ways:
– Real-time graphical
– Real-time textual
– Background (historical)
•Storage & Copy Services tab — You can view and manage the organization of the storage array by
virtual disks, disk groups, free capacity nodes, and any unconfigured capacity for the storage array.
•Host Mappings tab — You can define the hosts, host groups, and host ports. You can change the
mappings to grant virtual disk access to host groups and hosts and create storage partitions.
•Hardware tab — You can view and manage the physical components of the storage array.
•Setup tab — Shows a list of initial setup tasks for the storage array.
Dell PowerVault Modular Disk Configuration Utility
NOTE: Dell PowerVault Modular Disk Configuration Utility (MDCU) is supported only on MD Series
storage arrays that use the iSCSI protocol.
MDCU is an iSCSI Configuration Wizard that can be used in conjunction with MD Storage Manager to
simplify the configuration of iSCSI connections. The MDCU software is available on the MD Series
resource media.
Other Information You May Need
WARNING: See the safety and regulatory information that shipped with your system. Warranty
information may be included within this document or as a separate document.
NOTE: All the documents, unless specified otherwise, are available at dell.com/support/manuals.
•The Getting Started Guide provides an overview of setting up and cabling your storage array.
•The Deployment Guide provides installation and configuration instructions for both software and
hardware.
•The Owner’s Manual provides information about system features and describes how to troubleshoot
the system and install or replace system components.
•The CLI Guide provides information about using the command line interface (CLI).
•The MD Series resource media contains all system management tools.
•The Dell PowerVault MD Series Support Matrix provides information on supported software and
hardware for MD systems.
•Information Updates or readme files are included to provide last-minute updates to the enclosure or
documentation, or advanced technical reference material intended for experienced users or
technicians.
•For video resources on PowerVault MD storage arrays, go to dell.com/techcenter.
•For the full name of an abbreviation or acronym used in this document, see the Glossary at dell.com/
support/manuals.
NOTE: Always check for updates on dell.com/support/manuals and read the updates first because
they often supersede information in other documents.
2
About Your MD Series Storage Array
This chapter describes storage array concepts that help in configuring and operating the Dell MD
Series storage arrays.
Physical Disks, Virtual Disks, And Disk Groups
Physical disks in your storage array provide the physical storage capacity for your data. Before you can
begin writing data to the storage array, you must configure the physical storage capacity into logical
components, called disk groups and virtual disks.
A disk group is a set of physical disks upon which multiple virtual disks are created. The maximum
number of physical disks supported in a disk group is:
•96 disks for RAID 0, RAID 1, and RAID 10
•30 disks for RAID 5 and RAID 6
You can create disk groups from unconfigured capacity on your storage array.
A virtual disk is a partition in a disk group that is made up of contiguous data segments of the physical
disks in the disk group. A virtual disk consists of data segments from all physical disks in the disk group.
All virtual disks in a disk group support the same RAID level. The storage array supports up to 255 virtual
disks (minimum size of 10 MB each) that can be assigned to host servers. Each virtual disk is assigned a
Logical Unit Number (LUN) that is recognized by the host operating system.
Virtual disks and disk groups are set up according to how you plan to organize your data. For example,
you can have one virtual disk for inventory, a second virtual disk for financial and tax information, and so
on.
Physical Disks
Only Dell-supported physical disks can be used in the storage array. If the storage array detects
unsupported physical disks, it marks the disk as unsupported and the physical disk becomes unavailable
for all operations.
For the list of supported physical disks, see the Support Matrix at dell.com/support/manuals.
Physical Disk States
The following table describes the various physical disk states that are recognized by the storage array
and reported in MD Storage Manager.
Status | Mode | Description
Optimal | Assigned | The physical disk in the indicated slot is configured as part of a disk group.
Optimal | Unassigned | The physical disk in the indicated slot is unused and available to be configured.
Optimal | Hot Spare Standby | The physical disk in the indicated slot is configured as a hot spare.
Optimal | Hot Spare in use | The physical disk in the indicated slot is in use as a hot spare within a disk group.
Failed | Assigned, Unassigned, Hot Spare in use, or Hot Spare Standby | The physical disk in the indicated slot has failed because of an unrecoverable error, an incorrect drive type or drive size, or by its operational state being set to failed.
Replaced | Assigned | The physical disk in the indicated slot has been replaced and is ready to be, or is actively being, configured into a disk group.
Pending Failure | Assigned, Unassigned, Hot Spare in use, or Hot Spare Standby | A Self-Monitoring Analysis and Reporting Technology (SMART) error has been detected on the physical disk in the indicated slot.
Offline | Not applicable | The physical disk has either been spun down or had a rebuild aborted by user request.
Identify | Assigned, Unassigned, Hot Spare in use, or Hot Spare Standby | The physical disk is being identified.
Virtual Disks And Disk Groups
When configuring a storage array, you must:
•Organize the physical disks into disk groups.
•Create virtual disks within these disk groups.
•Provide host server access.
•Create mappings to associate the virtual disks with the host servers.
NOTE: Host server access must be created before mapping virtual disks.
Disk groups are always created in the unconfigured capacity of a storage array. Unconfigured capacity is
the available physical disk space not already assigned in the storage array.
Virtual disks are created within the free capacity of a disk group. Free capacity is the space in a disk group
that has not been assigned to a virtual disk.
Virtual Disk States
The following table describes the various states of the virtual disk, recognized by the storage array.
Table 1. RAID Controller Virtual Disk States
State | Description
Optimal | The virtual disk contains physical disks that are online.
Degraded | The virtual disk with a redundant RAID level contains an inaccessible physical disk. The system can still function properly, but performance may be affected and additional disk failures may result in data loss.
Offline | A virtual disk with one or more member disks in an inaccessible (failed, missing, or offline) state. Data on the virtual disk is no longer accessible.
Force online | The storage array forces a virtual disk that is in an Offline state to an Optimal state. If all the member physical disks are not available, the storage array forces the virtual disk to a Degraded state. The storage array can force a virtual disk to an Online state only when a sufficient number of physical disks are available to support the virtual disk.
Disk Pools
Disk pooling allows you to distribute data from each virtual disk randomly across a set of physical disks.
Although there is no limit on the maximum number of physical disks that can comprise a disk pool, each
disk pool must have a minimum of 11 physical disks. Additionally, the disk pool cannot contain more
physical disks than the maximum limit for each storage array.
Thin Virtual Disks
Thin virtual disks can be created from an existing disk pool. Creating thin virtual disks allows you to set up
a large virtual space, but only use the actual physical space as you need it.
RAID Levels
RAID levels determine the way in which data is written to physical disks. Different RAID levels provide
different levels of accessibility, redundancy, and capacity.
Using multiple physical disks has the following advantages over using a single physical disk:
•Placing data on multiple physical disks (striping) allows input/output (I/O) operations to occur
simultaneously, improving performance.
•Storing redundant data on multiple physical disks using mirroring or parity supports reconstruction of
lost data if an error occurs, even if that error is the failure of a physical disk.
Each RAID level provides different performance and protection. You must select a RAID level based on
the type of application, access, fault tolerance, and data you are storing.
The storage array supports RAID levels 0, 1, 5, 6, and 10. The maximum number of physical disks that
can be used in a disk group depends on the RAID level:
•120 (180 with PFK) for RAID 0, 1, and 10
•30 for RAID 5 and 6
Maximum Physical Disk Support Limitations
Although PowerVault MD Series storage arrays with the premium feature kit can support up to 180 physical
disks, RAID 0 and RAID 10 configurations with more than 120 physical disks are not supported. MD
Storage Manager does not enforce the 120-physical-disk limit when you set up a RAID 0 or RAID 10
configuration. Exceeding the 120-physical-disk limit may cause your storage array to become unstable.
RAID Level Usage
To ensure best performance, you must select an optimal RAID level when you create a virtual
disk. The optimal RAID level for your disk array depends on:
•Number of physical disks in the disk array
•Capacity of the physical disks in the disk array
•Need for redundant access to the data (fault tolerance)
•Disk performance requirements
RAID 0
CAUTION: Do not attempt to create virtual disk groups exceeding 120 physical disks in a RAID 0
configuration, even if the premium feature is activated on your storage array. Exceeding the 120-physical-disk limit may cause your storage array to become unstable.
RAID 0 uses disk striping to provide high data throughput, especially for large files in an environment that
requires no data redundancy. RAID 0 breaks the data down into segments and writes each segment to a
separate physical disk. I/O performance is greatly improved by spreading the I/O load across many
physical disks. Although it offers the best performance of any RAID level, RAID 0 lacks data redundancy.
Choose this option only for non-critical data, because failure of one physical disk results in the loss of all
data. Examples of RAID 0 applications include video editing, image editing, prepress applications, or any
application that requires high bandwidth.
RAID 1
RAID 1 uses disk mirroring so that data written to one physical disk is simultaneously written to another
physical disk. RAID 1 offers fast performance and the best data availability, but also the highest disk
overhead. RAID 1 is recommended for small databases or other applications that do not require large
capacity, for example, accounting, payroll, or financial applications. RAID 1 provides full data redundancy.
RAID 5
RAID 5 uses parity and striping data across all physical disks (distributed parity) to provide high data
throughput and data redundancy, especially for small random access. RAID 5 is a versatile RAID level and
is suited for multi-user environments where typical I/O size is small and there is a high proportion of read
activity such as file, application, database, web, e-mail, news, and intranet servers.
RAID 6
RAID 6 is similar to RAID 5 but provides an additional parity disk for better redundancy. RAID 6 is the most
versatile RAID level and is suited for multi-user environments where typical I/O size is small and there is a
high proportion of read activity. RAID 6 is recommended when large-capacity physical disks are used or
when a large number of physical disks are used in a disk group.
RAID 10
CAUTION: Do not attempt to create virtual disk groups exceeding 120 physical disks in a RAID 10
configuration, even if the premium feature is activated on your storage array. Exceeding the 120-physical-disk limit may cause your storage array to become unstable.
RAID 10, a combination of RAID 1 and RAID 0, uses disk striping across mirrored disks. It provides high
data throughput and complete data redundancy. A RAID level 10 disk group or virtual disk is created
from an even number of physical disks (four or more). Because RAID levels 1 and 10 use disk
mirroring, half of the capacity of the physical disks is utilized for mirroring. This leaves the remaining half
of the physical disk capacity for actual storage. RAID 10 is automatically used when a RAID level of 1 is
chosen with four or more physical disks. RAID 10 works well for medium-sized databases or any
environment that requires high performance and fault tolerance and moderate-to-medium capacity.
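As a rough illustration of how the RAID levels described above trade capacity for redundancy, the following sketch estimates usable capacity from the number and size of physical disks. It is an approximation only and ignores metadata and formatting overhead, so the figures reported by MD Storage Manager for a real disk group will differ.

# Rough usable-capacity estimate per RAID level (metadata overhead ignored).
def usable_capacity_tb(raid_level, disk_count, disk_size_tb):
    if raid_level == 0:                      # striping only, no redundancy
        return disk_count * disk_size_tb
    if raid_level in (1, 10):                # mirroring: half the raw capacity
        return (disk_count // 2) * disk_size_tb
    if raid_level == 5:                      # one disk's worth of distributed parity
        return (disk_count - 1) * disk_size_tb
    if raid_level == 6:                      # two disks' worth of parity
        return (disk_count - 2) * disk_size_tb
    raise ValueError("unsupported RAID level")

# Example: eight 4 TB physical disks
for level in (0, 1, 5, 6, 10):
    print("RAID", level, "->", usable_capacity_tb(level, 8, 4.0), "TB usable")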
Segment Size
Disk striping enables data to be written across multiple physical disks. Disk striping enhances
performance because striped disks are accessed simultaneously.
The segment size or stripe element size specifies the size of data in a stripe written to a single disk. The
storage array supports stripe element sizes of 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, and 256 KB. The default
stripe element size is 128 KB.
Stripe width, or depth, refers to the number of disks involved in an array where striping is implemented.
For example, a four-disk group with disk striping has a stripe width of four.
NOTE: Although disk striping delivers excellent performance, striping alone does not provide data
redundancy.
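As a worked example of the segment size and stripe width described above, the full stripe is the segment (stripe element) size multiplied by the number of disks holding data in the stripe. The disk counts below are illustrative assumptions, and the treatment of parity disks follows the general RAID behavior rather than any array-specific rule.

# Full stripe = segment (stripe element) size x number of data disks in the group.
SEGMENT_KB = 128            # default stripe element size

def full_stripe_kb(segment_kb, disks_in_group, parity_disks=0):
    data_disks = disks_in_group - parity_disks
    return segment_kb * data_disks

print(full_stripe_kb(SEGMENT_KB, 4))        # 4-disk RAID 0 group: 512 KB per stripe
print(full_stripe_kb(SEGMENT_KB, 5, 1))     # 5-disk RAID 5 group: 512 KB of data per stripe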
Virtual Disk Operations
Virtual Disk Initialization
Every virtual disk must be initialized. Initialization can be done in the foreground or the background. A
maximum of four virtual disks can be initialized concurrently on each RAID controller module.
•Background initialization — The storage array executes a background initialization when the virtual
disk is created to establish parity, while allowing full host server access to the virtual disks. Background
initialization does not run on RAID 0 virtual disks. The background initialization rate is controlled by
MD Storage Manager. To change the rate of background initialization, you must stop any existing
background initialization. The rate change is implemented when the background initialization restarts
automatically.
•Foreground Initialization — The storage array also supports foreground (full) initialization of a virtual
disk. During foreground initialization, zeros are written to the entire virtual disk, and host server access
to the virtual disk is blocked until the initialization completes.
Consistency Check
A consistency check verifies the correctness of data in a redundant array (RAID levels 1, 5, 6, and 10). For
example, in a system with parity, checking consistency involves computing the parity of the data segments
and comparing the result to the contents of the parity physical disk.
A consistency check is similar to a background initialization. The difference is that background
initialization cannot be started or stopped manually, while consistency check can.
NOTE: It is recommended that you run data consistency checks on a redundant array at least once
a month. This allows detection and automatic replacement of unreadable sectors. Finding an
unreadable sector during a rebuild of a failed physical disk is a serious problem, because the system
does not have the redundancy to recover the data.
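The parity comparison described above can be pictured as recomputing parity from the data segments and comparing it with the stored parity. The sketch below is only an illustration of the principle for a RAID 5-style stripe, not the controller's actual implementation; the data values are hypothetical.

# Toy RAID 5-style consistency check: XOR the data segments of a stripe and
# compare the result with the stored parity segment. Illustration only.
def parity_of(segments):
    out = bytearray(len(segments[0]))
    for seg in segments:
        for i, b in enumerate(seg):
            out[i] ^= b
    return bytes(out)

data_segments = [b"\x11\x22", b"\x0f\xf0", b"\xaa\x55"]   # hypothetical data segments
stored_parity = parity_of(data_segments)                   # what the parity segment should hold

# Consistency check: recompute parity and compare with the stored parity.
print("stripe consistent:", parity_of(data_segments) == stored_parity)

# A corrupted data segment makes the recomputed parity disagree with the stored parity.
data_segments[1] = b"\x0f\xf1"
print("stripe consistent:", parity_of(data_segments) == stored_parity)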
Media Verification
Another background task performed by the storage array is media verification of all configured physical
disks in a disk group. The storage array uses the Read operation to perform verification on the space
configured in virtual disks and the space reserved for the metadata.
Cycle Time
The media verification operation runs only on selected disk groups, independent of other disk groups.
Cycle time is the time taken to complete verification of the metadata region of the disk group and all
virtual disks in the disk group for which media verification is configured. The next cycle for a disk group
starts automatically when the current cycle completes. You can set the cycle time for a media verification
operation between 1 and 30 days. The storage controller throttles the media verification I/O accesses to
disks based on the cycle time.
The storage array tracks the cycle for each disk group independent of other disk groups on the controller
and creates a checkpoint. If the media verification operation on a disk group is preempted or blocked by
another operation on the disk group, the storage array resumes after the current cycle. If the media
verification process on a disk group is stopped due to a RAID controller module restart, the storage array
resumes the process from the last checkpoint.
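The throttling behavior described above can be thought of as spreading the verification work evenly over the configured cycle time. The sketch below is an illustration of that idea only, not the controller's scheduling algorithm; the disk group capacity is an assumed example value.

# Estimate the average verification rate needed to cover a disk group within
# the configured cycle time (1 to 30 days). Illustration only.
def required_mb_per_sec(group_capacity_gb, cycle_days):
    if not 1 <= cycle_days <= 30:
        raise ValueError("cycle time must be between 1 and 30 days")
    seconds = cycle_days * 24 * 3600
    return (group_capacity_gb * 1024) / seconds

# Example: a 20 TB disk group verified over a 30-day cycle
print(round(required_mb_per_sec(20 * 1024, 30), 2), "MB/s on average")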
Virtual Disk Operations Limit
The maximum number of active, concurrent virtual disk processes per RAID controller module installed in
the storage array is four. This limit is applied to the following virtual disk processes:
•Background initialization
•Foreground initialization
•Consistency check
•Rebuild
•Copy back
If a redundant RAID controller module fails with existing virtual disk processes, the processes on the failed
controller are transferred to the peer controller. A transferred process is placed in a suspended state if
there are four active processes on the peer controller. The suspended processes are resumed on the peer
controller when the number of active processes falls below four.
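The failover behavior described above can be modeled as a simple per-controller queue with a limit of four active processes. The sketch below only illustrates the suspend-and-resume rule stated in this section; the process names are hypothetical and the model is not the controller firmware's implementation.

# Minimal model of the per-controller limit of four active virtual disk processes.
# Transferred processes beyond the limit are suspended and resumed as slots free up.
MAX_ACTIVE = 4

class Controller:
    def __init__(self, active=None):
        self.active = list(active or [])
        self.suspended = []

    def take_over(self, transferred):
        """Absorb processes from a failed peer controller."""
        for proc in transferred:
            if len(self.active) < MAX_ACTIVE:
                self.active.append(proc)
            else:
                self.suspended.append(proc)

    def finish(self, proc):
        self.active.remove(proc)
        if self.suspended:
            self.active.append(self.suspended.pop(0))

peer = Controller(active=["rebuild-vd1", "consistency-vd2", "init-vd3"])
peer.take_over(["copyback-vd7", "rebuild-vd8"])   # processes from the failed controller
print(peer.active)      # four active processes
print(peer.suspended)   # one process suspended until a slot frees up
peer.finish("init-vd3")
print(peer.active)      # the suspended process has resumed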
Disk Group Operations
RAID Level Migration
You can migrate from one RAID level to another depending on your requirements. For example, fault-tolerant characteristics can be added to a stripe set (RAID 0) by converting it to a RAID 5 set. The MD
Storage Manager provides information about RAID attributes to assist you in selecting the appropriate
RAID level. You can perform a RAID level migration while the system is still running and without
rebooting, which maintains data availability.
Segment Size Migration
Segment size refers to the amount of data (in kilobytes) that the storage array writes on a physical disk in
a virtual disk before writing data on the next physical disk. Valid values for the segment size are 8 KB, 16
KB, 32 KB, 64 KB, 128 KB, and 256 KB.
Dynamic segment size migration enables the segment size of a given virtual disk to be changed. A default
segment size is set when the virtual disk is created, based on such factors as the RAID level and expected
usage. You can change the default value if segment size usage does not match your needs.
When considering a segment size change, two scenarios illustrate different approaches to the limitations:
•If I/O activity stretches beyond the segment size, you can increase it to reduce the number of disks
required for a single I/O. Using a single physical disk for a single request frees disks to service other
requests, especially when you have multiple users accessing a database or storage environment.
•If you use the virtual disk in a single-user, large I/O environment (such as for multimedia application
storage), performance can be optimized when a single I/O request is serviced with a single data stripe
(the segment size multiplied by the number of physical disks in the disk group used for data storage).
In this case, multiple disks are used for the same request, but each disk is only accessed once.
Virtual Disk Capacity Expansion
When you configure a virtual disk, you select a capacity based on the amount of data you expect to store.
However, you may need to increase the virtual disk capacity for a standard virtual disk by adding free
capacity to the disk group. This creates more unused space for new virtual disks or to expand existing
virtual disks.
Disk Group Expansion
Because the storage array supports hot-swappable physical disks, you can add two physical disks at a
time for each disk group while the storage array remains online. Data remains accessible on virtual disk
groups, virtual disks, and physical disks throughout the operation. The data and increased unused free
space are dynamically redistributed across the disk group. RAID characteristics are also reapplied to the
disk group as a whole.
Disk Group Defragmentation
Defragmenting consolidates the free capacity in the disk group into one contiguous area.
Defragmentation does not change the way in which the data is stored on the virtual disks.
Disk Group Operations Limit
The maximum number of active, concurrent disk group processes per installed RAID controller module is
one. This limit is applied to the following disk group processes:
•Virtual disk RAID level migration
•Segment size migration
•Virtual disk capacity expansion
•Disk group expansion
•Disk group defragmentation
If a redundant RAID controller module fails with an existing disk group process, the process on the failed
controller is transferred to the peer controller. A transferred process is placed in a suspended state if
there is an active disk group process on the peer controller. The suspended processes are resumed when
the active process on the peer controller completes or is stopped.
NOTE: If you try to start a disk group process on a controller that does not have an existing active
process, the start attempt fails if the first virtual disk in the disk group is owned by the other
controller and there is an active process on the other controller.
RAID Background Operations Priority
The storage array supports a common configurable priority for the following RAID operations:
•Background initialization
•Rebuild
•Copy back
•Virtual disk capacity expansion
•RAID level migration
•Segment size migration
•Disk group expansion
•Disk group defragmentation
The priority of each of these operations can be changed to address performance requirements of the
environment in which the operations are to be executed.
NOTE: Setting a high priority level impacts storage array performance. It is not advisable to set
priority levels at the maximum level. Priority must also be assessed in terms of impact to host server
access and time to complete an operation. For example, the longer a rebuild of a degraded virtual
disk takes, the greater the risk for potential secondary disk failure.
Virtual Disk Migration And Disk Roaming
Virtual disk migration is moving a virtual disk or a hot spare from one array to another by detaching the
physical disks and re-attaching them to the new array. Disk roaming is moving a physical disk from one
slot to another on the same array.
Disk Migration
You can move virtual disks from one array to another without taking the target array offline. However, the
disk group being migrated must be offline prior to performing the disk migration. If the disk group is not
offline prior to migration, the source array holding the physical and virtual disks within the disk group
marks them as missing. However, the disk groups themselves migrate to the target array.
An array can import a virtual disk only if it is in an optimal state. You can move virtual disks that are part of
a disk group only if all members of the disk group are being migrated. The virtual disks automatically
become available after the target array has finished importing all the disks in the disk group.
When you migrate a physical disk or a disk group from:
•One MD storage array to another MD storage array of the same type (for example, from an MD3460
storage array to another MD3460 storage array), the MD storage array you migrate to recognizes any
data structures and/or metadata you had in place on the migrating MD storage array.
•Any storage array different from the MD storage array you migrate to (for example, from an MD3460
storage array to an MD3860i storage array), the receiving storage array (MD3860i storage array in the
example) does not recognize the migrating metadata and that data is lost. In this case, the receiving
storage array initializes the physical disks and marks them as unconfigured capacity.
NOTE: Only disk groups and associated virtual disks with all member physical disks present can be
migrated from one storage array to another. It is recommended that you only migrate disk groups
that have all their associated member virtual disks in an optimal state.
NOTE: The number of physical disks and virtual disks that a storage array supports limits the scope
of the migration.
Use either of the following methods to move disk groups and virtual disks:
•Hot virtual disk migration — Disk migration with the destination storage array power turned on.
•Cold virtual disk migration — Disk migration with the destination storage array power turned off.
NOTE: To ensure that the migrating disk groups and virtual disks are correctly recognized when the
target storage array has an existing physical disk, use hot virtual disk migration.
When attempting virtual disk migration, follow these recommendations:
•Moving physical disks to the destination array for migration — When inserting drives into the
destination storage array during hot virtual disk migration, wait for the inserted physical disk to be
displayed in the MD Storage Manager, or wait for 30 seconds (whichever occurs first), before inserting
the next physical disk.
WARNING: Without the interval between drive insertions, the storage array may become
unstable and manageability may be temporarily lost.
•Migrating virtual disks from multiple storage arrays into a single storage array — When migrating
virtual disks from multiple or different storage arrays into a single destination storage array, move all
of the physical disks from the same storage array as a set into the new destination storage array.
Ensure that all of the physical disks from a storage array are migrated to the destination storage array
before starting migration from the next storage array.
NOTE: If the drive modules are not moved as a set to the destination storage array, the newly
relocated disk groups may not be accessible.
•Migrating virtual disks to a storage array with no existing physical disks — Turn off the destination
storage array when migrating disk groups or a complete set of physical disks from a storage array to
another storage array that has no existing physical disks. After the destination storage array has been
turned on and has successfully recognized the newly migrated physical disks, migration operations
can continue.
NOTE: Disk groups from multiple storage arrays must not be migrated at the same time to a
storage array that has no existing physical disks. Use cold virtual disk migration for the disk
groups from one storage array.
•Enabling premium features before migration — Before migrating disk groups and virtual disks, enable
the required premium features on the destination storage array. If a disk group is migrated from a
storage array that has a premium feature enabled and the destination array does not have this feature
enabled, an Out of Compliance error message can be generated.
Disk Roaming
You can move physical disks within an array. The RAID controller module automatically recognizes the
relocated physical disks and logically places them in the proper virtual disks that are part of the disk
group. Disk roaming is permitted when the RAID controller module is either online or powered off.
NOTE: The disk group must be exported before moving the physical disks.
Host Server-To-Virtual Disk Mapping
The host server attached to a storage array accesses various virtual disks on the storage array through its
host ports. Specific virtual disk-to-LUN mappings to an individual host server can be defined. In addition,
the host server can be part of a host group that shares access to one or more virtual disks. You can
manually configure a host server-to-virtual disk mapping. When you configure host server-to-virtual disk
mapping, consider these guidelines:
•You can define one host server-to-virtual disk mapping for each virtual disk in the storage array.
•Host server-to-virtual disk mappings are shared between RAID controller modules in the storage
array.
•A unique LUN must be used by a host group or host server to access a virtual disk.
•Not every operating system has the same number of LUNs available for use.
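A planned set of mappings can be checked against the guidelines above before it is applied. The sketch below encodes only the two rules stated in this section, that each virtual disk has one mapping and that a LUN is unique within a host or host group; the host names, virtual disk names, and LUN values are hypothetical.

# Validate a planned host-to-virtual-disk mapping against the guidelines:
#   * at most one mapping per virtual disk
#   * LUNs unique within each host or host group
# (names and LUNs below are hypothetical examples)
mappings = [
    {"virtual_disk": "vd_finance", "host": "dbserver01", "lun": 0},
    {"virtual_disk": "vd_inventory", "host": "dbserver01", "lun": 1},
    {"virtual_disk": "vd_backup", "host": "cluster_grp_a", "lun": 0},
]

def validate(mappings):
    seen_vd = set()
    seen_lun = set()
    for m in mappings:
        if m["virtual_disk"] in seen_vd:
            raise ValueError(f'{m["virtual_disk"]} is mapped more than once')
        if (m["host"], m["lun"]) in seen_lun:
            raise ValueError(f'LUN {m["lun"]} reused on {m["host"]}')
        seen_vd.add(m["virtual_disk"])
        seen_lun.add((m["host"], m["lun"]))

validate(mappings)   # raises if a guideline is violated
print("mapping plan is consistent with the guidelines")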
Host Types
A host server is a server that accesses a storage array. Host servers are mapped to the virtual disks and
use one or more iSCSI initiator ports. Host servers have the following attributes:
•Host name — A name that uniquely identifies the host server.
•Host group (used in Cluster solutions only) — Two or more host servers associated together to share
access to the same virtual disks.
NOTE: This host group is a logical entity you can create in the MD Storage Manager. All host
servers in a host group must be running the same operating system.
•Host type — The operating system running on the host server.
Advanced Features
The RAID enclosure supports several advanced features:
•Virtual Disk Snapshots.
•Virtual Disk Copy.
NOTE: The premium features listed above must be activated separately. If you have purchased these
features, an activation card is supplied that contains instructions for enabling this functionality.
Types Of Snapshot Functionality Supported
The following types of virtual disk snapshot premium features are supported on the MD storage array:
•Snapshot Virtual Disks using multiple point-in-time (PiT) groups — This feature also supports snapshot
groups, snapshot images, and consistency groups.
•Snapshot Virtual Disks (Legacy) using a separate repository for each snapshot
For more information, see Premium Feature — Snapshot Virtual Disk and Premium Feature — Snapshot
Virtual Disks (Legacy).
Snapshot Virtual Disks, Snapshot Images, And Snapshot Groups
A snapshot image is a logical image of the content of an associated base virtual disk created at a specific
point-in-time. This type of image is not directly readable or writable to a host since the snapshot image is
used to save data from the base virtual disk only. To allow the host to access a copy of the data in a
snapshot image, you must create a snapshot virtual disk. This snapshot virtual disk contains its own