intellectual property laws. Dell™ and the Dell logo are trademarks of Dell Inc. in the United States and/or other
jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.
Set Consistency Group Attributes....................................................................................................285
Set Consistency Group Snapshot Virtual Disk.................................................................................286
Set Disk Group ................................................................................................................................. 286
Set Disk Group Forced State............................................................................................................ 288
Set Disk Pool.....................................................................................................................................288
Set Disk Pool Complete....................................................................................................................289
Set Enclosure Attribute.....................................................................................................................290
Set Enclosure Identification............................................................................................................. 290
Set Event Alert Filtering.....................................................................................................................291
Set Foreign Physical Disk to Native..................................................................................................292
Set Host.............................................................................................................................................293
Set Host Channel..............................................................................................................................294
Set Host Group................................................................................................................................. 295
Set Host Port..................................................................................................................................... 295
Set iSCSI Initiator...............................................................................................................................296
Set iSCSI Target Properties...............................................................................................................297
Set Physical Disk Channel Status......................................................................................................297
Set Physical Disk Hot Spare..............................................................................................................298
Set Physical Disk State......................................................................................................................299
Set RAID Controller Module.............................................................................................................299
Set Read-Only Snapshot Virtual Disk To A Read/Write Virtual Disk............................................... 303
Set Remote Replication....................................................................................................................303
Set Remote Replication Group........................................................................................................ 306
Set Session........................................................................................................................................ 309
Set Snapshot Group Attributes......................................................................................................... 310
Set Snapshot Group Media Scan.......................................................................................................311
Set Snapshot Group Repository Virtual Disk Capacity.....................................................................312
Set Snapshot Group Schedule..........................................................................................................313
Set Snapshot (Legacy) Virtual Disk................................................................................................... 314
Set Snapshot Virtual Disk Media Scan...............................................................................................317
Set Snapshot Virtual Disk Repository Virtual Disk Capacity.............................................................318
Set Storage Array...............................................................................................................................319
Set Storage Array Enclosure Positions............................................................................................. 322
Set Storage Array ICMP Response....................................................................................................323
Set Storage Array iSNS Server IPv4 Address.....................................................................................323
Set Storage Array iSNS Server IPv6 Address.....................................................................................324
Set Storage Array iSNS Server Listening Port...................................................................................324
Set Storage Array Learn Cycle.......................................................................................................... 325
Set Storage Array Redundancy Mode.............................................................................................. 325
Set Storage Array Security Key......................................................................................................... 326
Set Storage Array Time..................................................................................................................... 326
Set Storage Array Unnamed Discovery Session...............................................................................327
Set Thin Virtual Disk Attributes......................................................................................................... 327
Set Virtual Disk.................................................................................................................................. 329
Set Virtual Disk Attributes For A Disk Pool....................................................................................... 332
Set Virtual Disk Copy.........................................................................................................................337
Set Virtual Disk Mapping...................................................................................................................338
Show Blocked Events....................................................................................................................... 340
Show Consistency Group Snapshot Image.....................................................................................340
Show Current iSCSI Sessions............................................................................................................341
Show Disk Group.............................................................................................................................. 342
Show Disk Group Export Dependencies..........................................................................................343
Show Disk Group Import Dependencies......................................................................................... 343
Show Disk Pool.................................................................................................................................344
Show Host Ports............................................................................................................................... 344
Show Physical Disk........................................................................................................................... 345
Show Physical Disk Channel Statistics............................................................................................. 347
Show Physical Disk Download Progress..........................................................................................347
Show RAID Controller Module.........................................................................................................348
Show RAID Controller Module NVSRAM......................................................................................... 349
Show Remote Replication Group.................................................................................................... 349
Show Remote Replication Group Synchronization Progress.........................................................350
Show Remote Replication Virtual Disk Candidates......................................................................... 351
Show Remote Replication Virtual Disk Synchronization Progress................................................. 352
Show SNMP Communities............................................................................................................... 352
Show SNMP MIB II System Group Variables....................................................................................354
Show Snapshot Group......................................................................................................................354
Show Snapshot Image......................................................................................................................356
Show Snapshot Virtual Disks............................................................................................................ 357
Show SSD Cache.............................................................................................................................. 358
Show SSD Cache Statistics...............................................................................................................359
Show Storage Array.......................................................................................................................... 363
Show Storage Array Auto Configure................................................................................................365
Show Storage Array Core Dump......................................................................................................366
Show Storage Array DBM Database................................................................................................. 367
Show Storage Array Host Topology.................................................................................................367
Show Storage Array LUN Mappings.................................................................................................368
Show Storage Array Negotiation Defaults....................................................................................... 369
Show Storage Array ODX Setting.....................................................................................................369
Show Storage Array Power Information.......................................................................................... 370
Show Storage Array Unconfigured iSCSI Initiators..........................................................................370
Show Storage Array Unreadable Sectors..........................................................................................371
Show String........................................................................................................................................371
Show Thin Virtual Disk...................................................................................................................... 372
Show Virtual Disk...............................................................................................................................373
Show Virtual Disk Action Progress................................................................................................... 374
Show Virtual Disk Copy.....................................................................................................................375
Show Virtual Disk Copy Source Candidates.................................................................................... 376
Show Virtual Disk Copy Target Candidates..................................................................................... 376
Show Virtual Disk Performance Statistics.........................................................................................377
Show Virtual Disk Reservations.........................................................................................................377
Configuration Script Example 1........................................................................................................ 415
Configuration Script Example 2........................................................................................................417
1
CLI Command Updates
This chapter reflects new and updated commands that are available for use with the Dell PowerVault MD
32XX/34XX/36XX/38XX Series storage arrays.
NOTE: Not all commands are valid with all storage arrays. Some commands are specific to certain
platforms.
CAUTION: Script commands are capable of changing the configuration and may cause loss of
data if not used correctly. Command operations are performed as soon as you run the
commands. Before using the script commands, ensure that you have backed up all data, and have
saved the current configuration so that you can reinstall it if the changes do not work.
New Commands
The following commands have been added to this guide to reflect the additional functionality available in
the PowerVault MD32XX, MD34XX, MD36XX, and MD38XX Series storage arrays.
• Recover SAS Port Mis-Wire
• Reduce Disk Pool Capacity
• Register SNMP Community
• Register SNMP Trap Destination
• Set Event Alert Filtering
• Show Blocked Events
• Show SNMP Communities
• Show SNMP MIB II System Group Variables
• Show Storage Array Power Information
• Test SNMP Trap Destination
• Unregister SNMP Community
• Unregister SNMP Trap Destination
• Update SNMP Community
• Update SNMP Trap Destination
• Update SNMP MIB II System Group Variables
Updated Commands
The syntax of the following commands has been modified, updated, or enhanced since the last release of
this document.
NOTE: Not all commands are valid with all storage arrays. Some commands are specific to certain
platforms.
• Activate Remote Replication (Legacy)
• Download Storage Array NVSRAM
• Resume Remote Replication Group
• Resume Snapshot (Legacy) Rollback
• Set RAID Controller Module
• Set Remote Replication (Legacy)
• Set Storage Array
• Start Remote Replication Synchronization
• Stop Snapshot (Legacy) Rollback
2
About The Command Line Interface
This guide is intended for system administrators, developers, and engineers who need to use the
command line interface (CLI) tool and its associated commands and script files. Selected CLI commands
perform functions that can also be accessed from the Modular Disk (MD) Storage Manager, which is the
graphical user interface (GUI) to the storage array. See the Administrator's Guide, which describes the
Storage Manager software that is used to create and manage multiple storage arrays. For additional
information, see the hardware and software manuals that shipped with your system.
NOTE: Always check for updates on dell.com/support and read the updates first because they often
supersede information in other documents.
NOTE: CLI commands do not have interactive warnings for destructive commands.
The CLI is a software tool that enables storage array installers, developers, and engineers to configure and
monitor storage arrays. Using the command line interface, you can issue commands from an operating
system prompt, such as the Microsoft Windows command prompt (C:\) or a Linux operating system
terminal.
Each command performs a specific action for managing a storage array or returning information about
the status of a storage array. You can enter individual commands, or run script files when you need to
perform operations more than once (such as installing the same configuration on several storage arrays).
A script file can be loaded and run from the command line interface. You can also run commands in an
interactive mode. Interactive mode enables you to connect to a specific storage array and rapidly enter a
command, determine the effect on the storage array, and then enter a new command.
The command line interface gives you direct access to a script engine utility in the Dell PowerVault
Modular Disk Storage Manager software (MD Storage Manager). The script engine reads the commands,
or runs a script file, from the command line and performs the operations instructed by the commands.
You can use the command line interface to perform the following functions:
• Directly access the script engine and run commands in interactive mode or using a script file.
• Create script command batch files to be run on multiple storage arrays when you need to install the same configuration on different storage arrays.
• Run script commands on a storage array directly connected to a host, a storage array connected to a host by an Ethernet connection, or a combination of both.
• Display configuration information about the storage arrays.
• Add storage arrays to and remove storage arrays from the management domain.
• Perform automatic discovery of all storage arrays attached to the local subnet.
• Add or delete Simple Network Management Protocol (SNMP) trap destinations and e-mail alert notifications.
• Specify the mail server and sender e-mail address or Simple Mail Transport Protocol (SMTP) server for alert notifications.
• Direct the output to a standard command line display or to a named file.
How To Use The Command Line Interface
Using the CLI commands, you can access the script engine, specify which storage array receives the
script commands, and set operation environment parameters.
A CLI command consists of the following elements:
• The term SMcli
• Storage array identifier
• Parameters
• Script commands
The following syntax is the general form of a CLI command:
SMcli storageArray parameters script-commands;
where,
SMcli invokes the command line interface.
storageArray is the host name or IP address of the storage array.
parameters are the CLI parameters that define the environment and purpose of the command.
script-commands are the commands or name of the script file containing the script commands.
The script commands are the storage array configuration commands. The chapter About The Script
Commands presents an overview of the script commands, and the chapter Script Commands provides
definitions, syntax, and parameters for the script commands.
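Assembled, a complete command might look like the following sketch; the IP address here is hypothetical, and show storageArray batteryAge stands in for any script command:

```shell
# SMcli, the array address, one CLI parameter (-c), and a script command
# terminated by a semicolon:
SMcli 192.168.128.101 -c "show storageArray batteryAge;"
```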
Usage Notes
If you enter SMcli and a storage array name but do not specify CLI parameters, script commands, or a
script file, the command line interface runs in interactive mode. Interactive mode enables you to run
individual commands without prefixing the commands with SMcli. You can enter a single command, view
the results, and enter the next command without typing the complete SMcli string. Interactive mode is
useful for determining configuration errors and quickly testing configuration changes.
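As a sketch, interactive mode is entered by naming an array and nothing else; the array name below is hypothetical, and subsequent commands are typed at the interactive prompt without the SMcli prefix:

```shell
# Start an interactive session against a previously configured array:
SMcli -n Payroll_Array
# At the interactive prompt you might then type, one command at a time:
#   show storageArray healthStatus;
#   exit
```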
If you enter SMcli without any parameters or with an incorrect parameter, the script engine returns usage
information.
NOTE: The SMcli command is installed under the client directory of the selected path during a
management station install of the MD Storage Manager software.
NOTE: The SMcli command should be a component of the system environment command path.
CLI Commands
This section lists the CLI commands you can use to perform the following functions:
• Identify storage arrays
• Set passwords
• Add storage arrays
• Specify communication parameters
• Enter individual script configuration commands
• Specify a file containing script configuration commands
The following are general forms of the CLI commands, showing the parameters and terminals used in
each command. The table below lists definitions for the parameters shown in the CLI commands.
Table 1. Command Name Conventions
Parameter                    Definition
a|b                          Pipe symbol indicating an alternative ("a" or "b")
italicized-words             Terminals
[ ... ] (square brackets)    Zero or one occurrence
{ ... } (curly brackets)     Zero or more occurrences
< ... > (angle brackets)     Occurrence exceeds maximum limit of 30
SMcli -A [host-name-or-IP-address [host-name-or-IP-address]] [-S]

SMcli -X (-n storage-array-name | -w WWID | -h host-name)

SMcli -?
Command Line Parameters
Table 2. Command Line Parameters
host-name-or-IP-address
Specify either the host name or the Internet Protocol (IP) address of an in-band managed storage array (IPv4 or IPv6) or an out-of-band managed storage array (IPv4 or IPv6).
• If you manage a storage array by using a host connected directly to the storage array (in-band storage management), you must use the -n parameter if more than one storage array is connected to the host.
• If you manage a storage array through an Ethernet connection (out-of-band storage management), you must specify the host-name-or-IP-address of the redundant array of independent disks (RAID) controller modules.
• If you have previously configured a storage array in the graphical user interface (GUI) of the MD Storage Manager, you can specify the storage array by its user-supplied name by using the -n parameter.

-A
Use to add a storage array to the configuration files. If you do not follow the -A parameter with a host-name-or-IP-address, automatic discovery scans the local subnet for storage arrays.

-a
Use to add an SNMP trap destination or an e-mail address alert destination.
• When adding an SNMP trap destination, the SNMP community is automatically defined as the community name for the trap, and the host is the IP address or Domain Name Server (DNS) host name of the system to which the trap should be sent.
• When adding an e-mail address for an alert destination, the email-address is the e-mail address to which to send the alert message.
-c
Use to indicate that you are entering one or more script commands to run on the specified storage array. Terminate each command with a semicolon (;). You cannot place more than one -c parameter on the same command line, but you can include more than one script command after the -c parameter.

-d
Use to display the contents of the script configuration file.

-e
Use to disable syntax checking when executing the current CLI command.

-F (uppercase)
Use to specify the e-mail address from which all alerts are sent.

-f (lowercase)
Use to specify a file name containing script commands intended to run on the specified storage array. This parameter is similar to the -c parameter in that both are intended for running script commands; the -c parameter allows you to execute individual script commands, while the -f parameter allows you to execute script commands contained in a file.
NOTE: By default, any errors encountered when running the script commands in a file are ignored, and the file continues to run. To override this behavior, use the set session errorAction=stop command in the script file.

-g
Use to specify an ASCII file that contains e-mail sender contact information to include in all e-mail alert notifications. The CLI assumes the ASCII file is text only, without delimiters or any expected format. A typical file contains the following information:
• Name
• Title
• Company
• Phone
• Pager
NOTE: You can use any file name that your operating system supports, but you must not use userdata.txt. Some operating systems reserve userdata.txt for system information.
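As a hedged sketch of the -f parameter, assume a script file named configure.scr (a hypothetical name) containing one script command per line, with set session errorAction=stop included so that errors halt the file:

```shell
# configure.scr (hypothetical file) might contain:
#   set session errorAction=stop;
#   show storageArray healthStatus;
# Run the file against a hypothetical array, capturing output text with -o:
SMcli 192.168.128.101 -f configure.scr -o output.log
```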
-h
Use with the -a and -x parameters to specify the host name that is running the SNMP agent to which the storage array is connected.

-I
Use to specify the type of information to be included in the e-mail alert notifications. The following are valid information arguments:
• eventOnly — Only event information is included in the e-mail.
• profile — Event and array profile information is included in the e-mail.
• supportBundle — Event and support bundle information is included in the e-mail.
NOTE: You can enter only one information argument each time you execute the command. If you want all of the information, you must run the command three times.

-i
Use with the -d parameter to display the IP address of the known storage arrays.

-m
Use to specify the host name or IP address of the e-mail server from which to send e-mail alert notifications.

-n
Use to specify the name of the storage array on which to run the script commands. This name is optional when you use host-name-or-IP-address; however, if you are using the in-band method for managing the storage array, you must use the -n parameter if more than one storage array is connected to the host at the specified address. The storage array name is required when host-name-or-IP-address is not used; however, the name of the storage array configured for use in the MD Storage Manager GUI (that is, listed in the configuration file) must not be a duplicate name of any other configured storage array.

-o
Use with the -c or -f parameter to specify a file name for all output text that is a result of running the script commands.

-p
Use to specify the password for the storage array on which to run commands. A password is not necessary under the following conditions:
• A password has not been set on the storage array.
• The password is specified in a script file that is running.
• The storage array password is specified by using the -c parameter and the set session password=password command.
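Combining -n and -p, a password-protected, previously configured array might be addressed as in this sketch (the array name and password are hypothetical):

```shell
# -n selects the array by its user-supplied name; -p supplies the password
# so the script command can run:
SMcli -n Payroll_Array -p mypassword -c "show storageArray;"
```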
-q
Use to specify how frequently to include additional profile or support bundle information in the e-mail alert notifications. An e-mail alert notification that contains at least the basic event information is always generated for every critical event. If you set the -I parameter to eventOnly, the only valid argument for -q is everyEvent. If you set the -I parameter to either profile or supportBundle, this information is included with the e-mails with the frequency specified by the -q parameter. Valid frequency arguments are:
• everyEvent — Information is returned with every e-mail alert notification.
• 2 — Information is returned no more than once every two hours.
• 4 — Information is returned no more than once every four hours.
• 8 — Information is returned no more than once every eight hours.
• 12 — Information is returned no more than once every 12 hours.
• 24 — Information is returned no more than once every 24 hours.

-r
Use with the -a or -x parameter to specify the name of a management station. The name of a management station can be either direct_sa (out-of-band storage array) or host_sa (in-band storage arrays [host-agent]). The -r parameter enables you to set or change the alert notifications for all storage arrays under each management station.

-S (uppercase)
Use to suppress the informational messages describing command progress that appear when running script commands. (Suppressing informational messages is also called silent mode.) This parameter suppresses the following messages:
• Performing syntax check
• Syntax check complete
• Executing script
• Script execution complete
• SMcli completed successfully

-s (lowercase)
Use with the -d parameter to display the alert settings in the configuration file.
-v
Use with the -d parameter to display the current global status of the known devices in the storage array configuration file. (The configuration file lists all of the devices in a storage array configuration and the relationship between the devices. Use the configuration file to reconstruct a storage array.)

-X (uppercase)
Use to delete a storage array from the configuration file. (The configuration file lists all of the devices in a storage array configuration and the relationship between the devices. Use the configuration file to reconstruct a storage array.)

-x (lowercase)
Use to remove an SNMP trap destination or an e-mail address alert destination. The community is the SNMP community name for the trap, and the host is the IP address or DNS host name of the system to which you want the trap sent.

-?
Use to display usage information about the CLI commands.
Formatting Considerations
Quotation marks (" ") used as part of a name or label require special consideration when you run the CLI
and script commands on a Microsoft Windows operating system. The following explains the use of
quotation marks in names while running CLI and script commands on Windows.
When quotation marks (" ") are part of an argument, you must insert a backslash (\) before each quotation
mark character unless you are in interactive mode. For example:
-c "set storageArray userLabel=\"Engineering\";"
where, Engineering is the storage array name.
You cannot use quotation marks (" ") as part of a character string (also called string literal) within a script
command. For example, you cannot enter the following string to set the storage array name to
"Finance"Array:
On a Linux operating system, the delimiters around names or labels are single quotation marks (‘ ’). The
Linux versions of the previous examples are:
-c ‘set storageArray userLabel="Engineering";’
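The practical effect of the two quoting styles can be demonstrated with echo alone, without SMcli installed: the single-quoted Linux form hands the inner double quotes through to the script engine unchanged.

```shell
# Echo the argument exactly as the Linux shell would pass it to SMcli's
# script engine: single quotes preserve the inner double quotes.
echo 'set storageArray userLabel="Engineering";'
```

The printed line, set storageArray userLabel="Engineering";, is the command text the script engine receives.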
Detailed Error Reporting
Error data collected from an error encountered by the CLI is written to a file. Detailed error reporting
under the CLI works as follows:
• If the CLI must abnormally end execution or abort script command execution, error data is collected and saved before the CLI aborts.
• The CLI automatically saves the error data by writing the data to a file with a standard name.
• The CLI does not have any provisions to avoid overwriting an existing version of the file containing error data.
For error processing, errors appear as two types:
• Parameter or syntax errors you might enter
• Exceptions that occur as a result of an operational error
When the CLI encounters either type of error, it writes information describing the error directly to the
command line and sets a return code. Depending on the return code, the CLI might also write additional
information about which parameter caused the error. The CLI also writes information about what
command syntax was expected to help you identify any syntax errors you might have entered.
When an exception occurs while executing a command, the CLI automatically saves the error
information to a file named excprpt.txt. The CLI attempts to place excprpt.txt in the directory specified by
the system property devmgr.datadir, which by default is the "client/data" directory under the main
installation directory in Windows and the /var/opt/SM directory in Linux. If for any reason the CLI cannot
place the file in the devmgr.datadir-specified directory, the CLI saves the excprpt.txt file in the same
directory from which the CLI is running. You cannot change the file name or location. The excprpt.txt file
is overwritten every time an exception occurs. To save the information in the excprpt.txt file, you must
copy the information to a new file or directory.
Exit Status
After you run a CLI command or a CLI and script command, status is displayed that indicates the success
of the operation defined by the command. The status values are shown in the following table.
Table 3. Exit Status
Status Value    Meaning
0               The command terminated without an error.
1               The command terminated with an error. Error information is also displayed.
2               The script file does not exist.
3               An error occurred while opening an output file.
4               A storage array is not at the specified address.
5               Addresses specify different storage arrays.
6               A storage array name does not exist for the host agent connected.
7               The storage array name was not at the specified address.
8               The storage array name was not in the configuration file.
10              A management class does not exist for the storage array.
11              A storage array was not found in the configuration file.
12              An internal error occurred.
13              Invalid script syntax was found.
26
Page 27
Status ValueMeaning
14The RAID controller module was unable to communicate with the
storage array.
15A duplicate argument was entered.
16An execution error occurred.
17A host was not at the specified address.
18The World Wide Identifier (WWID) was not in the configuration file.
19The WWID was not at the address.
20An unknown IP address was specified.
21The event monitor configuration file was corrupted.
22The storage array was unable to communicate with the event
monitor.
23The RAID controller module was unable to write alert settings.
24The wrong management station was specified.
25The command was not available.
26The device was not in the configuration file.
27An error occurred while updating the configuration file.
28An unknown host error occurred.
29The sender contact information file was not found.
30The sender contact information file could not be read.
31The userdata.txt file exists.
32An invalid -I value in the e-mail alert notification was specified.
33An invalid -f value in the e-mail alert notification was specified.
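The exit status lends itself to scripted error handling. The following sketch is a Linux shell fragment, not part of the original command set; it assumes SMcli is on the path and that a storage array named Example has been discovered:

SMcli -n Example -c 'show storageArray summary;'
status=$?
if [ $status -ne 0 ]; then
    echo "SMcli failed with exit status $status"
fi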
Usage Examples
The following examples show how to enter CLI commands on a command line. The examples show the
syntax, form, and in some examples, script commands. Examples are shown for both Windows and Linux
operating systems. The usage for the -c parameter varies depending on your operating system. On
Windows operating systems, put quotation marks (" ") around the script command following the -c
parameter. On Linux operating systems, put single quotation marks (‘ ’) around the script command
following the -c parameter.
NOTE: See Script Commands for descriptions of the script commands used in the following
examples.
This example shows how to change the name of a storage array. The original name of the storage array is
Payroll_Array. The new name is Finance_Array.
Windows:
SMcli 123.45.67.89 -c "set storageArray userLabel=\"Finance_Array\";"
Linux:
SMcli 123.45.67.89 -c 'set storageArray userLabel="Finance_Array";'
This example shows how to delete an existing virtual disk and create a new virtual disk on a storage array.
The existing virtual disk name is Stocks_<_Bonds. The new virtual disk name is Finance. The RAID
controller module host names are finance1 and finance2. The storage array is protected and requires the
password TestArray.
This example shows how to run commands in a script file named scriptfile.scr on a storage array named
Example. The -e parameter runs the file without checking syntax. Executing an SMcli command without
checking syntax enables the file to run more quickly; however, the SMcli command may not execute
correctly if the syntax is incorrect.
SMcli -n Example -f scriptfile.scr -e
This example shows how to run commands in a script file named scriptfile.scr on a storage array named
Example. In this example, the storage array is protected by the password My_Array. Output, as a result of
commands in the script file, goes to file output.txt.
Windows:
SMcli -n Example -f scriptfile.scr -p "My_Array" -o output.txt
Linux:
SMcli -n Example -f scriptfile.scr -p 'My_Array' -o output.txt
This example shows how to display all storage arrays that are currently discovered in the current
configuration. The command in this example returns the host name of each storage array.
SMcli -d
If you want to know the IP address of each storage array in the configuration, add the -i parameter to the
command.
SMcli -d -i
3
About The Script Commands
You can use the script commands to configure and manage a storage array. The script commands are
distinct from the command line interface (CLI) commands; however, you enter the script commands
using the command line interface. You can enter individual script commands, or run a file of script
commands. When entering an individual script command, include it as part of a CLI command. When
running a file of script commands, include the file name as part of a CLI command. The script commands
are processed by a script engine that performs the following functions:
•Verifies command syntax
•Interprets the commands
•Converts the commands to the appropriate protocol-compliant commands, which are, in turn, run by
the RAID controller module
•Passes the commands to the storage array
At the storage array, the redundant array of independent disks (RAID) controller modules in the storage
array runs the script commands.
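For example, an individual script command is embedded in a CLI command with the -c parameter, while a file of script commands is supplied with the -f parameter (the script file name shown here is hypothetical):

SMcli 123.45.67.89 -c "show storageArray summary;"
SMcli 123.45.67.89 -f myconfig.scr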
The script engine and script commands support the storage array configuration and management
operations listed in the following table.
Table 4. Configuration and Management Operations

Virtual disk, disk group configuration
    Creating, deleting, and setting priority; labeling; setting physical disk composition when creating virtual disks; setting segment size; and setting media scan control
Physical disk configuration
    Configuring the hot spare
RAID controller module configuration
    Defining virtual disk ownership, changing mode settings, defining network settings, and setting host port IDs
General storage array configuration
    Resetting a configuration to defaults, labeling, checking the health status, setting the time of day, clearing the Major Event Log, and setting the media scan rate
NVSRAM configuration
    Downloading and modifying the user configuration region at the bit and byte level, displaying nonvolatile static random access memory (NVSRAM) values
Product identification
    Retrieving the enclosure profile display data
Battery management
    Setting the battery installation date
Firmware management
    Downloading firmware for the RAID controller module, enclosure management module (EMM), and physical disk
Script Command Structure
All script commands have the following structure:
command operand-data {statement-data}
where command identifies the action to be performed, operand-data represents the storage array
component to configure or manage (such as a RAID controller module, physical disk, or disk group), and
statement-data is what you want to do to the component (such as specifying the RAID level or
availability of a disk group).
The general form of the syntax for operand-data is as follows:
An operand-data object can be identified in four ways:
•The object types and object qualifiers
•The all parameter
•Brackets
•A list of identifiers
NOTE: You can use any combination of alphanumeric characters, hyphens, and underscores for the
names. Command names can have a maximum of 30 characters. If you exceed the maximum
character limit, replace square brackets ([ ]) with angle brackets (< >) to overcome this limitation.
Use an object type when the command is not referencing a specific object. The all parameter means all
objects of the specified type in the storage array (for example, allVirtualDisks).
To perform a command on a specific object, use brackets to identify the object (for example,
virtualDisk[engineering]). Specify a subset of objects with a list of identifiers in brackets (for example,
virtualDisks[sales engineering marketing]). In a list of identifiers, use a blank space as the delimiter. A
qualifier is necessary if you want to include additional information to describe the objects.
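As complete script commands, these identification forms look like the following sketch (the virtual disk labels are illustrative):

show allVirtualDisks;
show virtualDisk [engineering];
show virtualDisks [sales engineering marketing];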
The following table lists the object types and the identifiers associated with the object types.
Table 5. Object Types and Identifiers

controller
    0 or 1
physicalDisk
    Enclosure ID and the slot ID
physicalDiskChannel
    Physical disk channel identifier
GroupName
    Remote Replication virtual disk user label
host
    User label
hostChannel
    Host channel identifier
hostGroup
    User label
hostPort
    User label
snapVirtualDiskName
    Virtual disk user label
snapshot
    Virtual disk user label
snapGroup
    A snapshot group contains a sequence of snapshot images of an associated base virtual disk. A snapshot group has a repository virtual disk that is used to save data for all of the snapshot images that are part of the snapshot group.
snapGroupName
    Virtual disk group user label
storageArray
    Not applicable
enclosure
    Enclosure ID
virtualDisk
    Virtual disk user label or the World Wide Identifier (WWID) for the virtual disk (set command only)
virtualDiskCopy
    Target virtual disk and, optionally, the source virtual disk user labels
diskGroup
    Virtual disk group number
Statement data is in the form of attribute=value (such as raidLevel=5), an attribute name (such as
batteryInstallDate), or an operation name (such as consistencyCheck).
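For example, a set command using the attribute=value form of statement data might look like the following sketch (the disk group number is hypothetical):

set diskGroup [3] raidLevel=5;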
Script Command Synopsis
Because you can use the script commands to define and manage the different aspects of a storage array
(such as host topology, physical disk configuration, RAID controller module configuration, virtual disk
definitions, and disk group definitions), the actual number of commands is extensive. The commands,
however, fall into general categories that are reused when you apply the commands to the different
aspects of a storage array.
The following table lists the general form of the script commands and provides a definition of each
command.
Table 6. General Form of the Script Commands

activate object {statement-data}
    Sets up the environment so that an operation can take place or performs the operation if the environment is already correctly set up.
autoConfigure storageArray {statement-data}
    Automatically creates a configuration based on parameters specified in the command.
check object {statement-data}
    Starts a synchronous operation to report on errors in the object.
clear object {statement-data}
    Discards the contents of some attribute of an object. This is a destructive operation that cannot be reversed.
create object {statement-data}
    Creates an object of the specified type.
deactivate object {statement-data}
    Removes the environment for an operation.
delete object
    Deletes a previously created object.
diagnose object {statement-data}
    Runs a test and displays the results.
disable object {statement-data}
    Prevents a feature from operating.
download object {statement-data}
    Transfers data to the storage array or hardware associated with the storage array.
enable object {statement-data}
    Allows a feature to operate.
recopy object {statement-data}
    Restarts a virtual disk copy operation by using an existing virtual disk copy pair. You can change attributes before the operation is restarted.
recover object {statement-data}
    Re-creates an object from saved configuration data and the statement attributes (similar to the create command).
recreate object {statement-data}
    Restarts a snapshot operation using an existing snapshot virtual disk. You can change attributes before the operation is restarted.
remove object {statement-data}
    Removes a relationship between objects.
repair object {statement-data}
    Repairs errors found by the check command.
reset object {statement-data}
    Returns the hardware or object to an initial state.
resume object
    Starts a suspended operation. The operation begins where it left off when suspended.
revive object
    Forces the object from the Failed state to the Optimal state. Use only as part of an error recovery procedure.
save object {statement-data}
    Writes information about the object to a file.
set object {statement-data}
    Changes object attributes. All changes are completed when the command returns.
show object {statement-data}
    Displays information about the object.
start object {statement-data}
    Starts an asynchronous operation. You can stop some operations after they have started. You can query the progress of some operations.
stop object {statement-data}
    Stops an asynchronous operation.
suspend object {statement-data}
    Suspends an operation. You can then restart the suspended operation, and it continues from the point at which it was suspended.
Recurring Syntax Elements
Recurring syntax elements are a general category of variables and parameters you can use in one or
more script commands. The recurring syntax is used in the general definitions of the script commands
that are listed in Script Commands. The following table lists the recurring syntax and the syntax values
that you can use with the syntax.
NOTE: You must set the enableIPV4
parameter or the enableIPV6 parameter to
TRUE to ensure that the specific IPV4 or IPV6
setting is applied.
NOTE: The IPV6 address space is 128 bits. It is represented by eight 16-bit hexadecimal
blocks separated by colons. You may drop leading zeros, and use a double colon to
represent consecutive blocks of zeroes (for example,
fe80:0000:0000:0000:0000:0000:0000:0001 can be shortened to fe80::1).
instance-based-repository-spec
    repositoryRAIDLevel=repository-raid-level repositoryPhysicalDisks=(physical-disk-spec-list) [enclosureLossProtect=boolean]
    repositoryDiskGroup=virtual-disk-group-number [freeCapacityArea=integer-literal]

Specify repositoryRAIDLevel with repositoryPhysicalDisks. Do not specify
a RAID level or physical disks with a disk group. Do not set
enclosureLossProtect when specifying
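The two mutually exclusive forms might be filled in as in the following sketch (the enclosure,slot IDs and the disk group number are illustrative):

repositoryRAIDLevel=5 repositoryPhysicalDisks=(0,1 0,2 0,3)
repositoryDiskGroup=2 freeCapacityArea=1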
NOTE: For enclosure loss protection to work,
each physical disk in a disk group must be on
a separate enclosure. If you set
enclosureLossProtect=TRUE and have
selected more than one physical disk from any
one enclosure, the storage array returns an
error. If you set
enclosureLossProtect=FALSE, the
storage array performs operations, but the
disk group you create might not have
enclosure loss protection.
NOTE: To determine if a free capacity area
exists, issue the show diskGroup command.
NOTE: The physicalDiskType parameter is
not required if only one type of physical disk is
in the storage array. If you use the
physicalDiskType parameter, you must
also use the hotSpareCount and
diskGroupWidth parameters. If you do not
use the physicalDiskType parameter, the
configuration defaults to SAS physical disks.
NOTE: The virtualDisksPerGroupCount
parameter is the number of equal capacity
virtual disks per disk group.
Table 8. Range of Values for Recurring Syntax Elements

IPV4Priority
    0 to 7
IPV4VlanID
    1 to 4094
IPV6Priority
    0 to 7
IPV6VlanID
    1 to 4094
IPV6HopLimit
    0 to 255 (default value is 64)
IPV6NdDetectDuplicateAddress
    0 to 256
IPV6NdReachableTime
    0 to 65535 (default value is 30000 milliseconds)
IPV6RetransmitTime
    0 to 65535 (default value is 1000 milliseconds)
IPV6NDTimeOut
    0 to 65535 (default value is 3000 milliseconds)
maxFramePayload
    1500
    NOTE: The maxFramePayload parameter is shared between IPv4 and IPv6. The payload portion of a standard Ethernet frame is set at 1500 bytes, and a jumbo Ethernet frame is set at 9000 bytes. When using jumbo frames, make sure that all of the devices contained in the network path can handle the larger frame size.
tcpListeningPort (tcp-port-id)
    3260, or 49152 to 65535 (default value is 3260)
Usage Guidelines
The following list provides guidelines for writing script commands on the command line:
•You must end all commands with a semicolon (;).
•You can enter more than one command on a line, but you must separate each command with a
semicolon (;).
•You must separate each base command and its associated primary and secondary parameters with a
space.
•The script engine is case sensitive.
•You can add comments to your scripts to make it easier for you and future users to understand the
purpose of the script commands. For information on how to add comments, see Adding Comments
To A Script File.
NOTE: User labels (such as those for virtual disks, hosts, or host ports) are case sensitive. If you map
to an object identified by a user label, you must enter the user label exactly as it is defined, or the
CLI and script commands fail.
NOTE: You can use any combination of alphanumeric characters, hyphens, and underscores for the
names. Command names can have a maximum of 30 characters. If you exceed the maximum
character limit, replace square brackets ([ ]) with angle brackets (< >) to overcome this limitation.
NOTE: The capacity parameter returns an error if you specify a value greater than or equal to 10
without a space separating the numeric value and its unit of measure. For example, 10GB returns
an error, but 10 GB does not return an error.
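Putting these guidelines together, a single invocation can carry several semicolon-terminated script commands, as in the following sketch (the array name Array1 is hypothetical):

SMcli -n Array1 -c "show storageArray summary; show storageArray healthStatus;"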
Adding Comments To A Script File
You can add comments to a script file in three ways:
•The script engine interprets as a comment any text typed after two forward slashes (//) until an
end-of-line character is reached. If the script engine does not find an end-of-line character in the
script after processing a comment, an error message is displayed, and the script operation is
terminated. This error commonly occurs when a comment is placed at the end of a script and you
have not pressed <Enter>.
// Deletes the existing configuration.
clear storageArray Configuration;
•The script engine interprets any text typed between /* and */ as a comment. If the script engine does
not find both a beginning and ending comment notation, an error message is displayed, and the script
operation is terminated.
/* Deletes the existing configuration */
clear storageArray Configuration;
•Use the show statement to embed comments in a script file that you want to display while the script
file is running. Enclose the text you want to display in quotation marks (" ").
show "Deletes the existing configuration";
clear storageArray Configuration;
4
Configuring A Storage Array
This chapter explains how to run script commands from the command line to create a virtual disk from a
group of physical disks, and how to configure a redundant array of independent disks (RAID) storage
array. This chapter assumes that you understand basic RAID concepts and terminology. Before
configuring the storage array, become familiar with the concepts of physical disks, disk groups, virtual
disks, host groups, hosts, and RAID controller modules. Additional information about configuring a
storage array and related definitions is in the online help, the Deployment Guide, the MD Storage Manager online help, and the Owner’s Manual.
Configuring a RAID storage array requires caution and planning to ensure that you define the correct
RAID level and configuration for your storage array. The main purpose in configuring a storage array is to
create virtual disks addressable by the hosts from a collection of physical disks. The commands described
in this chapter enable you to set up and run a RAID storage array. Additional commands are also available
to provide more control and flexibility. Many of these commands, however, require a deeper
understanding of the firmware as well as various structures that need to be mapped. Use all the
command line interface (CLI) commands and script commands with caution.
The following sections in this chapter show some, but not all, of the CLI and script commands. The
purpose of showing these commands is to explain how you can use the commands to configure a
storage array. The presentation in this chapter does not explain all possible usage and syntax for the
commands. For complete definitions of the commands, including syntax, parameters, and usage notes,
see Script Commands.
This chapter contains examples of CLI and script command usage. The command syntax used in the
examples is for a host running a Microsoft Windows operating system. As part of the examples, the
complete C:\ prompt and DOS path for the commands are shown. Depending on your operating
system, the prompt and path construct varies.
For most commands, the syntax is the same for all Windows and Linux operating systems, as well as for a
script file. Windows operating systems, however, have an additional requirement when entering names in
a command. On Windows, you must precede each quotation mark that encloses the name with a
backslash (\) in addition to the other delimiters. For example, the following name is used in a command
that runs under Windows:
[\"Engineering\"]
For a Linux system when used in a script file, the name appears as:
["Engineering"]
Configuring A Storage Array
When you configure a storage array, you can maximize data availability by ensuring that data is quickly
accessible while maintaining the highest level of data protection possible. The speed at which a host can
access data is affected by the disk group RAID level and the segment size settings. Data protection is
determined by the RAID level, hardware redundancy (such as global hot spares), and software
redundancy (such as the Snapshot feature).
In general, you configure a storage array by defining the following entities:
•A disk group and associated RAID level
•The virtual disks
•Which hosts have access to the virtual disks
This section explains how to use the script commands to create a configuration from an array of
physical disks.
Determining What Is On Your Storage Array
Even when you create a configuration on a previously unconfigured storage array, you still need to
determine the hardware and software features that must be included with the storage array. When you
configure a storage array with an existing configuration, you must ensure that your new configuration
does not inadvertently alter the existing configuration, unless you are reconfiguring the entire storage
array. For example, to create a new disk group on unassigned physical disks, you must determine which
physical disks are available. The commands described in this section enable you to determine the
components and features in your storage array.
The show storageArray command returns the following general information about the components
and properties of the storage array:
•A detailed profile of the components and features in the storage array
•The battery age
•The default host type (which is the current host type)
•Other available host types
•The hot spare locations
•The identifiers for enabled features
•The logical and physical component profiles
•The time to which both RAID controller modules are set
•The RAID controller module that currently owns each virtual disk in the storage array
To return the most information about the storage array, run the show storageArray command with
the profile parameter. The following is an example of the complete CLI and script command:
SMcli 123.45.67.89 -c "show storageArray profile;"
This example identifies the storage array by the dummy IP address 123.45.67.89. You can also identify the
storage array by name.
The show storageArray profile command returns detailed information about the storage array. The
information is presented in several screens on a display. You might need to increase the size of your
display buffer to see all of the information. Because this information is so detailed, you might want to
save the output to a file. To save the output to a file, enter the command as shown in the following
example:
SMcli 123.45.67.89 -c "show storageArray profile;" -o folder\storageArrayprofile.txt
In this example, the name folder is the folder in which you choose to place the profile file, and
storageArrayprofile.txt is the name of the file. You can choose any folder and any file name.
CAUTION: When you write information to a file, the script engine does not check to determine if
the file name already exists. If you choose the name of a file that already exists, the script engine
writes over the information in the file without warning.
When you save the information to a file, you can use the information as a record of your configuration
and as an aid during recovery.
To return a brief list of the storage array features and components, use the summary parameter. The
command is similar to the following example:
SMcli 123.45.67.89 -c "show storageArray summary;"
The summary information is also returned as the first section of information when you use the profile
parameter.
The following show commands return information about the specific components of a storage array. The
information returned by each of these commands is the same as the information returned by the show storageArray profile command, but is constrained to the specific component.
NOTE: The following commands are not complete commands. For information about a command,
see the referenced section next to the command.
•show virtualDiskCopy sourceCandidates (Show Virtual Disk Copy Source Candidates)
•show virtualDiskCopy targetCandidates (Show Virtual Disk Copy Target Candidates)
•show virtualDisk performanceStat (Show Virtual Disk Performance Statistics)
For descriptions of the show commands, including examples of the information returned by each
command, see Script Commands. Other commands can help you learn about your storage array. To see
a list of the commands, see Commands Listed By Function. These commands are organized by the
storage array activities that the commands support. Examples include virtual disk commands, host
commands, enclosure commands, and others.
Saving A Configuration To A File
CAUTION: When you write information to a file, the script engine does not check to determine if
the file name already exists. If you choose the name of a file that already exists, the script engine
writes over the information in the file without warning.
After you have created a new configuration or if you want to copy an existing configuration for use on
other storage arrays, you can save the configuration to a file. To save the configuration, use the save storageArray configuration command. Saving the configuration creates a script file that you can
run on the command line. The following syntax is the general form of the command:
You can choose to save the entire configuration or specific configuration features. The command for
setting this parameter value looks like the following example:
SMcli 123.45.67.89 -c "save storageArray configuration file=\"folder\storageArrayconfig1.scr\";"
In this example, the name folder is the folder in which you choose to place the configuration file, and
storageArrayconfig1.scr is the name of the file. Choose any folder and any file name. MD Storage
Manager uses the file extension .scr when it creates the configuration file.
Using The Create Virtual Disk Command
The create virtualDisk command enables you to create new virtual disks in the storage array in
three ways:
•Create a new virtual disk while simultaneously creating a new disk group to which you assign the
physical disks.
•Create a new virtual disk while simultaneously creating a new disk group to which the MD Storage
Manager software assigns the physical disks.
•Create a new virtual disk in an existing disk group.
You must have unassigned physical disks in the disk group. You do not need to assign the entire capacity
of the disk group to a virtual disk.
Creating Virtual Disks With User-Assigned Physical Disks
When you create a new virtual disk and assign the physical disks to use, the MD Storage Manager
software creates a new disk group. The RAID controller module firmware assigns a disk group number to
the new disk group. The following syntax is the general form of the command:
create virtualDisk physicalDisks=(enclosureID0,slotID0 ... enclosureIDn,slotIDn) raidLevel=(0 | 1 | 5 | 6) userLabel="virtualDiskName" [capacity=virtualDiskCapacity] [owner=(0 | 1)] [segmentSize=segmentSizeValue] [enclosureLossProtect=(TRUE | FALSE)]
NOTE: The capacity, owner, segmentSize, and enclosureLossProtect parameters are
optional. You can use one or all of the optional parameters as needed to help define your
configuration. You do not, however, need to use any optional parameters.
The userLabel parameter is the name to give to the virtual disk. The virtual disk name can be any
combination of alphanumeric characters, hyphens, and underscores. The maximum length of the virtual
disk name is 30 characters. Spaces are not allowed. You must put quotation marks (" ") around the virtual
disk name.
The physicalDisks parameter is a list of the physical disks that you want to use for the disk group.
Enter the enclosure ID and slot ID of each physical disk that you want to use. Put parentheses around the
list. Separate the enclosure ID and slot ID of a physical disk by a comma. Separate each enclosure ID and
slot ID pair by a space. For example:
(0,0 0,1 0,2 0,3 0,4)
The capacity parameter defines the size of the virtual disk. You do not have to assign the entire capacity
of the physical disks to the virtual disk. You can later assign any unused space to another virtual disk.
The owner parameter defines the RAID controller module to which you want to assign the virtual disk. If
you do not specify a RAID controller module, the RAID controller module firmware determines the owner
of the virtual disk.
The segmentSize parameter is the same as described for the autoConfigure storageArray
command. See Using The Auto Configure Command.
The enclosureLossProtect parameter turns on or turns off enclosure loss protection for the disk
group. For a description of how enclosure loss protection works, see Enclosure Loss Protection.
Example Of Creating Virtual Disks With User-Assigned Physical Disks
NOTE: The capacity parameter returns an error if you specify a value greater than or equal to 10
without a space separating the numeric value and its unit of measure. (For example, 10GB returns
an error, but 10 GB does not return an error.)
The command in this example automatically creates a new disk group and a virtual disk with the name
Engineering_1. The disk group has a RAID level of 5 (RAID 5). The command uses three physical disks to
construct the disk group. The virtual disk created has a capacity of 20 GB. If each physical disk has a
capacity of 73 GB, the total capacity of the disk group is 219 GB.
Because only 20 GB are assigned to the virtual disk, 199 GB remain available for other virtual disks that
you can later add to this disk group. The segment size for each virtual disk is 64 KB. Hot spares have not
been created for this new disk group. You must create hot spares after running this command.
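A command of this shape might look like the following sketch (the enclosure,slot IDs are illustrative; substitute the physical disks you want to use):

create virtualDisk physicalDisks=(0,1 0,2 0,3) raidLevel=5 userLabel="Engineering_1" capacity=20GB segmentSize=64;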
Creating Virtual Disks With Software-Assigned Physical Disks
You can let the MD Storage Manager software assign the physical disks when you create the virtual disk.
To have the software assign the physical disks, only specify the number of physical disks to use. The MD
Storage Manager software then chooses the physical disks on which the virtual disk is created. The RAID
controller module firmware assigns a disk group number to the new disk group. The following syntax is
the general form for the command:
create virtualDisk physicalDiskCount=numberOfPhysicalDisks raidLevel=(0 | 1 | 5 | 6) userLabel="virtualDiskName" [physicalDiskType=SAS] [capacity=virtualDiskCapacity] [owner=(0 | 1)] [segmentSize=segmentSizeValue] [enclosureLossProtect=(TRUE | FALSE)]
NOTE: The physicalDiskType, capacity, owner, segmentSize, and enclosureLossProtect
parameters are optional. You can use one or all of the optional parameters as needed to help define
your configuration. You do not, however, need to use any optional parameters.
This command is similar to the previous create virtualDisk command, which allows the user to
assign the physical disks. This version of the command requires only the number and the type of physical
disks to use in the disk group. You do not need to enter a list of physical disks. All other parameters are
the same. Enclosure loss protection is performed differently when MD Storage Manager assigns the
physical disks as opposed to when a user assigns the physical disks. For an explanation of the difference,
see Enclosure Loss Protection.
Example Of Creating Virtual Disks With Software-Assigned Physical Disks
The command in this example creates the same virtual disk as the previous create virtualDisk
command. However, in this case the user does not know which physical disks are assigned to this disk
group.
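Such a command might look like the following sketch (the values mirror the earlier user-assigned example):

create virtualDisk physicalDiskCount=3 raidLevel=5 userLabel="Engineering_1" capacity=20GB segmentSize=64;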
Creating Virtual Disks In An Existing Disk Group
To add a new virtual disk to an existing disk group, use the following command:
create virtualDisk diskGroup=diskGroupNumber userLabel="virtualDiskName" [freeCapacityArea=freeCapacityIndexNumber] [capacity=virtualDiskCapacity] [owner=(0 | 1)] [segmentSize=segmentSizeValue]
NOTE: The freeCapacityArea, capacity, owner, and segmentSize parameters are optional.
You can use one or all optional parameters as needed to help define your configuration, though you
do not need to use any of them.
The diskGroup parameter is the number of the disk group in which you want to create a new virtual
disk. If you do not know the disk group numbers on the storage array, you can use the show
allVirtualDisks summary command. This command displays a list of the virtual disks and the disk
groups to which the virtual disks belong.
The userLabel parameter is the name you want to give to the virtual disk. The virtual disk name can be
any combination of alphanumeric characters, hyphens, and underscores. The maximum length of the
virtual disk name is 30 characters. You must enclose the virtual disk name with quotation marks (" ").
The freeCapacityArea parameter defines the free capacity area to use for the virtual disk. If a disk
group has several free capacity areas, you can use this parameter to identify which free capacity area to
use for virtual disk creation. You do not have to assign the entire capacity of the physical disks to the
virtual disk. Assign any unused space to another virtual disk at another time.
The userLabel, capacity, owner, and segmentSize parameters are the same as in the previous
versions of the create virtualDisk command.
Enclosure Loss Protection
The enclosureLossProtect parameter is a boolean switch that turns enclosure loss protection on or
off. To work properly, each physical disk in a virtual disk group must be in a separate enclosure. Enclosure
loss protection is set under the following conditions:
•You assign the physical disks.
•The RAID controller module assigns the physical disks.
The following table shows possible results for the enclosureLossProtect parameter. The results
depend on whether you assign the physical disks or the RAID controller module assigns the physical
disks.
enclosureLossProtect=TRUE
•You assign the physical disks: If you select more than one physical disk from any one enclosure, the storage array returns an error.
•The RAID controller module firmware assigns the physical disks: The storage array posts an error if the RAID controller module firmware cannot provide physical disks to ensure that the new disk group has enclosure loss protection.
enclosureLossProtect=FALSE
•You assign the physical disks: The storage array performs the operation, but the created disk group does not have enclosure loss protection.
•The RAID controller module firmware assigns the physical disks: The storage array performs the operation even if it means that the disk group might not have enclosure loss protection.
NOTE: The enclosureLossProtect parameter is not valid when creating virtual disks on existing disk groups.
Using The Auto Configure Command
The autoConfigure storageArray command creates the disk groups on a storage array, the virtual
disks in the disk groups, and the hot spares for the storage array. When you use the autoConfigure storageArray command, define the following parameters:
•Type of physical disks (Serial Attached SCSI [SAS])
•RAID level
•Number of physical disks in a disk group
•Number of disk groups
•Number of virtual disks in each disk group
•Number of hot spares
•Size of each segment on the physical disks
After defining these parameters, the MD Storage Manager automatically creates the disk groups, virtual
disks, and hot spares. The RAID controller modules assign disk group and virtual disk numbers as they are
created. After MD Storage Manager creates the initial configuration, you can use the set virtualDisk
command to define virtual disk labels.
Before running the autoConfigure storageArray command, run the show storageArray autoConfigure command. The show storageArray autoConfigure command returns a list of parameter values that MD Storage Manager uses to create a storage array. Change any of the parameter values by entering new values for the parameters when you run the autoConfigure storageArray command. If you are satisfied with the parameter values that the show storageArray autoConfigure command returns, run the autoConfigure storageArray command without new parameter values.
The following syntax is the general form of the autoConfigure storageArray command:
NOTE: All parameters are optional. You can use one or all of the parameters as needed to define
your configuration.
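A sketch of the general form, using the parameters described below (confirm the exact syntax against your firmware's CLI reference):
autoConfigure storageArray [physicalDiskType=SAS] [raidLevel=(0 | 1 | 5 | 6)] [diskGroupWidth=numberOfPhysicalDisks] [diskGroupCount=numberOfDiskGroups] [virtualDisksPerGroupCount=numberOfVirtualDisksPerGroup] [hotSpareCount=numberOfHotSpares] [segmentSize=segmentSizeValue]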
When you use the autoConfigure storageArray command without specifying the number of disk
groups, the firmware determines how many virtual disks and disk groups to create. The firmware creates
one disk group and one virtual disk up to the maximum number that the storage array can support. When
you specify the number of disk groups, the firmware creates only that number of disk groups. When you
create more than one disk group, all of the disk groups have the same number of physical disks and the
same number of virtual disks.
•The diskGroupWidth parameter defines the number of unassigned physical disks wanted for each
new disk group.
•The diskGroupCount parameter defines the number of new disk groups wanted in the storage array.
•The virtualDisksPerGroupCount parameter defines the number of virtual disks wanted in each
disk group.
•The hotSpareCount parameter defines the number of hot spares wanted in each disk group.
•The segmentSize parameter defines the amount of data in kilobytes that the RAID controller module
writes on a single physical disk in a virtual disk before writing data on the next physical disk. The
smallest units of storage are data blocks. Each data block stores 512 bytes of data. The size of a
segment determines how many data blocks that it contains. An 8-KB segment holds 16 data blocks. A
64-KB segment holds 128 data blocks.
Valid values for the segment size are 8, 16, 32, 64, 128, 256, and 512.
When you enter a value for the segment size, the value is checked against the supported values
provided by the RAID controller module at run time. If the value you enter is not valid, the RAID
controller module returns a list of valid values.
If the virtual disk is for a single user with large I/O requests (such as multimedia), performance is
maximized when a single I/O request can be serviced with a single data stripe. A data stripe is the
segment size multiplied by the number of physical disks in the disk group that are used for data storage.
In this environment, multiple physical disks are used for the same request, but each physical disk is
accessed only once.
For optimal performance in a multi-user database or file system storage environment, set the segment
size to minimize the number of physical disks needed to satisfy an I/O request. Using a single physical
disk for a single request leaves other physical disks available to simultaneously service other requests.
After you have finished creating the disk groups and virtual disks by using the autoConfigure storageArray command, you can further define the properties of the virtual disks in a configuration using the set virtualDisk command. (See Modifying Your Configuration.)
The command in this example creates a storage array configuration that uses SAS physical disks set to RAID level 5. Three disk groups are created. Each disk group consists of eight physical disks configured into four virtual disks. The storage array has two hot spares, and the segment size for each virtual disk is 8 KB.
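The configuration described above (SAS disks, RAID level 5, three disk groups of eight physical disks, four virtual disks per group, two hot spares, 8-KB segments) could be expressed as follows (illustrative; parameter names as described earlier in this section):
autoConfigure storageArray physicalDiskType=SAS raidLevel=5 diskGroupWidth=8 diskGroupCount=3 virtualDisksPerGroupCount=4 hotSpareCount=2 segmentSize=8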
Modifying Your Configuration
After creating your initial configuration with either the autoConfigure storageArray command or the create virtualDisk command, modify the properties of the configuration to ensure that it meets your requirements for data storage. Use the set commands to modify a storage array configuration. This section explains how to modify the following properties:
•Storage array password
•Simple Mail Transport Protocol (SMTP) and Simple Network Management Protocol (SNMP) alerts
•RAID controller module clocks
•Storage array host type
•Global hot spares
NOTE: Before modifying your configuration, save a copy of your current configuration to a file (see
Saving A Configuration To A File). If you have problems with your modifications, you can use the
information in the file to restore your previous configuration.
Setting The Storage Array Password
The set storageArray command enables you to define a password for a storage array. The following
syntax is the general form of the command:
set storageArray password="password"
The password parameter defines a password for the storage array. Passwords provide added security to
a storage array to reduce the possibility of implementing destructive commands.
CAUTION: Implementing destructive commands can cause serious damage, including data loss.
NOTE: CLI commands do not have interactive warnings for destructive commands.
Unless you define a password for the storage array, anyone can run all of the script commands. A
password protects the storage array from any command that the RAID controller modules consider
destructive. A destructive command is any command that can change the state of the storage array, such
as virtual disk creation, reset, delete, rename, or change. If you have more than one storage array in a
storage configuration, each array has a separate password. Passwords can have a maximum length of 30
characters. You must put quotation marks (" ") around the password. The following example shows how
to use the set storageArray command to define a password:
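For example (the password value is illustrative):
set storageArray password="1a2b3c4d5e"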
The storage array can be set up to send automatic e-mail alert messages to specified email addresses
when specific events occur. View the current alert configuration settings using the following command:
SMcli -d -i -s -w -v -S
By default, all alert configuration settings are None.
The following example shows how to set the mail server IP and the sender address configurations for
SMTP alerts:
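A sketch of such a command, assuming the SMcli -m (mail server) and -F (sender email address) options (addresses are illustrative):
SMcli -m 123.45.67.89 -F MyCompanySupport@MyCompany.com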
An example of a command to set the email alert destination and specify that only event information is to
be sent is:
SMcli -a email:MyCompanySupport@MyCompany.com
123.45.67.89 -I eventOnly
The following example shows how to set the SNMP trap alert configuration. In this example, the trap
destination is 123.45.67.891. The storage array is 123.45.67.892, and the community name is public.
SMcli -a trap:public, 123.45.67.891 123.45.67.892
Setting The RAID Controller Module Clocks
To synchronize the clocks on the RAID controller modules with the host, use the set storageArray
time command. Running this command helps ensure that event timestamps written by RAID controller
modules to the Major Event Log (MEL) match event timestamps written to the host log files. The RAID
controller modules remain available during synchronization. An example of the command is:
Setting The Storage Array Host Type
The set storageArray command enables you to define the default host type. The following syntax is
the general form of the command:
set storageArray defaultHostType=(hostTypeName | hostTypeIdentifier)
The defaultHostType parameter defines how the RAID controller modules communicate with the
operating system on undefined hosts connected to the storage array. This parameter defines the host
type only for storage array data I/O activities; it does not define the host type for the management
station. The operating system can be Windows or Linux. For example, if you set the defaultHostType
to Linux, the RAID controller module communicates with any undefined host if the undefined host is
running Linux. Typically, you need to change the host type only when you are setting up the storage
array. The only time you might need to use this parameter is if you need to change how the storage array
behaves relative to the hosts.
Before you can define the default host type, you need to determine what host types are connected to the
storage array. To return information about host types connected to the storage array, you can use the
show storageArray command with the defaultHostType parameter or hostTypeTable parameter.
This command returns a list of the host types with which the RAID controller modules can communicate;
it does not return a list of the hosts. The following examples show how to use the defaultHostType
parameter and the hostTypeTable parameter:
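Sketches of these commands, based on the parameters named above (confirm against your CLI reference):
show storageArray defaultHostType
show storageArray hostTypeTable
set storageArray defaultHostType=11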
The value 11 is the host type index value from the host type table.
Setting Modification Priority
Modification priority defines how much processing time is allocated for virtual disk modification
operations. Time allocated for virtual disk modification operations affects system performance. Increases
in virtual disk modification priority can reduce read/write performance. Operations affected by
modification priority include:
•Copyback
•Reconstruction
•Initialization
•Changing segment size
•Defragmentation of a disk group
•Adding free capacity to a disk group
•Changing the RAID level of a disk group
The lowest priority rate favors system performance, but the modification operation takes longer. The
highest priority rate favors the modification operation, but the system performance might be degraded.
The set virtualDisk command enables you to define the modification priority for a virtual disk. The
following syntax is the general form of the command:
set (allVirtualDisks | virtualDisk [virtualDiskName] |
virtualDisks [virtualDiskName1 ... virtualDiskNameN] |
virtualDisk <wwid> | accessVirtualDisk)
modificationPriority=(highest | high | medium | low | lowest)
The following example shows how to use this command to set the modification priority for virtual disks
named Engineering 1 and Engineering 2:
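Based on the general syntax above, the example likely takes this form (a sketch):
set virtualDisks ["Engineering 1" "Engineering 2"] modificationPriority=lowest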
The modification rate is set to lowest so that system performance is not significantly reduced by
modification operations.
Assigning Global Hot Spares
Hot spare physical disks can replace any failed physical disk in the storage array. The hot spare must be
the same type of physical disk as the physical disk that failed and must have capacity greater than or
equal to any physical disk that can fail. If a hot spare is smaller than a failed physical disk, the hot spare
cannot be used to rebuild the data from the failed physical disk. Hot spares are available only for RAID
levels 1 or 5.
You can assign or unassign global hot spares by using the set physicalDisk command. To use this
command, you must perform these steps:
1. Identify the location of the physical disks by enclosure ID and slot ID.
2. Set the hotSpare parameter to TRUE to enable the hot spare or FALSE to disable an existing hot spare.
The following syntax is the general form of the command:
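A sketch of the general form, matching the enclosure ID and slot ID conventions described below (confirm against your CLI reference):
set physicalDisk [enclosureID1,slotID1 ... enclosureIDn,slotIDn] hotSpare=(TRUE | FALSE)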
Enter the enclosure ID and slot ID of each physical disk that you want to use. You must put brackets ([ ])
around the list. Separate the enclosure ID and slot ID of a physical disk by a comma. Separate each
enclosure ID and slot ID pair by a space.
Selecting The Event Levels For Alert Notifications
The MD storage management software has four event levels: Critical, Informational, Warning, and Debug.
You can configure the MD storage management software to send alert notifications for all of these event
levels or only for certain event levels.
A background task called the persistent monitor runs independently of the MD storage management
software and monitors the occurrence of events on all of the managed storage arrays. The persistent
monitor is installed automatically with the MD storage management software. When an event occurs,
alert notifications in the form of emails and SNMP trap messages are sent to the destination addresses
that are specified in the Configure Alerts dialog. For more information about how to specify the
destination addresses, refer to the Configuring the Email and SNMP Alert Notification Settings online help
topic in the Enterprise Management Window (EMW).
When the persistent monitor starts for the first time, a properties file is created in the directory where the
MD storage management software files are located. The properties file can be configured to enable or
disable local logging in the Windows and UNIX operating systems. By default, local logging is enabled in
the properties file.
When local logging is enabled in the Windows operating system, the persistent monitor logs the event
information in the Windows Event Log file. When local logging is enabled in the UNIX operating system,
the persistent monitor logs the event information in the syslog file. The properties file can also be
configured to enable or disable remote syslog notification. You must restart the persistent monitor
service after configuring the properties file for the changes to take effect.
You can configure the MD storage management software to notify only the event levels that you specify.
For example, you can configure alert notifications only for Critical and Warning events.
1. Open the Configure Alerts dialog by performing one of these actions:
a. Select a storage array in the Devices tab in the EMW.
b. Select Edit → Configure Alerts.
The Configure Alerts dialog is displayed. Go to step 3.
c. Select Configure Alerts in the Setup tab in the EMW. Go to step 2.
2. Select the All storage arrays radio button, and click OK.
The Configure Alerts dialog is displayed.
3. Select the Filtering tab.
4. Select the check boxes next to the event levels for which the MD storage management software must send alert notifications.
NOTE: The configuration to send alert notifications for only certain event levels applies to all of the managed storage arrays in the MD storage management software.
5. Click OK.
Configuring Alert Notifications
You can use the Configure Alerts dialog box to set up an email alert notification in the event of an error
on the network. Alert emails are sent to the specified global mail server and sender email addresses in the
event of an error on the selected hosts of storage arrays. You can choose to be alerted to all problems or
individual problems.
1. Open the Configure Alerts dialog box by performing one of these actions:
a. Select a storage array in the Devices tab, and then select Edit → Configure Alerts.
The Configure Alerts dialog box appears. Go to step 4.
b. On the Setup tab, select Edit → Configure Alerts. Go to step 2.
2. Select one of the following radio buttons to specify an alert level:
a. All storage arrays – To send an alert email about events on all storage arrays. Click OK.
The Configure Alerts dialog box appears. Go to step 4.
b. An individual storage array – To send an alert email about events that occur on only a specified storage array. Click OK.
The Select Storage Array dialog box appears. Go to step 3.
3. Select the storage array for which you want to receive email alerts and click OK.
The Configure Alerts dialog box appears.
NOTE: If you do not know which storage array to select, click Blink to turn on the indicator lights of the storage array.
4. Fill in the information for the selected tab and click OK.
The dialog box closes, and the Enterprise Management Window appears. An email alert is sent to the
specified email address when an error occurs on the storage arrays or hosts that you selected.
5
Using The Snapshot Feature
The following types of virtual disk snapshot premium features are supported on the MD storage array:
•Snapshot Virtual Disks using multiple point-in-time (PiT) groups
•Snapshot Virtual Disks (Legacy) using a separate repository for each snapshot
NOTE: This section describes the Snapshot Virtual Disk using PiT groups. If you are using the
Snapshot Virtual Disk (Legacy) premium feature, see Using The Snapshot (Legacy) Feature.
A snapshot image is a logical image of the content of an associated base virtual disk created at a specific
point-in-time, often known as a restore point. This type of image is not directly readable or writable to a
host since the snapshot image is used to save data from the base virtual disk only. To allow the host to
access a copy of the data in a snapshot image, you must create a snapshot virtual disk. This snapshot
virtual disk contains its own repository, which is used to save subsequent modifications made by the host
application to the base virtual disk without affecting the referenced snapshot image.
Before Using Snapshot CLI Commands
There are two types of virtual disk snapshot premium features supported on your storage array.
Depending on your RAID controller firmware version, you may be using the legacy version of the
Snapshot feature. For more information, see Using The Snapshot (Legacy) Feature.
NOTE: Ensure that you know which type of snapshot premium feature you have activated on your
storage array.
For information on the differences between the two snapshot features, see the Administrator's Guide.
Snapshot Images And Groups
A snapshot image differs from a snapshot (legacy) in the following ways:
•A snapshot image uses one repository for all snapshot images associated with a base virtual disk,
improving performance when there are updates to the base virtual disk.
•A snapshot image only exists within a snapshot group. To make the snapshot image read/write
accessible by hosts, you must create a snapshot virtual disk.
For more information on the Snapshot feature, see the Administrator's Guide.
Only the following can be included in a snapshot image:
•Standard virtual disks
•Thin provisioned virtual disks
•Consistency groups
Snapshot Groups And Snapshot Consistency Groups
The Snapshot Virtual Disk premium feature supports two types of snapshot groups:
•Snapshot groups
•Consistency groups
Snapshot Groups
The purpose of a snapshot group is to create a sequence of snapshot images on a given base virtual disk
without impacting performance. You can set up a schedule for a snapshot group to automatically create
a snapshot image at a specific time in the future or on a regular basis.
The following rules apply when creating a snapshot group:
•Snapshot groups can be created with or without snapshot images.
•Each snapshot image is a member of only one snapshot group.
•Standard virtual disks and thin virtual disks are the only types of virtual disks that can contain a
snapshot group. Snapshot virtual disks cannot contain snapshot groups.
•The base virtual disk of a snapshot group can reside on either a disk group or a disk pool.
•Snapshot virtual disks and snapshot groups cannot exist on the same base virtual disk.
•A snapshot group uses a repository to save all data for the snapshot images contained in the group. A
snapshot image operation uses less disk space than a full physical copy because the data stored in the
repository is only the data that has changed since the latest snapshot image.
•A snapshot group is initially created with one repository virtual disk. The repository contains a small
amount of data, which increases over time with subsequent data updates. You can increase the size of
the repository by either increasing the capacity of the repository or adding virtual disks to the
repository.
Snapshot Consistency Groups
To perform the same snapshot image operations on multiple virtual disks, you can create a consistency
group containing the virtual disks. Any operation performed on the consistency group is performed
simultaneously on all of the virtual disks in that group, which creates consistent copies of data between
each virtual disk. Consistency groups are commonly used to create, schedule, or roll back snapshots of virtual disks.
Each virtual disk belonging to a consistency group is referred to as a member virtual disk. When you add a
virtual disk to a consistency group, the system automatically creates a new snapshot group that
corresponds to this member virtual disk. You can set up a schedule for a consistency group to
automatically create a snapshot image of each member virtual disk at a specific time and/or on a regular
basis.
This synchronized snapshot of all the virtual disks is especially suitable for applications that span multiple
virtual disks, such as a database application containing log files on one virtual disk and the database itself
on another.
The following rules apply when creating a consistency group:
•Consistency groups can be created initially with or without member virtual disks.
•Snapshot images can be created for a consistency group to enable consistent snapshot images
between all member virtual disks.
•Consistency groups can be rolled back.
•A virtual disk can belong to multiple consistency groups.
•Only standard virtual disks and thin virtual disks can be included in a consistency group.
•Snapshots created using the Snapshot Virtual Disk (Legacy) premium feature cannot be included in a
consistency group.
•A base virtual disk can reside on either a disk group or disk pool.
Understanding Snapshot Repositories
Repositories are system-created virtual disks used to hold write data for snapshots, snapshot groups, and consistency groups. When you create either type of group or a write-enabled snapshot virtual disk, an associated repository is automatically created. By default, one individual repository virtual disk is created for each snapshot group or snapshot image. You can create the overall repository automatically using the default settings, or you can manually create the repository by defining specific capacity settings.
A snapshot virtual disk allows the host access to a copy of the data contained in a snapshot image. A
snapshot image is not directly read or write accessible to the host and is used only to save data captured
from the base virtual disk.
Snapshot Consistency Group Repositories
A snapshot consistency group is made up of simultaneous snapshots of multiple virtual disks. Each virtual
disk that belongs to a consistency group is referred to as a member virtual disk. When you add a virtual
disk to a consistency group, the system automatically creates a new snapshot group that corresponds to
this member virtual disk. A consistency group repository is also created for each member virtual disk in a
consistency group in order to save data for all snapshot images in the group.
A consistency group snapshot image comprises multiple snapshot virtual disks. Its purpose is to provide
host access to a snapshot image that has been taken for each member virtual disk at the same moment
in time. A consistency group snapshot image is not directly read or write accessible to hosts; it is used
only to save the data captured from the base virtual disk. The consistency group snapshot virtual disk can
be designated as either read-only or read-write. Read-write consistency group snapshot virtual disks
require a repository for each member virtual disk in order to save any subsequent modifications made by
the host application to the base virtual disk without affecting the referenced snapshot image. Each
member repository is created when the consistency group snapshot virtual disk is created.
Consistency Groups And Remote Replication
Although a virtual disk can belong to multiple consistency groups, you must create separate consistency
groups for snapshot images and Remote Replication.
When a base virtual disk containing a consistency group is added to Remote Replication (non-legacy,
asynchronous), the repository will automatically purge the oldest snapshot image and set the auto-delete
limit to the maximum allowable snapshot limit for a consistency group.
Additionally, all member virtual disks belonging to both a snapshot consistency group and a Remote
Replication group must belong to the same Remote Replication group.
Creating Snapshot Images
Observe the following guideline before creating a snapshot image:
•If you attempt to create a snapshot image on a snapshot group that has reached its maximum number of snapshot images, the operation fails. The failBaseWrites and purgeSnapImages parameters used with the create snapGroup command allow you to choose to either fail the write attempt or automatically purge a specified number of older snapshot images.
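To create a snapshot image, use the create snapImage command; a sketch of its general form (confirm against your CLI reference):
create snapImage (snapGroup="snapGroupName" | snapGroups ("snapGroupName1" ... "snapGroupNamen"))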
where, snapGroupName or snapGroupNames is the name of the snapshot group (or groups) that you
specify to hold the snapshot image.
NOTE: Ensure that you have either existing repositories, enough free capacity nodes, or available
unconfigured capacity for the storage array on which you are creating the snapshot group
repository.
Deleting A Snapshot Image
Use the delete snapImage command to delete the oldest snapshot image from a snapshot group, or the delete cgSnapImage command to delete the oldest snapshot image from a consistency group.
After a snapshot image is deleted from a snapshot group:
•The snapshot image is deleted from the storage array.
•The repository’s reserve space is released for reuse within the snapshot group.
•All snapshot virtual disks associated with deleted snapshot image are disabled.
Use the following command to delete a snapshot image from a consistency group:
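A sketch of the general form (the optional parameters are assumptions to be confirmed against your CLI reference):
delete cgSnapImage consistencyGroup="consistencyGroupName" [(deleteCount=numberOfSnapImages | retainCount=numberOfSnapImages)] [ignoreSnapVirtualDisk=(TRUE | FALSE)]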
Creating A Consistency Group Snapshot Virtual Disk
Creating a snapshot virtual disk of a consistency group creates a viewable virtual disk of specific images in
the consistency group. A consistency group snapshot virtual disk can be made up of a single base virtual
disk or multiple base virtual disks from the consistency group.
This command also allows you to set the following attributes for the snapshot virtual disk:
•read-only
•repository full value
•automatic repository selection
Unique command syntax and naming rules apply. The name of a snapshot image has two parts separated
by a colon (:):
•identifier of the snapshot group
•identifier of the snapshot image
If you do not specify the repositoryVirtualDiskType or readOnly parameters, the repositories for
the consistency group snapshot virtual disk will be selected automatically. If the disk group or disk pool
where the base virtual disk resides does not have enough space, the command will fail.
Create a read/write consistency group snapshot virtual disk on a snapshot consistency group named
snapCG1 with these three members cgm1, cgm2, and cgm3. The repository virtual disks already exist and
are selected in this command.
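A sketch of such a command (the userLabel and repository virtual disk names are illustrative):
create cgSnapVirtualDisk userLabel="cgSnapVirtualDisk1" cgSnapImageID="snapCG1:newest" members=(cgm1:repos_0010 cgm2:repos_0011 cgm3:repos_0012)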
Note the colon (:) in the name of the snapshot image to be included in the consistency group snapshot
virtual disk. The colon is a delimiter that separates the name of the snapshot virtual disk from a particular
snapshot image that you might want to use. Any of the following three options can be used following the
colon:
•An integer value that is the actual sequence number of the snapshot image.
•NEWEST specifies the latest consistency group snapshot image.
•OLDEST specifies the earliest snapshot image created.
The use of the colon following the names of the members of the snapshot consistency group defines the mapping between the member and a repository virtual disk. For example, cgm1:repos_0010 maps member cgm1 to repository virtual disk repos_0010.
Create a read-only consistency group snapshot virtual disk on a snapshot consistency group named
snapCG1 with members cgm1, cgm2, and cgm3 using the following command:
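A sketch of such a command (the userLabel is illustrative; read-only snapshot virtual disks do not require member repositories):
create cgSnapVirtualDisk userLabel="cgSnapVirtualDisk2" cgSnapImageID="snapCG1:oldest" readOnly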
Create a consistency group snapshot virtual disk that has a repository full limit set to 60 percent on a
snapshot consistency group named snapCG1 with these three members cgm1, cgm2, and cgm3, with
the following command:
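A sketch of such a command (the userLabel and repository names are illustrative; the repositoryFullLimit parameter name is an assumption to be confirmed against your CLI reference):
create cgSnapVirtualDisk userLabel="cgSnapVirtualDisk3" cgSnapImageID="snapCG1:newest" members=(cgm1:repos_1000 cgm2:repos_1001 cgm3:repos_1002) repositoryFullLimit=60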
Create a read/write consistency group snapshot virtual disk with automatic repository selection on a
snapshot consistency group named snapCG1 with members cgm1, cgm2, and cgm3, with the following
command:
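Because the repositories are selected automatically when the repositoryVirtualDiskType and readOnly parameters are omitted (as noted earlier), such a command could simply leave them out (the userLabel is illustrative):
create cgSnapVirtualDisk userLabel="cgSnapVirtualDisk4" cgSnapImageID="snapCG1:newest"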
To create a new snapshot image for each base virtual disk in a snapshot consistency group, use the create cgSnapImage command. The command suspends all pending I/O operations to each base virtual disk that is a member of the consistency group before creating the snapshot images. If the snapshot images cannot be created successfully for all of the consistency group members, the command fails and no new snapshot images are created.
Although all members of a snapshot consistency group normally contain the same number of snapshot images, adding a new member to a snapshot consistency group with this command creates a new member lacking previously created snapshot images. This is not an indication of an error condition.
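A sketch of the general form:
create cgSnapImage consistencyGroup="consistencyGroupName"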
where, consistencyGroup is the name of the consistency group for which you are creating snapshot
images.
Deleting A Snapshot Virtual Disk Or A Consistency Group Snapshot Virtual
Disk
You can use the following command to delete a snapshot virtual disk or a consistency group snapshot
virtual disk. Optionally, you can also delete the repository members.
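A sketch of the general form (confirm against your CLI reference):
delete snapVirtualDisk ["snapVirtualDiskName"] [deleteRepositoryMembers=(TRUE | FALSE)]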
where, snapVirtualDiskName is the snapshot virtual disk that you want to delete. The deleteRepositoryMembers=FALSE parameter (the default) preserves the member virtual disks. Setting the parameter to TRUE deletes the member virtual disks.
Deleting A Consistency Group Snapshot Image
You can delete the snapshot images in a consistency group. However, when you delete a consistency
group snapshot image that is associated with a consistency group snapshot virtual disk, the
corresponding snapshot virtual disk member in the consistency group snapshot virtual disk is transitioned
to a Stopped state. Being in this state means that the snapshot virtual disk member no longer has a
relationship to the snapshot group of the deleted snapshot image. A snapshot virtual disk member in a
Stopped state maintains its relationship to its consistency group snapshot virtual disk.
When a snapshot image(s) is deleted from a consistency group:
•The snapshot image is deleted from the storage array.
•The repository’s reserve space is released for reuse within the consistency group.
•Any member virtual disk associated with the deleted snapshot image(s) is moved to a Stopped state.
•The member snapshot virtual disks associated with the deleted snapshot image(s) are deleted.
To delete the snapshot images from a consistency group named all_data_1, use the following
command:
delete cgSnapImage consistencyGroup="all_data_1"
The command succeeds only if every snapshot image can be deleted; if any deletion fails, the command
fails and no snapshot images are deleted.
Scheduling Snapshot Images
Using the enableSchedule and schedule parameters with virtual disk and consistency group
commands, you can schedule regular snapshot images to recover files or schedule regular backups. The
schedule can be set when you initially create a snapshot group or consistency group, or you can add it
later to an existing snapshot group or consistency group.
You can create a schedule that runs daily or weekly in which you select specific days of the week (Sunday
through Saturday).
An example of the set snapGroup command using the schedule parameters:
set snapGroup ["snapGroupName"] enableSchedule=(TRUE | FALSE)
schedule=(immediate | snapshotSchedule)
Valid values for the schedule parameter are immediate, startDate, scheduleDay,
startTime, scheduleInterval, endDate, noEndDate, timesPerDay, and timeZone.
For more information, see:
•Set Snapshot Group Attributes
•Create Snapshot Group
•Create Consistency Group
Starting, Stopping And Resuming A Snapshot Rollback
The command line interface allows you to:
•start a rollback operation from multiple snapshot images concurrently
•stop a rollback operation
•resume a rollback operation
When rolling back, the host has immediate access to the newly rolled-back base virtual disk, but the
existing base virtual disk does not allow the host read-write access once the rollback is initiated.
NOTE: Create a snapshot of the base virtual disk just prior to initiating the rollback to preserve the
pre-rollback base virtual disk for recovery purposes.
When you start a rollback operation for a set of snapshot images, the content of the base virtual disk
changes immediately to match the point-in-time content of the selected snapshot image virtual disk. The
base virtual disk immediately becomes available for read/write requests after the rollback operation has
successfully completed.
To start a rollback for a snapshot group named snapGroup1:
start snapImage ["snapGroup1"] rollback
To stop a rollback operation for a specific snapshot image, identified by its sequence number (for
example, 12345), in a snapshot group named snapGroup1:
stop snapImage ["snapGroup1:12345"] rollback;
To stop a rollback operation for the most recent snapshot image in a snapshot group that has the name
snapGroup1:
stop snapImage ["snapGroup1:newest"] rollback;
The repository virtual disk associated with the snapshot image continues to track any new changes
between the base virtual disk and the snapshot image virtual disk that occur after the rollback operation is
completed.
To resume a rollback operation for the same snapshot image and snapshot group:
resume snapImage ["snapGroup1:12345"] rollback;
Creating A Snapshot Group
A snapshot group is a sequence of point-in-time images of a single associated base virtual disk. A
snapshot group uses a repository to save data for all snapshot images contained in the group. The
repository is created at the same time the snapshot group is created.
Guidelines when creating a snapshot group:
•When a base virtual disk that contains a snapshot group is added to an asynchronous remote
replication group, the system automatically changes the repository full policy to automatically purge
the oldest snapshot image and sets the auto-delete limit to the maximum allowable snapshot limit for
a snapshot group.
•If the base virtual disk resides on a standard disk group, the repository members for any associated
snapshot group can reside on either a standard disk group or a disk pool. If a base virtual disk resides
on a disk pool, all repository members for any associated snapshot group must reside on the same
disk pool as the base virtual disk.
•You cannot create a snapshot group on a failed virtual disk.
•If you attempt to create a snapshot image, that snapshot image creation operation might remain in a
Pending state because of the following conditions:
– The base virtual disk that contains this snapshot image is a member of an asynchronous remote
replication group.
– The base virtual disk is currently in a synchronizing operation. The snapshot image creation
completes as soon as the synchronization operation is complete.
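A snapshot group can also be deleted, optionally together with its repository members; a sketch of that command, assuming the deleteRepositoryMembers parameter takes the same form as for consistency groups:

```
delete snapGroup ["snapGroupName"] [deleteRepositoryMembers=(TRUE | FALSE)];
```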
In the example, the repository virtual disks were preserved. By default, all member virtual disks in the
repository are retained as unused, unmapped standard virtual disks. Setting the
deleteRepositoryMembers parameter to TRUE deletes the repository virtual disks.
Reviving A Snapshot Group
If a snapshot group is in a Failed state, it can be forced into an Optimal state. Running this command
on a snapshot group that is not in a Failed state returns an error message and the command does not complete.
revive snapGroup ["snapGroupName"];
where snapGroupName is the name of the snapshot group you want to force into an Optimal state.
Creating A Consistency Group
A consistency group contains simultaneous snapshots of multiple virtual disks to ensure consistent
copies of a group of virtual disks. When you add a virtual disk to a consistency group, the system
automatically creates a new snapshot group that corresponds to this member virtual disk.
The following command creates a new, empty consistency group. You must add the snapshot groups
using the set consistencyGroup addCGMember command.
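A sketch of the create command, using the parameter names described below; the bracketed parameters are assumed to be optional:

```
create consistencyGroup userLabel="consistencyGroupName"
[repositoryFullPolicy=(failBaseWrites | purgeSnapImages)
repositoryFullLimit=percentValue autoDeleteLimit=numberOfSnapImages];
```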
where userLabel is the name of the new consistency group you want to create.
repositoryFullPolicy controls how you want snapshot processing to continue if the snapshot
repository virtual disks are full. You can choose to fail writes to the base virtual disk (failBaseWrites) or
delete the snapshot images (purgeSnapImages). The default action is to delete the snapshot images.
The repositoryFullLimit parameter controls the percentage of repository capacity at which you
receive a warning that the snapshot repository virtual disk is nearing full. Use integer values.
autoDeleteLimit configures the automatic deletion threshold of snapshot images to keep the total
number of snapshot images in the snapshot group at or below a designated level. When this option is
enabled, any time a new snapshot image is created in the snapshot group, the system automatically
deletes the oldest snapshot image in the group to comply with the limit value. This action frees repository
capacity so it can be used to satisfy ongoing copy-on-write requirements for the remaining snapshot
images.
For other values, see Create Consistency Group.
Deleting A Consistency Group
This command deletes a snapshot consistency group. You can either delete both the consistency group
and the repository virtual disks contained by the consistency group, or you can delete only the
consistency group and leave the repository virtual disks contained in the consistency group intact.
To delete the consistency group and the repository virtual disks, set the deleteRepositoryMembers
parameter to TRUE. To retain the repository virtual disks, set this parameter to FALSE. The default setting
is FALSE.
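A sketch of the delete command, assuming the parameter described above:

```
delete consistencyGroup ["consistencyGroupName"] [deleteRepositoryMembers=(TRUE | FALSE)];
```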
Once completed, this command:
•Deletes all existing snapshot images and snapshot virtual disks from the consistency group.
•Deletes all the associated snapshot images and snapshot virtual disks for each member virtual disk in
the consistency group.
•Deletes all associated repositories for each member virtual disk in the consistency group (if selected).
Setting Consistency Group Attributes
To change or set the properties for a snapshot consistency group, use the set consistencyGroup
command:
set consistencyGroup ["consistencyGroupName"] [userLabel="consistencyGroupName" |
repositoryFullPolicy=(failBaseWrites | purgeSnapImages) |
repositoryFullLimit=percentValue | autoDeleteLimit=numberOfSnapImages |
rollbackPriority=(lowest | low | medium | high | highest)]
The repositoryFullPolicy parameter determines how you want snapshot processing to continue if
the snapshot repository virtual disks are full. You can choose to fail writes to the base virtual disk
(failBaseWrites) or delete (purgeSnapImages) the snapshot images. The default action is to delete
the images.
The repositoryFullLimit controls the percentage of repository capacity at which you receive a
warning that the snapshot repository virtual disk is nearing full. Use integer values. autoDeleteLimit
configures the automatic deletion thresholds of snapshot images to keep the total number of snapshot
images in the snapshot group at or below a designated level. When this option is enabled, then any time a
new snapshot image is created in the snapshot group, the system automatically deletes the oldest
snapshot image in the group to comply with the limit value. This action frees repository capacity so it can
be used to satisfy ongoing copy-on-write requirements for the remaining snapshot images.
The rollbackPriority parameter sets the priority level for a consistency group rollback in an
operational storage array.
For example, use this command on a consistency group named CGGroup_1:
set consistencyGroup ["CGGroup_1"] autoDeleteLimit=6 rollbackPriority=low;
Valid values are highest, high, medium, low, or lowest. The rollback priority defines the amount of
system resources that should be allocated to the rollback operation at the expense of system
performance. A value of high indicates that the rollback operation is prioritized over all other host I/O.
Lower values indicate that the rollback operation should be performed with minimal impact to host I/O.
autoDeleteLimit allows you to configure each snapshot group to perform automatic deletion of its
snapshot images to keep the total number of snapshot images in the snapshot group at or below a
maximum number of images. When the number of snapshot images in the snapshot group is at the
maximum limit, the autoDeleteLimit parameter automatically deletes snapshot images whenever a
new snapshot image is created in the snapshot group. The oldest snapshot images in the snapshot group
are deleted until the maximum number of images defined with the parameter is met. Deleting snapshot
images frees
repository capacity so it can be used to satisfy ongoing copy-on-write requirements for the remaining
snapshot images.
Adding A Member Virtual Disk To A Consistency Group
To add a new base virtual disk to an existing consistency group, specify an existing repository virtual disk
for the new consistency group member or create a new repository virtual disk. When creating a new
repository virtual disk, identify an existing disk group or disk pool in which to create the repository virtual
disk.
To add a new base virtual disk to a consistency group with an existing virtual disk:
set consistencyGroup ["consistencyGroupName"]
addCGMemberVirtualDisk="baseVirtualDiskName" repositoryVirtualDisk="repos_XXXX"
To add a new base virtual disk to a consistency group and create a new repository virtual disk using a disk
group:
set consistencyGroup ["consistencyGroupName"]
addCGMemberVirtualDisk="baseVirtualDiskName"
repositoryVirtualDisk=("diskGroupName" capacity=capacityValue(KB|MB|GB|TB|
bytes))
To add a new base virtual disk to a consistency group and create a new repository virtual disk using a disk
pool:
set consistencyGroup ["consistencyGroupName"]
addCGMemberVirtualDisk="baseVirtualDiskName"
repositoryVirtualDisk=("diskPoolName" capacity=capacityValue(KB|MB|GB|TB|bytes))
Restrictions
•The Snapshot premium feature must be enabled on the storage array.
•To add a new member virtual disk, the consistency group must contain fewer than the maximum
number of virtual disks allowed in your configuration.
•If the base virtual disk resides on a standard disk group, the repository members for any associated
consistency group can reside on either a standard disk group or a disk pool. If a base virtual disk
resides on a disk pool, the repository members for any associated consistency group must reside on
the same disk pool as the base virtual disk.
•You cannot add a virtual disk to a consistency group that is in a Failed state.
Removing A Member Virtual Disk From A Consistency Group
When removing a member virtual disk from an existing snapshot consistency group, you can also delete
the repository virtual disk members from the consistency group, if desired.
The remove command syntax is shown below:
set consistencyGroup ["consistencyGroupName"]
removeCGMemberVirtualDisk="memberVirtualDiskName"
[deleteRepositoryMembers=(TRUE | FALSE)]
The removeCGMemberVirtualDisk parameter is the name of the member virtual disk that you want to
remove; deleteRepositoryMembers determines whether the command removes all the repository
members from the consistency group.
To remove a virtual disk named payroll_backup from a consistency group named CGGroup_1, but
preserve the repository virtual disks:
set consistencyGroup ["CGGroup_1"] removeCGMemberVirtualDisk="payroll_backup"
deleteRepositoryMembers=FALSE;
To remove payroll_backup from CGGroup_1 and also delete all repository virtual disks:
set consistencyGroup ["CGGroup_1"] removeCGMemberVirtualDisk="payroll_backup"
deleteRepositoryMembers=TRUE;
Or omit the deleteRepositoryMembers parameter, which invokes the default value of preserving
repository virtual disks:
set consistencyGroup ["CGGroup_1"] removeCGMemberVirtualDisk="payroll_backup";
Changing The Pre-read Consistency Check Setting Of An Overall Repository
Virtual Disk
Use the Pre-Read Consistency Check option to define a storage array's capability to pre-read the
consistency information of an overall repository virtual disk and determine whether the data of that
overall repository virtual disk is consistent. An overall repository virtual disk that has this feature enabled
returns read errors if the RAID controller module firmware determines that the data is inconsistent.
You can enable this option for overall repository virtual disks that contain consistency information. RAID
Level 1, RAID Level 5, and RAID Level 6 maintain redundancy information.
You can change the Pre-Read Consistency Check for an overall repository for the following storage
objects:
•Snapshot group
•Snapshot virtual disk
•Consistency group member virtual disk
•Replicated Pair
This command defines the properties for a virtual disk. You can use most parameters to define properties
for one or more virtual disks. You can also use some parameters to define properties for only one virtual
disk. The syntax definitions are separated to show which parameters apply to several virtual disks and
which apply to only one virtual disk.
NOTE: In configurations where disk groups consist of more than 32 virtual disks, the operation can
result in host I/O errors or internal RAID controller module reboots due to the expiration of the
timeout period before the operation completes. If you experience host I/O errors or internal RAID
controller module reboots, quiesce the host I/O and try the operation again.
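The parameter itself is not named in this section; assuming it follows the set virtualDisk form, usage might look like the following sketch (the parameter name preReadRedundancyCheck and the repository name repos_0010 are assumptions):

```
set virtualDisk ["repos_0010"] preReadRedundancyCheck=(TRUE | FALSE);
```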
This setting turns preread consistency checking on or off. Turning on preread consistency
checking verifies the consistency of RAID redundancy data for the stripes containing the read data.
Preread consistency checking is performed on read operations only. To turn on preread consistency
checking, set this parameter to TRUE. To turn off preread consistency checking, set this parameter to
FALSE.
NOTE: Do not use this parameter on non-redundant virtual disks, such as RAID 0 virtual disks.
Guidelines while using the Pre-Read Consistency Check option:
•Changing the Pre-Read Consistency Check setting modifies the setting only for the overall repository
that you selected.
•The Pre-Read Consistency Check setting is applied to all individual repository virtual disks contained
within the overall repository.
•If an overall repository virtual disk that is configured with pre-read is migrated to a RAID level that
does not maintain consistency information, the metadata of the overall repository virtual disk
continues to show that pre-read is enabled. However, reads to that overall repository virtual disk
ignore consistency pre-read. If the virtual disk is subsequently migrated back to a RAID level that
supports consistency, the option becomes available again.
NOTE: Enabling the option on overall repository virtual disks without consistency does not affect
the virtual disk. However, the attribute is retained for that overall repository virtual disk if it is ever
changed to one with consistency information.
Setting Snapshot Virtual Disk Repository Virtual Disk Capacity
To increase or decrease the capacity of a snapshot virtual disk repository virtual disk, use the set
snapVirtualDisk command.
A snapshot repository virtual disk is an expandable virtual disk that is structured as a concatenated
collection of up to 16 standard virtual disk entities. Initially, an expandable repository virtual disk has only
a single element. The capacity of the expandable repository virtual disk is exactly that of the single
element. You can increase the capacity of an expandable repository virtual disk by attaching additional
standard virtual disks to it. The composite expandable repository virtual disk capacity then becomes the
sum of the capacities of all of the concatenated standard virtual disks.
A snapshot group repository virtual disk must satisfy a minimum capacity requirement that is the sum of
the following:
•32 MB to support fixed overhead for the snapshot group and for copy-on-write processing.
•Capacity for rollback processing, which is 1/5000th of the capacity of the base virtual disk.
If you are creating a new repository virtual disk when you run this command, you must enter the name of
a disk group or disk pool in which you want to create the repository virtual disk. Optionally, you can also
define the capacity of the repository virtual disk. The following capacity values are supported:
•A percentage (integer value) representing an amount of the base virtual disk capacity.
•A decimal fraction representing a percentage of the base virtual disk capacity.
•A specific size for the repository virtual disk, in units of bytes, KB, MB, GB, or TB.
To increase capacity for a snapshot virtual disk, use the increaseRepositoryCapacity and
capacity parameters as shown below:
set snapVirtualDisk ["snapVirtualDiskName"] increaseRepositoryCapacity
(repositoryVirtualDisks="repos_xxxx" |
repositoryVirtualDisks=(diskGroupName [capacity=capacityValue]) |
repositoryVirtualDisks=(diskPoolName [capacity=capacityValue]))
To decrease capacity, replace increaseRepositoryCapacity with decreaseRepositoryCapacity:
set snapVirtualDisk ["snapVirtualDiskName"] decreaseRepositoryCapacity
count=numberOfVirtualDisks
Setting Snapshot Group Repository Virtual Disk Capacity
To increase or decrease capacity of a snapshot group repository virtual disk, use the same basic
command syntax as shown previously for the virtual disk repository, except use the set snapGroup
command and supply the snapshot group name:
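For example, a sketch that mirrors the snapshot virtual disk syntax above, with a hypothetical group name and repository virtual disk name:

```
set snapGroup ["snapGroup1"] increaseRepositoryCapacity
repositoryVirtualDisks=("repos_0020");
```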
For more information about expanding storage objects, see the Administrator's Guide.
Reviving Disk Groups, Physical Disks, Snapshot Groups, And Snapshot Virtual
Disks
The revive command forces failed physical disks, disk groups, snapshot groups and snapshot virtual
disks into an Optimal state. However, this command should only be performed by qualified storage
administrators.
CAUTION: Possible loss of data access—Correct use of this command depends on the data
configuration on the physical disks in the disk group. Do not attempt to revive a physical disk or
disk group unless you are explicitly directed by your Technical Support representative.
To revive a physical disk:
revive physicalDisk [enclosureID,drawerID,slotID]
For MD-series dense storage arrays, all three location attributes (enclosureID, drawerID and slotID) are
required. For non-dense storage arrays, only enclosureID and slotID are required. Valid enclosureID
values are 0 to 99, drawerID values are 0 to 4 and slotID values are 0 to 31.
To revive a disk group:
revive diskGroup [diskGroupName]
To revive a snapshot virtual disk, the snapshot virtual disk must be a:
•standalone snapshot virtual disk
•snapshot virtual disk that is a member of a consistency group
If the snapshot virtual disk is not in a Failed state, the firmware displays an error message and does not
run this command.
revive snapVirtualDisk ["snapVirtualDiskName"]
NOTE: Do not use this command for a snapshot virtual disk that is used in an online virtual disk copy.
To revive a snapshot group:
revive snapGroup ["snapGroupName"]
6
Using The Snapshot (Legacy) Feature
The following types of virtual disk snapshot premium features are supported on the MD storage array:
•Snapshot Virtual Disks using multiple point-in-time (PiT) groups
•Snapshot Virtual Disks (Legacy) using a separate repository for each snapshot
NOTE: This section describes the Snapshot Virtual Disk (legacy) premium feature. If you are using
the Snapshot Virtual Disk using PiT groups, see Using The Snapshot Feature.
This chapter describes how the Snapshot (legacy) feature works, lists the snapshot script commands, and
explains how to use the commands to create snapshot virtual disks. Additional information about the
Snapshot (legacy) feature and related definitions is available in the online help, the Deployment Guide,
the MD Storage Manager online help, and the Owner’s Manual.
The Snapshot (legacy) feature creates a snapshot virtual disk that you can use as a backup of your data.
A snapshot virtual disk is a logical point-in-time image of a standard virtual disk. Because it is not a
physical copy, a snapshot virtual disk is created more quickly than a physical copy and requires less
physical disk space. Typically, you create a snapshot virtual disk so that an application, such as a backup
application, can access the snapshot virtual disk. The application reads the data while the source virtual
disk remains online and user accessible. You can also create several snapshot virtual disks of a source
virtual disk and write data to the snapshot virtual disks to perform testing and analysis.
NOTE: If you ordered Premium Features for the Snapshot Virtual Disks, you would have received a
Premium Features Activation card shipped in the same box as your Dell PowerVault MD storage
array. Follow the directions on the card to obtain a key file and to enable the feature. For more
information, see Premium Feature — Snapshot Virtual Disks in the Owner’s Manual.
Snapshot virtual disks allow you to perform the following tasks:
•Create a complete image of the data on a source virtual disk at a particular point in time.
•Use only a small amount of disk space.
•Provide quick, frequent, nondisruptive backups; or test new versions of a database system without
affecting actual data.
•Provide for snapshot virtual disks to be read, written, and copied.
•Use the same availability characteristics of the source virtual disk (such as redundant array of
independent disks (RAID) protection and redundant path failover).
•Map the snapshot virtual disk and make it accessible to any host on a storage area network. You can
make snapshot data available to secondary hosts for read and write access by mapping the snapshot
to the hosts.
•Create up to four snapshots per virtual disk.
NOTE: The maximum number of snapshot virtual disks is one-half of the total number of virtual
disks supported by the RAID controller module.
•Increase the capacity of a snapshot virtual disk.
The following table lists the components that comprise a snapshot virtual disk and briefly describes what
they do.
Table 9. Snapshot Virtual Disk Components
Source virtual disk: Standard virtual disk from which the snapshot is created
Snapshot virtual disk: Point-in-time image of a standard virtual disk
Snapshot repository virtual disk: Virtual disk that contains snapshot metadata and copy-on-write data for
a particular snapshot virtual disk
The following table lists the snapshot virtual disk commands and brief descriptions of what the
commands do.
Table 10. Snapshot Virtual Disk Commands
create snapshotVirtualDisk: Creates a snapshot virtual disk.
re-create snapshot: Starts a fresh copy-on-write operation by using an existing snapshot virtual disk.
set (snapshotVirtualDisk): Defines the properties for a snapshot virtual disk and enables you to rename a
snapshot virtual disk.
stop snapshot: Stops a copy-on-write operation.
Using Host Servers To Create An Initial Snapshot Virtual
Disk
CAUTION: Before using the Snapshot Virtual Disks Premium Feature in a Microsoft Windows
clustered configuration, you must first map the snapshot virtual disk to the cluster node that
owns the source virtual disk. This ensures that the cluster nodes correctly recognize the snapshot
virtual disk.
If you map the snapshot virtual disk to the node that does not own the source virtual disk before the
snapshot enabling process is completed, the operating system may fail to correctly identify the
snapshot virtual disk. This can result in data loss on the source virtual disk or an inaccessible
snapshot.
NOTE: You can create concurrent snapshots of a source virtual disk on both the source disk group
and on another disk group.
Before creating a Snapshot Virtual Disk, note the following:
•The following types of virtual disks are not valid source virtual disks: snapshot repository virtual disks,
snapshot virtual disks, and target virtual disks that are participating in a virtual disk copy.
•You cannot create a snapshot of a virtual disk that contains unreadable sectors.
•You must satisfy the requirements of your host operating system for creating snapshot virtual disks.
Failure to meet the requirements of your host operating system results in an inaccurate point-in-time
image of the source virtual disk or the target virtual disk in a virtual disk copy.
Creating A Snapshot Virtual Disk
The create snapshotVirtualDisk command provides three methods for defining the physical disks
for your snapshot repository virtual disk:
•Define each physical disk for the snapshot repository virtual disk by enclosure ID and slot ID.
•Define a disk group in which the snapshot repository virtual disk resides. Optionally define the
capacity of the repository virtual disk.
•Define the number of physical disks, but not specific physical disks, for the repository virtual disk.
When using the create snapshotVirtualDisk command to create a snapshot virtual disk, the
standard virtual disk name for the source virtual disk is the minimum information required. When you
provide only the standard virtual disk name, the storage management software provides default values for
the other required property parameters for a snapshot virtual disk.
NOTE: In some cases, depending on the host operating system and any virtual disk manager
software in use, the software prevents you from mapping the same host to both a source virtual disk
and its associated snapshot virtual disk.
An error message appears in the command line when the utility cannot distinguish between the
following:
•Source virtual disk and snapshot virtual disk (for example, if the snapshot virtual disk has been
removed)
•Standard virtual disk and virtual disk copy (for example, if the virtual disk copy has been removed)
If you are running a Linux operating system, run the hot_add utility to register the snapshot virtual disk
with the host operating system.
NOTE: The hot_add utility is not available for Windows.
Enabling The Snapshot Virtual Disk Feature
The first step in creating a snapshot virtual disk is to make sure the feature is enabled on the storage
array. You need a feature key to enable the feature. The command for enabling the feature key file is:
enable storageArray feature file="filename"
where, the file parameter is the complete file path and file name of a valid feature key file. Enclose the
file path and file name in quotation marks (" "). Valid file names for feature key files usually end with a
.key extension.
Creating A Snapshot Virtual Disk With User-Assigned Physical Disks
Creating a snapshot virtual disk by assigning the physical disks allows you to choose from the available
physical disks when defining your storage array configuration. When you choose the physical disks for
your snapshot virtual disk, you automatically create a new disk group. You can specify which physical
disks to use and the RAID level for the new disk group.
Preparing Host Servers To Create An Initial Snapshot Virtual Disk
CAUTION: Before you create a new point-in-time image of a source virtual disk, stop any data
access (I/O) activity or suspend data transfer to the source virtual disk to ensure that you capture
an accurate point-in-time image of the source virtual disk. Close all applications, including
Windows Internet Explorer, to make sure all I/O activity has stopped.
NOTE: Removing the drive letter of the associated virtual disk(s) in Windows or unmounting the
virtual drive in Linux helps to guarantee a stable copy of the drive for the Snapshot.
Before creating a snapshot virtual disk, the server has to be in the proper state. To ensure that the host
server is properly prepared to create a snapshot virtual disk, you can either use an application to carry out
this task, or you can perform the following steps:
1. Stop all I/O activity to the source.
2. Using your Windows system, flush the cache to the source. At the host prompt, type SMrepassist
-f <filename-identifier> and press <Enter>.
See "SMrepassist Utility" in the Owner’s Manual for more information.
3. Remove the drive letter(s) of the source in Windows or unmount the virtual drive(s) in Linux to help
guarantee a stable copy of the drive for the Snapshot. If this is not done, the snapshot operation
reports that it has completed successfully, but the snapshot data is not updated properly.
NOTE: Verify that the virtual disk has a status of Optimal or Disabled by clicking the Summary
tab and then clicking the Disk Groups & Virtual Disks link.
4. Follow any additional instructions for your operating system. Failure to follow these additional
instructions can create unusable snapshot virtual disks.
NOTE: If your operating system requires additional instructions, you can find those instructions
in your operating system documentation.
If you want to use a snapshot regularly, such as for backups, use the Disable Snapshot and Re-create
Snapshot options to reuse the snapshot. Disabling and re-creating snapshots preserves the existing virtual
disk-to-host mappings to the snapshot virtual disk.
After your server has been prepared, see Creating The Initial Snapshot Virtual Disk.
Creating The Initial Snapshot Virtual Disk
After preparing the host server(s) as specified in the preceding procedure, use the following examples to
make a virtual disk snapshot.
The following syntax is the general form of the command to create a snapshot virtual disk:
NOTE: Use one or all of the optional parameters as needed to help define your configuration. You
do not need to use any optional parameters.
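A sketch of the general form, with illustrative parameter names; the exact optional parameter set varies, so treat the bracketed names below as assumptions:

```
create snapshotVirtualDisk sourceVirtualDisk="sourceVirtualDiskName"
[repositoryRAIDLevel=(0 | 1 | 5 | 6)
repositoryPhysicalDisks=(enclosureID0,slotID0 ... enclosureIDn,slotIDn)
userLabel="snapshotVirtualDiskName"]
```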
See step 1 through step 4 in the preceding section, Preparing Host Servers To Create An Initial Snapshot
Virtual Disk. The following example shows a command in which users assign the physical disks:
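A sketch of such a command, with hypothetical enclosureID,slotID locations for the five physical disks:

```
create snapshotVirtualDisk sourceVirtualDisk="Mars_Spirit_4"
repositoryRAIDLevel=5 repositoryPhysicalDisks=(1,1 1,2 1,3 1,4 1,5);
```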
The command in this example creates a new snapshot of the source virtual disk Mars_Spirit_4. The
snapshot repository virtual disk consists of five physical disks that form a new disk group. The new disk
group has a RAID level of 5. This command also takes a snapshot of the source virtual disk, starting the
copy-on-write operation.
See step 1 through step 4 in the preceding section, Preparing Host Servers To Create An Initial Snapshot
Virtual Disk. The following example is the script file version of the command:
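A minimal sketch consistent with the description that follows, relying on the default repository placement (all parameter defaults are assumptions based on Table 11):

```
create snapshotVirtualDisk sourceVirtualDisk="Mars_Spirit_4";
```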
The command in this example creates a new snapshot for the source virtual disk Mars_Spirit_4. The repository virtual disk is created in the same disk group as the source virtual disk, which means that the repository virtual disk has the same RAID level as the source virtual disk. This command starts the copy-on-write operation.
Refer to steps 1 through 4 in the preceding section, Preparing Host Servers To Create An Initial Snapshot
Virtual Disk. The following example is the script file version of the command:
Creating A Snapshot Virtual Disk With Software-Assigned Physical Disks
This version of the create snapshotVirtualDisk command lets you choose an existing disk group in which to place the snapshot repository virtual disk. The storage management software determines which physical
disks to use. You can also define how much space to assign to the repository virtual disk. Because you are
using an existing disk group, the RAID level for the snapshot virtual disk defaults to the RAID level of the
disk group in which you place it. You cannot define the RAID level for the snapshot virtual disk. The
general syntax for this command is:
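A sketch of the general form follows; treat the exact option spellings as assumptions to verify against your array's CLI reference:

```
create snapshotVirtualDisk sourceVirtualDisk="sourceName"
   repositoryDiskGroup=diskGroupNumber
   [freeCapacityArea=freeCapacityIndexNumber]
   [repositoryPercentOfSource=percentValue]
```

An example consistent with the description below (with a 20-GB source virtual disk, repositoryPercentOfSource=20 yields the 4-GB repository):

```
create snapshotVirtualDisk sourceVirtualDisk="Mars_Spirit_4"
   repositoryDiskGroup=2 repositoryPercentOfSource=20;
```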
The command in this example creates a new snapshot repository virtual disk in disk group 2. The source
virtual disk is Mars_Spirit_4. The size of the snapshot repository is 4 GB. This command also takes a
snapshot of the source virtual disk, which starts the copy-on-write operation.
Define the capacity of a snapshot repository virtual disk as any percentage of the size of the source virtual
disk. A value of 20 percent is an optimum number. In the previous example, the size of the snapshot
repository is set to 4 GB. The underlying assumption is that the source virtual disk size is 20 GB (0.2 x 20
GB = 4 GB).
The following example is the script file version of the command:
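A sketch of the script file form, matching the example described above (option spellings are assumptions to verify against your CLI reference):

```
create snapshotVirtualDisk sourceVirtualDisk="Mars_Spirit_4"
   repositoryDiskGroup=2 repositoryPercentOfSource=20;
```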
Creating A Snapshot Virtual Disk By Specifying A Number Of Physical Disks
With this version of the create snapshotVirtualDisk command, you must specify the number of
physical disks and the RAID level for the snapshot repository virtual disk. This version of the create snapshotVirtualDisk command creates a new disk group. You must have physical disks in the
storage array that are not assigned to a disk group for this command to work:
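A sketch of the general form and a matching example follow; the parameter name repositoryPhysicalDiskCount is an assumption to verify against your array's CLI reference:

```
create snapshotVirtualDisk sourceVirtualDisk="sourceName"
   repositoryRAIDLevel=(0 | 1 | 5 | 6)
   repositoryPhysicalDiskCount=numberOfPhysicalDisks
   [physicalDiskType=SAS]
```

An example consistent with the description below (three physical disks forming a new RAID 5 disk group):

```
create snapshotVirtualDisk sourceVirtualDisk="Mars_Spirit_4"
   repositoryRAIDLevel=5 repositoryPhysicalDiskCount=3;
```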
The command in this example creates a new snapshot repository virtual disk that consists of three
physical disks. The three physical disks comprise a new disk group with a RAID level of 5. This command
also takes a snapshot of the source virtual disk, which starts the copy-on-write operation.
The following example is the script file version of the command:
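A sketch of the script file form of the count-based example above (the repositoryPhysicalDiskCount parameter name is an assumption to verify against your CLI reference):

```
create snapshotVirtualDisk sourceVirtualDisk="Mars_Spirit_4"
   repositoryRAIDLevel=5 repositoryPhysicalDiskCount=3;
```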
Parameters for the create snapshotVirtualDisk command enable you to define the snapshot virtual disk to suit the requirements of your storage array. The following table lists the snapshot virtual disk parameters and describes each one.
Table 11. Snapshot Virtual Disk Parameters

physicalDiskType
Specifies the type of physical disk to use for the snapshot repository virtual disk. The type must be specified as Serial Attached SCSI (SAS). This parameter works only with the count-based repository method of defining a snapshot virtual disk.

repositoryDiskGroup
Specifies the disk group in which to build the snapshot repository virtual disk. The default builds the snapshot repository virtual disk in the same disk group as the source virtual disk.

freeCapacityArea
Specifies the amount of storage space to use for the snapshot repository virtual disk. Free storage space is defined in units of bytes, kilobytes, megabytes, or gigabytes.

userLabel
Specifies the name to give to the snapshot virtual disk. If you do not choose a name for the snapshot virtual disk, the RAID controller modules create a default name using the source virtual disk name. For example, if the source virtual disk name is Mars_Spirit_4 and it does not have a snapshot virtual disk, the default snapshot virtual disk name is Mars_Spirit_4-1. If the source virtual disk already has n – 1 number of snapshot virtual disks, the default name is Mars_Spirit_4-n.

repositoryUserLabel
Specifies the name to give to the snapshot repository virtual disk. If you do not choose a name for the snapshot repository virtual disk, the RAID controller modules create a default name using the source virtual disk name. For example, if the source virtual disk name is Mars_Spirit_4 and it does not have an associated snapshot repository virtual disk, the default snapshot repository virtual disk name is Mars_Spirit_4-R1. If the source virtual disk already has n – 1 number of snapshot repository virtual disks, the default name is Mars_Spirit_4-Rn.

warningThresholdPercent
Specifies how full to allow the snapshot repository virtual disk to get before sending a warning that the snapshot repository virtual disk is close to capacity. The warning value is a percentage of the total capacity of the snapshot repository virtual disk. The default value is 50, which represents 50 percent of total capacity. (Change this value using the set snapshotVirtualDisk command.)

repositoryPercentOfSource
Specifies the size of the snapshot repository virtual disk as a percentage of the source virtual disk size. The default value is 20, which represents 20 percent of the source virtual disk size.

repositoryFullPolicy
Specifies how snapshot processing continues if the snapshot repository virtual disk is full. You can choose to fail writes to the source virtual disk (failSourceWrites) or fail writes to the snapshot virtual disk (failSnapShot). The default value is failSnapShot.

enableSchedule
Turns on or off the ability to schedule a snapshot operation. To turn on snapshot scheduling, set this parameter to TRUE. To turn off snapshot scheduling, set this parameter to FALSE. Schedules a snapshot operation of the following type:
• immediate
• startDate
• scheduleDay
• startTime
• scheduleInterval
• endDate
• noEndDate
• timesPerDay
• timeZone
The following example of the create snapshotVirtualDisk command includes user-defined parameters:
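A sketch of such a command; the snapshot and repository names match those discussed in the naming section below, while the threshold and percentage values are illustrative assumptions:

```
create snapshotVirtualDisk sourceVirtualDisk="Mars_Spirit_4"
   userLabel="Mars_Spirit_4_snap1"
   repositoryUserLabel="Mars_Spirit_4_rep1"
   warningThresholdPercent=75 repositoryPercentOfSource=40
   repositoryFullPolicy=failSnapShot;
```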
NOTE: In the previous examples, the names for the snapshot virtual disk and repository virtual disk
are defined by the user. If you do not choose to create names for the snapshot virtual disks or the
repository virtual disks, the RAID controller modules provide default names. (See Names Of
Snapshot Virtual Disks And Repository Virtual Disks for an explanation of naming conventions.)
Names Of Snapshot Virtual Disks And Repository Virtual Disks
The names of snapshot virtual disks and repository virtual disks can be any combination of alphanumeric
characters, hyphens, and underscores. The maximum length of the virtual disk names is 30 characters.
You must enclose the name in quotation marks. The character string cannot contain a new line.
Make sure that you use unique names or the RAID controller module firmware returns an error.
One technique for naming the snapshot virtual disk and the repository virtual disk is to add a hyphenated
suffix to the original name of the source virtual disk. The suffix distinguishes between the snapshot virtual
disk and the repository virtual disk. For example, if you have a source virtual disk named Engineering Data, the snapshot virtual disk can be named Engineering Data-S1, and the repository virtual disk can be named Engineering Data-R1.
If you do not choose a unique name for either the snapshot virtual disk or repository virtual disk, the RAID
controller modules create a default name by using the name of the source virtual disk. For example, if the
name of the source virtual disk is aaa and it does not have a snapshot virtual disk, then the default name
is aaa‑1. If the source virtual disk already has n – 1 number of snapshot virtual disks, then the default
name is aaa‑n. Similarly, if the name of the source virtual disk is aaa and it does not have a repository
virtual disk, then the default repository virtual disk name is aaa‑R1. If the source virtual disk already has
n – 1 number of repository virtual disks, then the default name is aaa-Rn.
In the examples from the previous section, the user‑defined name of the snapshot virtual disk was
Mars_Spirit_4_snap1. The user‑defined name of the repository virtual disk was Mars_Spirit_4_rep1. The
default name provided by the RAID controller module for the snapshot virtual disk would be
Mars_Spirit_4-1. The default name provided by the RAID controller module for the repository virtual disk
would be Mars_Spirit_4-R1.
Changing Snapshot Virtual Disk Settings
The set (snapshot) virtualDisk command enables you to change the property settings for a
snapshot virtual disk. Using this command, you can change the following parameters:
• Name of the snapshot virtual disk
• Warning threshold percent
• Repository full policy
The following example shows the command to change the name of a snapshot virtual disk:
The following example is the script file version of the command:
set virtualDisk ["Mars_Spirit_4-1"] userLabel=
"Mars_Odyssey_3-2";
When you change the warning threshold percent and repository full policy, you can apply the changes to
one or several snapshot virtual disks. The following example uses the set (snapshot) virtualDisk
command to change these properties on more than one snapshot virtual disk:
The following example is the script file version of the command:
set virtualDisks ["Mars_Spirit_4-1"
"Mars_Spirit_4-2" "Mars_Spirit_4-3"]
warningThresholdPercent=50 repositoryFullPolicy=
failSourceWrites;
Stopping And Deleting A Snapshot Virtual Disk
When you create a snapshot virtual disk, copy-on-write immediately starts running. As long as a snapshot
virtual disk is enabled, storage array performance is affected by the copy‑on‑write operations to the
associated snapshot repository virtual disk. If you no longer want copy-on-write operations to run, you
can use the stop snapshot virtualDisk command to stop the copy‑on‑write operations. When you
stop a snapshot virtual disk, the snapshot virtual disk and the repository virtual disk are still defined for the
source virtual disk; only copy-on-write has stopped. The following example stops a snapshot virtual disk:
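A sketch of the stop command; the snapshot virtual disk name is an illustrative placeholder:

```
stop snapshot virtualDisk ["Mars_Spirit_4-1"];
```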
When you stop the copy-on-write operations for a specific snapshot virtual disk, only that snapshot
virtual disk is disabled. All other snapshot virtual disks remain in operation.
Re-creating The Snapshot Virtual Disk
To restart a copy-on-write operation, use the recreate snapshot virtualDisk command. This
command starts a fresh copy-on-write operation using an existing snapshot virtual disk. When you restart
a snapshot virtual disk, the snapshot virtual disk must have either an Optimal or a Disabled state.
The following conditions then occur:
• All copy-on-write data previously on the snapshot repository virtual disk is deleted.
• Snapshot virtual disk and snapshot repository virtual disk parameters remain the same as the previously disabled snapshot virtual disk and snapshot repository virtual disk. You can also change the userLabel, warningThresholdPercent, and repositoryFullPolicy parameters when you restart the snapshot virtual disk.
• The original names for the snapshot repository virtual disk are retained.
Preparing Host Servers To Re-create A Snapshot Virtual Disk
CAUTION: Before you create a new point-in-time image of a source virtual disk, stop any data
access (I/O) activity or suspend data transfer to the source virtual disk and snapshot virtual disk to
ensure that you capture an accurate point-in-time image of the source virtual disk. Close all
applications, including Windows Internet Explorer, to make sure all I/O activity has stopped.
NOTE: Removing the drive letter of the associated virtual disk in Windows or unmounting the virtual
drive in Linux helps to guarantee a stable copy of the drive for the Snapshot.
Before re-creating a snapshot virtual disk, both the server and the associated virtual disk you are
re‑creating have to be in the proper state. To ensure that the host server is properly prepared to re-create
a snapshot virtual disk, you can either use an application to carry out this task, or you can perform the
following steps:
1. Stop all I/O activity to the source and snapshot virtual disk (if mounted).
2. Using your Windows system, flush the cache to both the source and the snapshot virtual disk (if mounted). At the host prompt, type SMrepassist -f <filename-identifier> and press <Enter>. See "SMrepassist Utility" in the Owner’s Manual for more information.
3. Remove the drive letter(s) of the source and (if mounted) snapshot virtual disk in Windows or unmount the virtual drive(s) in Linux to help guarantee a stable copy of the drive for the snapshot. If this is not done, the snapshot operation reports that it has completed successfully, but the snapshot data is not updated properly.
4. Follow any additional instructions for your operating system. Failure to follow these additional instructions can create unusable snapshot virtual disks.
NOTE: If your operating system requires additional instructions, you can find those instructions
in your operating system documentation.
After your server has been prepared, see Re-creating The Snapshot Virtual Disk to re-create the snapshot
virtual disk.
Re-creating A Snapshot Virtual Disk
After first preparing the host server(s) as specified in the preceding procedure, use the following examples
to re-create a virtual disk snapshot.
Refer to steps 1 through 4 in the preceding section, Preparing Host Servers To Re-create A Snapshot
Virtual Disk. The following example shows the command to restart a snapshot virtual disk:
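A sketch of the restart command; the snapshot virtual disk name is an illustrative placeholder:

```
recreate snapshot virtualDisk ["Mars_Spirit_4-1"];
```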
Refer to steps 1 through 4 in the preceding section, Preparing Host Servers To Re-create A Snapshot
Virtual Disk. The following example is the script file version of the command:
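A sketch of the script file form, with the snapshot virtual disk name as an illustrative placeholder:

```
recreate snapshot virtualDisk ["Mars_Spirit_4-1"];
```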
If you do not intend to use a snapshot virtual disk again, you can delete the snapshot virtual disk using the
delete virtualDisk command. When you delete a snapshot virtual disk, the associated snapshot
repository virtual disk is also deleted.
7 Using The Virtual Disk Copy Feature
This chapter describes how the Virtual Disk Copy feature works, lists the script commands for Virtual Disk
Copy, and explains how to use the commands to create and run Virtual Disk Copy. Additional information
about Virtual Disk Copy and related definitions is available in the online help, the Deployment Guide, the
MD Storage Manager online help, and the Administrator's Guide.
NOTE: If you ordered Premium Features for Virtual Disk Copy, you received a Premium Features
Activation card shipped in the same box as your Dell PowerVault MD storage array. Follow the
directions on the card to obtain a key file and to enable the feature. For more information, see
"Premium Feature — Virtual Disk Copy" in the Owner's Manual.
The Virtual Disk Copy feature enables you to copy data from one virtual disk (the source) to another
virtual disk (the target) in a single storage array. You can use this feature to perform the following
functions:
• Back up data.
• Copy data from disk groups that use smaller capacity physical disks to disk groups using larger capacity physical disks.
• Restore snapshot virtual disk data to the associated source virtual disk.
• Copy data from a thin virtual disk to a standard virtual disk on the same storage array.
NOTE: You cannot copy data from a standard virtual disk to a thin virtual disk.
About Virtual Disk Copy
Starting a virtual disk copy operation does the following to your target copy disks:
• Overwrites all existing data on the target virtual disk.
• Makes the target virtual disk read-only to hosts.
• Fails all snapshot (legacy) virtual disks or snapshot image virtual disks associated with the target virtual disk.
If you have data stored on a virtual disk you specify as a virtual disk copy target, make sure you no longer
need the data or have it backed up before beginning virtual disk copy.
Virtual Disk Copy Types
The Virtual Disk Copy script commands create one of the following types of virtual disk copies:
• A virtual disk copy using a snapshot (legacy), which suspends I/O to the source virtual disk while the copy is in progress. The source virtual disk is not available during the copy operation. This is called an offline virtual disk copy.
• A virtual disk copy using a point-in-time copy of any virtual disk, which still allows access to the source virtual disk while the copy is in progress. This is called an online virtual disk copy.
In either type of virtual disk copy, the target virtual disk is locked and cannot be accessed while the copy
operation is in place.
After completion of the virtual disk copy of a snapshot (legacy), the legacy snapshot is disabled. After
completion of the virtual disk copy using a snapshot image, the snapshot image is deleted and the
snapshot virtual disk is disabled.
NOTE: You can have a maximum of eight virtual disk copies in progress at one time. If you try to
create more than eight virtual disk copies at one time, the RAID controller modules return a status
of Pending until one of the virtual disk copies that is in progress finishes and returns a status of
Complete.
NOTE: Snapshots created using older (legacy) premium feature versions cannot be managed using
newer snapshot premium feature options. Also, a virtual disk in a snapshot group cannot be a target
for a virtual disk copy. If you want to choose the base virtual disk of an older (legacy) snapshot
virtual disk as your target virtual disk, you must first disable all snapshot (legacy) virtual disks that are
associated with the base virtual disk.
The following table lists the Virtual Disk Copy commands.
NOTE: These commands apply when you are using a snapshot or a snapshot (legacy) image.
Table 12. Virtual Disk Copy Commands

create virtualDiskCopy
Creates a virtual disk copy and starts the virtual disk copy operation.

enable storageArray feature
Activates the Virtual Disk Copy feature.

recopy virtualDiskCopy
Re-initiates a virtual disk copy operation using an existing virtual disk copy pair.

remove virtualDiskCopy
Removes a virtual disk copy pair.

set virtualDiskCopy
Defines the properties for a virtual disk copy pair.

show virtualDiskCopy
Returns information about virtual disk copy operations. You can retrieve information about a specific virtual disk copy pair, or about all virtual disk copy pairs in the storage array.

show virtualDiskCopy sourceCandidates
Returns information about the candidate virtual disks that you can use as the source for a virtual disk copy operation.

show virtualDiskCopy targetCandidates
Returns information about the candidate virtual disks that you can use as the target for a virtual disk copy operation.

stop virtualDiskCopy
Stops a virtual disk copy operation.
Creating A Virtual Disk Copy
Before creating a virtual disk copy, ensure that a suitable target virtual disk exists on the storage array, or
create a new target virtual disk specifically for the virtual disk copy. The target virtual disk must have a
capacity equal to or greater than the source virtual disk.
You can have a maximum of eight virtual disk copies in progress at one time. Any virtual disk copy greater
than eight has a status of Pending until one of the virtual disk copies with a status of In Progress
completes.
The following steps show the general process for creating a virtual disk copy:
1. Enable the Virtual Disk Copy feature.
2. Determine candidates for a virtual disk copy.
3. Create the target virtual disk and source virtual disk for a virtual disk copy.
Enabling The Virtual Disk Copy Feature
The first step in creating a virtual disk copy is to make sure the feature is enabled on the storage array.
You need a feature key to enable the feature. To enable the feature by using a key file, use the command:
enable storageArray feature file="filename"
where the file parameter is the complete file path and file name of a valid feature key file. Enclose the file path and file name in quotation marks (" "). Valid file names for feature key files usually end with a .key extension.
Determining Virtual Disk Copy Candidates
Not all virtual disks are available for use in virtual disk copy operations. To determine which candidate virtual disks on the storage array can be used as a source virtual disk, use the show virtualDiskCopy sourceCandidates command. To determine which candidate virtual disks on the storage array can be used as a target virtual disk, use the show virtualDiskCopy targetCandidates command. These commands return a list of the expansion enclosure, slot, and capacity information for source virtual disk and target virtual disk candidates. You can use the show virtualDiskCopy sourceCandidates and the show virtualDiskCopy targetCandidates commands only after you have enabled the virtual disk copy feature.
A source virtual disk can be a standard or thin virtual disk. A target virtual disk can be a standard or thin virtual disk in a disk group or disk pool and, if the legacy version is enabled, a legacy snapshot base virtual disk.
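In their simplest form, the two candidate queries can be sketched without additional parameters:

```
show virtualDiskCopy sourceCandidates;
show virtualDiskCopy targetCandidates;
```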
Creating A Virtual Disk Copy
CAUTION: A virtual disk copy overwrites data on the target virtual disk. Ensure that you no longer
need the data or have backed up the data on the target virtual disk before starting a virtual disk
copy.
When you create a virtual disk copy, you must define which virtual disks to use for the source virtual disk
and target virtual disks. Define the source virtual disk and target virtual disk by the name of each virtual
disk. You can also define the copy priority and choose whether you want the target virtual disk to be write
enabled or read only after the data is copied from the source virtual disk.
Preparing Host Servers To Create A Virtual Disk Copy
CAUTION: Before you create a new copy of a source virtual disk, stop any data access (I/O)
activity or suspend data transfer to the source virtual disk (and, if applicable, the target disk) to
ensure that you capture an accurate point-in-time image of the source virtual disk. Close all
applications, including Windows Internet Explorer, to make sure all I/O activity has stopped.
NOTE: Removing the drive letter of the associated virtual disk(s) in Windows or unmounting the
virtual drive in Linux helps to guarantee a stable copy of the drive for the virtual disk copy.
Before creating a virtual disk copy, both the server and the associated virtual disk you are copying have to
be in the proper state. To ensure that the host server is properly prepared to create a virtual disk copy,
you can either use an application to carry out this task, or you can perform the following steps:
1. Stop all I/O activity to the source and target virtual disk.
2. Using your Windows system, flush the cache to both the source and the target virtual disk (if mounted). At the host prompt, type SMrepassist -f <filename-identifier> and press <Enter>. See "SMrepassist Utility" in the Owner’s Manual for more information.
3. Remove the drive letter(s) of the source and (if mounted) target virtual disk in Windows or unmount the virtual drive(s) in Linux to help guarantee a stable copy of the drive for the virtual disk copy. If this is not done, the copy operation reports that it has completed successfully, but the copied data is not updated properly.
4. Follow any additional instructions for your operating system. Failure to follow these additional instructions can create unusable virtual disk copies.
NOTE: If your operating system requires additional instructions, you can find those instructions
in your operating system documentation.
After your server has been prepared, see Copying The Virtual Disk to copy the virtual disk.
Copying The Virtual Disk
After first preparing the host server(s) as specified in the preceding procedure, use the following examples
to make a virtual disk copy.
The following syntax is the general form of the command:
create virtualDiskCopy source="sourceName" target=
"targetName" [copyPriority=(highest | high |
medium | low | lowest) targetReadOnlyEnabled=(TRUE | FALSE)]
NOTE: Use one or both of the optional parameters as needed to help define your configuration. It is
not necessary to use any optional parameters.
Once the virtual disk copy has started, the source virtual disk is read-only to all I/O activity. Any write attempts to the source virtual disk fail until the operation completes.
After the virtual disk copy operation is completed, register the target virtual disk with the operating system before using it by performing the following steps:
• Enable write permission on the target virtual disk by either removing the virtual disk copy pair or explicitly setting write permission.
– In Windows, assign a drive letter to the virtual disk.
– In Linux, mount the virtual disk.
See step 1 to step 4 in Preparing Host Servers To Create A Virtual Disk Copy.
The create virtualDiskCopy command might look like the following example:
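Following the general form above, a sketch consistent with the description below:

```
create virtualDiskCopy source="Jaba_Hut" target="Obi_1"
   copyPriority=medium targetReadOnlyEnabled=TRUE;
```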
The command in this example copies the data from the source virtual disk named Jaba_Hut to the target
virtual disk named Obi_1. Setting the copy priority to medium provides a compromise between the
following storage array operations:
• The speed with which the data is copied from the source virtual disk to the target virtual disk
• The amount of processing resources required for data transfers to other virtual disks in the storage array
Setting the targetReadOnlyEnabled parameter to TRUE means that write requests cannot be made to
the target virtual disk. This setting also ensures that the data on the target virtual disk remains unaltered.
Refer to steps 1 through 4 in the preceding section, Preparing Host Servers To Create A Virtual Disk Copy.
The following example is the script file version of the command:
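A sketch of the script file form, matching the example described above:

```
create virtualDiskCopy source="Jaba_Hut" target="Obi_1"
   copyPriority=medium targetReadOnlyEnabled=TRUE;
```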
After the virtual disk copy operation is completed, the target virtual disk automatically becomes read-only
to the hosts. Any write requests to the target virtual disk are rejected, unless you disable the read-only
attribute. Use the set virtualDiskCopy command to disable the read-only attribute.
Viewing Virtual Disk Copy Properties
Using the show virtualDiskCopy command, you can view information about one or more selected
source virtual disks or target virtual disks. This command returns the following information:
• The virtual disk role (target or source)
• The copy status
• The start timestamp
• The completion timestamp
• The virtual disk copy priority
• The read-only attribute setting for the target virtual disk
• The source virtual disk World Wide Identifier (WWID) or the target virtual disk WWID
A virtual disk can be a source virtual disk for one virtual disk copy and a target virtual disk for another
virtual disk copy. If a virtual disk participates in more than one virtual disk copy, the details are repeated
for each associated copy pair.
The following syntax is the general form of the command:
show virtualDiskCopy (allVirtualDisks | source
[sourceName] | target [targetName])
The following example shows a command that returns information about a virtual disk used for a virtual
disk copy:
The command in the preceding example requests information about the source virtual disk Jaba_Hut. If
you want information about all virtual disks, use the allVirtualDisks parameter. You can also request
information about a specific target virtual disk.
The following example is the script file version of the command:
show virtualDiskCopy source ["Jaba_Hut"];
Changing Virtual Disk Copy Settings
The set virtualDiskCopy command enables you to change the property settings for a virtual disk
copy pair. Using this command, you can change the following items:
• Copy priority
• Read/write permission for the target virtual disk
Copy priority has five relative settings, which range from highest to lowest. The highest priority supports
the virtual disk copy, but I/O activity might be affected. The lowest priority supports I/O activity, but the
virtual disk copy takes longer. You can change the copy priority at three different times in the operation:
• Before the virtual disk copy begins
• While the virtual disk copy has a status of In Progress
• After the virtual disk copy has completed, when re-creating a virtual disk copy using the recopy virtualDiskCopy command
When you create a virtual disk copy pair and after the original virtual disk copy has completed, the target
virtual disk is automatically defined as read-only to the hosts. The read-only status of the target virtual
disk ensures that the copied data on the target virtual disk is not corrupted by additional writes to the
target virtual disk after the virtual disk copy is created. Maintain the read-only status when the following
conditions apply:
• You are using the target virtual disk for backup purposes.
• You are copying data from one disk group to a larger disk group for greater accessibility.
• You are planning to use the data on the target virtual disk to copy back to the source virtual disk in case of a disabled or failed snapshot virtual disk.
At other times you might want to write additional data to the target virtual disk. You can use the set virtualDiskCopy command to reset the read/write permission for the target virtual disk.
NOTE: If you enabled host writes to the target virtual disk, read and write requests are rejected while
the virtual disk copy has a status of In Progress, Pending, or Failed.
The following syntax is the general form of the command:
set virtualDiskCopy target [targetName] [source
[sourceName]] copyPriority=(highest | high |
medium | low | lowest) targetReadOnlyEnabled=(TRUE | FALSE)
NOTE: Use one or both of the parameters as needed to help define your configuration. It is not
necessary to use either parameter.
The following example shows how to change parameters using the set virtualDiskCopy command:
The following example is the script file version of the command:
set virtualDiskCopy target ["Obi_1"] copyPriority=highest targetReadOnlyEnabled=FALSE;
Recopying A Virtual Disk
CAUTION: The recopy virtualDiskCopy command overwrites existing data on the target virtual disk and makes the target virtual disk read-only to hosts. The recopy virtualDiskCopy command fails all snapshot virtual disks associated with the target virtual disk, if any exist.
Using the recopy virtualDiskCopy command, you can create a new virtual disk copy for a previously defined copy pair that has a status of Stopped, Failed, or Completed. Use the recopy virtualDiskCopy command to create backups of the target virtual disk, then copy the backup to tape for off-site storage. When using the recopy virtualDiskCopy command to make a backup, you cannot write to the source virtual disk while the recopy is running. The recopy might take a long time.
When you run the recopy virtualDiskCopy command, the data on the source virtual disk is copied in its entirety to the target virtual disk.
Reset the copy priority for the recopy operation by using the recopy virtualDiskCopy command. Higher priorities allocate storage array resources to the virtual disk copy at the expense of storage array performance.
Preparing Host Servers To Recopy A Virtual Disk
CAUTION: Before you create a new copy of a source virtual disk, stop any data access (I/O)
activity or suspend data transfer to the source virtual disk (and, if applicable, the target disk) to
ensure that you capture an accurate point-in-time image of the source virtual disk. Close all
applications, including Windows Internet Explorer, to make sure all I/O activity has stopped.
NOTE: Removing the drive letter of the associated virtual disk(s) in Windows or unmounting the
virtual drive in Linux helps to guarantee a stable copy of the drive for the virtual disk copy.
Before creating a new virtual disk copy for an existing copy pair, both the server and the associated virtual
disk you are recopying have to be in the proper state. To ensure that the host server is properly prepared
to create a virtual disk recopy, you can either use an application to carry out this task, or you can perform
the following steps:
1. Stop all I/O activity to the source and target virtual disk.
2. Using your Windows system, flush the cache to both the source and the target virtual disk (if
mounted). At the host prompt, type SMrepassist -f <filename-identifier> and press
<Enter>.
See "SMrepassist Utility" in the Owner's Manual for more information.
3. Remove the drive letter(s) of the source and (if mounted) target virtual disk in Windows, or unmount
the virtual drive(s) in Linux, to help guarantee a stable copy of the drive for the virtual disk copy. If this
is not done, the copy operation reports that it has completed successfully, but the copied data is not
updated properly.
4. Follow any additional instructions for your operating system. Failure to follow these additional
instructions can create unusable virtual disk copies.
NOTE: If your operating system requires additional instructions, you can find those instructions
in your operating system documentation.
After your server has been prepared, see Recopying The Virtual Disk to recopy the virtual disk.
Recopying The Virtual Disk
After first preparing the host server(s) as specified in the preceding procedure, use the following examples
to make a virtual disk copy.
The following syntax is the general form of the command:
recopy virtualDiskCopy target [targetName] [source [sourceName]]
[copyPriority=(highest | high | medium | low | lowest)]
[targetReadOnlyEnabled=(TRUE | FALSE)]
NOTE: Use one or all of the optional parameters as needed to help define your configuration. It is
not necessary to use any optional parameters.
Refer to steps 1 through 4 in the preceding section, Preparing Host Servers To Recopy A Virtual Disk. The
following example shows a command that changes the copy priority:
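Based on the description that follows, a representative command consistent with the general form above is:
recopy virtualDiskCopy target ["Obi_1"] copyPriority=highest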
The command in this example copies data from the source virtual disk associated with the target virtual
disk Obi_1 to the target virtual disk again. The copy priority is set to the highest value to complete the
virtual disk copy as quickly as possible. The underlying consideration for using this command is that you
have already created the virtual disk copy pair. When you create a virtual disk copy pair, you automatically
created one virtual disk copy. Using this command, you are copying the data from the source virtual disk
to the target virtual disk. You are making this copy because the data on the source virtual disk changed
since the previous copy was made.
Refer to steps 1 through 4 in the preceding section, Preparing Host Servers To Recopy A Virtual Disk. The
following example is the script file version of the command:
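A representative script file version, using the Obi_1 copy pair from the other examples in this chapter:
recopy virtualDiskCopy target ["Obi_1"] copyPriority=highest;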
Stopping A Virtual Disk Copy
The stop virtualDiskCopy command enables you to stop a virtual disk copy that has a status of In
Progress, Pending, or Failed. After you stop a virtual disk copy, you can use the recopy
virtualDiskCopy command to create a new virtual disk copy using the original virtual disk copy pair.
All mapped hosts have write access to the source virtual disk.
The following syntax is the general form of the command:
stop virtualDiskCopy target [targetName]
The following example is the script file version of the command:
stop virtualDiskCopy target ["Obi_1"];
Removing Copy Pairs
The remove virtualDiskCopy command enables you to remove a virtual disk copy pair from the
storage array configuration. All virtual disk copy information for the source virtual disk and target virtual
disk is removed from the storage array configuration. The data on the source virtual disk or target virtual
disk is not deleted. Removing a virtual disk copy from the storage array configuration also removes the
read-only attribute for the target virtual disk.
CAUTION: If the virtual disk copy has a status of In Progress, you must stop the virtual disk copy
before you can remove the virtual disk copy pair from the storage array configuration.
The following syntax is the general form of the command:
remove virtualDiskCopy target [targetName]
The following example is the script file version of the command:
remove virtualDiskCopy target ["Obi_1"];
Interaction With Other Features
You can run the Virtual Disk Copy feature while running the following features:
•Storage Partitioning
•Snapshot Virtual Disks
When running the Virtual Disk Copy feature with other features, you must take the requirements of other
features into consideration to ensure you set up a stable storage array configuration. You can also run the
Virtual Disk Copy feature while running Dynamic Virtual Disk Expansion.
About Snapshot (Legacy) Premium Features With Virtual
Disk Copy
•A source virtual disk for a virtual disk copy can be a standard virtual disk or a thin virtual disk (if
supported in your firmware version).
•A target virtual disk can be a standard virtual disk in a disk group or disk pool, or a snapshot (legacy)
base virtual disk if the snapshot (legacy) premium feature is the only snapshot premium feature
enabled on your storage array. A virtual disk in a snapshot group cannot be a virtual disk copy target.
•Snapshots created using the older (legacy) premium feature versions cannot be managed using newer
snapshot premium feature options.
8
Using The Remote Replication Premium
Feature
The following types of Remote Replication premium features are supported on the MD storage array:
Remote Replication: Standard asynchronous replication using point-in-time images to batch the
resynchronization between the local and remote site. This type of replication is supported on either
Fibre Channel or iSCSI storage arrays (both local and remote arrays must have the same data protocol).
Remote Replication (Legacy): Synchronous (or full-write) replication that synchronizes local and remote
site data in real time. This type of replication is supported on Fibre Channel storage arrays only.
NOTE: This chapter describes only the standard Remote Replication premium feature, which is
supported on both iSCSI and Fibre Channel storage arrays. To understand the Remote Replication
(legacy) premium feature, see Using The Remote Replication (Legacy) Premium Feature.
The Remote Replication premium feature provides for online, real-time replication of data between
storage arrays over a remote distance. In the event of a disaster or a catastrophic failure on one storage
array, you can promote the second storage array to take over responsibility for computing services.
Remote Replication is designed for extended storage environments in which the storage arrays that are
used for Remote Replication are maintained at separate sites.
You can use Remote Replication for these functions:
Disaster recovery: Remote Replication lets you replicate data from one site to another site, which
provides an exact duplicate at the remote (secondary) site. If the primary site fails, you can use
replicated data at the remote site for failover and recovery. You can then shift storage operations to the
remote site for continued operation of all of the services that are usually provided by the primary site.
Data vaulting and data availability: Remote Replication lets you send data off site where it can be
protected. You can then use the off-site copy for testing or to act as a source for a full backup to avoid
interrupting operations at the primary site.
Two-way data protection: Remote Replication provides the ability to have two storage arrays back up
each other by duplicating critical virtual disks on each storage array to virtual disks on the other storage
array. This action lets each storage array recover data from the other storage array in the event of any
service interruptions.
How Remote Replication Works
Standard Remote Replication (asynchronous) is a premium feature that provides RAID controller-based
data replication between a local and a remote storage array on a per-virtual-disk basis. Primary (local)
and secondary (remote) virtual disks are identified as pairs, called replicated pairs. The RAID controller
firmware tracks write operations to the primary virtual disk of each pair, captures them in a point-in-time
image, and transfers the image to the secondary virtual disk in the pair.
Remote Replication groups allow you to manage synchronization of both virtual disks to create a
consistent data set across local and remote storage arrays. Point-in-time images on the primary virtual
disk and the secondary virtual disk can be resynchronized in a batch approach that increases replication
throughput. When data synchronization completes, the system uses the point-in-time images on the
secondary virtual disk to ensure that the data is maintained in a consistent state during subsequent
synchronization operations to the secondary virtual disk.
Replication Pairs And Replication Repositories
Replicated pairs, consisting of a primary and a secondary virtual disk, contain identical copies of data as a
result of data synchronization. Replication repository virtual disks are used to manage replication data
synchronization and are required for both the primary virtual disk and the secondary virtual disk in a
replicated pair.
A replication repository consists of the following data:
•Resynchronization and recovery point images for primary and secondary virtual disks.
•Tracking data for write operations on the primary virtual disk that occur between synchronization
intervals. This data is also written to the secondary virtual disk in case a reversal of the primary/
secondary virtual disk role should occur.
Differences Between Remote Replication And Remote
Replication (Legacy) Features
The standard Remote Replication premium feature uses a point-in-time snapshot image to capture the
state of the source virtual disk and only writes data that has changed since the last point-in-time image.
The Remote Replication (Legacy) premium feature reproduces every data write to the local (primary)
virtual disk on the remote (secondary) virtual disk through a Fibre Channel-only connected configuration.
While not producing a fully synchronized data view between both storage arrays, standard Remote
Replication offers a much faster replication solution that can run on both iSCSI and Fibre Channel
configurations.
NOTE: Both remote and local storage arrays must use the same data protocol. Replication between
Fibre Channel and iSCSI storage arrays is not supported.
Standard Remote Replication creates a repository virtual disk for each replicated pair, increasing
throughput between local and remote arrays. Remote Replication (Legacy) uses a single repository virtual
disk for all replicated pairs.
Physical distance between local and remote storage arrays is unlimited when using the standard Remote
Replication premium feature. Remote Replication (legacy) is limited to approximately 10 km (6.2 miles)
between local and remote storage arrays.
Link Interruptions Or Secondary Virtual Disk Errors
When processing write requests, the primary RAID controller module might be able to write to the
primary virtual disk, but a link interruption might prevent communication with the remote (secondary)
RAID controller module.
In this case, the remote write operation cannot be completed to the secondary virtual disk, and the
primary virtual disk and the secondary virtual disk are no longer correctly replicated. The primary RAID
controller module transitions the replicated pair into an Unsynchronized state and sends an I/O
completion to the primary host. The primary host can continue to write to the primary virtual disk, but
remote writes do not take place.
When communication is restored between the RAID controller module owner of the primary virtual disk
and the RAID controller module owner of the secondary virtual disk, a resynchronization takes place. This
resynchronization happens automatically, or it must be started manually, depending on which write
mode you chose when setting up the replication relationship. During the resynchronization, only the
blocks of data that have changed on the primary virtual disk during the link interruption are copied to the
secondary virtual disk. After the resynchronization starts, the replicated pair transitions from an
Unsynchronized status to a Synchronization in Progress status.
The primary RAID controller module also marks the replicated pair as unsynchronized when a virtual disk
error on the secondary side prevents the remote write from completing. For example, an offline
secondary virtual disk or a failed secondary virtual disk can cause the remote replication to become
unsynchronized. When the virtual disk error is corrected (the secondary virtual disk is placed online or
recovered to an Optimal status), then synchronization is required. The replicated pair then transitions to a
Synchronization in Progress status.
Resynchronization
Data replication between the primary virtual disk and the secondary virtual disk in a replication
relationship is managed by the RAID controller modules and is transparent to host machines and
applications. When the RAID controller module owner of the primary virtual disk receives a write request
from a host, the RAID controller module first logs information about the write to a replication repository
virtual disk. The RAID controller module then writes the data to the primary virtual disk. The RAID
controller module then initiates a write operation to copy the affected data to the secondary virtual disk
on the remote storage array.
If a link interruption or a virtual disk error prevents communication with the secondary storage array, the
RAID controller module owner of the primary virtual disk transitions the replicated pair into an
Unsynchronized status. The RAID controller module owner then sends an I/O completion to the host
sending the write request. The host can continue to issue write requests to the primary virtual disk, but
remote writes to the secondary virtual disk do not take place.
When connectivity is restored between the RAID controller module owner of the primary virtual disk and
the RAID controller module owner of the secondary virtual disk, the virtual disks must be resynchronized
by copying the blocks of data that changed during the interruption to the secondary virtual disk. Only the
blocks of data that have changed on the primary virtual disk during the link interruption are copied to the
secondary virtual disk.
CAUTION: Possible loss of data access – Any communication disruptions between the primary
storage array and the secondary storage array while resynchronization is underway could result
in a mix of new data and old data on the secondary virtual disk. This condition would render the
data unusable in a disaster recovery situation.
Remote Replication Group
After your Remote Replication premium feature is activated on both local and remote storage arrays, you
must create Remote Replication groups on the local storage array.
A replication group contains at least one replicated virtual disk pair, one on the local storage and one on
the remote storage array, that share data synchronization settings. Multiple replicated pairs can reside in a
replication group, but each pair can only be a member of one Remote Replication group.
The following attributes also apply to a Remote Replication group:
•The local storage array serves as the primary side of the Remote Replication group, while the remote
storage array serves as the secondary side of the Remote Replication group.
•At the virtual disk level, all virtual disks added to the Remote Replication group on the local storage
array hold the primary role in the Remote Replication configuration. Virtual disks added to the group
on the remote storage array hold the secondary role.
Once you have selected virtual disks at both the remote and local storage array that you want to pair in a
replication relationship, adding them to a replication group actually begins the replication
synchronization process.
For more detailed information on the role of Remote Replication groups, see the Administrator's Guide.
Previous Users Of Remote Replication (Legacy) Premium
Feature
If you have upgraded (or plan to upgrade) your RAID controller firmware version to a level that supports
both legacy and non-legacy Remote Replication, any legacy replication configurations you have
previously set up will be unaffected, and will continue to function normally.
Remote Replication Requirements And Restrictions
To use the standard Remote Replication premium feature, you must have:
•Two storage arrays with write access; both storage arrays must have sufficient space to replicate data
between them.
•Each storage array must have a dual-controller Fibre Channel or iSCSI configuration (single-controller
configurations are not supported).
•Fibre Channel Connection Requirements — You must attach dedicated remote replication ports to a
Fibre Channel fabric environment. In addition, these ports must support the Name Service.
•You can use a fabric configuration that is dedicated solely to the remote replication ports on each
RAID controller module. In this case, host systems can connect to the storage arrays using fabric.
•Fibre Channel Arbitrated Loop (FC-AL) and point-to-point configurations are not supported for array-to-array communications.
•Maximum distance between the local site and remote site is 10 km (6.2 miles), using single-mode fibre
Gigabit interface converters (GBICs) and optical long-wave GBICs.
•iSCSI connection considerations:
– iSCSI does not require dedicated ports for replication data traffic
– iSCSI array-to-array communication must use a host-connected port (not the Ethernet
management port).
– The first port that successfully establishes an iSCSI connection is used for all subsequent
communication with that remote storage array. If that connection subsequently fails, a new
session is attempted using any available ports.
Primary And Secondary Virtual Disks
Before you create any replication relationships, virtual disks must exist at both the primary site and the
secondary site. The virtual disk that resides on the local storage array is the primary virtual disk. Similarly,
the virtual disk that resides on the remote storage array is the secondary virtual disk. If neither the primary
virtual disk nor the secondary virtual disk exist, you must create these virtual disks. Keep these guidelines
in mind when you create the secondary virtual disk:
•The secondary virtual disk must be of equal or greater size than the primary virtual disk.
•The RAID level of the secondary virtual disk does not have to be the same as the primary virtual disk.
Setting Up Remote Replication
Setting up Remote Replication between local and remote storage arrays consists of the following:
•Enabling Remote Replication (on both storage arrays)
•Activating the Remote Replication premium feature on both the local and remote storage arrays.
•Creating a Remote Replication group on the local storage array.
•Adding a replicated pair of virtual disks to the Remote Replication group.
Enabling The Remote Replication Premium Feature
The first step in creating a remote replication is to make sure that the Remote Replication premium
feature is enabled on both storage arrays. Because Remote Replication is a premium feature, you need a
feature key file to enable the premium feature. The command for enabling the feature key file is as
follows:
In this command, asyncReplication is literal and is appended with the activation_key, which is
provided by Dell. For example, if your activation key value is 999999:
Activating the Remote Replication premium feature prepares the storage arrays to create and configure
replication relationships. After you activate the premium feature, the secondary ports for each RAID
controller module are reserved and dedicated to remote replication use. In addition, replication
repository virtual disks are automatically created for each RAID controller module in the storage array.
To activate the Remote Replication premium feature, use this command:
activate storageArray feature=asyncReplication;
Creating A Remote Replication Group
The first step in establishing a Remote Replication relationship is to create a replication group on the local
storage array. A replication group cannot be created on the remote storage array.
The create asyncRemoteReplicationGroup command creates a new, empty remote replication
group on both the local storage array and the remote storage array. Each replicated pair you add to the
remote replication group shares the same synchronization settings, primary and secondary role, and
write mode.
This command must be run on the local storage array. Remote replication group creation is initiated from
the storage array that contains the virtual disks that hold the primary role in the replication relationship.
The following command creates a replication group named RRG-001 on a remote iSCSI storage array
named Remote_SS_A101. The remote storage array has a password of 123Dell321 (remotePassword=). A
warning is triggered when the capacity of a replication repository virtual disk reaches 75 percent of
capacity (warningThresholdPercent=).
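A representative command reflecting that description follows; the userLabel, remoteStorageArrayName, and interfaceType parameter names are assumptions based on the command family and may differ in your CLI version:
create asyncRemoteReplicationGroup userLabel="RRG-001" remoteStorageArrayName="Remote_SS_A101" remotePassword="123Dell321" interfaceType=iSCSI warningThresholdPercent=75;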
Adding Primary Virtual Disk To Remote Replication Group
The add virtualDisk command adds a primary virtual disk to a remote replication group. This
command is valid only on the local storage array that contains the remote replication group to which you
want to add the primary virtual disk. To add the secondary virtual disk on the remote storage array to the
remote replication group, use the establish asyncRemoteReplication command.
You have two options for specifying a repository virtual disk when using this command: you can either
specify an existing repository virtual disk or create a new one when running the command.
To add a virtual disk named employeeBackfilData to the Remote_SS_A101 group you created using an
existing repository virtual disk named rep_VD_404:
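A representative form follows; the asyncRemoteReplicationGroup and repositoryVirtualDisk parameter names are assumptions and may differ in your CLI version:
add virtualDisk ["employeeBackfilData"] asyncRemoteReplicationGroup="Remote_SS_A101" repositoryVirtualDisk="rep_VD_404";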
You can substitute a disk pool name for the disk group name, if creating the repository virtual disk from
disk pool space.
Changing Remote Replication Group Settings
The set asyncRemoteReplicationGroup command lets you change settings for an existing
replication group.
The following parameters set synchronization and warning threshold values for the replication group.
Changing the synchronization settings affects the synchronization operations of all replicated pairs within
the remote replication group.
•syncInterval
•warningSyncThreshold
•warningRecoveryThreshold
•warningThresholdPercent
The following parameters allow you to change (or force) the primary/secondary role of a replication
group, or whether or not to perform a synchronization before changing primary/secondary roles.
•role
•force
•nosync
You can apply the changes to one or several remote replicated pairs by using this command. Use the
primary virtual disk name to identify the remote replicated pairs for which you are changing the
properties.
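As an illustration, a command of the following form changes the repository warning threshold for a replication group. This is a sketch only; the exact parameter syntax may vary by CLI version:
set asyncRemoteReplicationGroup ["Remote_SS_A101"] warningThresholdPercent=80;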
Adding Secondary Virtual Disk To Remote Replication
Group
Use the establish asyncRemoteReplication command to add the secondary virtual disk on the
remote storage array to the replication group. This command completes the remote replicated pair
process begun with the add virtualDisk command, which added the primary virtual disk on the local
storage array to the replication group.
Before running this command, the remote replication group must exist and the primary virtual disk must
exist in the remote replication group. After the establish asyncRemoteReplication command
successfully completes, remote replication automatically starts between the primary virtual disk and the
secondary virtual disk.
Using the previous example, the following command completes a replicated pair within a replication
group named Remote_SS_A101 between the primary virtual disk (on the local storage array) named
employeeBackfilData and a secondary virtual disk (on the remote storage array) named
employeeBackfilData_remote:
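A representative form of this command follows; the parameter names are assumptions based on the command family and may differ in your CLI version:
establish asyncRemoteReplication virtualDisk="employeeBackfilData_remote" asyncRemoteReplicationGroup="Remote_SS_A101" primaryVirtualDisk="employeeBackfilData";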
Suspending A Remote Replication Group
Use the suspend asyncRemoteReplicationGroup command to stop data transfer between a primary
virtual disk and a secondary virtual disk in a replication relationship without disabling the replication
relationship.
Suspending a replication relationship group lets you control when the data on the primary virtual disk and
data on the secondary virtual disk are synchronized. Suspending a replication relationship group helps to
reduce any performance impact to the host application that might occur while any changed data on the
primary virtual disk is copied to the secondary virtual disk.
When a replication relationship is in a suspended state, the primary virtual disk does not make any
attempt to contact the secondary virtual disk. Any writes to the primary virtual disk are persistently logged
in the replication repository virtual disks. After the replication relationship resumes, any data that is written
to the primary virtual disk is automatically written to the secondary virtual disk. Only the modified data
blocks on the primary virtual disk are written to the secondary virtual disk. Full synchronization is not
required.
This example shows the suspend asyncRemoteReplicationGroup command:
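A representative command, using the Remote_SS_A101 group name from the earlier examples:
suspend asyncRemoteReplicationGroup ["Remote_SS_A101"];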
The replication relationship group remains suspended until you use the resume
asyncRemoteReplicationGroup command to restart synchronization activities.
This example shows the resume asyncRemoteReplicationGroup command:
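A representative command, using the Remote_SS_A101 group name from the earlier examples:
resume asyncRemoteReplicationGroup ["Remote_SS_A101"];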
Deleting A Remote Replication Group
Use the delete asyncRemoteReplicationGroup command to delete one or more replication groups
from the local or remote storage array. The replication group you are attempting to delete must be
empty (contain no virtual disks or replicated pairs) before running this command.
This example shows the delete asyncRemoteReplicationGroup command:
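A representative command, again using the Remote_SS_A101 group name:
delete asyncRemoteReplicationGroup ["Remote_SS_A101"];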
Removing A Virtual Disk Or Repository Virtual Disk From
A Remote Replication Group
Use the remove virtualDisk command to remove either a member virtual disk or repository virtual
disk from an existing replication group. This command can only be run on the local storage array
containing the replication group affected by the command.
The command shown below removes diskname from the replication group named groupname. Using the
optional deleteRepositoryMembers parameter with a value of TRUE also deletes the repository virtual
disk members:
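A representative form follows; the deleteRepositoryMembers parameter comes from the text above, while the asyncRemoteReplicationGroup parameter name is an assumption:
remove virtualDisk ["diskname"] asyncRemoteReplicationGroup="groupname" deleteRepositoryMembers=TRUE;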
Disabling The Remote Replication Premium Feature
You can disable the Remote Replication premium feature to prevent new replication relationships from
being created. When you disable the Remote Replication premium feature, the premium feature is in a
Disabled/Active state. In this state, you can maintain and manage previously existing replication
relationships; however, you cannot create new relationships. To disable the Remote Replication premium
feature, use this command:
disable storageArray feature=asyncReplication;
Deactivating The Remote Replication Premium Feature
If you no longer require the Remote Replication premium feature and you have removed all of the
replication relationships (replication groups and replicated pairs), you can deactivate the premium
feature. Deactivating the premium feature re-establishes the normal use of dedicated ports on both
storage arrays and deletes replication repository virtual disks. To deactivate the Remote Replication
premium feature, use this command:
deactivate storageArray feature=asyncReplication;
Interaction With Other Premium Features
You can run the Remote Replication premium feature while running these premium features:
•Snapshot—both standard Snapshot and Snapshot (legacy) premium features
•Virtual Disk Copy
When you run the Remote Replication premium feature with other premium features, you must
consider the requirements of the other premium features to ensure that you set up a stable storage array
configuration.
For more information on using Remote Replication with other premium features, see the Administrator's Guide.
Standard Remote Replication Commands
The following Remote Replication commands are also available. For more information, see Commands.
remove asyncRemoteReplicationGroup
Removes an orphaned replicated pair virtual disk.
reset asyncRemoteReplicationGroup
Resets the IP address for the remote storage array to re-establish connection with the local storage
array, or resets synchronization statistics for member virtual disks to relative 0.
show asyncRemoteReplicationGroup synchronizationProgress
Shows progress of periodic synchronization of the replication group as a percentage.
show asyncRemoteReplicationGroup summary
Shows configuration information for one or more replication groups.
9
Using The Remote Replication (Legacy)
Premium Feature
The Remote Replication (legacy) premium feature provides for online, real-time replication of data
between storage arrays over a remote distance. In the event of a disaster or a catastrophic failure on one
storage array, you can promote the second storage array to take over responsibility for computing
services. Remote Replication (legacy) is designed for extended storage environments in which the storage
arrays that are used for Remote Replication (legacy) are maintained at separate sites. Virtual disks on one
storage array are replicated to virtual disks on another storage array across a fabric SAN. Data transfers
can be synchronous or asynchronous. You choose the method when you set up the remote replicated
pair. The data transfers occur at Fibre Channel speeds to maintain data on the different storage arrays.
Because Remote Replication (legacy) is storage based, it does not require any server overhead or
application overhead.
You can use Remote Replication (legacy) for these functions:
Disaster recovery: Remote Replication (legacy) lets you replicate data from one site to another site,
which provides an exact duplicate at the remote (secondary) site. If the primary site fails, you can use
replicated data at the remote site for failover and recovery. You can then shift storage operations to the
remote site for continued operation of all of the services that are usually provided by the primary site.
Data vaulting and data availability: Remote Replication (legacy) lets you send data off site where it can
be protected. You can then use the off-site copy for testing or to act as a source for a full backup to
avoid interrupting operations at the primary site.
Two-way data protection: Remote Replication (legacy) provides the ability to have two storage arrays
back up each other by duplicating critical virtual disks on each storage array to virtual disks on the other
storage array. This action lets each storage array recover data from the other storage array in the event
of any service interruptions.
How Remote Replication (Legacy) Works
When you create a remote replicated pair, the remote replicated pair consists of a primary virtual disk on
a local storage array and a secondary virtual disk on a storage array at another site. A standard virtual disk
can be included in only one replicated virtual disk pair.
The primary virtual disk is the virtual disk that accepts host I/O activity and stores application data. When
the replication relationship is first created, data from the primary virtual disk is copied in its entirety to the
secondary virtual disk. This process is known as a full synchronization and is directed by the RAID
controller module owner of the primary virtual disk. During a full synchronization, the primary virtual disk
remains fully accessible for all normal I/O operations.
The RAID controller module owner of the primary virtual disk initiates remote writes to the secondary
virtual disk to keep the data on the two virtual disks synchronized.
The secondary virtual disk maintains a replication (or copy) of the data on its associated primary virtual
disk. The RAID controller module owner of the secondary virtual disk receives remote writes from the
RAID controller module owner of the primary virtual disk but does not accept host write requests. Hosts
can read from the secondary virtual disk, which appears to them as read-only.
In the event of a disaster or a catastrophic failure at the primary site, you can perform a role reversal to
promote the secondary virtual disk to a primary role. Hosts then are able to read from and write to the
newly promoted virtual disk, and business operations can continue.
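The role behavior described above can be sketched as a short simulation. This is an illustrative sketch only, not part of the MD storage management software; the class and method names are hypothetical.

```python
# Sketch: primary/secondary roles in a remote replicated pair, and a
# role reversal that promotes the secondary after a failure at the
# primary site. All names here are hypothetical.

class ReplicatedPair:
    def __init__(self):
        self.roles = {"local": "primary", "remote": "secondary"}

    def write(self, site):
        # Only the virtual disk holding the primary role accepts host writes.
        if self.roles[site] != "primary":
            raise PermissionError(f"{site} virtual disk is read-only (secondary)")
        return "write accepted"

    def role_reversal(self):
        # Promote the secondary to primary so hosts can resume read/write
        # operations at the surviving site.
        self.roles = {site: ("primary" if role == "secondary" else "secondary")
                      for site, role in self.roles.items()}

pair = ReplicatedPair()
print(pair.write("local"))   # the primary accepts host writes
pair.role_reversal()
print(pair.write("remote"))  # the newly promoted primary now accepts writes
```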
Replication Repository Virtual Disks
A replication repository virtual disk is a special virtual disk in the storage array that is created as a resource
for the RAID controller module owner of the primary virtual disk in a remote replicated pair. The RAID
controller module stores replication information on this virtual disk, including information about remote
writes that are not yet complete. The RAID controller module can use this information to recover from
RAID controller module resets and the accidental powering down of the storage arrays.
When you activate the Remote Replication (legacy) premium feature on the storage array, you create two
replication repository virtual disks, one for each RAID controller module in the storage array. An individual
replication repository virtual disk is not needed for each remote replication.
When you create the replication repository virtual disks, you specify the location of the virtual disks. You
can either use existing free capacity, or you can create a disk group for the virtual disks from
unconfigured capacity and then specify the RAID level.
Because of the critical nature of the data being stored, do not use RAID Level 0 as the RAID level of
replication repository virtual disks. The required size of each virtual disk is 128 MB, or 256 MB total for
both replication repository virtual disks of a dual-RAID controller module storage array. In previous
versions of the Remote Replication (legacy) premium feature, the replication repository virtual disks
required less disk storage space and had to be upgraded to support the maximum number of replication
relationships.
Replication Relationships
Before you create a replication relationship, you must enable the Remote Replication (legacy) premium
feature on both the primary storage array and the secondary storage array. You must also create a
secondary virtual disk on the secondary site if one does not already exist. The secondary virtual disk must
be a standard virtual disk of equal or greater capacity than the associated primary virtual disk.
When secondary virtual disks are available, you can establish a replication relationship in the MD storage
management software by identifying the primary virtual disk and the storage array that contains the
secondary virtual disk.
When you first create the replication relationship, a full synchronization automatically occurs, with data
from the primary virtual disk copied in its entirety to the secondary virtual disk.
Data Replication
The RAID controller modules manage data replication between the primary virtual disk and the secondary
virtual disk. This process is transparent to host machines and applications. This section describes how
data is replicated between the storage arrays that are participating in Remote Replication (legacy). This
section also describes the actions taken by the RAID controller module owner of the primary virtual disk if
a link interruption occurs between storage arrays.
Write Modes
When the RAID controller module owner of the primary virtual disk receives a write request from a host,
the RAID controller module first logs information about the write to a replication repository virtual disk,
and then writes the data to the primary virtual disk. The RAID controller module then initiates a remote
write operation to copy the affected data blocks to the secondary virtual disk at the secondary storage
array.
The Remote Replication (legacy) premium feature provides two write mode options that affect when the
I/O completion indication is sent back to the host: Synchronous and Asynchronous.
Synchronous Write Mode
Synchronous write mode provides the highest level of security for full data recovery from the secondary
storage array in the event of a disaster, but it reduces host I/O performance.
When this write mode is selected, host write requests are written to the primary virtual disk and then
copied to the secondary virtual disk. After the host write request has been written to the primary virtual
disk and the data has been successfully copied to the secondary virtual disk, the RAID controller module
removes the log record on the replication repository virtual disk. The RAID controller module then sends
an I/O completion indication back to the host system. Synchronous write mode is the default and the
recommended write mode.
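The synchronous write sequence described above can be summarized as a short sketch. This is illustrative only, not the actual RAID controller module firmware; the function and dictionary names are hypothetical.

```python
# Sketch: the order of operations for one host write in Synchronous
# write mode. The host is acknowledged only after the remote copy
# succeeds and the log record is removed. All names are hypothetical.

def synchronous_write(block, data, primary, secondary, repository):
    repository[block] = "pending"  # 1. log the write to the replication repository
    primary[block] = data          # 2. write the data to the primary virtual disk
    secondary[block] = data        # 3. copy the block to the secondary virtual disk
    del repository[block]          # 4. remote copy succeeded: remove the log record
    return "I/O complete"          # 5. only now is the host acknowledged

primary, secondary, repository = {}, {}, {}
status = synchronous_write(7, b"data", primary, secondary, repository)
print(status)                      # "I/O complete"
print(primary == secondary)        # True: both sites hold identical data
```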
Asynchronous Write Mode
Asynchronous write mode offers faster host I/O performance but does not guarantee that a copy
operation has successfully completed before processing the next write request. When you use
Asynchronous write mode, host write requests are written to the primary virtual disk. The RAID controller
module then sends an “I/O complete” indication back to the host system, without acknowledging that the
data has been successfully copied to the secondary (remote) storage array.
When using Asynchronous write mode, write requests are not guaranteed to be completed in the same
order on the secondary virtual disk as they are on the primary virtual disk. If the order of write requests is
not retained, data on the secondary virtual disk might become inconsistent with the data on the primary
virtual disk. This event could jeopardize any attempt to recover data if a disaster occurs on the primary
storage array.
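The ordering hazard described above can be sketched as follows. This is an illustrative simulation only, not actual controller behavior; all names are hypothetical, and the shuffle stands in for unpredictable arrival order across the replication link.

```python
# Sketch: why Asynchronous write mode can leave the secondary virtual
# disk inconsistent. The host is acknowledged as soon as the primary is
# written; remote copies are deferred, and their arrival order at the
# secondary is not guaranteed. All names are hypothetical.
import random

def asynchronous_writes(writes, primary):
    pending = []
    for block, data in writes:
        primary[block] = data          # write the primary, ack the host at once
        pending.append((block, data))  # remote copy is queued, not yet sent
    random.shuffle(pending)            # arrival order across the link may differ
    return pending

primary, secondary = {}, {}
pending = asynchronous_writes([(0, "v1"), (0, "v2")], primary)
for block, data in pending:            # remote writes applied in arrival order
    secondary[block] = data

print(primary[0])                      # always "v2" (the host's latest write)
# secondary[0] may be "v1" or "v2": if the older write arrives last, the
# secondary holds stale data, jeopardizing recovery after a disaster.
```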
Write Consistency Mode
When multiple replication relationships on a single storage array are configured to use Asynchronous
write mode and to preserve consistent write order, they are treated as an interdependent group in Write
consistency mode. The data on the secondary, remote storage array cannot be considered fully
synchronized until all of the remote replications in Write consistency mode are synchronized.
If one replication relationship in the group becomes unsynchronized, all of the replication relationships in
the group become unsynchronized. Any write activity to the remote, secondary storage arrays is
prevented to protect the consistency of the remote data set.
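This all-or-nothing group behavior can be sketched in a few lines. This is an illustrative sketch only, not controller firmware; the function and relationship names are hypothetical.

```python
# Sketch: a Write consistency group. If any member relationship becomes
# unsynchronized, every member is treated as unsynchronized, so the
# remote data set stays consistent. All names are hypothetical.

def group_state(members):
    # members maps relationship name -> its individual synchronization status
    if any(state == "unsynchronized" for state in members.values()):
        return {name: "unsynchronized" for name in members}
    return dict(members)

group = {"db_log": "synchronized", "db_data": "synchronized"}
print(group_state(group))           # both members report synchronized

group["db_log"] = "unsynchronized"  # one member loses its remote link
print(group_state(group))           # the whole group is now unsynchronized
```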
Link Interruptions Or Secondary Virtual Disk Errors
When processing write requests, the primary RAID controller module might be able to write to the
primary virtual disk, but a link interruption might prevent communication with the remote (secondary)
RAID controller module.
In this case, the remote write operation cannot be completed to the secondary virtual disk, and the
primary virtual disk and the secondary virtual disk are no longer correctly replicated. The primary RAID
controller module transitions the replicated pair into an Unsynchronized state and sends an I/O
completion to the primary host. The primary host can continue to write to the primary virtual disk, but
remote writes do not take place.
When communication is restored between the RAID controller module owner of the primary virtual disk
and the RAID controller module owner of the secondary virtual disk, a resynchronization takes place. This
resynchronization occurs automatically, or it must be started manually, depending on the write mode
that you chose when setting up the replication relationship. During the resynchronization, only the
blocks of data that have changed on the primary virtual disk during the link interruption are copied to the
secondary virtual disk. After the resynchronization starts, the replicated pair transitions from an
Unsynchronized status to a Synchronization in Progress status.
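The delta resynchronization described above can be sketched as a changed-block copy. This is an illustrative sketch only, not the actual controller implementation; all names are hypothetical.

```python
# Sketch: delta resynchronization after a link interruption. The
# controller tracks which blocks changed while the link was down and
# copies only those, not the entire virtual disk. Names are hypothetical.

def resynchronize(primary, secondary, changed_blocks):
    copied = 0
    for block in sorted(changed_blocks):
        secondary[block] = primary[block]  # copy only the dirtied blocks
        copied += 1
    changed_blocks.clear()                 # the pair is synchronized again
    return copied

primary = {0: "a", 1: "b", 2: "c-new"}
secondary = {0: "a", 1: "b", 2: "c-old"}
changed = {2}                              # only block 2 was written during the outage

print(resynchronize(primary, secondary, changed))  # 1 block copied, not 3
print(secondary == primary)                        # True
```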
The primary RAID controller module also marks the replicated pair as unsynchronized when a virtual disk
error on the secondary side prevents the remote write from completing. For example, an offline
secondary virtual disk or a failed secondary virtual disk can cause the remote replication to become
unsynchronized. When the virtual disk error is corrected (the secondary virtual disk is placed online or
recovered to an Optimal status), then synchronization is required. The replicated pair then transitions to a
Synchronization in Progress status.
Resynchronization
Data replication between the primary virtual disk and the secondary virtual disk in a replication
relationship is managed by the RAID controller modules and is transparent to host machines and
applications. When the RAID controller module owner of the primary virtual disk receives a write request
from a host, the RAID controller module first logs information about the write to a replication repository
virtual disk. The RAID controller module then writes the data to the primary virtual disk. The RAID
controller module then initiates a write operation to copy the affected data to the secondary virtual disk
on the remote storage array.
If a link interruption or a virtual disk error prevents communication with the secondary storage array, the
RAID controller module owner of the primary virtual disk transitions the replicated pair into an
Unsynchronized status. The RAID controller module owner then sends an I/O completion to the host
sending the write request. The host can continue to issue write requests to the primary virtual disk, but
remote writes to the secondary virtual disk do not take place.
When connectivity is restored between the RAID controller module owner of the primary virtual disk and
the RAID controller module owner of the secondary virtual disk, the virtual disks must be resynchronized
by copying only the blocks of data that changed during the interruption to the secondary virtual disk.