Trademarks used in this text: Dell™, the DELL logo, PowerEdge™, PowerVault™, and OpenManage™ are trademarks of Dell Inc. Intel® is a registered trademark of Intel Corporation in the U.S. and other countries. Microsoft®, Windows®, Windows Server®, MS-DOS®, and Internet Explorer® are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. Red Hat® and Red Hat Enterprise Linux® are registered trademarks of Red Hat, Inc. in the United States and other countries. SUSE® is a registered trademark of Novell, Inc. in the United States and other countries.
This guide provides information about deploying Dell PowerVault MD3600i
and Dell PowerVault MD3620i storage arrays. The deployment process
includes:
• Hardware installation
• Modular Disk Storage Manager (MDSM) software installation
• Initial system configuration
Other information provided includes system requirements, storage array organization, and utilities.
NOTE: For more information on product documentation, see
dell.com/support/manuals.
MDSM enables an administrator to configure and monitor storage arrays for
optimum usability. The version of MDSM included on the PowerVault MD
series resource media can be used to manage both the PowerVault MD3600i
series and the earlier PowerVault MD series storage arrays. MDSM is
compatible with both Microsoft Windows and Linux operating systems.
System Requirements
Before installing and configuring the PowerVault MD3600i series hardware
and software, ensure that the minimum system requirements are met, and
the supported operating system is installed. For more information, see the
Dell PowerVault Support Matrix available on dell.com/support/manuals.
Management Station Requirements
A management station uses MDSM to configure and manage storage arrays
across the network, and must meet the following minimum system
requirements:
• Intel Pentium or an equivalent processor (333 MHz or faster) with 512 MB RAM (1024 MB recommended).
• 1 GB disk space.
• Display resolution of 1024x768 with 16 million colors (1280x1024 32-bit recommended).
• Microsoft Windows, Red Hat Enterprise Linux, or SUSE Linux Enterprise Server.
NOTE: Supported operating systems include both native and guest operating
systems.
NOTE: Supported hypervisors include Microsoft Hyper-V, Citrix XenServer, and VMware. For information about the supported versions, see the Support Matrix at dell.com/support.
• Administrator or equivalent permissions.
Introduction to Storage Arrays
A storage array includes various hardware components, such as physical disks,
RAID controller modules, fans, and power supplies, gathered into enclosures.
An enclosure containing physical disks accessed through RAID controller
modules is called a storage array.
One or more host servers attached to the storage array can access the data on
the storage array. You can also establish multiple physical paths between the
host(s) and the storage array so that loss of any single path (for example,
through failure of a host server port) does not result in loss of access to data
on the storage array.
The storage array is managed by MDSM running on a:
• Host server—On a host server system, MDSM and the storage array communicate management requests and event information using iSCSI connections.
• Management station—On a management station, MDSM communicates with the storage array either through an Ethernet connection to the storage array management port or through an Ethernet connection to a host server. The Ethernet connection to a host server passes management information between the management station and the storage array using iSCSI connections.
Using MDSM, you can configure the physical disks in the storage array into
logical components called disk groups and then divide the disk groups into
virtual disks. Disk groups are created in the unconfigured capacity of a storage
array. Virtual disks are created in the free capacity of a disk group.
Unconfigured capacity comprises physical disks not already assigned to a disk
group. When a virtual disk is created using unconfigured capacity, a disk
group is automatically created. If the only virtual disk in a disk group is
deleted, the disk group is also deleted. Free capacity is space in a disk group
that is not assigned to any virtual disk.
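As a hedged illustration of this hierarchy, the SMcli command-line interface installed with MDSM can create a virtual disk from unconfigured capacity, implicitly creating the disk group that holds it; the array address, drive count, RAID level, and label below are placeholders, and the exact script syntax should be verified against the CLI guide for your array:

SMcli 192.168.128.101 -c "create virtualDisk driveCount=4 raidLevel=5 userLabel=\"vd_inventory\";"

Deleting the only virtual disk in the resulting disk group would also delete the disk group, as described above.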
Data is written to the physical disks in the storage array using RAID
technology. RAID levels define the way in which data is written to physical
disks. Different RAID levels offer different levels of accessibility, redundancy,
and capacity. You can set a specified RAID level for each disk group and
virtual disk on your storage array.
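As a generic worked example (not specific to any one array model): a disk group of six 1 TB physical disks yields roughly 5 TB of usable capacity at RAID 5 (one disk of parity, usable = (6 - 1) x 1 TB), 4 TB at RAID 6 (two disks of parity), and 3 TB at RAID 10 (mirroring), so redundancy increases as usable capacity decreases.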
For more information about using RAID and managing data in your storage solution, see the Owner's Manual at dell.com/support/manuals.
2
Hardware Installation
Before using this guide, ensure that you review the instructions in the:
• Getting Started Guide—The Getting Started Guide that shipped with the storage array provides information to configure the initial setup of the system.
• Planning section of the Owner's Manual—The planning section provides information about important concepts you must know before setting up your storage solution. See the Owner's Manual at dell.com/support/manuals.
Planning the Storage Configuration
Consider the following before installing your storage array:
• Evaluate data storage needs and administrative requirements.
• Calculate availability requirements.
• Decide the frequency and level of backups, such as weekly full backups with daily partial backups.
• Consider storage array options, such as password protection and e-mail alert notifications for error conditions.
• Design the configuration of virtual disks and disk groups according to a data organization plan. For example, use one virtual disk for inventory, a second for financial and tax information, and a third for customer information.
• Decide whether to allow space for hot spares, which automatically replace failed physical disks.
Connecting the Storage Array
The storage array is connected to a host using two hot-swappable RAID
controller modules. The RAID controller modules are identified as RAID
controller module 0 and RAID controller module 1.
Each RAID controller module has two iSCSI In port connectors that provide
Ethernet connections to the host server or switches. Each RAID controller
module also contains an Ethernet management port and a SAS Out port.
The Ethernet management port allows you to install a dedicated
management station (server or stand-alone system). The SAS Out port allows
you to connect the storage array to optional PowerVault MD1200 series
expansion enclosures for additional storage capacity.
Each PowerVault MD3600i series storage array can be expanded to a
maximum of 120 (or 192, if enabled using Premium Feature activation)
physical disks through a maximum of seven MD1200 series expansion
enclosures.
Cabling the Storage Array
The iSCSI interface enables different host-to-controller configurations.
The figures in this chapter are grouped according to the following categories:
• Direct-attached configurations (no Ethernet switches are used)
• Network-attached (SAN) configurations (Ethernet switches are used)
Redundant and Non-Redundant Configurations
Non-redundant configurations are configurations that provide only a single
data path from a host to the storage array. This type of configuration is only
recommended for non-critical data storage. Path failure from a failed or
removed cable, a failed NIC, or a failed or removed RAID controller module
results in loss of host access to storage on the storage array.
Redundancy is established by installing separate data paths between the host
and the storage array, in which each path leads to one of the two RAID controller
modules installed in the storage array. Redundancy protects the host from
losing access to data in the event of path failure, because both RAID
controller modules can access all the disks in the storage array.
Direct-Attached Configurations
You can connect the Ethernet ports of the host servers directly to the storage
array RAID controller module iSCSI ports.
Single Path Data Configurations
With a single path configuration, a group of heterogeneous hosts can be
connected to the storage array through a single physical Ethernet port. Since
there is only one port, there is no redundancy, although each iSCSI portal
supports multiple connections. This configuration is supported for both
single controller and dual controller modes.
Figure 2-1 shows a non-redundant cabling configuration to the RAID controller modules using a single path data configuration.
Figure 2-1. Two Hosts Connected to a Single Controller
Figure 2-2 shows one host connected to a single controller array.
Figure 2-2. One Host Connected to a Single Controller
Figure 2-3 shows four stand-alone hosts supported in a dual controller array configuration with a single data path.
Figure 2-3. Four Hosts in a Dual-Controller Configuration
Dual Path Data Configuration
In Figure 2-4, up to two servers are directly attached to the RAID controller
modules. If the host server has a second Ethernet connection to the array,
it can be attached to the iSCSI ports on the array's second controller.
This configuration provides improved availability by allowing two separate
physical paths for each host, which ensures full redundancy if one of the paths fails.
In Figure 2-5, up to two cluster nodes are directly attached to two RAID
controller modules. Since each cluster node has redundant paths, loss of a
single path still allows access to the storage array through the alternate path.
Figure 2-4. Two Hosts Connected to Two Controllers
Figure 2-5. Two Hosts Connected in a Dual-Controller Configuration
Network-Attached Configurations
You can also cable the host servers to the RAID controller module iSCSI ports
through industry-standard 10G or 1G Ethernet switches. An iSCSI
configuration that uses Ethernet switches is frequently referred to as an IP
SAN. By using an IP SAN, the PowerVault MD3600i series storage array can
support up to 64 hosts simultaneously. This configuration supports either
single or dual path data configurations and either single or dual controller
modules.
Figure 2-6 shows up to 64 stand-alone servers attached (using multiple
sessions) to a single RAID controller module through a network. Hosts that
have a second Ethernet connection to the network allow two separate
physical paths for each host, which ensures full redundancy if one of the paths
fails. It is recommended that you use two switches for added redundancy. However, a single-switch configuration is also supported. Figure 2-7 shows how the same
number of hosts can be similarly attached to a dual RAID controller module
configuration.
Figure 2-8 shows up to 64 stand-alone servers attached (using multiple
sessions) to a single RAID controller module through a network using a 1G to
10G aggregation scheme. The NICs on the servers are 1G NICs and the
uplink ports on the 1G switches are 10G. Hosts that have a second Ethernet
connection to the network allow two separate physical paths for each host,
which ensures full redundancy if one of the paths fails. It is recommended that you use two switches for added redundancy. However, a single-switch configuration is also supported.
Figure 2-9 shows how the same number of hosts can be similarly attached to a
dual RAID controller module configuration. Hardware redundancy is
achieved in this configuration, in case of any switch failure.
Figure 2-6. 64 Servers Connected to a Single Controller
Figure 2-7. 64 Servers Connected to Two Controllers
Figure 2-8. 64 Servers Connected to a Single RAID Controller
[Figure: up to 64 hosts with 1G NICs connected through 1G switches with 10G uplinks to a 10G switch and the storage array]
Figure 2-9. 64 Servers Connected to Two RAID Controllers
[Figure: up to 64 hosts with 1G NICs connected through 1G switches with 10G uplinks to two 10G switches and the storage array]
Cabling PowerVault MD1200 Series Expansion
Enclosures
You can expand the capacity of your PowerVault MD3600i series storage array
by adding PowerVault MD1200 series expansion enclosures. You can expand
the physical disk pool to a maximum of 120 (or 192, if enabled using
Premium Feature activation) physical disks using a maximum of seven
expansion enclosures.
Expanding With Previously Configured PowerVault MD1200 Series
Expansion Enclosures
Use this procedure if your expansion enclosure is directly attached to and
configured on a Dell PowerEdge RAID Controller (PERC) H800 adapter.
Data from virtual disks created on a PERC H800 adapter cannot be directly
migrated to a PowerVault MD3600i series storage array or to a PowerVault
MD1200 series expansion enclosure connected to a PowerVault MD3600i
series storage array.
CAUTION: If a PowerVault MD1200 series expansion enclosure that was
previously attached to PERC H800 adapter is used as an expansion enclosure to a
PowerVault MD3600i series storage array, the physical disks of the expansion
enclosure are reinitialized and data is lost. You must back up all data on the
expansion enclosure before attempting the expansion.
To attach previously configured PowerVault MD1200 series expansion
enclosures to the PowerVault MD3600i series storage array:
1 Back up all data on the expansion enclosure(s).
2 Upgrade the expansion enclosure firmware to the latest version available at dell.com/support while the enclosure is still attached to the PERC H800 controller. Windows systems users can reference the DUP.exe package and Linux kernel users can reference the DUP.bin package.
3 Ensure that the storage array software is installed and up to date before adding the expansion enclosure(s). For more information, see the Support Matrix at dell.com/support/manuals.
a Install the software and driver package included on the PowerVault MD series resource media. For information about installing the software, see "Installing PowerVault MD Storage Software" on page 31.
b Update the storage array RAID controller module firmware and NVSRAM to the latest versions available at dell.com/support, using MDSM.
c Click Tools→Upgrade RAID Controller Module Firmware in the Enterprise Management Window (EMW).
4 Stop all I/O and turn off the system and attached units:
a Stop all I/O to the storage array and turn off the host systems attached to the storage array.
b Turn off the storage array.
c Turn off the expansion enclosure(s) in the affected system.
5 Cable the expansion enclosure(s) to the storage array.
6 Turn on attached units:
a Turn on the expansion enclosure(s). Wait for the enclosure status LED to light blue.
b Turn on the storage array and wait for the status LED to indicate that the unit is ready:
• If the status LEDs are solid amber, the storage array is still coming online.
• If the status LEDs are blinking amber, there is an error that can be viewed using MDSM.
• If the status LEDs are solid blue, the storage array is ready.
c After the storage array is online and ready, turn on any attached host systems.
7 After the PowerVault MD1200 series expansion enclosure is configured as an expansion enclosure of the storage array, restore the data that was backed up in step 1.
After the expansion enclosures are online, they can be accessed as a part of the storage array.
Expanding With New PowerVault MD1200 Series Expansion Enclosures
Perform the following steps to attach new PowerVault MD1200 series expansion enclosures to a PowerVault MD3600i series storage array:
1 Before adding the expansion enclosure(s), ensure that the storage array software is installed and is up to date. For more information, see the Support Matrix at dell.com/support/manuals.
a Install the software and driver package included on the PowerVault MD series resource media. For information about installing the software, see "Installing PowerVault MD Storage Software" on page 31.
b Set up the PowerVault MD1200 series expansion enclosure(s). For information about setting up the PowerVault MD1200 series expansion enclosure(s), see the Owner's Manual at dell.com/support/manuals.
c Using MDSM, update the RAID controller module firmware and NVSRAM to the latest versions available on dell.com/support.
d Click Tools→Upgrade RAID Controller Module Firmware from the Enterprise Management Window (EMW).
2 Stop I/O and turn off all systems:
a Stop all I/O to the storage array and turn off affected host systems attached to the storage array.
b Turn off the storage array.
c Turn off any expansion enclosure(s) in the affected system.
3 Cable the expansion enclosure(s) to the storage array.
4 Turn on attached units:
a Turn on the expansion enclosure(s). Wait for the enclosure status LED to light blue.
b Turn on the storage array and wait for the status LED to indicate that the unit is ready:
• If the status LEDs are solid amber, the storage array is still coming online.
• If the status LEDs are blinking amber, there is an error that can be viewed using MDSM.
• If the status LEDs are solid blue, the storage array is ready.
c After the storage array is online and ready, turn on any attached host systems.
5 Using MDSM, update all attached expansion enclosure firmware if it is out of date:
a From the EMW, select the enclosure that you want to update and enter the Array Management Window (AMW).
b Click Advanced→Maintenance→Download→EMM Firmware.
c Select Select All to update all the attached expansion enclosures simultaneously.
3
Installing PowerVault MD Storage
Software
The Dell PowerVault MD series resource media contains software and drivers
for both Linux and Microsoft Windows operating systems.
The root of the media contains a readme.txt file covering changes to the
software, updates, fixes, patches, and other important data applicable to both
Linux and Windows operating systems. The readme.txt file also specifies
requirements for accessing documentation, information regarding versions of
the software on the media, and system requirements for running the software.
For more information on supported hardware and software for Dell PowerVault
systems, see the Support Matrix at dell.com/support/manuals.
NOTE: It is recommended that you install all the latest updates available at
dell.com/support.
The PowerVault MD series resource media provides features that include the core
software, providers, and optional utilities. The core software feature includes the
host-based storage agent, multipath driver, and Modular Disk Storage Manager
(MDSM) application used to configure, manage, and monitor the storage array
solution. The providers feature includes providers for the Microsoft Virtual Disk
Service (VDS) and Microsoft Volume Shadow-Copy Service (VSS) framework.
The Modular Disk Configuration Utility (MDCU) is an optional utility that
provides a consolidated approach for configuring the management ports, iSCSI
host ports, and creating sessions for the iSCSI Modular Disk storage arrays. It is
recommended that you install and use the MDCU to configure iSCSI on each
host connected to the storage array.
NOTE: For more information about the Microsoft VDS and Microsoft VSS providers, see the PowerVault MD3600i Owner's Manual.
NOTE: To install the software on a Windows or Linux system, you must have administrative or root privileges.
NOTE: If Dynamic Host Configuration Protocol (DHCP) is not used, initial configuration
of the management station must be performed on the same physical subnet as the
storage array. Additionally, during initial configuration, at least one network adapter
must be configured on the same IP subnet as the storage array’s default management
port (192.168.128.101 or 192.168.128.102). After initial configuration, the management
ports are configured using MDSM and the management station’s IP address can be
changed back to the previous settings.
The PowerVault MD series resource media offers the following three
installation methods:
• Graphical Installation (Recommended)—This is the recommended installation procedure for most users. The installer presents a graphical wizard-driven interface that allows customization of which components are installed.
• Console Installation—This installation procedure is useful for Linux users that do not desire to install an X-Window environment on their supported Linux platform.
• Silent Installation—This installation procedure is useful for users that prefer to create scripted installations.
Graphical Installation (Recommended)
The PowerVault MD Storage Manager software configures, manages and
monitors the storage array. The MD Configuration Utility (MDCU) is an
optional utility that provides a consolidated approach for configuring the
management and iSCSI host ports, and creating sessions for the iSCSI
modular disk storage arrays. It is recommended that you use MDCU to
configure iSCSI on each host server connected to the storage array. To install
the MD storage software:
1 Insert the PowerVault MD series resource media.
Depending on your operating system, the installer may launch automatically. If the installer does not launch automatically, navigate to the root directory of the installation media (or downloaded installer image) and run the md_launcher.exe file. For Linux-based systems, navigate to the root of the resource media and run the autorun file.
NOTE: By default, Red Hat Enterprise Linux mounts the resource media with the
–noexec mount option which does not allow you to run executable files. To change
this setting, see the Readme file in the root directory of the installation media.
2 Select Install MD Storage Software.
3 Read and accept the license agreement.
4 Select one of the following installation options from the Install Set dropdown menu:
• Full (recommended)—Installs the MD Storage Manager (client) software, host-based storage agent, multipath driver, and hardware providers.
• Host Only—Installs the host-based storage agent and multipath drivers.
• Management—Installs the management software and hardware providers.
• Custom—Allows you to select specific components.
5 Select the PowerVault MD storage array model(s) you are setting up to serve as data storage for this host server.
6 Choose whether to start the event monitor service automatically when the host server reboots or manually.
NOTE: This option is applicable only to Windows client software installation.
7 Confirm the installation location and choose Install.
8 If prompted, reboot the host server after the installation completes.
9 When the reboot is complete, the MDCU may launch automatically. If the MDCU does not launch automatically, launch it manually.
• In a Windows-based operating system, click Start→Dell→Modular Disk Configuration Utility.
• In a Linux-based operating system, double-click the Modular Disk Configuration Utility icon on the desktop.
10 Start MD Storage Manager and discover the array(s).
11 If applicable, activate any premium features purchased with your storage array. If you purchased premium features, see the printed activation card shipped with your storage array.
NOTE: The MD Storage Manager installer automatically installs the required drivers, firmware, and operating system patches/hotfixes to operate your storage array. These drivers and firmware are also available at dell.com/support. In addition, see the Support Matrix at dell.com/support/manuals for any additional settings and/or software required for your specific storage array.
Console Installation
NOTE: Console installation only applies to Linux systems that are not running a
graphical environment.
The autorun script in the root of the resource media detects when there is no
graphical environment running and automatically starts the installer in a
text-based mode. This mode provides the same options as graphical
installation with the exception of the MDCU specific options. The MDCU
requires a graphical environment to operate.
NOTE: The console mode installer provides the option to install the MDCU.
However, a graphical environment is required to utilize the MDCU.
Silent Installation
To run silent installation on a Windows system:
1 Copy the custom_silent.properties file in the /windows folder of the installation media or image to a writable location on the host server.
2 Modify the custom_silent.properties file to reflect the features, models and installation options to be used. Then, save the file.
3 Once the custom_silent.properties file is revised to reflect your specific installation, run the following command to begin the silent installation:
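For illustration only, the invocation typically takes the form below; mdss_install.exe is an assumed installer name and C:\temp is a placeholder path, so check the readme on your resource media for the actual executable and switches:

mdss_install.exe -f C:\temp\custom_silent.properties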
Upgrading PowerVault MD Storage Software
To upgrade from a previous version of the MD Storage Manager application, uninstall the previous version (see "Uninstalling MD Storage Software" on page 51), and then follow the instructions in this chapter to install the new version.
4
Post Installation Tasks
Before using the storage array for the first time, complete the initial
configuration tasks in the order shown. These tasks are performed using the
MD Storage Manager.
NOTE: If Dynamic Host Configuration Protocol (DHCP) is not used, initial configuration
using the management station must be performed on the same physical subnet as the
storage array. Additionally, during initial configuration, at least one network adapter
must be configured on the same IP subnet as the storage array’s default management
port (192.168.128.101 or 192.168.128.102). After initial configuration, the management
ports are configured using MD Storage Manager and the management station’s IP
address can be changed back to the previous settings.
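As a minimal sketch of that temporary addressing step (the interface names eth0 and "Local Area Connection" are assumptions; substitute your own), you can place the management station on the array's default management subnet as follows:

# Linux management station
ip addr add 192.168.128.100/24 dev eth0
# Windows management station
netsh interface ipv4 set address name="Local Area Connection" static 192.168.128.100 255.255.255.0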
Before You Begin
NOTE: Before you begin configuring iSCSI, it is recommended that you fill out the
IPv4 or IPv6 iSCSI configuration worksheet available in this document, see "IPv4
Settings—Worksheet" on page 39 and "IPv6 Settings—Worksheet" on page 40.
Gathering this type of information about your network before starting the
configuration steps helps you to complete the process more efficiently.
iSCSI Configuration Terminology
Table 4-1. Standard Terminology Used in iSCSI Configuration
CHAP (Challenge Handshake Authentication Protocol): An optional security protocol used to control access to an iSCSI storage system by restricting use of the iSCSI data ports on both the host server and storage array. For more information on the types of CHAP authentication supported, see "Understanding CHAP Authentication" on page 64.
Host or host server: A server connected to the storage array using iSCSI ports.
Host server port: iSCSI port on the host server used to connect it to the storage array.
iSCSI initiator: The iSCSI-specific software installed on the host server that controls communications between the host server and the storage array.
iSCSI host port: The iSCSI port (two per controller) on the storage array.
iSNS (Microsoft Internet Storage Naming Service): An automated discovery, management, and configuration tool used by some iSCSI devices.
Management station: The system from which you manage your host server/storage array configuration.
Storage array: The enclosure containing the storage data accessed by the host server.
Target: An iSCSI port on the storage array that accepts and responds to requests from the iSCSI initiator installed on the host server.
iSCSI Configuration Worksheet
The "IPv4 Settings—Worksheet" on page 39 and "IPv6 Settings—Worksheet"
on page 40 helps you plan your configuration. Recording host server and
storage array IP addresses at a single location enables you to configure your
setup faster and more efficiently.
"Guidelines For Configuring Your Network For iSCSI" on page 47 provides
general network setup guidelines for both Microsoft Windows and Linux
environments. It is recommended that you review these guidelines before
completing the worksheet.
IPv4 Settings—Worksheet
If you need additional space for more than one host server, use an additional sheet.
A. Host server: record the static IP address, subnet mask, and default gateway for iSCSI port 1, iSCSI port 2, and the management port (use a different subnet for each NIC). Also record the Target CHAP Secret and Mutual CHAP Secret, if used.
B. PowerVault MD36x0i storage array: record the IP address, subnet mask, and default gateway for each port; the factory defaults are:
iSCSI controller 0, In 0: 192.168.130.101
iSCSI controller 0, In 1: 192.168.131.101
Management port cntrl 0: 192.168.128.101
iSCSI controller 1, In 0: 192.168.130.102
iSCSI controller 1, In 1: 192.168.131.102
Management port cntrl 1: 192.168.128.102
IPv6 Settings—Worksheet
If you need additional space for more than one host server, use an additional sheet. Record the host server (A) iSCSI port settings and the PowerVault MD36x0i (B) iSCSI and management port settings, along with the Target CHAP and Mutual CHAP secrets, if used.
The following sections contain step-by-step instructions for configuring
iSCSI on your storage array. However, before you begin, it is important to
understand where each of these steps occurs in relation to your host server or
storage array environment.
The following table shows each iSCSI configuration step and where it occurs.
Table 4-2. Host Server Vs. Storage Array
Performed on the host server using the Microsoft or Linux iSCSI initiator:
3 Perform target discovery from the iSCSI initiator
6 (Optional) Configure CHAP authentication on the host server
7 Connect to the storage array from the host server
Performed on the storage array using PowerVault MD Storage Manager:
1 Discover the storage array
2 Configure the iSCSI ports on the storage array
4 Configure host access
5 (Optional) Configure CHAP authentication on the storage array
8 (Optional) Set up in-band management
NOTE: It is recommended that you use the PowerVault Modular Disk Configuration Utility (MDCU) for iSCSI configuration. The PowerVault MDCU wizard guides you through the configuration steps described above. If you want to perform a manual configuration, see "Appendix—Manual Configuration of iSCSI" on page 55.
Automatic Configuration Using the Modular Disk Configuration Utility
NOTE: If MDCU is not installed, it can be installed from the MD series resource
media.
The MDCU provides a consolidated approach for configuring the iSCSI network
of host servers and iSCSI-based storage arrays using a wizard-driven interface.
This utility also enables the user to configure the iSCSI sessions of the host server
according to the best practices and to achieve load-balanced paths with the
storage array iSCSI host ports. If you select Launch the MDCU after reboot
during the installation of the host software, the utility automatically launches
after the next host server reboot. This utility can also be launched manually.
The utility has a context sensitive online help to guide you through each step
of the wizard.
The MDCU performs:
•Storage array configuration
•Host configuration
Storage Array Configuration
Before a host iSCSI initiator and an iSCSI-based storage array can
communicate, they must be configured with information such as which IP
addresses and authentication method to use. Since iSCSI initiators establish
connections with an already configured storage array, the first task is to
configure your storage arrays to make them available for iSCSI initiators.
This utility requires network access to the management ports of the storage arrays
you wish to configure. You must have a properly functioning network
infrastructure before attempting to configure your storage arrays. If your storage
arrays are already configured, you can skip directly to the host configuration.
This configuration task generally involves the following steps:
1 Discover available storage array(s) for configuration.
2 Select a storage array to configure.
3 Set a storage array name and password.
4 Configure the IP protocols and addresses for the management ports.
5 Configure the IP protocols and addresses for the iSCSI ports.
6 Specify the CHAP authentication method.
7 Apply the settings after reviewing a summary.
8 Repeat the process starting from step 2 to configure additional arrays.
Host Configuration
After you have completed configuring your iSCSI-based storage arrays, the next task is to run this utility on all hosts that need to access the storage arrays. Depending on your network configuration, your host may be the same machine you use to manage your storage arrays, or it may be on a completely separate network.
The option to configure a host is disabled if the machine on which the utility is running does not have an iSCSI initiator or the required driver components installed. When the option is disabled, the utility also displays an informational message. If you are running the utility on a host which is not connected to the iSCSI-based storage array (or which you do not wish to connect to the array), the informational message can be ignored.
The task generally involves the following steps:
1 Discover available storage array(s) for connection.
2 Select a storage array.
3 Specify the CHAP secret.
4 Select the iSCSI ports the host's initiator uses to log on.
5 Repeat the process starting from step 2 to connect to additional arrays.
6 Repeat these steps on each host that needs access to the storage array(s).
Before Starting the Configuration Process
Before you start configuring the storage array or host connectivity, it is
recommended that you fill out the iSCSI configuration worksheet to help you
plan your configuration. You may need to use several worksheets depending
on your configuration.
Keep the following guidelines in mind for the storage array and host
configuration:
• For optimal performance, verify your network configuration against the storage array's Support Matrix at dell.com/support/manuals.
• If your host has multiple network interfaces, it is recommended that each network interface uses a separate subnet.
44Post Installation Tasks
Page 45
• For redundancy in a dual controller (duplex) configuration, ensure each host network interface is configured to connect to both storage array controllers.
• For optimal load balancing, ensure each host network interface that is used for iSCSI traffic is configured to connect to each storage array controller.
• It is recommended that each host network interface only establishes one iSCSI session per storage array controller.
NOTE: The utility tries to follow the guidelines for the host connectivity whenever
possible based on the available host network interfaces and their connectivity with
the iSCSI host ports of the storage array.
Configure the Storage Array Using MDCU
To configure the iSCSI-based storage array(s) using the MDCU:
1 Launch the utility (if it is not launched automatically) from the server with access to the management ports of the storage array(s) to be configured.
For Windows, click Start→All Programs→Dell→MD Storage Software→Modular Disk Configuration Utility.
For Linux, click the MDCU icon on the desktop or navigate to the MDCU installation directory and run it from there.
The MDCU automatically discovers all the available storage arrays.
2 In the Discover MD Arrays window, select the iSCSI storage array you want to configure.
3 In the Selected Array window, review current port and session information.
4 Click Config Wizard to start the iSCSI configuration wizard.
5 Complete the steps in Config Wizard to configure your iSCSI storage array.
6 In the Array Configuration Summary window, review and apply your configuration settings.
7 Click Create iSCSI Sessions to create host-to-storage array communication. Repeat for all host-to-array mappings you want to implement.
8 Verify that communication is established between the storage array and host server.
NOTE: For more information on MDCU, see the MDCU online help.
Post Connection Establishment Steps
After iSCSI connectivity is established between the host server(s) and the
storage array, you can create virtual disks on the storage array using MD
Storage Manager and these virtual disks can be utilized by the host server(s).
For more information about storage planning and using MD Storage
Manager, see the Administrator's Guide at dell.com/support/manuals.
Guidelines For Configuring Your Network For
iSCSI
This section provides general guidelines for setting up your network
environment and IP addresses for use with the iSCSI ports on your host server
and storage array. In order for hosts to communicate with management and/or
iSCSI ports of storage arrays, local NICs must be configured with IP addresses
capable of communication with the addresses listed in the IPv4/IPv6
worksheet. Your specific network environment may require different or
additional steps than shown here, so make sure you consult with your system
administrator before performing this setup.
Microsoft Windows Host Setup
To set up a Windows host network, you must configure the IP address and
netmask of each iSCSI port connected to the storage array. The specific steps
depend on whether you are using a Dynamic Host Configuration Protocol
(DHCP) server, static IP addressing, Domain Name System (DNS) server, or
Windows Internet Name Service (WINS) server.
NOTE: The server IP addresses must be configured for network communication to
the same IP subnet as the storage array management and iSCSI ports.
Using A DHCP server
1 In the Control Panel, select Network connections or Network and Sharing Center and then click Manage network connections.
2 Right-click the network connection you want to configure and select Properties.
3 On the General tab (for a local area connection) or the Networking tab (for all other connections), select Internet Protocol (TCP/IP), and then click Properties.
4 Select Obtain an IP address automatically, then click OK.
Using static IP addressing
1 In the Control Panel, select Network connections or Network and Sharing Center and then click Manage network connections.
2 Right-click the network connection you want to configure and select Properties.
3 On the General tab (for a local area connection) or the Networking tab (for all other connections), select Internet Protocol (TCP/IP), and then click Properties.
4 Select Use the following IP address and enter the IP address, subnet mask, and default gateway addresses.
Using A DNS server
1 In the Control Panel, select Network connections or Network and Sharing Center and then click Manage network connections.
2 Right-click the network connection you want to configure and select Properties.
3 On the General tab (for a local area connection) or the Networking tab (for all other connections), select Internet Protocol (TCP/IP), and then click Properties.
4 Select Obtain DNS server address automatically or enter the preferred and alternate DNS server IP addresses and click OK.
Using A WINS Server
NOTE: If you are using a DHCP server to allocate WINS server IP addresses, you do not need to add WINS server addresses.
1 In the Control Panel, select Network connections.
2 Right-click the network connection you want to configure and select Properties.
3 On the General tab (for a local area connection) or the Networking tab (for all other connections), select Internet Protocol (TCP/IP), and then click Properties.
4 Select the Advanced tab and click the WINS tab.
5 In the TCP/IP WINS server window, type the IP address of the WINS server and click Add.
6 To enable use of the Lmhosts file to resolve remote NetBIOS names, select Enable LMHOSTS lookup.
7 To specify the location of the file that you want to import into the Lmhosts file, select Import LMHOSTS and then select the file in the Open dialog box.
8 Enable or disable NetBIOS over TCP/IP.
If using Microsoft Windows Server 2008 Core Version, use the netsh interface command to configure the iSCSI ports on the host server.
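A hedged example of such a netsh command, assuming an interface named iSCSI1 and an address on the array's default iSCSI subnet (both placeholders):

netsh interface ipv4 set address name="iSCSI1" static 192.168.130.201 255.255.255.0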
Linux Host Setup
To set up a Linux host network, you must configure the IP address and
netmask of each iSCSI port connected to the storage array. The specific steps
depend on whether you are configuring TCP/IP using DHCP or configuring
TCP/IP using a static IP address.
NOTE: The server IP addresses must be configured for network communication to
the same IP subnet as the storage array management and iSCSI ports.
Using DHCP
If you are using DHCP (root users only):
1 Edit the /etc/sysconfig/network file:
NETWORKING=yes HOSTNAME=mymachine.mycompany.com
2 Edit the configuration file for the connection you want to configure, either /etc/sysconfig/network-scripts/ifcfg-ethX (for Red Hat Enterprise Linux) or /etc/sysconfig/network/ifcfg-eth-id-XX:XX:XX:XX:XX (for SUSE Enterprise Linux), and set:
BOOTPROTO=dhcp
3 Restart network services using the following command:
/etc/init.d/network restart
If you are using a static IP address (root users only):
1 Edit the configuration file for the connection you want to configure, either /etc/sysconfig/network-scripts/ifcfg-ethX (for Red Hat Enterprise Linux) or /etc/sysconfig/network/ifcfg-eth-id-XX:XX:XX:XX:XX (for SUSE Enterprise Linux), for example:
BOOTPROTO=static BROADCAST=192.168.1.255 IPADDR=192.168.1.100 NETMASK=255.255.255.0 NETWORK=192.168.1.0 ONBOOT=yes TYPE=Ethernet HWADDR=XX:XX:XX:XX:XX:XX GATEWAY=192.168.1.1
2 Restart network services using the following command:
/etc/init.d/network restart
5
Uninstalling MD Storage Software
Uninstalling MD Storage Software From
Windows
Use the Change/Remove Program feature to uninstall the Dell PowerVault
Modular Disk Storage Software (MDSS) from Microsoft Windows operating
systems other than Microsoft Windows Server 2008:
1 From the Control Panel, double-click Add or Remove Programs.
2 Select Dell MD36xxi Storage Software from the list of programs.
3 Click Change/Remove.
The Uninstall Complete window is displayed.
4 Follow the instructions on screen.
5 Select Yes to restart the system, and then click Done.
Use the following procedure to uninstall Modular Disk Storage software from Windows Server 2008 GUI versions:
1 From the Control Panel, double-click Programs and Features.
2 Select MD Storage Software from the list of programs.
3 Click Uninstall/Change.
The Uninstall Complete window is displayed.
4 Follow the instructions on screen.
5 Select Yes to restart the system, then click Done.
Use the following procedure to uninstall Modular Disk Storage Software on
Windows Server 2008 Core versions:
1 Navigate to the Dell MD36xx Storage Software directory.
NOTE: By default, MD Storage Manager is installed in the \Program Files\Dell\MD Storage Software directory. If another directory was used during installation, navigate to that directory before beginning the uninstallation procedure.
2 From the installation directory, type the following command and press <Enter>:
Uninstall Dell MD Storage Software
3 From the Uninstall window, click Next and follow the instructions on the screen.
4 Select Yes to restart the system, then click Done.
Uninstalling MD Storage Software From Linux
1 By default, MD Storage Manager is installed in the /opt/dell/mdstoragemanager directory. If another directory was used during installation, navigate to that directory before beginning the uninstallation procedure.
2 From the installation directory, open the Uninstall Dell MD Storage Software directory and run the file Uninstall Dell MD Storage Software.exe.
While the software is uninstalling, the Uninstall window is displayed. When the uninstall procedure is complete, the Uninstall Complete window is displayed.
3 Click Done.
6
Getting Help
Locating Your System Service Tag
Your system is identified by a unique Express Service Code and Service Tag
number. The Express Service Code and Service Tag are found on the front of
the system by pulling out the information tag. This information is used by
Dell to route support calls to the appropriate personnel.
Contacting Dell
NOTE: Dell provides several online and telephone-based support and service
options. If you do not have an active Internet connection, you can find contact
information on your purchase invoice, packing slip, bill, or Dell product catalog.
Availability varies by country and product, and some services may not be available
in your area.
To contact Dell for sales, technical support, or customer-service issues:
1 Go to dell.com/contactdell.
2 Select your country or region from the interactive world map.
When you select a region, the countries for the selected regions are displayed.
3 Select the appropriate language under the country of your choice.
4 Select your business segment.
The main support page for the selected business segment is displayed.
5 Select the appropriate option depending on your requirement.
NOTE: If you have purchased a Dell system, you may be asked for the Service Tag.
Documentation Feedback
If you have feedback for this document, write to documentation_feedback@dell.com. Alternatively, you can click on the Feedback link in any of the Dell documentation pages, fill out the form, and click Submit to send your feedback.
A
Appendix—Manual Configuration
of iSCSI
The following sections contain step-by-step instructions for configuring
iSCSI on your storage array. However, before beginning, it is important to
understand where each of these steps occurs in relation to your host server or
the storage array environment.
Table A-1 shows each iSCSI configuration step and where it occurs.
Table A-1. Host Server Vs. Storage Array
Performed on the host server using the Microsoft or Linux iSCSI initiator:
3 Perform target discovery from the iSCSI initiator.
6 (Optional) Configure CHAP authentication on the host server.
7 Connect to the storage array from the host server.
Performed on the storage array using MD Storage Manager:
1 Discover the storage array.
2 Configure the iSCSI ports on the storage array.
4 Configure host access.
5 (Optional) Configure Challenge Handshake Authentication Protocol (CHAP) authentication on the storage array.
8 (Optional) Set up in-band management.
Step 1: Discover the Storage Array (Out-of-band
Management Only)
Default Management IPv4 Port Settings
By default, the storage array management ports are set to Dynamic Host
Configuration Protocol (DHCP). If the controller(s) on your storage array is
unable to get IP configuration from a DHCP server, it times out after
approximately three minutes and falls back to a default static IP address. The
default IP configuration is:
Controller 0: IP: 192.168.128.101 Subnet Mask:
255.255.255.0
Controller 1: IP: 192.168.128.102 Subnet Mask:
255.255.255.0
NOTE: No default gateway is set.
NOTE: If DHCP is not used, perform the initial configuration using the management
station on the same physical subnet as the storage array. Additionally, during initial
configuration, configure at least one network adapter on the same IP subnet as the
storage array’s default management port (192.168.128.101 or 192.168.128.102). After
initial configuration (management ports are configured using MD Storage
Manager), you can change the management station’s IP address back to its
previous settings.
Default Management IPv6 Port Settings
By default, the storage array management ports are enabled for IPv6 stateless
auto-configuration. The ports are automatically configured to respond to
their link local address and to a routable address if a configured IPv6 router is
present on the network. To determine the link local address of a management port, use the MAC address printed on the label for the management port on the controller. For example:
1 If the MAC address is 00:08:74:AA:BB:CC, the link local address starts with FE80::02.
2 Append the second and third bytes of the MAC address to the prefix: FE80::0208:74.
3 Insert FF:FE to obtain FE80::0208:74FF:FE.
4 Finally, append the last three bytes of the MAC address: FE80::0208:74FF:FEAA:BBCC.
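To sanity-check a derived address from a Linux host on the same link (eth0 is an assumption; link local addresses require naming the outgoing interface):

ping6 -I eth0 FE80::0208:74FF:FEAA:BBCC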
NOTE: This procedure applies to out-of-band management only. If you choose to
set up in-band management, you must complete this step and then proceed to "Step
8: (Optional) Set Up In-Band Management" on page 74.
You can discover the storage array either automatically or manually. Select
one and complete the following procedure.
Automatic Storage Array Discovery
1 Launch MD Storage Manager (MDSM).
If this is the first storage array to be set up, the Add New Storage Array window is displayed.
2 Select Automatic and click OK.
It may take several minutes for the discovery process to complete. Closing the discovery status window before the discovery process completes cancels the discovery process.
After discovery is complete, a confirmation screen is displayed.
3 Click Close to close the screen.
Manual Storage Array Discovery
1 Launch MDSM.
If this is the first storage array to be set up, the Add New Storage Array window is displayed.
2 Select Manual and click OK.
3 Select Out-of-band management and enter the host server name(s) or IP address(es) of the iSCSI storage array controller.
4 Click Add.
Out-of-band management is now successfully configured.
After discovery is complete, a confirmation screen is displayed.
5 Click Close to close the screen.
Setting Up the Array
1 When discovery is complete, the name of the first storage array found is displayed under the Summary tab in MDSM.
2 The default name for the newly discovered storage array is Unnamed. If another name is displayed, click the down arrow next to that name and select Unnamed in the drop-down list.
3 Click the Initial Setup Tasks option to see links to the remaining post-installation tasks. For more information about each task, see the Owner's Manual. Perform these tasks in the order shown in Table A-2.
NOTE: Before configuring the storage array, check the status icons on the Summary tab to ensure that the enclosures in the storage array are in an Optimal status. For more information on the status icons, see the Owner's Manual at dell.com/support/manuals.
Table A-2. Initial Setup Tasks Dialog Box
Rename the storage array: To provide a more meaningful name than the software-assigned label, Unnamed.
Set a storage array password: To restrict unauthorized access. MDSM may ask for a password before changing the configuration or performing a destructive operation.
Set up alert notifications (e-mail alerts and SNMP alerts): To notify individuals (by e-mail) and/or storage enterprise management consoles, such as Dell Management Console (by SNMP), when a storage array component degrades or fails, or an adverse environmental condition occurs.
Configure a storage array: To create virtual disks and map them to hosts.
Step 2: Configure the iSCSI Ports on the Storage
Array
By default, the iSCSI ports on the storage array are set to the following IPv4
settings:
Controller 0, Port 0: IP: 192.168.130.101 Subnet Mask:
255.255.255.0 Port: 3260
Controller 0, Port 1: IP: 192.168.131.101 Subnet Mask:
255.255.255.0 Port: 3260
Controller 0, Port 2: IP: 192.168.132.101 Subnet Mask:
255.255.255.0 Port: 3260
Controller 0, Port 3: IP: 192.168.133.101 Subnet Mask:
255.255.255.0 Port: 3260
Controller 1, Port 0: IP: 192.168.130.102 Subnet Mask:
255.255.255.0 Port: 3260
Controller 1, Port 1: IP: 192.168.131.102 Subnet Mask:
255.255.255.0 Port: 3260
Controller 1, Port 2: IP: 192.168.132.102 Subnet Mask:
255.255.255.0 Port: 3260
Controller 1, Port 3: IP: 192.168.133.102 Subnet Mask:
255.255.255.0 Port: 3260
NOTE: No default gateway is set.
To configure the iSCSI ports on the storage array:
1 From MDSM, navigate to the Setup tab on the AMW.
2 Click Configure Ethernet management ports and then select Configure iSCSI Host Ports.
3 Configure the iSCSI ports on the storage array.
NOTE: Using static IPv4 addressing is recommended, although DHCP is supported.
The following settings are available (depending on the configuration) by clicking the Advanced button:
• Virtual LAN (VLAN) support—A VLAN is a network of different systems that behave as if they are connected to the same segments of a local area network (LAN) and are supported by the same switches and routers. When configured as a VLAN, a device can be moved to another location without being reconfigured. To use VLAN on your storage array, obtain the VLAN ID from your network administrator.
• Ethernet priority—This parameter is set to determine a network access priority.
• TCP listening port—The port number on the storage array that listens for iSCSI logins from host server iSCSI initiators.
NOTE: The TCP listening port for the iSNS server is the port number the storage array controller uses to connect to an iSNS server. This allows the iSNS server to register the iSCSI target and portals of the storage array so that the host server initiators can identify them.
• Jumbo frames—Jumbo Ethernet frames are created when the maximum transmission units (MTUs) are larger than 1500 bytes per frame. This setting is adjustable port-by-port.
4 To enable ICMP PING responses for all ports, select Enable ICMP PING responses.
5 Click OK when all iSCSI storage array port configurations are complete.
6 Test the connection by performing a ping command on each iSCSI storage array port.
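As a hedged sketch of that test (192.168.130.101 is the controller 0, In 0 factory default; substitute your configured addresses), the second command on each platform also verifies a 9000-byte jumbo frame path by forbidding fragmentation, since 8972 bytes of payload plus 28 bytes of IP and ICMP headers equals 9000:

# Linux
ping -c 4 192.168.130.101
ping -c 4 -M do -s 8972 192.168.130.101
# Windows
ping 192.168.130.101
ping -f -l 8972 192.168.130.101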
Step 3: Perform Target Discovery From the iSCSI
Initiator
This step identifies the iSCSI ports on the storage array to the host server.
Select the set of steps in one of the following sections (Microsoft Windows or
Linux) that corresponds to your operating system.
If you are using Microsoft Windows Server 2003 or Windows Server 2008 GUI version:
1 Click Start→Programs→Microsoft iSCSI Initiator or click Start→All Programs→Administrative Tools→iSCSI Initiator.
2 Click the Discovery tab.
3 Under Target Portals, click Add and enter the IP address or DNS name of the iSCSI port on the storage array.
4 If the iSCSI storage array uses a custom TCP port, change the Port number. The default is 3260.
5 Click Advanced and set the following values on the General tab:
• Local Adapter—Must be set to Microsoft iSCSI Initiator.
• Source IP—The source IP address of the host you want to connect with.
• Data Digest and Header Digest—Optionally, you can specify that a digest of data or header information be compiled during transmission to assist in troubleshooting.
• CHAP logon information—Leave this option unselected and do not enter CHAP information at this point, unless you are adding the storage array to a Storage Area Network (SAN) that has target CHAP already configured.
NOTE: IPSec is not supported.
6 Click OK to exit the Advanced menu and click OK again to exit the Add Target Portals screen.
7 To exit the Discovery tab, click OK.
• If you plan to configure CHAP authentication, do not perform discovery on more than one iSCSI port at this point. Go to "Step 4: Configure Host Access" on page 63.
• If you do not plan to configure CHAP authentication, repeat step 1 through step 6 for all iSCSI ports on the storage array.
If you are using Windows Server 2008 Core Version:
1 Set the iSCSI initiator service to start automatically:
sc \\<server_name> config msiscsi start=auto
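Target discovery can then be run from the same command prompt; a minimal sketch, assuming the standard iscsicli commands and one of the default host port addresses from this guide:

sc start msiscsi
iscsicli QAddTargetPortal 192.168.130.101
iscsicli ListTargets

ListTargets should report the iqn of the storage array once the portal has been queried.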
If you are using Red Hat Enterprise Linux 5, Red Hat Enterprise Linux 6, SUSE Linux Enterprise Server 10, or SUSE Linux Enterprise Server 11:
Configuration of the iSCSI initiator for Red Hat Enterprise Linux 5 and SUSE Linux Enterprise Server 10 SP1 distributions is done by modifying the /etc/iscsi/iscsid.conf file, which is installed by default when you install MDSM. You can edit the file directly, or replace the default file with a sample file included on the PowerVault MD series resource media.
To use the sample file included on the media:
1 Make a copy of the default /etc/iscsi/iscsid.conf file by saving it to another directory of your choice.
2 Edit the following entries in the /etc/iscsi/iscsid.conf file:
a Edit or verify that the node.startup = manual line is disabled.
b Edit or verify that the node.startup = automatic line is enabled. This enables automatic startup of the service at boot time.
c Verify that the following time-out value is set to 30:
node.session.timeo.replacement_timeout = 30
d Save and close the /etc/iscsi/iscsid.conf file.
3 From the console, restart the iSCSI service with the following command:
service iscsi start
4 Verify that the iSCSI service is running during boot using the following command from the console:
chkconfig iscsi on
5 To display the available iSCSI targets at the specified IP address, use the following command:
iscsiadm -m discovery -t st -p <IP_address_of_iSCSI_port>
6 After target discovery, use the following command to manually log in:
iscsiadm -m node -l
This login is performed automatically at startup if automatic startup is enabled.
7 Manually log out of the session using the following command:
iscsiadm -m node -u
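As a worked example, discovery against one default host port address might produce output of the following form; the target name shown is illustrative only, since the actual iqn is specific to your array:

iscsiadm -m discovery -t st -p 192.168.130.101
# 192.168.130.101:3260,1 iqn.1984-05.com.dell:powervault.md3600i.<array_id>
iscsiadm -m node -l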
Step 4: Configure Host Access
This step specifies which host servers access virtual disks on the storage array. You should perform this step before mapping virtual disks to host servers or any time you connect new host servers to the storage array.
1 Launch MDSM.
2 Navigate to the AMW and click Manually define hosts.
3 At Enter host name, enter the host server for virtual disk mapping. This can be an informal name, not necessarily a name used to identify the host server to the network.
4 Select a method for adding the host port identifier.
5 Select the host type.
6 Select whether or not the host server will be part of a host server group that shares access to the same virtual disks as other host servers. Select Yes only if the host is part of a Microsoft cluster.
7 Click Next.
8 Specify if this host will be part of a host group, and click Finish.
Understanding CHAP Authentication
What is CHAP?
Challenge Handshake Authentication Protocol (CHAP) is an optional iSCSI
authentication method where the storage array (target) authenticates iSCSI
initiators on the host server. Two types of CHAP are supported:
•Target CHAP
•Mutual CHAP
Target CHAP
In target CHAP, the storage array authenticates all requests for access issued
by the iSCSI initiator(s) on the host server using a CHAP secret. To set up
target CHAP authentication, you must enter a CHAP secret on the storage
array, then configure each iSCSI initiator on the host server to send that
secret each time it attempts to access the storage array.
Mutual CHAP
In addition to setting up target CHAP, you can set up mutual CHAP in which
both the storage array and the iSCSI initiator authenticate each other. To set up
mutual CHAP, configure the iSCSI initiator with a CHAP secret that the
storage array must send to the host server in order to establish a connection. In
this two-way authentication process, both the host server and the storage array
send information that the other must validate before a connection is allowed.
CHAP is an optional feature and is not required to use iSCSI. However, if you
do not configure CHAP authentication, any host server connected to the same
IP network as the storage array can read from and write to the storage array.
NOTE: When using CHAP authentication, you should configure it on both the
storage array (using MDSM) and the host server (using the iSCSI initiator) before
preparing virtual disks to receive data. If you prepare disks to receive data before
you configure CHAP authentication, you lose visibility to the disks once CHAP
is configured.
CHAP Definitions
To summarize the differences between target CHAP and mutual CHAP
authentication, see Table A-3.
Table A-3. CHAP Types Defined

CHAP Type    Description
Target CHAP  Sets up accounts that iSCSI initiators use to connect to the target storage array. The target storage array then authenticates the iSCSI initiator.
Mutual CHAP  Applied in addition to target CHAP, mutual CHAP sets up an account that a target storage array uses to connect to an iSCSI initiator. The iSCSI initiator then authenticates the target.
Step 5: Configure CHAP Authentication on the
Storage Array (Optional)
If you are configuring CHAP authentication of any kind (either target-only or
target and mutual), you must complete this step and "Step 6: Configure
CHAP Authentication on the Host Server (Optional)" on page 67.
If you are not configuring any type of CHAP, skip these steps and go to "Step
7: Connect to the Target Storage Array From the Host Server" on page 71.
NOTE: If you choose to configure mutual CHAP authentication, configure target
CHAP first.
In terms of iSCSI configuration, the term target always refers to the storage array.
Configuring Target CHAP Authentication on the Storage Array
1 From MDSM, click the iSCSI tab and then click Change Target Authentication.
Select one of the CHAP settings described in Table A-4.
2 To configure a CHAP secret, select CHAP and click CHAP Secret.
3 Enter the Target CHAP Secret (or Generate Random Secret). Confirm it in Confirm Target CHAP Secret and click OK.
Although the storage array allows sizes from 12 to 57 characters, many
initiators only support CHAP secret sizes up to 16 characters (128-bit).
NOTE: A CHAP secret is not retrievable after it is entered. Ensure that you
record the secret in an accessible place. If Generate Random Secret is used,
copy and paste the secret into a text file for future reference since the same
CHAP secret is used to authenticate any new host servers you may add to the
storage array. If you forget this CHAP secret, you must disconnect all existing
hosts attached to the storage array and repeat the steps in this chapter to
re-add them.
4 Click OK.
Table A-4. CHAP Settings

Option         Description
None           This is the default selection. If None is the only selection, the storage array allows an iSCSI initiator to log on without supplying any type of CHAP authentication.
None and CHAP  The storage array allows an iSCSI initiator to log on with or without CHAP authentication.
CHAP           If CHAP is selected and None is deselected, the storage array requires CHAP authentication before allowing access.
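Because many initiators cap the secret at 16 characters, one convenient way to generate a compliant random secret on a Linux management station is shown below. This is only an illustration using the common openssl utility, not a step in this procedure; 12 random bytes encode to exactly 16 base64 characters, and the hex form avoids special characters if your initiator rejects them:

openssl rand -base64 12
openssl rand -hex 8

Record the output in an accessible place, since the secret cannot be retrieved from the array later.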
Configuring Mutual CHAP Authentication on the Storage Array
The initiator secret must be unique for each host server that connects to the storage array and must not be the same as the target CHAP secret.
Change the initiator authentication settings in the Change Target Authentication window. Use these options to change the settings:
•None—Select None if you permit no initiator authentication. If you select None, any initiator can access this target. Use this option only if you do not require secure data. However, you can select both None and CHAP at the same time.
•CHAP—Select CHAP if you want to enable an initiator that tries to access the target to authenticate using CHAP. Define the CHAP secret only if you want to use mutual CHAP authentication. If you select CHAP, and if no CHAP target secret is defined, an error message is displayed. Click CHAP Secret to view the Enter CHAP Secret window and define the CHAP secrets.
NOTE: To remove a CHAP secret, you must delete the host initiator and re-add it.
Step 6: Configure CHAP Authentication on the
Host Server (Optional)
If you configured CHAP authentication in "Step 5: Configure CHAP
Authentication on the Storage Array (Optional)" on page 65, complete the
following steps. If not, skip to "Step 7: Connect to the Target Storage Array
From the Host Server" on page 71.
Select the set of steps in one of the following sections (Windows or Linux)
that corresponds to your operating system.
If you are using Windows Server 2008 GUI version:
1 Click Start→Programs→Microsoft iSCSI Initiator or click Start→All Programs→Administrative Tools→iSCSI Initiator.
2 If you are not using mutual CHAP authentication, go to step 4.
3 If you are using mutual CHAP authentication, click the General tab and select Secret. At Enter a secure secret, enter the mutual CHAP secret you entered for the storage array.
4 Click the Discovery tab.
5 Under Target Portals, select the IP address of the iSCSI port on the storage array and click Remove.
The iSCSI port you configured on the storage array during target discovery disappears.
6 Under Target Portals, click Add and re-enter the IP address or DNS name of the iSCSI port on the storage array (removed above).
7 Click Advanced and set the following values on the General tab:
•Local Adapter—Should always be set to Microsoft iSCSI Initiator.
•Source IP—The source IP address of the host you want to connect with.
•Data Digest and Header Digest—Optionally, you can specify that a digest of data or header information be compiled during transmission to assist in troubleshooting.
•CHAP logon information—Enter the target CHAP authentication user name and secret you entered (for the host server) on the storage array.
•Perform mutual authentication—If mutual CHAP authentication is
configured, select this option.
NOTE: IPSec is not supported.
8 Click OK.
If you require a discovery session failover, repeat step 5 and step 6 (in this
procedure) for all iSCSI ports on the storage array. Otherwise, single-host
port configuration is sufficient.
NOTE: If the connection fails, ensure that all IP addresses are entered correctly.
Mistyped IP addresses result in connection problems.
If you are using Windows Server 2008 Core version:
1 Set the iSCSI initiator services to start automatically (if not already set):
sc \\<server_name> config msiscsi start=auto
2 Start the iSCSI service (if necessary):
sc start msiscsi
3 If you are not using mutual CHAP authentication, go to step 5.
4 Enter the mutual CHAP secret you entered for the storage array:
iscsicli CHAPSecret <secret>
5 Remove the target portal that you configured on the storage array during discovery:
iscsicli RemoveTargetPortal <IP_address> <TCP_port_number>
6 Add the target portal with CHAP defined:
iscsicli QAddTargetPortal <IP_address_of_iSCSI_port_on_storage_array> [CHAP_username] [CHAP_password]
where [CHAP_username] is the initiator name and [CHAP_password] is the target CHAP secret.
If you require a discovery session failover, repeat step 5 and step 6 for all iSCSI ports on the storage array. Otherwise, single-host port configuration is sufficient.
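For example, re-running discovery with CHAP against the first default host port might look like the following; the portal address, initiator name, and secret are placeholders for illustration:

iscsicli RemoveTargetPortal 192.168.130.101 3260
iscsicli QAddTargetPortal 192.168.130.101 iqn.1991-05.com.microsoft:host1 <target_CHAP_secret>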
If you are using Red Hat Enterprise Linux 5, Red Hat Enterprise Linux 6,
SUSE Linux Enterprise Server 10, or SUSE Linux Enterprise Server 11:
1 To enable CHAP (optional), the following line needs to be enabled in your /etc/iscsi/iscsid.conf file:
node.session.auth.authmethod = CHAP
2 To set a user name and password for CHAP authentication of the initiator by the target(s), edit the following lines:
node.session.auth.username = <iSCSI_initiator_name>
node.session.auth.password = <CHAP_secret>
3 If you are using Mutual CHAP authentication, you can set the user name and password for CHAP authentication of the target(s) by the initiator by editing the following lines:
node.session.auth.username_in = <iSCSI_target_name>
node.session.auth.password_in = <CHAP_secret>
4 To set the user name and password for discovery session CHAP authentication of the target(s) by the initiator for Mutual CHAP, edit the following lines:
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = <iSCSI_initiator_name>
discovery.sendtargets.auth.password = <CHAP_secret>
discovery.sendtargets.auth.username_in = <iSCSI_target_name>
discovery.sendtargets.auth.password_in = <CHAP_secret>
Changes to the /etc/iscsi/iscsid.conf file might not take effect until the iSCSI service is restarted.
If you are using SUSE Linux Enterprise Server SP3 with the GUI:
1 Click Desktop→YaST→iSCSI Initiator.
2 Click Service Start, then select When Booting.
3 Select Discovered Targets, then select Discovery.
4 Enter the IP address of the port.
5 Click Next.
6 Select any target that is not logged in and click Log in.
7 Select one of the following:
•If you are not using CHAP authentication, select No Authentication. Go to step 8.
•If you are using CHAP authentication, enter the CHAP user name and password. To enable Mutual CHAP, select and enter the Mutual CHAP user name and password.
8 Repeat step 7 for each target until at least one connection is logged in for each controller.
9 Go to Connected Targets.
10 Verify that the targets are connected and display a status of true.
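Whichever distribution you are using, the logged-in sessions can be confirmed from the console with the open-iscsi tools; a minimal check, where the iqn shown is illustrative:

iscsiadm -m session
# Prints one line per session, for example:
# tcp: [1] 192.168.130.101:3260,1 iqn.1984-05.com.dell:powervault.md3600i.<array_id>

There should be at least one session per controller.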
Step 7: Connect to the Target Storage Array From the Host Server
If you are using Windows Server 2008 GUI:
1 Click Start→Programs→Microsoft iSCSI Initiator or click Start→All Programs→Administrative Tools→iSCSI Initiator.
2 Click the Targets tab.
If previous target discovery was successful, the iqn of the storage array should be displayed under Targets.
3 Click Log On.
4 Select Automatically restore this connection when the system boots.
5 Select Enable multi-path.
6 Click Advanced and configure the following settings under the General tab:
•Local Adapter—Must be set to Microsoft iSCSI Initiator.
•Source IP—The source IP address of the host server you want to connect from.
•Target Portal—Select the iSCSI port on the storage array controller that you want to connect to.
•Data Digest and Header Digest—Optionally, you can specify that a digest of data or header information be compiled during transmission to assist in troubleshooting.
•CHAP logon information—If CHAP authentication is required, select this option and enter the Target secret.
•Perform mutual authentication—If mutual CHAP authentication is configured, select this option.
NOTE: IPSec is not supported.
7 Click OK.
To support storage array controller failover, the host server must be connected to at least one iSCSI port on each controller. Repeat step 3 through step 8 for each iSCSI port on the storage array that you want to establish as failover targets. The Target Portal address is different for each port you connected to.
NOTE: To enable the higher throughput of multipathing I/O, the host server must connect to both iSCSI ports on each controller, ideally from separate host-side NICs. Repeat step 3 through step 7 for each iSCSI port on each controller. If using a duplex MD36x0i configuration, then LUNs should also be balanced between the controllers.
The Status field on the Targets tab should now display as Connected.
8 Click OK to close the Microsoft iSCSI initiator.
NOTE: PowerVault MD36x0i supports only round-robin load-balancing policies.
If you are using Windows Server 2008 Core Version:
1 Set the iSCSI initiator services to start automatically (if not already set):
sc \\<server_name> config msiscsi start=auto
2 Start the iSCSI service (if necessary):
sc start msiscsi
3 Log on to the target using the iscsicli PersistentLoginTarget command.
To view active sessions to the target, run the following command:
iscsicli SessionList
To support storage array controller failover, the host server must be connected to at least one iSCSI port on each controller. Repeat step 3 for each iSCSI port on the storage array that you want to establish as a failover target. The Target_Portal_Address is different for each port you connect to.
PersistentLoginTarget does not initiate a login to the target until after the system is rebooted. To establish immediate login to the target, substitute LoginTarget for PersistentLoginTarget.
NOTE: See the Microsoft iSCSI Software Initiator 2.x User's Guide for more information about the commands used in the previous steps. For more information about Windows Server 2008 Server Core, see the Microsoft Developers Network (MSDN) at microsoft.com.
If you are using a Linux Server:
In MDSM, the Configure iSCSI Host Ports window displays the status of each iSCSI port you attempt to connect and the configuration state of all IP addresses. If either displays Disconnected or Unconfigured, respectively, check the following and repeat the iSCSI configuration steps:
•Are all cables securely attached to each port on the host server and storage array?
•Is TCP/IP correctly configured on all target host ports?
•Is CHAP set up correctly on both the host server and the storage array?
To review optimal network setup and configuration settings, see "Configuring iSCSI on Your Storage Array" on page 42.
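The full PersistentLoginTarget invocation takes a long list of positional parameters; a commonly used minimal form is sketched below for illustration only, where the target iqn is a placeholder, T reports the resulting LUN to Plug and Play, each * accepts the default for that parameter, and the trailing 0 is the mapping count:

iscsicli PersistentLoginTarget iqn.1984-05.com.dell:powervault.md3600i.<array_id> T * * * * * * * * * * * * * * * 0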
Step 8: (Optional) Set Up In-Band Management
Out-of-band management (see "Step 1: Discover the Storage Array (Out-of-band Management Only)" on page 56) is the recommended method for managing the storage array. However, to optionally set up in-band management, follow the procedure given below.
The default iSCSI host port IPv4 addresses are shown below for reference:
Controller 0, Port 0: IP: 192.168.130.101  Controller 0, Port 1: IP: 192.168.131.101
Controller 0, Port 2: IP: 192.168.132.101  Controller 0, Port 3: IP: 192.168.133.101
Controller 1, Port 0: IP: 192.168.130.102  Controller 1, Port 1: IP: 192.168.131.102
Controller 1, Port 2: IP: 192.168.132.102  Controller 1, Port 3: IP: 192.168.133.102
NOTE: Configure the management station you are using for network communication to the same IP subnet as the PowerVault MD36x0i host ports.
1 Establish an iSCSI session to the PowerVault MD3600i RAID storage array.
2 Restart the SMagent service.
3 Launch MDSM.
If this is the first storage array to be set up for management, the Add New Storage Array window is displayed. Otherwise, click New.
4 Select Manual and click OK.
5 Select In-band management and enter the host server name(s) or IP address(es) of the host server that is running the MD Storage Manager software.
6 Click Add.
In-band management should now be successfully configured.
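On a Linux host server, step 2 can typically be performed from the console; a minimal sketch, assuming the host agent was installed with its standard SMagent init script (on Windows, restart the equivalent MD Storage Manager agent service from the Services console):

/etc/init.d/SMagent restart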
Appendix B—Using Internet Storage Naming Service
The Internet Storage Naming Service (iSNS) server, supported only in Microsoft Windows iSCSI environments, eliminates the need to manually configure each individual storage array with a specific list of initiators and target IP addresses. Instead, iSNS automatically discovers, manages, and configures all iSCSI devices in your environment.
For more information on iSNS, including installation and configuration, see
microsoft.com.
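As an illustration of the host side of this, the Microsoft iSCSI initiator can be pointed at an iSNS server from the command line; a minimal sketch, where the address is a placeholder for your iSNS server:

iscsicli AddiSNSServer 192.168.130.50
iscsicli ListiSNSServers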
Appendix C—Load Balancing
Load Balance Policy
Multi-path drivers select the I/O path to a virtual disk through a specific RAID
controller module. When the multi-path driver receives a new I/O to process,
the driver tries to find a path to the current RAID controller module that owns
the virtual disk. If the path to the current RAID controller module that owns
the virtual disk cannot be found, the multi-path driver migrates the virtual disk
ownership to the secondary RAID controller module. When multiple paths to
the RAID controller module that owns the virtual disk exist, you can choose a
load balance policy to determine which path is used to process I/O. Multiple
options for setting the load balance policies let you optimize I/O performance
when mixed host interfaces are configured.
You can choose one of the following load balance policies to optimize I/O
performance:
•Round robin with subset
•Least queue depth with subset
•Least path weight with subset (Microsoft Windows operating systems
only)
Round Robin With Subset
The round robin with subset I/O load balance policy routes I/O requests, in
rotation, to each available data path to the RAID controller module that owns
the virtual disks. This policy treats all paths to the RAID controller module that
owns the virtual disk equally for I/O activity. Paths to the secondary RAID
controller module are ignored until ownership changes. The basic assumption
for the round-robin policy is that the data paths are equal. With mixed host
support, the data paths might have different bandwidths or different data
transfer speeds.
Least Queue Depth With Subset
The least queue depth with subset policy is also known as the least I/Os or
least requests policy. This policy routes the next I/O request to a data path
that has the least outstanding I/O requests queued. For this policy, an I/O
request is simply a command in the queue. The type of command or the
number of blocks that are associated with the command are not considered.
The least queue depth with subset policy treats large block requests and small
block requests equally. The data path selected is one of the paths in the path
group of the RAID controller module that owns the virtual disk.
Least Path Weight With Subset
The least path weight with subset policy assigns a weight factor to each data
path to a virtual disk. An I/O request is routed to the path with the lowest
weight value to the RAID controller module that owns the virtual disk. If
more than one data path to the virtual disk has the same weight value, the
round robin with subset path selection policy is used to route I/O requests
between the paths with the same weight value. The least path weight with
subset load balance policy is not supported on Linux operating systems.
Changing Load Balance Policies on the Windows Server 2008 Operating
System
Load balancing with the PowerVault MD3600i series storage array is only
available for Microsoft Windows Server 2008 and later versions of the
operating system. You can change the load balance policies from the default
round robin with subset by using either the:
•Device manager
•Disk management
To change the load balance policy using the Windows Server 2008 device manager:
1 From the desktop of the host, right-click My Computer and select Manage to open the Computer Management dialog box.
2 Click Device Manager to show the list of devices attached to the host.
3 Right-click the multi-path disk device for which you want to set the load balance policies, then select Properties.
4 From the MPIO tab, select the load balance policy that you want to set for this disk device.
To change the load balance policy using Windows Server 2008 disk management:
1 From the desktop of the host, right-click My Computer and click Manage to open the Computer Management dialog box.
2 Click Disk Management to show the list of virtual disks attached to the host.
3 Right-click the virtual disk for which you want to set the load balance policy, then click Properties.
4 From the MPIO tab, select the load balance policy that you want to set for this virtual disk.
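On Windows Server 2008 R2 and later, the same change can also be scripted; a minimal sketch, assuming the MPIO feature's mpclaim utility is installed and that MPIO disk 0 is the MD36x0i virtual disk (in mpclaim numbering, 3 selects round robin with subset and 4 selects least queue depth):

mpclaim -s -d
# Lists MPIO disks with their current load balance policies
mpclaim -l -d 0 4
# Applies least queue depth to MPIO disk 0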
Increasing Bandwidth With Multiple iSCSI Sessions
The PowerVault MD3600i series storage array in a duplex configuration
supports two active/active asymmetric redundant controllers. Each controller
has two 10G Ethernet ports that support iSCSI. The bandwidth of the two
ports on the same controller can be aggregated to provide optimal
performance. A host can be configured to simultaneously use the bandwidth
of both the ports on a controller to access virtual disks owned by the
controller. The multi-path failover driver that Dell provides for the
PowerVault MD3600i series storage array can be used to configure the storage
array so that all ports are used for simultaneous I/O access. If the multi-path
driver detects multiple paths to the same virtual disk through the ports on the
same controller, it load-balances I/O access from the host across all ports on
the controller.
Figure C-1 illustrates how the initiator can be configured to take advantage of
the load balancing capabilities of the multi-path failover driver.
Two sessions with one TCP connection are configured from the host to each
controller (one session per port), for a total of two sessions. The multi-path
failover driver balances I/O access across the sessions to the ports on the same
controller. In a duplex configuration, with virtual disks on each controller,
creating sessions using each of the iSCSI data ports of both controllers
increases bandwidth and provides load balancing.
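On a Linux host this pattern amounts to logging in once per data port; a minimal sketch using open-iscsi, where the target iqn is a placeholder and the addresses are the default data port IPs from this guide:

iscsiadm -m node -T <target_iqn> -p 192.168.130.101 -l
iscsiadm -m node -T <target_iqn> -p 192.168.131.101 -l
iscsiadm -m node -T <target_iqn> -p 192.168.130.102 -l
iscsiadm -m node -T <target_iqn> -p 192.168.131.102 -l
# One session per portal; the multi-path driver balances I/O across the
# sessions that terminate on the controller that owns the virtual disk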
Appendix D—Stopping iSCSI Services in Linux
Follow the procedure given below to manually stop the iSCSI services in Linux.
To shut down iSCSI services:
1 Stop all I/O.
2 Unmount all correlated file systems.
3 Stop the iSCSI service by running the following command:
/etc/init.d/open-iscsi stop