This document describes initial hardware setup for HP MSA 1040 controller enclosures, and is intended for use by storage system
administrators familiar with servers and computer networks, network administration, storage system installation and configuration,
storage area network management, and relevant protocols.
HP Part Number: 762783-001
Published: March 2014
Edition: 1
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial
Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under
vendor's standard commercial license.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall
not be liable for technical or editorial errors or omissions contained herein.
Acknowledgments
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
UNIX® is a registered trademark of The Open Group.
Warranty
WARRANTY STATEMENT: To obtain a copy of the warranty for this product, see the warranty information website:
1 Overview
HP MSA Storage models are high-performance storage solutions combining outstanding performance with
high reliability, availability, flexibility, and manageability. MSA 1040 enclosure models are designed to
meet NEBS Level 3, MIL-STD-810G (storage requirements), and European Telco specifications.
MSA 1040 Storage models
The MSA 1040 controller enclosures support either large form factor (LFF 12-disk) or small form factor
(SFF 24-disk) 2U chassis, using either AC or DC power supplies. HP MSA 1040 Storage models are
pre-configured at the factory to support one of these host interface protocols:
• 8 Gb FC
• 4 Gb FC
• 10GbE iSCSI
• 1 GbE iSCSI
The small form-factor pluggable (SFP transceiver or SFP) supporting the pre-configured host interface
protocol is pre-installed in the controller module. MSA 1040 controller enclosures do not allow you to
change host interface protocols or increase speeds. Always use qualified SFP connectors and cables
required for supporting the host interface protocol as described in QuickSpecs:
http://www.hp.com/support/msa1040/QuickSpecs
NOTE: For additional information about MSA 1040 controller modules, see the following subsections:
Product features and supported options are subject to change. Online documentation describes the latest
product and product family characteristics, including currently supported features, options, technical
specifications, configuration data, related optional software, and product warranty information.
NOTE: Check the QuickSpecs for a complete list of supported servers, operating systems, disk drives, and
options. See http://www.hp.com/support/msa1040/QuickSpecs.
2 Components
Front panel components
HP MSA 1040 models support small form factor (SFF) and large form factor (LFF) enclosures. The SFF
chassis, configured with 24 2.5" SFF disks, is used as a controller enclosure. The LFF chassis, configured
with 12 3.5" LFF disks, is used as either a controller enclosure or a drive enclosure.
Supported drive enclosures, used for adding storage, are available in LFF or SFF chassis. The MSA 2040
6 Gb 3.5" 12-drive enclosure is the large form factor drive enclosure used for storage expansion. The HP
D2700 6 Gb enclosure, configured with 25 2.5" SFF disks, is the small form factor drive enclosure used for
storage expansion. See "SFF drive enclosure" (page 16) for a description of the D2700.
MSA 1040 Array SFF enclosure
Figure 1 MSA 1040 Array SFF enclosure: front panel
1 Enclosure ID LED
2 Disk drive Online/Activity LED
3 Disk drive Fault/UID LED
4 Unit Identification (UID) LED
5 Heartbeat LED
6 Fault ID LED
Note: Integers on disks indicate drive slot numbering sequence.
MSA 1040 Array LFF or supported 12-drive expansion enclosure
Figure 2 MSA 1040 Array LFF or supported 12-drive enclosure: front panel
1 Enclosure ID LED
2 Disk drive Online/Activity LED
3 Disk drive Fault/UID LED
4 Unit Identification (UID) LED
5 Heartbeat LED
6 Fault ID LED
Note: Integers on disks indicate drive slot numbering sequence.
MSA 1040 enclosures support LFF/SFF Midline SAS, LFF/SFF Enterprise SAS, and SFF SSD disks. For
information about creating vdisks and adding spares using these different disk drive types, see the
HP MSA 1040 SMU Reference Guide and the HP SED Drives Read This First document.
Controller enclosure—rear panel layout
The diagram and table below display and identify important component items comprising the rear panel
layout of the MSA 1040 controller enclosure.
Figure 3 MSA 1040 Array: rear panel
1 AC Power supplies
2 Controller module A (see face plate detail figures)
3 Controller module B (see face plate detail figures)
4 DC Power supply (2) — (DC model only)
5 DC Power switch
A controller enclosure accommodates two power supply FRUs of the same type—either both AC or both
DC—within the two power supply slots (see two instances of callout 1 above). The controller enclosure
accommodates two controller module FRUs of the same type within the I/O module slots (see callouts 2
and 3 above).
IMPORTANT: If the MSA 1040 controller enclosure is configured with a single controller module, the
controller module must be installed in the upper slot (see callout 2 above), and an I/O module blank must
be installed in the lower slot (see callout 3 above). This configuration is required to allow sufficient air flow
through the enclosure during operation.
The diagrams with tables that immediately follow provide descriptions of the different controller modules
and power supply modules that can be installed into the rear panel of an MSA 1040 controller enclosure.
Showing controller modules and power supply modules separately from the enclosure provides improved
clarity in identifying the component items called out in the diagrams and described in the tables.
Descriptions are also provided for optional drive enclosures supported by MSA 1040 controller enclosures
for expanding storage capacity.
NOTE: MSA 1040 controller enclosures support hot-plug replacement of redundant controller modules,
fans, power supplies, and I/O modules. Hot-add of drive enclosures is also supported.
MSA 1040 controller module—rear panel components
(The face plate figures include LED legends for FC, 10GbE iSCSI, and 1 Gb iSCSI host ports; in the 1 Gb iSCSI configuration, all host ports use 1 Gb RJ-45 SFPs.)
Figure 4 shows host ports configured with either 8 Gb FC or 10GbE iSCSI SFPs. The SFPs look identical.
Refer to the LEDs that apply to the specific configuration of your host ports.
Figure 4 MSA 1040 controller module face plate (FC or 10GbE iSCSI)
1 Host ports: used for host connection or replication
2 CLI port (USB - Type B)
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network port
6 Service port 1 (used by service personnel only)
7 Disabled button (used by engineering only; sticker shown covering the opening)
8 SAS expansion port
NOTE: For information about host port configuration, see the “Configuring host ports” topic within the
HP MSA 1040 SMU Reference Guide or online help.
IMPORTANT: See Connecting to the controller CLI port for information about enabling the controller enclosure USB Type B CLI port for accessing the Command-line Interface via a telnet client.
Drive enclosures
Drive enclosure expansion modules attach to MSA 1040 controller modules via the mini-SAS expansion
port, allowing addition of disk drives to the system. MSA 1040 controller enclosures support adding the
6 Gb drive enclosures described below.
LFF drive enclosure — rear panel layout
MSA 1040 controllers support the MSA 2040 6 Gb 3.5" 12-drive enclosure shown below.
Figure 6 LFF 12-drive enclosure: rear panel
1 Power supplies (AC shown)
2 I/O module A
3 I/O module B
4 Disabled button (used by engineering only)
5 Service port (used by service personnel only)
6 SAS In port
7 SAS Out port
SFF drive enclosure
MSA 1040 controllers support the D2700 6 Gb drive enclosure for adding storage. For information about this product, visit http://www.hp.com/support. Pictorial representations of this drive enclosure are also provided in the MSA 1040 Quick Start Instructions and MSA 1040 Cable Configuration Guide.
Cache
To enable faster data access from disk storage, the following types of caching are performed:
• Write-back or write-through caching. The controller writes user data in the cache memory on the module rather than directly to the drives. Later, when the storage system is either idle or aging—and continuing to receive new I/O data—the controller writes the data to the drive array.
• Read-ahead caching. The controller detects sequential array access, reads ahead into the next sequence of data, and stores the data in the read-ahead cache. Then, if the next read access is for cached data, the controller immediately loads the data into the system memory, avoiding the latency of a disk access.
NOTE: See HP MSA 1040 SMU Reference Guide for more information about volume cache options.
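The write policy can also be reviewed or changed per volume from the CLI. The lines below are a minimal sketch only: the command names (show cache-parameters, set cache-parameters), the write-policy parameter, and the volume name Vol0001 are assumptions to confirm against the HP MSA 1040 CLI Reference Guide.
# show cache-parameters Vol0001
# set cache-parameters write-policy write-through Vol0001
Write-back generally gives better performance; write-through commits each write to disk before acknowledging it, trading performance for additional protection.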
Transportable CompactFlash
During a power loss or array controller failure, data stored in cache is saved off to non-volatile memory
(CompactFlash). The data is then written to disk after the issue is corrected. To protect against writing
incomplete data to disk, the image stored on the CompactFlash is verified before committing to disk.
The CompactFlash card is located at the midplane-facing end of the controller module as shown below.
Figure 7 MSA 1040 CompactFlash card (midplane-facing rear view; the card is annotated "Do not remove. Used for cache recovery only.")
In single-controller configurations, if the controller has failed or does not start, and the Cache Status LED is
on or blinking, the CompactFlash will need to be transported to a replacement controller to recover data
not flushed to disk (see "Controller failure in a single-controller configuration" (page 51) for more
information).
CAUTION: Remove the CompactFlash card only when its cached data must be transported to a replacement controller. To preserve the existing data stored in the CompactFlash, you must transport the CompactFlash from the failed controller to the replacement controller using the procedure outlined in the HP MSA Controller Module Replacement Instructions shipped with the replacement controller module. Failure to use this procedure will result in the loss of data stored in the cache module. The CompactFlash must stay with the same enclosure. If the CompactFlash is used/installed in a different enclosure, data loss/data corruption will occur.
IMPORTANT: In dual-controller configurations featuring one healthy partner controller, there is no need to transport the failed controller's cache to a replacement controller because the cache is duplicated between the controllers (subject to volume write optimization setting).
Supercapacitor pack
To protect RAID controller cache in case of power failure, MSA 1040 controllers are equipped with
supercapacitor technology, in conjunction with CompactFlash memory, built into each controller module to
provide extended cache memory backup time. The supercapacitor pack provides energy for backing up
unwritten data in the write cache to the CompactFlash in the event of a power failure. Unwritten data in
CompactFlash memory is automatically committed to disk media when power is restored. While the cache
is being maintained by the supercapacitor, the Cache Status LED flashes at a rate of 1/10 second on and
9/10 second off.
Upgrading to MSA 2040
For information about upgrading components for use with MSA controllers, refer to: Upgrading to the HP MSA 2040.
3 Installing the enclosures
Installation checklist
The following table outlines the steps required to install the enclosures and initially configure the system. To
ensure a successful installation, perform the tasks in the order they are presented.
Table 1 Installation checklist (Step / Task / Where to find procedure)
1. Install the controller enclosure and optional drive enclosures in the rack, and attach ear caps.
2. Connect the controller enclosure and LFF/SFF drive enclosures.
3. Connect power cords. See the quick start instructions.
Later checklist steps reference the following procedures:
• If using the optional Remote Snap feature, also see "Connecting two storage systems to replicate volumes" (page 32).
• See "Obtaining IP values" (page 37).
• See Connecting to the controller CLI port, with Linux and Windows topics.
• See "Getting Started" in the HP MSA 1040 SMU Reference Guide. (1)
• See "Configuring the System" and "Provisioning the System" topics (SMU Reference Guide or online help). (2)
(1) The SMU is introduced in "Accessing the SMU" (page 43). See the SMU Reference Guide or online help for additional information.
(2) If the systems are cabled for replication and licensed to use the Remote Snap feature, you can use the Replication Setup Wizard to prepare to replicate an existing volume to another vdisk. See the SMU Reference Guide for additional information.
Connecting controller and drive enclosures
MSA 1040 controller enclosures support up to four enclosures (including the controller enclosure). You can
cable drive enclosures of the same type or of mixed LFF/SFF model type.
The firmware supports both straight-through and fault-tolerant SAS cabling. Fault-tolerant cabling allows
any drive enclosure to fail—or be removed—while maintaining access to other enclosures. Fault tolerance
and performance requirements determine whether to optimize the configuration for high availability or
high performance when cabling. MSA 1040 controller enclosures support 6 Gbit/s internal disk drive
speeds, together with 6 Gbit/s (SAS2.0) expander link speeds. When connecting multiple drive
enclosures, use fault-tolerant cabling to ensure the highest level of fault tolerance.
For example, the illustration on the left in Figure 10 (page 22) shows controller module 1A connected to
expansion module 2A, with a chain of connections cascading down (blue). Controller module 1B is
connected to the lower expansion module (4B) of the last drive enclosure, with connections moving in
the opposite direction (green).
Connecting the MSA 1040 controller to the SFF drive enclosure
The SFF D2700 25-drive enclosure, supporting 6 Gb internal disk drive and expander link speeds, can be
attached to an MSA 1040 controller enclosure using supported mini-SAS to mini-SAS cables of 0.5 m
(1.64') to 2 m (6.56') length [see Figure 9 (page 21)].
Connecting the MSA 1040 controller to the LFF drive enclosure
The LFF MSA 1040 6 Gb 3.5" 12-drive enclosure, supporting 6 Gb internal disk drive and expander link
speeds, can be attached to an MSA 1040 controller enclosure using supported mini-SAS to mini-SAS
cables of 0.5 m (1.64') to 2 m (6.56') length [see Figure 9 (page 21)].
Connecting the MSA 1040 controller to mixed model drive enclosures
MSA 1040 controllers support cabling of 6 Gb SAS link-rate LFF and SFF expansion modules—in mixed
model fashion—as shown in Figure 12 (page 24), and further described in the HP MSA 1040 Cable Configuration Guide; the HP MSA 1040 Quick Start Instructions; QuickSpecs; and HP white papers (listed
below).
Cable requirements for MSA 1040 enclosures
IMPORTANT:
• When installing SAS cables to expansion modules, use only supported mini-SAS x4 cables with SFF-8088 connectors supporting your 6 Gb application.
• Mini-SAS to mini-SAS 0.5 m (1.64') cables are used to connect cascaded enclosures in the rack.
• See QuickSpecs for information about which cables are provided with your MSA 1040 products: http://www.hp.com/support/msa1040/QuickSpecs
• If additional or longer cables are required, they must be ordered separately (see relevant MSA 1040 QuickSpecs or P2000 G3 QuickSpecs for your products).
• The maximum expansion cable length allowed in any configuration is 2 m (6.56').
• Cables required, if not included, must be separately purchased.
• When adding more than two drive enclosures, you may need to purchase additional 1 m or 2 m cables, depending upon the number of enclosures and the cabling method used: spanning 3 drive enclosures requires 1 m (3.28') cables.
• See QuickSpecs (link provided above) for information about cables supported for host connection:
• Qualified Fibre Channel SFP and cable options
• Qualified 10GbE iSCSI SFP and cable options
• Qualified 1 Gb RJ-45 SFP and cable options
For additional information concerning cabling of MSA 1040 controllers and D2700 drive enclosures, visit:
http://www.hp.com/support/msa1040
Browse for the following reference documents:
• HP MSA 1040 Cable Configuration Guide
• HP Remote Snap technical white paper
• HP MSA 1040/2040 best practices
NOTE: For clarity, the schematic illustrations of controller and expansion modules shown in this section provide only relevant details such as expansion ports within the module face plate outline. For detailed illustrations showing all components, see "Controller enclosure—rear panel layout" (page 14).
Figure 8 Cabling connections between the MSA 1040 controller and a single drive enclosure
The figure above shows examples of the MSA 1040 controller enclosure—equipped with a single controller
module—cabled to a single drive enclosure equipped with a single expansion module. The empty I/O
module slot in each of the enclosures is covered with an IOM blank to ensure sufficient air flow during
enclosure operation. The remaining illustrations in the section feature enclosures equipped with dual IOMs.
IMPORTANT: If the MSA 1040 controller enclosure is configured with a single controller module, the
controller module must be installed in the upper slot, and an I/O module blank must be installed in the
lower slot (shown above). This configuration is required to allow sufficient air flow through the enclosure
during operation.
Figure 9 Cabling connections between the MSA 1040 controller and a single drive enclosure
Figure 10 Cabling connections between MSA 1040 controllers and LFF drive enclosures
The diagram at left (above) shows fault-tolerant cabling of a dual-controller enclosure cabled to MSA 2040
6 Gb 3.5" 12-drive enclosures featuring dual-expansion modules. Controller module 1A is connected to
expansion module 2A, with a chain of connections cascading down (blue). Controller module 1B is
connected to the lower expansion module (4B) of the last drive enclosure, with connections moving in the
opposite direction (green). Fault-tolerant cabling allows any drive enclosure to fail—or be removed—while
maintaining access to other enclosures.
The diagram at right (above) shows the same storage components connected using straight-through
cabling. Using this method, if a drive enclosure fails, the enclosures that follow the failed enclosure in the
chain are no longer accessible until the failed enclosure is repaired or replaced.
Both illustrations in Figure 10 show the maximum number of supported enclosures that can be cabled
together in an MSA 1040 system configuration: up to four enclosures (including the controller enclosure).
Figure 11 Cabling connections between MSA 1040 controllers and SFF drive enclosures
The figure above provides sample diagrams reflecting cabling of MSA 1040 controller enclosures and
D2700 6 Gb drive enclosures.
The diagram at left shows fault-tolerant cabling of a dual-controller enclosure and D2700 6 Gb drive
enclosures featuring dual-expansion modules. Controller module 1A is connected to expansion module
2A, with a chain of connections cascading down (blue). Controller module 1B is connected to the lower
expansion module (4B) of the last drive enclosure, with connections moving in the opposite direction
(green). Fault-tolerant cabling allows any drive enclosure to fail—or be removed—while maintaining
access to other enclosures.
The diagram at right shows the same storage components connected using straight-through cabling. Using
this method, if a drive enclosure fails, the enclosures that follow the failed enclosure in the chain are no
longer accessible until the failed enclosure is repaired or replaced.
Both illustrations in Figure 11 show the maximum number of supported enclosures that can be cabled
together in an MSA 1040 system configuration: up to four enclosures (including the controller enclosure).
(Figure 12 key: 1 = LFF 12-drive enclosure; 2 = SFF 25-drive enclosure. The left diagram shows fault-tolerant cabling; the right diagram shows straight-through cabling.)
Figure 12 Cabling connections between MSA 1040 controllers and drive enclosures of mixed model type
The figure above provides sample diagrams reflecting cabling of MSA 1040 controller enclosures and
supported mixed model drive enclosures. In this example, the SFF drive enclosures follow the LFF drive
enclosures. Given that both drive enclosure models use 6 Gb SAS link-rate and SAS2.0 expanders, they
can be ordered in desired sequence within the array, following the controller enclosure.
MSA 1040 controller enclosures support up to four enclosures (including the controller enclosure) for
adding storage. Both illustrations in Figure 12 show the maximum number of supported enclosures that can
be cabled together in an MSA 1040 system configuration.
IMPORTANT: For comprehensive configuration options and associated illustrations, refer to the HP MSA 1040 Cable Configuration Guide.
Testing enclosure connections
NOTE: Once the power-on sequence for enclosures succeeds, the storage system is ready to be
connected to hosts, as described in "Connecting the enclosure to data hosts" (page 29).
Powering on/powering off
Before powering on the enclosure for the first time:
• Install all disk drives in the enclosure so the controller can identify and configure them at power-up.
• Connect the cables and power cords to the enclosures as explained in the quick start instructions.
NOTE: MSA 1040 controller enclosures and drive enclosures do not have power switches (they are
switchless). They power on when connected to a power source, and they power off when disconnected.
• Generally, when powering up, make sure to power up the enclosures and associated data host in the following order:
• Drive enclosures first
This ensures that disks in each drive enclosure have enough time to completely spin up before being scanned by the controller modules within the controller enclosure.
While enclosures power up, their LEDs blink. After the LEDs stop blinking—if no LEDs on the front and back of the enclosure are amber—the power-on sequence is complete, and no faults have been detected. See "LED descriptions" (page 67) for descriptions of LED behavior.
• Controller enclosure next
Depending upon the number and type of disks in the system, it may take several minutes for the system to become ready.
• Data host last (if powered down for maintenance purposes)
TIP: Generally, when powering off, you will reverse the order of steps used for powering on.
Power cycling procedures vary according to the type of power supply unit included with the enclosure. For
controller and drive enclosures configured with the switchless AC power supplies, refer to the procedure
described under AC power supply below. For procedures pertaining to a) controller enclosures configured
with DC power supplies, or b) previously installed drive enclosures featuring power switches, see "DC and
AC power supplies equipped with a power switch" (page 26).
IMPORTANT: See "Power cord requirements" (page 78) and QuickSpecs for more information about
power cords supported by MSA 1040 enclosures.
AC power supply
Enclosures equipped with switchless power supplies rely on the power cord for power cycling. Connecting
the cord from the power supply power cord connector to the appropriate power source facilitates power
on; whereas disconnecting the cord from the power source facilitates power off.
Figure 13 AC power supply
To power on the system:
1. Obtain a suitable AC power cord for each AC power supply that will connect to a power source.
2. Plug the power cord into the power cord connector on the back of the drive enclosure (see Figure 13).
Plug the other end of the power cord into the rack power source. Wait several seconds to allow the
disks to spin up.
Repeat this sequence for each power supply within each drive enclosure.
3. Plug the power cord into the power cord connector on the back of the controller enclosure (see
Figure 13). Plug the other end of the power cord into the rack power source.
Repeat the sequence for the controller enclosure’s other switchless power supply.
To power off the system:
1. Stop all I/O from hosts to the system [see "Stopping I/O" (page 47)].
2. Shut down both controllers using either method described below:
• Use the SMU (Storage Management Utility) to shut down both controllers, as described in the online
help and web-posted HP MSA 1040 SMU Reference Guide.
Proceed to step 3.
• Use the command-line interface (CLI) to shut down both controllers, as described in the HP MSA 1040 CLI Reference Guide (a brief example follows this procedure).
3. Disconnect the power cord male plug from the power source.
4. Disconnect the power cord female plug from the power cord connector on the power supply.
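In step 2 above, shutting down both Storage Controllers from the CLI is a single command, issued from a session opened as described in "Connecting to the controller CLI port". The sketch below assumes the shutdown command and its both argument; confirm the syntax in the HP MSA 1040 CLI Reference Guide before relying on it:
# shutdown both
Wait for the command to report completion (cached data is written to disk as part of the shutdown) before removing power in steps 3 and 4.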
NOTE: Power cycling for enclosures equipped with a power switch is described below.
DC and AC power supplies equipped with a power switch
DC power supplies and legacy AC power supplies are shown below. Each model has a power switch.
Figure 14 DC and AC power supplies with power switch
Connect power cable to DC power supply
Locate two DC power cables that are compatible with your controller enclosure.
Figure 15 DC power cable featuring sectioned D-shell and lug connectors
See Figure 15 and the illustration at left (in Figure 14) when performing the following steps:
1. Verify that the enclosure power switches are in the Off position.
2. Connect a DC power cable to each DC power supply using the D-shell connector.
Use the UP arrow on the connector shell to ensure proper positioning (see the left side view of the D-shell connector in Figure 15).
3. Tighten the screws at the top and bottom of the shell, applying a torque between 1.7
N-m (15 in-lb) and 2.3 N-m (20 in-lb), to securely attach the cable to the DC power
supply module.
4. To complete the DC connection, secure the other end of each cable wire component
of the DC power cable to the target DC power source.
Check the three individual DC cable wire labels before connecting each cable wire lug to its power
source. One cable wire is labeled ground (GND) and the other two wires are labeled positive (+L) and negative (-L), respectively (shown in Figure 15 above).
CAUTION: Connecting to a DC power source outside the designated -48V DC nominal range
(-36V DC to -72V DC) may damage the enclosure.
Connect power cord to legacy AC power supply
Obtain two suitable AC power cords: one for each AC power supply that will connect to a separate power
source. See the illustration at right [in Figure 14 (page 26)] when performing the following steps:
1. Verify that the enclosure power switches are in the Off position.
2. Identify the power cord connector on the power supply, and locate the target power source.
3. For each power supply, perform the following actions:
a. Plug one end of the cord into the power cord connector on the power supply.
b. Plug the other end of the power cord into the rack power source.
4. Verify connection of primary power cords from the rack to separate external power sources.
Power cycle
To power on the system:
1. Power up drive enclosure(s).
Press the power switches at the back of each drive enclosure to the On position. Allow several seconds
for the disks to spin up.
2. Power up the controller enclosure next.
Press the power switches at the back of the controller enclosure to the On position. Allow several
seconds for the disks to spin up.
To power off the system:
1. Stop all I/O from hosts to the system [see "Stopping I/O" (page 47)].
2. Shut down both controllers using either method described below:
• Use the SMU to shut down both controllers, as described in the online help and HP MSA 1040 SMU Reference Guide.
Proceed to step 3.
• Use the command-line interface to shut down both controllers, as described in the HP MSA 1040 CLI Reference Guide.
3. Press the power switches at the back of the controller enclosure to the Off position.
4. Press the power switches at the back of each drive enclosure to the Off position.
4 Connecting hosts
Host system requirements
Data hosts connected to HP MSA 1040 arrays must meet requirements described herein. Depending on
your system configuration, data host operating systems may require that multi-pathing is supported.
If fault-tolerance is required, then multi-pathing software may be required. Host-based multi-path software
should be used in any configuration where two logical paths between the host and any storage volume
may exist at the same time. This would include most configurations where there are multiple connections to
the host or multiple connections between a switch and the storage.
• Use native Microsoft MPIO DSM support with Windows Server 2008 and Windows Server 2012. Use
either the Server Manager or the command-line interface (mpclaim CLI tool) to perform the installation.
Refer to the following web sites for information about using Windows native MPIO DSM:
http://support.microsoft.com/gp/assistsupport
http://technet.microsoft.com (search the site for “multipath I/O overview”)
• Use the HP Multi-path Device Mapper for Linux Software with Linux servers. To download the
appropriate device mapper multi-path enablement kit for your specific enterprise Linux operating
system, go to http://www.hp.com/
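To confirm that multi-pathing is active once a host has two paths to the storage, commands along the following lines can be used with the Windows and Linux tools named above. This is a minimal sketch rather than a documented procedure; verify the options against Microsoft MPIO and Linux Device Mapper documentation:
rem Windows: add MPIO support for all attached MPIO-capable devices (the server reboots)
mpclaim -r -i -a ""
rem Windows: list MPIO-managed disks and their load-balance policies
mpclaim -s -d
# Linux: list device-mapper multipath devices and the active/enabled paths to each LUN
multipath -ll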
Connecting the enclosure to data hosts
A host identifies an external port to which the storage system is attached. The external port may be a port
in an I/O adapter (such as an FC HBA) in a server. Cable connections vary depending on configuration.
Common cabling configurations are shown in this section. A list of supported configurations resides on the
MSA 1040 manuals site at http://www.hp.com/support/msa1040/manuals (see also http://www.hp.com/storage/spock):
• HP MSA 1040 Quick Start Instructions
• HP MSA 1040 Cable Configuration Guide
These documents provide installation details and describe newly-supported direct attach, switch-connect,
and storage expansion configuration options for MSA 1040 products. For specific information about
qualified host cabling options, see "Cable requirements for MSA 1040 enclosures" (page 20).
Any number or combination of LUNs can be shared among a maximum of 64 host ports (initiators),
provided the total does not exceed 1,024 LUNs per MSA 1040 storage system (single-controller or
dual-controller configuration).
MSA 1040 Storage models
MSA 1040 models use Converged Network Controller technology, allowing you to select the desired host
interface protocol from the available FC or iSCSI host interface protocols supported by the system. The
small form-factor pluggable (SFP transceiver or SFP) connectors used in host ports are further described in
the subsections below. Also see "MSA 1040 Storage models" (page 11) for more information concerning
use of these host ports.
Fibre Channel protocol
The MSA 1040 controller enclosures support one or two controller modules using the Fibre Channel
interface protocol for host connection. Each controller module provides two host ports designed for use
with an FC SFP supporting data rates up to 8 Gbit/s. When configured with FC SFPs, MSA 1040
controller enclosures can also be cabled to support the optionally-licensed Remote Snap replication feature
via the FC ports.
The MSA 1040 controller supports Fibre Channel Arbitrated Loop (public or private) or point-to-point
topologies. Loop protocol can be used in a physical loop or in a direct connection between two devices.
Point-to-point protocol is used to connect to a fabric switch. See the set host-parameters command
within the HP MSA 1040 CLI Reference Guide for command syntax and details about connection mode
parameter settings relative to supported link speeds.
Fibre Channel ports are used in either of two capacities:
• To connect two storage systems through a Fibre Channel switch for use of Remote Snap replication.
• For attachment to FC hosts directly, or through a switch used for the FC traffic.
The first usage option requires valid licensing for the Remote Snap replication feature, whereas the second
option requires that the host computer supports FC and optionally, multipath I/O.
TIP: Use the SMU Configuration Wizard to set FC port speed. Within the SMU Reference Guide, see
“Configuring the system > Using the Configuration Wizard > Configuring host ports,” and scroll to FC port
options. Use the CLI command set host-parameters to set FC port options, and use either the show host-parameters or show ports commands to view information about host ports.
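For example, reviewing and setting the FC port speed from the CLI might look like the sketch below. The speed and ports values are illustrative assumptions; confirm the exact set host-parameters syntax in the HP MSA 1040 CLI Reference Guide:
# show ports
# set host-parameters speed 8g ports a1,a2
# show host-parameters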
10GbE iSCSI protocol
The MSA 1040 controller enclosures support one or two controller modules using the Internet SCSI
interface protocol for host connection. Each controller module provides two host ports designed for use
with a 10GbE iSCSI SFP supporting data rates up to 10 Gbit/s, using either one-way or mutual CHAP
(Challenge-Handshake Authentication Protocol).
TIP: See the “Configuring CHAP” topic in the SMU Reference Guide. Also see the important statement
about CHAP preceding the “Using the Replication Setup Wizard” procedure within that guide.
TIP: Use the SMU Configuration Wizard to set iSCSI port options. Within the SMU Reference Guide, see
“Configuring the system > Using the Configuration Wizard > Configuring host ports,” and scroll to iSCSI
port options. Use the CLI command set host-parameters to set iSCSI port options, and use either the
show host-parameters or show ports commands to view information about host ports.
The 10GbE iSCSI ports are used in either of two capacities:
• To connect two storage systems through a switch for use of Remote Snap replication.
• For attachment to 10GbE iSCSI hosts directly, or through a switch used for the 10GbE iSCSI traffic.
The first usage option requires valid licensing for the Remote Snap replication feature, whereas the second
option requires that the host computer supports Ethernet, iSCSI, and optionally, multipath I/O.
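As noted in the CHAP tip above, a CHAP record can be defined for each iSCSI initiator from the CLI before hosts log in. The sketch below is illustrative only: the initiator IQN and secret are placeholders, and the create chap-record and show chap-records command syntax should be confirmed in the HP MSA 1040 CLI Reference Guide.
# create chap-record name iqn.1991-05.com.microsoft:host1 secret 123456abcDEF
# show chap-records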
1 Gb iSCSI protocol
The MSA 1040 controller enclosures support one or two controller modules using the Internet SCSI
interface protocol for host port connection. Each controller module provides two iSCSI host ports
configured with an RJ-45 SFP supporting data rates up to 1 Gbit/s, using either one-way or mutual CHAP.
TIP: See the “Configuring CHAP” topic in the SMU Reference Guide. Also see the admonition about
CHAP preceding the “Using the Replication Setup Wizard” procedure within that guide.
TIP: Use the SMU Configuration Wizard to set iSCSI port options. Within the SMU Reference Guide, see
“Configuring the system > Using the Configuration Wizard > Configuring host ports,” and scroll to iSCSI
port options. Use the CLI command set host-parameters to set iSCSI port options, and use either the
show host-parameters or show ports commands to view information about host ports.
The 1 Gb iSCSI ports are used in either of two capacities:
• To connect two storage systems through a switch for use of Remote Snap replication.
• For attachment to 1 Gb iSCSI hosts directly, or through a switch used for the 1 Gb iSCSI traffic.
The first usage option requires valid licensing for the Remote Snap replication feature, whereas the second option requires that the host computer supports Ethernet, iSCSI, and optionally, multipath I/O.
Connecting direct attach configurations
The MSA 1040 controller enclosures support up to four direct-connect server connections, two per
controller module. Connect appropriate cables from the server HBAs to the controller host ports as
described below, and shown in the following illustrations.
To connect the MSA 1040 controller to a server or switch—using FC SFPs in controller ports—select Fibre
Channel cables supporting 4/8 Gb data rates, that are compatible with the host port SFP connector (see
QuickSpecs). Such cables are also used for connecting a local storage system to a remote storage system
via a switch, to facilitate use of the optional Remote Snap replication feature. The maximum speed
supported for FC is 8 Gbit/s.
To connect the MSA 1040 controller to a server or switch—using 10GbE iSCSI SFPs in controller
ports—select the appropriate qualified 10GbE SFP option (see QuickSpecs). Such cables are also used for
connecting a local storage system to a remote storage system via a switch, to facilitate use of the optional
Remote Snap replication feature.
To connect the MSA 1040 controller to a server or switch—using the 1 Gb SFPs in controller ports—select
the appropriate qualified RJ-45 SFP option (see QuickSpecs). Such cables are also used for connecting a
local storage system to a remote storage system via a switch, to facilitate use of the optional Remote Snap
replication feature.
NOTE: The MSA 1040 diagrams that follow use a single representation for each cabling example. This is
due to the fact that the port locations and labeling are identical for each of the three possible SFP options
supported by the system.
Single-controller configurations
One server/one HBA/single path
Figure 16 Connecting hosts: direct attach—one server/one HBA/single path
Dual-controller configurations
One server/one HBA/dual path
Figure 17 Connecting hosts: direct attach—one server/one HBA/dual path
Two servers/one HBA per server/dual path
Figure 18 Connecting hosts: direct attach—two servers/one HBA per server/dual path
Connecting remote management hosts
The management host directly manages systems out-of-band over an Ethernet network.
1. Connect an RJ-45 Ethernet cable to the network management port on each MSA 1040 controller.
2. Connect the other end of each Ethernet cable to a network that your management host can access
(preferably on the same subnet).
NOTE: Connections to this device must be made with shielded cables—grounded at both ends—with
metallic RFI/EMI connector hoods, in order to maintain compliance with NEBS and FCC Rules and
Regulations.
Connecting two storage systems to replicate volumes
Remote Snap replication is a licensed disaster-recovery feature that performs asynchronous (batch)
replication of block-level data from a volume on a local (primary) storage system to a volume that can be
on the same system or a second, independent system. The second system can be located at the same site
as the first system, or at a different site.
The two associated standard volumes form a replication set, and only the primary volume (source of data)
can be mapped for access by a server. Both systems must be licensed to use Remote Snap, and must be
connected through switches to the same fabric or network (no direct attach). The server accessing the
replication set need only be connected to the primary system. If the primary system goes offline, a
connected server can access the replicated data from the secondary system.
Replication configuration possibilities are many, and can be cabled—in switch attach fashion—to support
MSA 1040 systems on the same network, or on different networks. As you consider the physical
connections of your system—specifically connections for replication—keep several important points in
mind:
• Ensure that controllers have connectivity between systems, whether local or remote.
• Assign specific ports for replication whenever possible. By specifically assigning ports available for
replication, you free the controller from scanning and assigning the ports at the time replication is
performed.
• For remote replication, ensure that all ports assigned for replication are able to communicate
appropriately with the remote replication system (see verify remote-link in the CLI Reference Guide for
more information).
• Allow a sufficient number of ports to perform replication. This permits the system to balance the load
across those ports as I/O demands rise and fall. On dual-controller enclosures, if some of the volumes
replicated are owned by controller A and others are owned by controller B, then allow one port for
replication on each controller module to address replication traffic load.
• For the sake of system security, do not unnecessarily expose the controller module network port to an
external network connection.
Conceptual cabling examples are provided addressing cabling on the same network and cabling relative
to different networks. Both single and dual-controller MSA 1040 environments support replication.
IMPORTANT: Remote Snap must be licensed on all systems configured for replication, and the controller
module firmware version must be compatible on all systems licensed for replication.
NOTE: Systems must be correctly cabled before performing replication. See the following documents for
more information about using Remote Snap to perform replication tasks:
• HP Remote Snap technical white paper
• HP MSA 1040/2040 best practices
• HP MSA 1040 SMU Reference Guide
• HP MSA 1040 CLI Reference Guide
• HP MSA Event Descriptions Reference Guide
• HP MSA 1040 Cable Configuration Guide
To access user documents, see the MSA 1040 manuals site:
http://www.hp.com/support/msa1040/manuals
To access a technical white paper about Remote Snap replication software, navigate to the link shown:
This section shows example replication configurations for MSA 1040 controller enclosures. The following
illustrations provide conceptual examples of cabling to support Remote Snap replication. Blue cables show
I/O traffic and green cables show replication traffic.
NOTE: A simplified version of the MSA 1040 controller enclosure rear panel is used in cabling
illustrations to portray either the FC or iSCSI host interface protocol. The rear panel layouts for the three
configurations are identical; only the external connectors used in the host interface ports differ.
Once the MSA 1040 systems are physically cabled, see the SMU Reference Guide or online help for information about configuring, provisioning, and using the optional Remote Snap feature.
NOTE: See the HP MSA 1040 SMU Reference Guide for more information about using Remote Snap to
perform replication tasks. The SMU Replication Setup Wizard guides you through replication setup.
Host ports and replication
MSA 1040 controller modules use qualified SFP options of the same type (FC or iSCSI). Each host port can
perform I/O or replication.
Single-controller configuration
One server/single network/two switches
The diagram below shows the rear panel of two MSA 1040 controller enclosures with both I/O and
replication occurring on the same network. Each enclosure is equipped with a single controller module.
The controller modules use qualified SFP options of the same type, supporting the host interface protocol
ordered from your supplier.
Figure 20 Connecting two storage systems for Remote Snap: one server/two switches/one location
Host ports used for replication must be connected to at least one switch. For optimal protection, use two
switches, with one replication port from each controller connected to the first switch, and the other
replication port from each controller connected to the second switch. Using two switches in tandem avoids
the potential single point of failure inherent to using a single switch.
Dual-controller configuration
Each of the following diagrams shows the rear panel of two MSA 1040 controller enclosures equipped with
dual-controller modules. The controller modules use qualified SFP options of the same type, supporting the
host interface protocol ordered from your supplier.
Multiple servers/single network
The diagram below shows the rear panel of two MSA 1040 controller enclosures with both I/O and
replication occurring on the same physical network.
Figure 21 Connecting two storage systems for Remote Snap: multiple servers/one switch/one location
The diagram below shows host interface connections and replication, with I/O and replication occurring on different networks. For optimal protection, use two switches. Connect one port from each controller
module to the first switch to facilitate I/O traffic, and connect one port from each controller module to the
second switch to facilitate replication. Using two switches in tandem avoids the potential single point of
failure inherent to using a single switch; however, if one switch fails, either I/O or replication will fail,
depending on which of the switches fails.
Figure 22 Connecting two storage systems for Remote Snap: multiple servers/switches/one location
Virtual Local Area Network (VLAN) and zoning can be employed to provide separate networks for iSCSI
and FC, respectively. Whether using a single switch or multiple switches for a particular interface, you can
create a VLAN or zone for I/O and a VLAN or zone for replication to isolate I/O traffic from replication
traffic. Since each switch would include both VLANs or zones, the configuration would function as multiple
networks.
The diagram below shows the rear panel of two MSA 1040 controller enclosures with both I/O and
replication occurring on different networks.
Figure 23 Connecting two storage systems for Remote Snap: multiple servers/switches/two locations
Although not shown in the preceding cabling examples, you can cable replication-enabled MSA 1040
and compatible MSA 1040 and P2000 G3 systems—via switch attach—for performing replication tasks.
Updating firmware
After installing the hardware and powering on the storage system components for the first time, verify that the controller modules, expansion modules, and disk drives are using the current firmware release. Using the SMU, right-click the system in the Configuration View panel, and select Tools > Update Firmware. The Update Firmware panel displays the currently installed firmware versions, and enables you to update them. Optionally, you can update firmware using FTP (File Transfer Protocol) as described in the MSA 1040 SMU Reference Guide.
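The FTP method amounts to transferring the firmware bundle to a controller management IP address. The sketch below is illustrative only: the firmware file name is a placeholder, and the login account and the flash keyword used with put should be confirmed in the SMU Reference Guide before attempting an update.
ftp 192.168.0.10
(log in with the manage account and its password)
ftp> put firmware-file.bin flash
ftp> quit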
IMPORTANT: See the “About firmware update” and “Updating firmware” topics within the MSA 1040
SMU Reference Guide before performing a firmware update.
NOTE: To locate and download the latest software and firmware updates for your product, go to http://www.hp.com/support.
5 Connecting to the controller CLI port
Device description
The MSA 1040 controllers feature a command-line interface port used to cable directly to the controller
and initially set IP addresses, or perform other configuration tasks. This port employs a mini-USB Type B
form factor, requiring a cable that is supplied with the controller, and additional support, so that a server
or other computer running a Linux or Windows operating system can recognize the controller enclosure as
a connected device. Without this support, the computer might not recognize that a new device is
connected, or might not be able to communicate with it. For Linux computers, no new driver files are
needed, but a Linux configuration file must be created or modified.
For Windows computers, the Windows USB device driver must be downloaded from a CD or HP website,
and installed on the computer that will be cabled directly to the controller command-line interface port.
NOTE: Directly cabling to the command-line interface (CLI) port is an out-of-band connection because it
communicates outside the data paths used to transfer information from a computer or network to the
controller enclosure.
Preparing a Linux computer before cabling to the CLI port
Although Linux operating systems do not require installation of a device driver, certain parameters must be
provided during driver loading to enable recognition of the MSA 1040 controller enclosures. To load the
Linux device driver with the correct parameters, the following command is required:
Optionally, the information can be incorporated into the /etc/modules.conf file.
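The modprobe invocation referenced above passes the controller USB vendor and product IDs to the usbserial driver so that the CLI port enumerates as a serial device (typically /dev/ttyUSB0). The IDs shown below are placeholders; substitute the values documented for your controller enclosure:
# modprobe usbserial vendor=0xVVVV product=0xPPPP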
Downloading a device driver for Windows computers
A Windows USB device driver download is provided for communicating directly with the controller
command-line interface port using a USB cable to connect the controller enclosure and the computer.
NOTE: Access the download from your HP MSA support page at http://www.hp.com/support.
The USB device driver is also available from the HP MSA 1040 Software Support and Documentation CD
that shipped with your product.
Obtaining IP values
One method of obtaining IP values for your system is to use a network management utility to discover “HP
MSA Storage” devices on the local LAN through SNMP. Alternative methods for obtaining IP values for
your system are described in the following subsections.
Setting network port IP addresses using DHCP
In DHCP mode, network port IP address, subnet mask, and gateway values are obtained from a DHCP
server if one is available. If a DHCP server is unavailable, current addressing is unchanged.
1. Look in the DHCP server’s pool of leased addresses for two IP addresses assigned to “HP MSA
Storage.”
2. Use a ping broadcast to try to identify the device through the ARP table of the host.
If you do not have a DHCP server, you will need to ask your system administrator to allocate two IP
addresses, and set them using the command-line interface during initial configuration (described
below).
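From a Linux host on the same subnet, step 2 might look like the following sketch; the broadcast address shown assumes the management ports received leases on a 10.0.0.0/24 network:
# ping -b 10.0.0.255
# arp -n
The two newly leased controller addresses should then appear in the ARP table output.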
NOTE: For more information, see Using the Configuration Wizard > Configuring network ports within the HP MSA 1040 SMU Reference Guide.
Setting network port IP addresses using the CLI port and cable
You can set network port IP addresses manually using the command-line interface port and cable. If you
have not done so already, you need to enable your system for using the command-line interface port [also
see "Using the CLI port and cable—known issues on Windows" (page 40)].
NOTE: For Linux systems, see "Preparing a Linux computer before cabling to the CLI port" (page 37). For
Windows systems see "Downloading a device driver for Windows computers" (page 37).
Network ports on controller module A and controller module B are configured with the following
factory-default IP settings:
• Management Port IP Address: 10.0.0.2 (controller A), 10.0.0.3 (controller B)
• IP Subnet Mask: 255.255.255.0
• Gateway IP Address: 10.0.0.1
If the default IP addresses are not compatible with your network, you must set an IP address for each
network port using the command-line interface embedded in each controller module. The command-line
interface enables you to access the system using the USB (universal serial bus) communication interface
and terminal emulation software. The USB cable and CLI port support USB version 2.0.
Use the CLI commands described in the steps below to set the IP address for the network port on each
controller module. Once new IP addresses are set, you can change them as needed using the SMU. Be
sure to change the IP address via the SMU before changing the network configuration.
NOTE: Changing IP settings can cause management hosts to lose access to the storage system.
1. From your network administrator, obtain an IP address, subnet mask, and gateway address for
controller A, and another for controller B.
Record these IP addresses so that you can specify them whenever you manage the controllers using the
SMU or the CLI.
2. Use the provided USB cable to connect controller A to a USB port on a host computer. The USB mini 5
male connector plugs into the CLI port as shown in Figure 24 (generic controller module is shown).
Figure 24 Connecting a USB cable to the CLI port
3. Enable the CLI port for subsequent communication:
• Linux customers should enter the command syntax provided in "Preparing a Linux computer before
cabling to the CLI port" (page 37).
• Windows customers should locate the downloaded device driver described in "Downloading a
device driver for Windows computers" (page 37), and follow the instructions provided for proper
installation.
4. Start and configure a terminal emulator, such as HyperTerminal or VT-100, using the display settings in
Table 2 (page 39) and the connection settings in Table 3 (page 39) (also, see the note following this
procedure).
Table 2 Terminal emulator display settings
Parameter                 Value
Terminal emulation mode   VT-100 or ANSI (for color support)
Font                      Terminal
Translations              None
Columns                   80
Table 3 Terminal emulator connection settings
Parameter        Value
Connector        COM3 (for example) (1)(2)
Baud rate        115,200
Data bits        8
Parity           None
Stop bits        1
Flow control     None
(1) Your server or laptop configuration determines which COM port is used for Disk Array USB Port.
(2) Verify the appropriate COM port for use with the CLI.
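On a Linux host, the same connection settings can be applied with a terminal program such as screen or minicom; a minimal sketch, assuming the CLI port enumerates as /dev/ttyUSB0 (use either command):
# screen /dev/ttyUSB0 115200
# minicom -D /dev/ttyUSB0 -b 115200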
5. In the terminal emulator, connect to controller A.
6. Press Enter to display the CLI prompt (#).
The CLI displays the system version, MC version, and login prompt:
a. At the login prompt, enter the default user manage.
b. Enter the default password !manage.
If the default user or password—or both—have been changed for security reasons, enter the secure
login credentials instead of the defaults shown above.
7. At the prompt, type the following command to set the values you obtained in step 1 for each network
port, first for controller A and then for controller B:
set network-parameters ip address netmask netmask gateway gateway controller a|b
where:
• address is the IP address of the controller
• netmask is the subnet mask
• gateway is the IP address of the subnet router
• a|b specifies the controller whose network parameters you are setting
For example:
# set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway
192.168.0.1 controller a
# set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway
192.168.0.1 controller b
8. Type the following command to verify the new IP addresses:
show network-parameters
Network parameters, including the IP address, subnet mask, and gateway address are displayed for
each controller.
9. Use the ping command to verify network connectivity.
For example:
# ping 192.168.0.1 (gateway)
Info: Pinging 192.168.0.1 with 4 packets.
Success: Command completed successfully. - The remote computer responded with 4
packets.
10. In the host computer's command window, type the following command to verify connectivity, first for
controller A and then for controller B:
ping controller-IP-address
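For example, using the controller addresses assigned in step 7 (the addresses shown are the hypothetical values from the earlier example):
ping 192.168.0.10
ping 192.168.0.11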
If you cannot access your system for at least three minutes after changing the IP address, your network
might require you to restart the Management Controller(s) using the CLI. When you restart a
Management Controller, communication with it is temporarily lost until it successfully restarts.
Type the following command to restart the management controller on both controllers:
restart mc both
11. When you are done using the CLI, exit the emulator.
12. Retain the new IP addresses to access and manage the controllers, using either the SMU or the CLI.
NOTE: Using HyperTerminal with the CLI on a Microsoft Windows host:
On a host computer connected to a controller module’s mini-USB CLI port, incorrect command syntax in a
HyperTerminal session can cause the CLI to hang. To avoid this problem, use correct syntax, use a different
terminal emulator, or connect to the CLI using telnet rather than the mini-USB cable.
Be sure to close the HyperTerminal session before shutting down the controller or restarting its Management
Controller. Otherwise, the host’s CPU cycles may rise unacceptably.
If communication with the CLI is disrupted when using an out-of-band cable connection, communication
can sometimes be restored by disconnecting and reattaching the mini-USB cable as described in step 2 on
page 38.
The USB device driver is accessible from the Software Support and Documentation CD that shipped with
your product. The USB device driver is also available as a download.
NOTE: Access the download from your HP MSA support website at http://www.hp.com/support.
Using the CLI port and cable—known issues on Windows
When using the CLI port and cable for setting controller IP addresses, be aware of the following known
issues on Microsoft Windows platforms.
Problem
On Windows operating systems, the USB CLI port may encounter issues preventing the terminal emulator
from reconnecting to storage after the Management Controller (MC) restarts or the USB cable is unplugged
and reconnected.
Workaround
Follow these steps when using the mini-USB cable and USB Type B CLI port to communicate out-of-band
between the host and controller module for setting network port IP addresses.
To create a new connection or open an existing connection (HyperTerminal):
1. From the Windows Control Panel, select Device Manager.
The Device Manager page should show “Ports (COM & LPT)” with an entry entitled “Disk Array USB Port (COMn)”—where n is your system’s COM port number.
2. Connect using the USB COM port and Detect Carrier Loss option.
a. Select Connect To > Connect using: > pick a COM port from the list.
b. Select the Detect Carrier Loss check box.
3. Set network port IP addresses using the CLI (see procedure on page 38).
To restore a hung connection when the MC is restarted (any supported terminal emulator):
1. If the connection hangs, disconnect and quit the terminal emulator program.
a. Using Device Manager, locate the COMn port assigned to the Disk Array Port.
b. Right-click on the hung Disk Array USB Port (COMn), and select Disable.
c. Wait for the port to disable.
2. Right-click on the previously hung—now disabled—Disk Array USB Port (COMn), and select Enable.
3. Start the terminal emulator and connect to the COM port.
4. Set network port IP addresses using the CLI (see procedure on page 38).
6 Basic operation
Verify that you have completed the sequential “Installation Checklist” instructions in Table 1 (page 19).
Once you have successfully completed steps 1 through 8 therein, you can access the management
interface using your web browser, to complete the system setup.
Accessing the SMU
Upon completing the hardware installation, you can access the web-based management interface—SMU
(Storage Management Utility)—from the controller module to monitor and manage the storage system.
Invoke your web browser, and enter the IP address of the controller module’s network port in the address
field (obtained during completion of “Installation Checklist” step 8), then press Enter. To Sign In to the
SMU, use the default user name manage and password !manage. If the default user or password—or
both—have been changed for security reasons, enter the secure login credentials instead of the defaults.
This brief Sign In discussion assumes proper web browser setup.
IMPORTANT: For detailed information on accessing and using the SMU, see the “Getting started” section
in the web-posted HP MSA 1040 SMU Reference Guide.
The Getting Started section provides instructions for signing-in to the SMU, introduces key concepts,
addresses browser setup, and provides tips for using the main window and the help window.
TIP:
After signing in to the SMU, you can use online help as an alternative to consulting the reference guide.
Configuring and provisioning the storage system
Once you have familiarized yourself with the SMU, use it to configure and provision the storage system. If
you are licensed to use the optional Remote Snap feature, you may also need to set up storage systems for
replication. Refer to the following topics within the SMU Reference Guide or online help:
• Configuring the system
• Provisioning the system
• Using Remote Snap to replicate volumes
NOTE: See the “Installing a license” topic within the SMU Reference Guide for instructions about creating
a temporary license or installing a permanent license.
IMPORTANT: If the system is used in a VMware environment, set the system Missing LUN Response option to use its Illegal Request setting. To do so, see either the configuration topic “Changing the missing LUN response” in the SMU Reference Guide or the command topic “set advanced-settings” in the CLI Reference Guide.
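A hedged CLI sketch of that change follows; the parameter name missing-lun-response and the value illegal are assumptions based on the CLI Reference Guide, so verify the exact syntax in the set advanced-settings topic before running it:
# set advanced-settings missing-lun-response illegal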
7 Troubleshooting
USB CLI port connection
MSA 1040 controllers feature a CLI port employing a mini-USB Type B form factor. If you encounter
problems communicating with the port after cabling your computer to the USB device, you may need to
either download a device driver (Windows), or set appropriate parameters via an operating system
command (Linux). See "Connecting to the controller CLI port" (page 37) for more information.
Fault isolation methodology
MSA 1040 controllers provide many ways to isolate faults. This section presents the basic methodology
used to locate faults within a storage system, and to identify the associated Field-replaceable Units (FRUs)
affected.
As noted in "Basic operation" (page 43), use the SMU to configure and provision the system upon
completing the hardware installation. As part of this process, configure and enable event notification so the
system will notify you when a problem occurs that is at or above the configured severity (see “Using the
Configuration Wizard > Configuring event notification” within the SMU Reference Guide). With event
notification configured and enabled, you can follow the recommended actions in the notification message
to resolve the problem, as further discussed in the options presented below.
Basic steps
The basic fault isolation steps are listed below:
• Gather fault information, including using system LEDs [see "Gather fault information" (page 46)].
• Determine where in the system the fault is occurring [see "Determine where the fault is occurring" (page 46)].
• If required, isolate the fault to a data path component or configuration [see "Isolate the fault"
(page 47)].
Cabling systems to enable use of the licensed Remote Snap feature—to replicate volumes—is another
important fault isolation consideration pertaining to initial system installation. See "Isolating Remote Snap
replication faults" (page 54) for more information about troubleshooting during initial setup.
Options available for performing basic steps
When performing fault isolation and troubleshooting steps, select the option or options that best suit your
site environment. The four options described below are not mutually exclusive; you can use them in any
combination. You can use the SMU to check the health icons/values for the system and its
components to ensure that everything is okay, or to drill down to a problem component. If you discover a
problem, both the SMU and the CLI provide recommended-action text online. Options for performing basic
steps are listed according to frequency of use:
• Use the SMU.
• Use the CLI.
• Monitor event notification.
• View the enclosure LEDs.
Use the SMU
The SMU uses health icons to show OK, Degraded, Fault, or Unknown status for the system and its
components. The SMU enables you to monitor the health of the system and its components. If any
component has a problem, the system health will be Degraded, Fault, or Unknown. Use the SMU GUI to
drill down to find each component that has a problem, and follow actions in the Health Recommendations
field for the component to resolve the problem.
Use the CLI
As an alternative to using the SMU, you can run the show system command in the CLI to view the health
of the system and its components. If any component has a problem, the system health will be Degraded,
Fault, or Unknown, and those components will be listed as Unhealthy Components. Follow the
recommended actions in the component Health Recommendations field to resolve the problem.
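For example, from a CLI session:
# show system
Review the Health and Health Reason fields in the output (field names may vary slightly by firmware release); as noted above, problem components appear as Unhealthy Components.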
Monitor event notification
With event notification configured and enabled, you can view event logs to monitor the health of the
system and its components. If a message tells you to check whether an event has been logged, or to view
information about an event in the log, you can do so using either the SMU or the CLI. Using the SMU, you
would view the event log and then click on the event message to see detail about that event. Using the CLI,
you would run the show events detail command (with additional parameters to filter the output) to
see the detail for an event.
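For example, a minimal CLI invocation (filtering parameters, such as limiting output to the most recent events, are described in the CLI Reference Guide and are not shown here):
# show events detail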
View the enclosure LEDs
You can view the LEDs on the hardware (while referring to LED descriptions for your enclosure model) to
identify component status. If a problem prevents access to either the SMU or the CLI, this is the only option
available. However, monitoring/management is often done at a management console using storage
management interfaces, rather than relying on line-of-sight to LEDs of racked hardware components.
Performing basic steps
You can use any of the available options in performing the basic steps comprising the fault isolation
methodology.
Gather fault information
When a fault occurs, it is important to gather as much information as possible. Doing so will help you
determine the correct action needed to remedy the fault.
Begin by reviewing the reported fault:
• Is the fault related to an internal data path or an external data path?
• Is the fault related to a hardware component such as a disk drive module, controller module, or power
supply?
By isolating the fault to one of the components within the storage system, you will be able to determine the
necessary action more quickly.
Determine where the fault is occurring
Once you have an understanding of the reported fault, review the enclosure LEDs. The enclosure LEDs are
designed to alert users of any system faults, and might be what alerted the user to a fault in the first place.
When a fault occurs, the Fault ID status LED on the enclosure right ear (see "Front panel components"
(page 13)) illuminates. Check the LEDs on the back of the enclosure to narrow the fault to a FRU,
connection, or both. The LEDs also help you identify the location of a FRU reporting a fault.
Use the SMU to verify any faults found while viewing the LEDs. The SMU is also a good tool to use in
determining where the fault is occurring if the LEDs cannot be viewed due to the location of the system. The
SMU provides you with a visual representation of the system and where the fault is occurring. It can also
provide more detailed information about FRUs, data, and faults.
Review the event logs
The event logs record all system events. Each event has a numeric code that identifies the type of event that
occurred, and has one of the following severities:
• Critical. A failure occurred that may cause a controller to shut down. Correct the problem immediately.
• Error. A failure occurred that may affect data integrity or system stability. Correct the problem as soon
as possible.
• Warning. A problem occurred that may affect system stability, but not data integrity. Evaluate the
problem and correct it if necessary.
• Informational. A configuration or state change occurred, or a problem occurred that the system
corrected. No immediate action is required.
See the HP MSA Event Descriptions Reference Guide for information about specific events, located at your
HP MSA 1040 manuals page: http://www.hp.com/support/msa1040/manuals
The event logs record all system events. It is very important to review the logs, not only to identify the fault,
but also to search for events that might have caused the fault to occur. For example, a host could lose
connectivity to a vdisk if a user changes channel settings without taking the storage resources assigned to
it into consideration. In addition, the type of fault can help you isolate the problem to either hardware or
software.
Isolate the fault
Occasionally it might become necessary to isolate a fault. This is particularly true with data paths, due to
the number of components comprising the data path. For example, if a host-side data error occurs, it could
be caused by any of the components in the data path: controller module, cable, connectors, or data host.
If the enclosure does not initialize
It may take up to two minutes for the enclosures to initialize. If the enclosure does not initialize:
• Perform a rescan.
• Power cycle the system.
• Make sure the power cord is properly connected, and check the power source that it is connected to.
• Check the event log for errors.
Correcting enclosure IDs
When installing a system with drive enclosures attached, the enclosure IDs might not agree with the
physical cabling order. This is because the controller might have been previously attached to some of the
same enclosures during factory testing, and it attempts to preserve the previous enclosure IDs if possible. To
correct this condition, make sure that both controllers are up, and perform a rescan using the SMU or the
CLI. This will reorder the enclosures, but can take up to two minutes for the enclosure IDs to be corrected.
To perform a rescan using the CLI, type the following command:
rescan
To rescan using the SMU:
1. Verify that both controllers are operating normally.
2. In the Configuration View panel, right-click the system and select Tools > Rescan Disk Channels.
3. Click Rescan.
Stopping I/O
When troubleshooting disk drive and connectivity faults, stop I/O to the affected vdisks from all hosts and
remote systems as a data protection precaution. As an additional data protection precaution, it is helpful to
conduct regularly scheduled backups of your data.
IMPORTANT: Stopping I/O to a vdisk is a host-side task, and falls outside the scope of this document.
When on-site, you can verify that there is no I/O activity by briefly monitoring the system LEDs; however,
when accessing the storage system remotely, this is not possible. Remotely, you can use the show
vdisk-statistics command to determine whether input and output have stopped. Perform these steps:
1. Using the CLI, run the show vdisk-statistics command.
The Number of Reads and Number of Writes outputs show the number of these operations that
have occurred since the statistic was last reset, or since the controller was restarted.
2. Run the show vdisk-statistics command a second time.
This provides you a specific window of time (the interval between requesting the statistics) to determine
if data is being written to or read from the disk.
3. If any reads or writes occur during this interval, a host is still reading from or writing to this vdisk.
Continue to stop I/O from hosts, and repeat step 1 until the Number of Reads and Number of
Writes statistics are zero.
See the HP MSA 1040 CLI Reference Guide for additional information, at your HP MSA 1040 manuals
page: http://www.hp.com/support/msa1040/manuals
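A hedged sketch of the two-sample check above, assuming the command accepts an optional vdisk name as described in the CLI Reference Guide (vd01 is a hypothetical name; omit it to report all vdisks):
# show vdisk-statistics vd01
# show vdisk-statistics vd01
If Number of Reads and Number of Writes are unchanged between the two samples, I/O to that vdisk has stopped.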
Diagnostic steps
This section describes possible reasons and actions to take when an LED indicates a fault condition during
initial system setup. See "LED descriptions" (page 67) for descriptions of all LED statuses.
NOTE: Once event notification is configured and enabled using the SMU, you can view event logs to
monitor the health of the system and its components using the GUI.
In addition to monitoring LEDs via line-of-sight observation of racked hardware components when
performing diagnostic steps, you can also monitor the health of the system and its components using the
management interfaces. Be mindful of this when reviewing the Actions column in the diagnostics tables,
and when reviewing the step procedures provided in this chapter.
Is the enclosure front panel Fault/Service Required LED amber?
Answer | Possible reasons | Actions
No | System functioning properly. | No action required.
Yes | A fault condition exists/occurred. If installing an I/O module FRU, the module has not gone online and likely failed its self-test. |
• Check the LEDs on the back of the controller enclosure to narrow the fault to a FRU, connection, or both.
• Check the event log for specific information regarding the fault; follow any Recommended Actions.
• If installing an IOM FRU, try removing and reinstalling the new IOM, and check the event log for errors.
• If the above actions do not resolve the fault, isolate the fault, and contact an authorized service provider for assistance. Replacement may be necessary.
Table 4 Diagnostics LED status: Front panel “Fault/Service Required”
Is the enclosure rear panel FRU OK LED off?
Answer | Possible reasons | Actions
No | System functioning properly. | No action required.
No (blinking) | System is booting. | Wait for system to boot.
Yes | The controller module is not powered on, or the controller module has failed. |
• Check that the controller module is fully inserted and latched in place, and that the enclosure is powered on.
• Check the event log for specific information regarding the failure.
Table 5 Diagnostics LED status: Rear panel “FRU OK”
Is the enclosure rear panel Fault/Service Required LED amber?
Answer | Possible reasons | Actions
No | System functioning properly. | No action required.
Yes (blinking) | One of the following errors occurred: hardware-controlled power-up error, cache flush error, or cache self-refresh error. |
• Restart this controller from the other controller using the SMU or the CLI.
• If the above action does not resolve the fault, remove the controller and reinsert it.
• If the above action does not resolve the fault, contact an authorized service provider for assistance. It may be necessary to replace the controller.
Table 6 Diagnostics LED status: Rear panel “Fault/Service Required”
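As a hedged illustration of the CLI restart mentioned in the table above, assuming the restart command accepts sc (Storage Controller) and a|b|both arguments as described in the CLI Reference Guide, restarting controller A from a CLI session on controller B might look like:
# restart sc a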
Are both disk drive module LEDs off (Online/Activity and Fault/UID)?
Answer | Possible reasons | Actions
Yes | There is no power, the disk is offline, or the disk is not configured. | Check that the disk drive is fully inserted and latched in place, and that the enclosure is powered on.
Table 7 Diagnostics LED status: Front panel disks “Online/Activity” and “Fault/UID”
Is the disk drive module Fault/UID LED blinking amber?
Answer | Possible reasons | Actions
No, but the Online/Activity LED is blinking. | The disk drive is rebuilding. | No action required. CAUTION: Do not remove a disk drive that is reconstructing. Removing a reconstructing disk drive might terminate the current operation and cause data loss.
Yes, and the Online/Activity LED is off. | The disk drive is offline. A predictive failure alert may have been received for this device. |
• Check the event log for specific information regarding the fault.
• Isolate the fault.
• Contact an authorized service provider for assistance.
Yes, and the Online/Activity LED is blinking. | The disk drive is active, but a predictive failure alert may have been received for this device. |
• Check the event log for specific information regarding the fault.
• Isolate the fault.
• Contact an authorized service provider for assistance.
Table 8 Diagnostics LED status: Front panel disks “Fault/UID”
Is a connected host port Host Link Status LED off?
Answer | Possible reasons | Actions
No | System functioning properly. (See Link LED note: page 72.) | No action required.
Yes | The link is down. |
• Check cable connections and reseat if necessary.
• Inspect cables for damage. Replace cable if necessary.
• Swap cables to determine if fault is caused by a defective cable. Replace cable if necessary.
• Verify that the switch, if any, is operating properly. If possible, test with another port.
• Verify that the HBA is fully seated, and that the PCI slot is powered on and operational.
• In the SMU, review event logs for indicators of a specific fault in a host data path component; follow any Recommended Actions.
• Contact an authorized service provider for assistance.
• See "Isolating a host-side connection fault" (page 52).
Table 9 Diagnostics LED status: Rear panel “Host Link Status”
Is a connected port Expansion Port Status LED off?
Answer | Possible reasons | Actions
No | System functioning properly. | No action required.
Yes | The link is down. |
• Check cable connections and reseat if necessary.
• Inspect cable for damage. Replace cable if necessary.
• Swap cables to determine if fault is caused by a defective cable. Replace cable if necessary.
• In the SMU, review event logs for indicators of a specific fault in a host data path component; follow any Recommended Actions.
• Contact an authorized service provider for assistance.
• See "Isolating a controller module expansion port connection fault" (page 53).
Table 10 Diagnostics LED status: Rear panel “Expansion Port Status”
Is a connected port Network Port Link Status LED off?
Answer | Possible reasons | Actions
No | System functioning properly. | No action required.
Yes | The link is down. | Use standard networking troubleshooting procedures to isolate faults on the network.
Table 11 Diagnostics LED status: Rear panel “Network Port Link Status”
Is the power supply Input Power Source LED off?
Answer | Possible reasons | Actions
No | System functioning properly. | No action required.
Yes | The power supply is not receiving adequate power. |
• Verify that the power cord is properly connected and check the power source to which it connects.
• Check that the power supply FRU is firmly locked into position.
• In the SMU, check the event log for specific information regarding the fault; follow any Recommended Actions.
• If the above action does not resolve the fault, isolate the fault, and contact an authorized service provider for assistance.
Table 12 Diagnostics LED status: Rear panel power supply “Input Power Source”
Is the power supply Voltage/Fan Fault/Service Required LED amber?
Answer | Possible reasons | Actions
No | System functioning properly. | No action required.
Yes | The power supply unit or a fan is operating at an unacceptable voltage/RPM level, or has failed. | When isolating faults in the power supply, remember that the fans in both modules receive power through a common bus on the midplane, so if a power supply unit fails, the fans continue to operate normally.
• Check that the power supply FRU is firmly locked into position.
• Check that the power cable is connected to a power source.
• Check that the power cable is connected to the power supply module.
Table 13 Diagnostics LED status: Rear panel power supply “Voltage/Fan Fault/Service Required”
Controller failure in a single-controller configuration
Cache memory is flushed to CompactFlash in the case of a controller failure or power loss. During the write
to CompactFlash process, only the components needed to write the cache to the CompactFlash are
powered by the supercapacitor. This process typically takes 60 seconds per 1 Gbyte of cache. After the
cache is copied to CompactFlash, the remaining power left in the supercapacitor is used to refresh the
cache memory. While the cache is being maintained by the supercapacitor, the Cache Status LED flashes
at a rate of 1/10 second on and 9/10 second off.
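For example, if 4 Gbytes of cached data must be preserved, the copy to CompactFlash would take roughly 4 × 60 seconds, or about four minutes, before the supercapacitor begins refreshing the cache.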
IMPORTANT: Transportable cache only applies to single-controller configurations. In dual controller
configurations, there is no need to transport cache from a failed controller to a replacement controller
because the cache is duplicated between the peer controllers (subject to volume write optimization setting).
If the controller has failed or does not start, is the Cache Status LED on/blinking?
Answer | Actions
No, the Cache LED status is off, and the controller does not boot. | If valid data is thought to be in Flash, see Transporting cache; otherwise, replace the controller module.
No, the Cache Status LED is off, and the controller boots. | The system has flushed data to disks. If the problem persists, replace the controller module.
Yes, at a strobe 1:10 rate - 1 Hz, and the controller does not boot. | See Transporting cache.
Yes, at a strobe 1:10 rate - 1 Hz, and the controller boots. | The system is flushing data to CompactFlash. If the problem persists, replace the controller module.
Yes, at a blink 1:1 rate - 1 Hz, and the controller does not boot. | See Transporting cache.
Yes, at a blink 1:1 rate - 1 Hz, and the controller boots. | The system is in self-refresh mode. If the problem persists, replace the controller module.
Table 14 Diagnostics LED status: Rear panel “Cache Status”
NOTE: See also "Cache Status LED details" (page 73).
Transporting cache
To preserve the existing data stored in the CompactFlash, you must transport the CompactFlash from the failed controller to a replacement controller using the procedure outlined in HP MSA Controller Module Replacement Instructions shipped with the replacement controller module. Failure to use this procedure will result in the loss of data stored in the cache module.
CAUTION: Remove the controller module only after the copy process is complete, which is indicated by the Cache Status LED being off, or blinking at 1:10 rate.
Isolating a host-side connection fault
During normal operation, when a controller module host port is connected to a data host, the port’s host
link status/link activity LED is green. If there is I/O activity, the LED blinks green. If data hosts are having
trouble accessing the storage system, and you cannot locate a specific fault or cannot access the event
logs, use the following procedure. This procedure requires scheduled downtime.
IMPORTANT: Do not perform more than one step at a time. Changing more than one variable at a time
can complicate the troubleshooting process.
Host-side connection troubleshooting featuring host ports with SFPs
The procedure below applies to MSA 1040 controller enclosures employing small form factor pluggable
(SFP) transceiver connectors (4/8 Gb FC, 10GbE iSCSI, or 1 Gb iSCSI) in host interface ports. In the
following procedure, “SFP and host cable” is used to refer to any of the qualified SFP options supporting
Converged Network Controller ports used for I/O or replication.
NOTE: When experiencing difficulty diagnosing performance problems, consider swapping out one SFP
at a time to see if performance improves.
1. Halt all I/O to the storage system as described in "Stopping I/O" (page 47).
2. Check the host link status/link activity LED.
If there is activity, halt all applications that access the storage system.
3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
• Solid – Cache contains data yet to be written to the disk.
• Blinking – Cache data is being written to CompactFlash.
• Flashing at 1/10 second on and 9/10 second off – Cache is being refreshed by the supercapacitor.
• Off – Cache is clean (no unwritten data).
4. Remove the SFP and host cable and inspect for damage.
5. Reseat the SFP and host cable.
Is the host link status/link activity LED on?
• Yes – Monitor the status to ensure that there is no intermittent error present. If the fault occurs again,
clean the connections to ensure that a dirty connector is not interfering with the data path.
• No – Proceed to the next step.
6. Move the SFP and host cable to a port with a known good link status.
This step isolates the problem to the external data path (SFP, host cable, and host-side devices) or to the
controller module port.
Is the host link status/link activity LED on?
• Yes – You now know that the SFP, host cable, and host-side devices are functioning properly. Return
the SFP and cable to the original port. If the link status/link activity LED remains off, you have
isolated the fault to the controller module port. Replace the controller module.
• No – Proceed to the next step.
7. Swap the SFP with the known good one.
Is the host link status/link activity LED on?
• Yes – You have isolated the fault to the SFP. Replace the SFP.
• No – Proceed to the next step.
8. Re-insert the original SFP and swap the cable with a known good one.
Is the host link status/link activity LED on?
• Yes – You have isolated the fault to the cable. Replace the cable.
• No – Proceed to the next step.
9. Verify that the switch, if any, is operating properly. If possible, test with another port.
10. Verify that the HBA is fully seated, and that the PCI slot is powered on and operational.
11. Replace the HBA with a known good HBA, or move the host side cable and SFP to a known good
HBA.
Is the host link status/link activity LED on?
• Yes – You have isolated the fault to the HBA. Replace the HBA.
• No – It is likely that the controller module needs to be replaced.
12. Move the cable and SFP back to its original port.
Is the host link status/link activity LED on?
• No – The controller module port has failed. Replace the controller module.
• Yes – Monitor the connection for a period of time. It may be an intermittent problem, which can
occur with damaged SFPs, cables, and HBAs.
Isolating a controller module expansion port connection fault
During normal operation, when a controller module expansion port is connected to a drive enclosure, the
expansion port status LED is green. If the connected port’s expansion port LED is off, the link is down. Use
the following procedure to isolate the fault.
This procedure requires scheduled downtime.
NOTE: Do not perform more than one step at a time. Changing more than one variable at a time can
complicate the troubleshooting process.
1. Halt all I/O to the storage system as described in "Stopping I/O" (page 47).
2. Check the host activity LED.
If there is activity, halt all applications that access the storage system.
3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
• Solid – Cache contains data yet to be written to the disk.
• Blinking – Cache data is being written to CompactFlash.
• Flashing at 1/10 second on and 9/10 second off – Cache is being refreshed by the supercapacitor.
• Off – Cache is clean (no unwritten data).
4. Reseat the expansion cable, and inspect it for damage.
Is the expansion port status LED on?
• Yes – Monitor the status to ensure there is no intermittent error present. If the fault occurs again,
clean the connections to ensure that a dirty connector is not interfering with the data path.
• No – Proceed to the next step.
5. Move the expansion cable to a port on the controller enclosure with a known good link status.
This step isolates the problem to the expansion cable or to the controller module expansion port.
Is the expansion port status LED on?
• Yes – You now know that the expansion cable is good. Return the cable to the original port. If the
expansion port status LED remains off, you have isolated the fault to the controller module
expansion port. Replace the controller module.
• No – Proceed to the next step.
6. Move the expansion cable back to the original port on the controller enclosure.
7. Move the expansion cable on the drive enclosure to a known good expansion port on the drive
enclosure.
Is the expansion port status LED on?
• Yes – You have isolated the problem to the drive enclosure port. Replace the expansion module.
• No – Proceed to the next step.
8. Replace the cable with a known good cable, ensuring the cable is attached to the original ports used
by the previous cable.
Is the host link status LED on?
• Yes – Replace the original cable. The fault has been isolated.
• No – It is likely that the controller module must be replaced.
Isolating Remote Snap replication faults
Cabling for replication
Remote Snap replication is a licensed feature for disaster-recovery. This feature performs asynchronous
(batch) replication of block-level data from a volume on a local storage system to a volume that can be on
the same system or a second, independent system. The second system can be located at the same site as
the first system, or at a different site. See "Connecting two storage systems to replicate volumes" (page 32)
for host connection information concerning Remote Snap.
Replication setup and verification
After storage systems and hosts are cabled for replication, you can use the Replication Setup Wizard in the
SMU to prepare to use the Remote Snap feature. Optionally, you can use telnet to access the IP address of
the controller module and access the Remote Snap feature using the CLI.
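For example, from a management host (the address shown is the hypothetical controller A address used earlier in this guide):
telnet 192.168.0.10
Sign in with the same credentials used for the SMU, then use the replication commands described in the CLI Reference Guide.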
NOTE: Refer to the following manuals for more information on replication setup:
• See HP Remote Snap technical white paper for replication best practices
• See HP MSA 1040 SMU Reference Guide for procedures to setup and manage replications
• See HP MSA 1040 CLI Reference Guide for replication commands and syntax
• See HP MSA Event Descriptions Reference Guide for replication event reporting
Basic information for enabling the MSA 1040 controller enclosures for replication supplements the
troubleshooting procedures that follow:
• Familiarize yourself with Remote Snap by reviewing the “Getting started” and “Using Remote Snap to
replicate volumes” chapters in the SMU Reference Guide.
• For best practices concerning replication-related tasks, see the technical white paper.
• Use Wizards > Replication Setup Wizard to prepare to replicate an existing volume to another vdisk in the secondary system.
Follow the wizard to select the primary volume, replication mode, and secondary volume, and to
confirm your replication settings. The wizard verifies the communication links between the primary and
secondary systems. Once setup is successfully completed, you can initiate replication from the SMU or
the CLI.
• For descriptions of replication-related events, see the Event Descriptions Reference Guide.
Table 15 Diagnostics for replication setup: Using Remote Snap feature
Verify licensing of the optional feature per system:
• In the Configuration View panel in the SMU, right-click the system, and select View > Overview. Within the System Overview table, select the Licensed Features component to display the status of licensed features.
• If the Replication feature is not enabled, obtain and install a valid license for Remote Snap.
Answer | Possible reasons | Actions
No | Compatible firmware revision supporting Remote Snap is not running on each system used for replication. | In the SMU, review event logs for indicators of a specific fault in a host or replication data path component.
No | Invalid cabling connection. (Check cabling for each system.) |
• Verify valid IP address of the network port on the remote system.
• In the Configuration View panel in the SMU, right-click the remote system, and select Tools > Check Remote System Link. Click Check Links.
• Remote Replication mode: In the Configuration View panel in the SMU, right-click the remote system, and select Tools > Check Remote System Link. Click Check Links to verify correct link type and remote host port-to-link connections.
• Local Replication mode: In the Configuration View panel in the SMU, right-click the local system, and select Tools > Check Local System Link. Click Check Links to verify correct link type and local host port-to-link connections.
Table 17 Diagnostics for replication setup: Creating a replication set
Answer | Possible reasons | Actions
No | Replication set creation fails due to use of CHAP on the host interface ports. | If using CHAP (Challenge-Handshake Authentication Protocol), configure it as described in the SMU topics “Using the Replication Setup Wizard” or “Replicating a volume.”
No | Unable to select the replication mode (Local or Remote)? |
• In the SMU, review event logs for indicators of a specific fault in a host or replication data path component. Follow any Recommended Actions.
• Local Replication mode replicates to a secondary volume residing in the local storage system:
• Verify valid links. On dual-controller systems, verify that A ports can access B ports on the partner controller, and vice versa.
• Verify existence of either a replication-prepared volume of the same size as the master volume, or a vdisk with sufficient unused capacity.
• Remote Replication mode replicates to a secondary volume residing in an independent storage system:
• Verify selection of valid remote vdisk.
• Verify selection of valid remote volume on the vdisk.
• Verify valid IP address of remote system network port.
• Verify user name with Manage role on remote system.
• Verify user password on remote system.
NOTE: If the remote system has not been added, it cannot be selected.
No | Unable to select the secondary volume (the destination volume on the vdisk to which you will replicate data from the primary volume)? |
• In the SMU, review event logs for indicators of a specific fault in a replication data path component. Follow any Recommended Actions.
• Verify valid specification of the secondary volume according to either of the following criteria:
• Creation of new volume on the vdisk
• Selection of replication-prepared volume
No | Communication link is down. | See actions described in "Can you view information about remote links?" (page 56).
Table 20 Diagnostics for replication setup: Viewing a remote system
Answer | Possible reasons | Actions
No | Communication link is down. | See actions described in "Can you view information about remote links?" (page 56).
Resolving voltage and temperature warnings
1. Check that all of the fans are working by making sure the Voltage/Fan Fault/Service Required LED on each power supply is off, or by using the SMU to check enclosure health status. In the Configuration View panel, right-click the enclosure and select View > Overview to view the health status of the enclosure and its components. The Enclosure Overview page enables you to see information about each enclosure and its physical components in front, rear, and tabular views—using graphical or tabular presentation—allowing you to view the health status of the enclosure and its components.
See "Options available for performing basic steps" (page 45) for a description of health status icons and alternatives for monitoring enclosure health.
2. Make sure that all modules are fully seated in their slots with latches locked.
3. Make sure that no slots are left open for more than two minutes.
If you need to replace a module, leave the old module in place until you have the replacement or use a blank module to fill the slot. Leaving a slot open negatively affects the airflow and can cause the enclosure to overheat.
4. Try replacing each power supply module one at a time.
5. Replace the controller modules one at a time.
6. Replace SFPs one at a time.
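If you prefer the CLI while working through these steps, a hedged sketch follows; the command name show sensor-status is an assumption to confirm in the CLI Reference Guide:
# show sensor-status
If available, this reports the voltage, temperature, and fan sensor readings discussed in the next section.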
Sensor locations
The storage system monitors conditions at different points within each enclosure to alert you to problems.
Power, cooling fan, temperature, and voltage sensors are located at key points in the enclosure. In each
controller module and expansion module, the enclosure management processor (EMP) monitors the status
of these sensors to perform SCSI enclosure services (SES) functions.
The following sections describe each element and its sensors.
Power supply sensors
Each enclosure has two fully redundant power supplies with load-sharing capabilities. The power supply
sensors described in the following table monitor the voltage, current, temperature, and fans in each power
supply. If the power supply sensors report a voltage that is under or over the threshold, check the input
voltage.
Table 21 Power supply sensor descriptions
Description | Event/Fault ID LED condition
Power supply 1 | Voltage, current, temperature, or fan fault
Power supply 2 | Voltage, current, temperature, or fan fault
Cooling fan sensors
Each power supply includes two fans. The normal range for fan speed is 4,000 to 6,000 RPM. When a
fan speed drops below 4,000 RPM, the EMP considers it a failure and posts an alarm in the storage system
event log. The following table lists the description, location, and alarm condition for each fan. If the fan
speed remains under the 4,000 RPM threshold, the internal enclosure temperature may continue to rise.
Replace the power supply reporting the fault.
Table 22 Cooling fan sensor descriptions
Description | Location | Event/Fault ID LED condition
Fan 1 | Power supply 1 | < 4,000 RPM
Fan 2 | Power supply 1 | < 4,000 RPM
Fan 3 | Power supply 2 | < 4,000 RPM
Fan 4 | Power supply 2 | < 4,000 RPM
During a shutdown, the cooling fans do not shut off. This allows the enclosure to continue cooling.
Temperature sensors
Extreme high and low temperatures can cause significant damage if they go unnoticed. Each controller
module has six temperature sensors. Of these, if the CPU or FPGA (Field Programmable Gate Array)
temperature reaches a shutdown value, the controller module is automatically shut down. Each power
supply has one temperature sensor.
When a temperature fault is reported, it must be remedied as quickly as possible to avoid system damage.
This can be done by warming or cooling the installation location.
Table 23 Controller module temperature sensor descriptions
Description | Normal operating range | Warning operating range | Critical operating range | Shutdown values
CPU temperature | 3°C–88°C | 0°C–3°C, 88°C–90°C | > 90°C | 0°C, 100°C
FPGA temperature | 3°C–97°C | 0°C–3°C, 97°C–100°C | None | 0°C, 105°C
Onboard temperature 1 | 0°C–70°C | None | None | None
Onboard temperature 2 | 0°C–70°C | None | None | None
Onboard temperature 3 (Capacitor temperature) | 0°C–70°C | None | None | None
CM temperature | 5°C–50°C | ≤ 5°C, ≥ 50°C | ≤ 0°C, ≥ 55°C | None
When a power supply sensor goes out of range, the Fault/ID LED illuminates amber and an event is
logged to the event log.
Table 24 Power supply temperature sensor descriptions
Description | Normal operating range
Power Supply 1 temperature | –10°C–80°C
Power Supply 2 temperature | –10°C–80°C
Power supply module voltage sensors
Power supply voltage sensors ensure that the enclosure power supply voltage is within normal ranges.
There are three voltage sensors per power supply.
Table 25 Voltage sensor descriptions
Sensor | Event/Fault LED condition
Power supply 1 voltage, 12V | < 11.00V or > 13.00V
Power supply 1 voltage, 5V | < 4.00V or > 6.00V
Power supply 1 voltage, 3.3V | < 3.00V or > 3.80V
8 Support and other resources
Contacting HP
For worldwide technical support information, see the HP support website:
http://www.hp.com/go/hpsc
Before contacting HP, collect the following information:
• Product model names and numbers
• Technical support registration number (if applicable)
• Product serial numbers
• Error messages
• Operating system type and revision level
• Detailed questions
Subscription service
HP recommends that you register your product at the Subscriber’s Choice for Business website:
http://www.hp.com/go/e-updates
After registering, you will receive e-mail notification of product enhancements, new driver versions,
firmware updates, and other product resources.
Product advisories
Sign up for proactive notifications to receive MSA product advisories. Applying the suggested resolutions
can enhance the availability of the product. Sign up for notifications at:
http://www.hp.com/go/myadvisory
Related information
The following user documents are available on the HP MSA 1040 manuals page at
http://www.hp.com/support/msa1040/manuals
• HP MSA System Racking Instructions
• HP MSA 1040 Quick Start Instructions
• HP MSA 1040 Cable Configuration Guide
• HP MSA 1040 SMU Reference Guide
• HP MSA 1040 CLI Reference Guide
• HP MSA Event Descriptions Reference Guide
From the support website, you can access additional manuals and information, including best practices,
guided-troubleshooting, and firmware downloads: http://www.hp.com/support/msa1040
• HP MSA 1040 QuickSpecs: http://www.hp.com/support/msa1040/QuickSpecs
• HP Safety and Compliance Information for Server, Storage, Power, Networking, and Rack Products:
• HP Systems Insight Manager website: http://www.hp.com/go/hpsim
• Single Point of Connectivity Knowledge (SPOCK) website: http://www.hp.com/storage/spock
• White papers and Analyst reports: http://www.hp.com/storage/whitepapers
Prerequisites
Knowledge of relevant topics is required for installing and using this product:
• Servers and computer networks
• Network administration
• Storage system installation and configuration
• Storage area network management
• Relevant protocols:
• Fibre Channel (FC)
• Internet SCSI (iSCSI)
• Ethernet
Troubleshooting resources
See Chapter 7 for simple troubleshooting procedures pertaining to initial setup of the controller enclosure
hardware. The chapter describes fault isolation methodology, basic fault isolation steps, and options
available for performing the basic steps using the Storage Management Utility (SMU), the Command-line
Interface (CLI), event notification, and LEDs. Diagnostics steps are also described.
For additional information see the HP MSA 1040 Guided Troubleshooting website:
Document conventions and symbols
Monospace text indicates system output, code, and commands with their arguments and argument values; monospace, italic text indicates command variables; and monospace, bold text indicates emphasized monospace text. Bold text indicates GUI elements that are clicked or selected, such as menu and list items, buttons, and check boxes.
WARNING! Indicates that failure to follow directions could result in bodily harm or death.
CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.
IMPORTANT: Provides clarifying information or specific instructions.
NOTE: Provides additional information.
TIP: Provides helpful hints and shortcuts.
Rack stability
Rack stability protects personnel and equipment.
WARNING!To reduce the risk of personal injury or damage to equipment:
• Extend leveling jacks to the floor.
• Ensure that the full weight of the rack rests on the leveling jacks.
• Install stabilizing feet on the rack.
• In multiple-rack installations, fasten racks together securely.
• Extend only one rack component at a time. Racks may become unstable if more than one component is
extended.
Customer self repair
HP customer self repair (CSR) programs allow you to repair your storage product. If a CSR part needs
replacing, HP ships the part directly to you so that you can install it at your convenience. Some parts do
not qualify for CSR. Your HP-authorized service provider will determine whether a repair can be
accomplished by CSR.
For more information about CSR, contact your local service provider. For North America, see the CSR
website:
http://www.hp.com/go/selfrepair
Product warranties
For information about HP storage product warranties, see the warranty information website:
http://www.hp.com/go/storagewarranty
9 Documentation feedback
HP is committed to providing documentation that meets your needs. To help us improve the documentation,
send any errors, suggestions, or comments to Documentation Feedback (docs.feedback@hp.com). Include
the document title and part number, version number, or the URL when submitting your feedback.
A LED descriptions
Front panel LEDs
HP MSA 1040 models support small form factor (SFF) and large form factor (LFF) enclosures. The SFF
chassis, configured with 24 2.5" SFF disks, is used as a controller enclosure. The LFF chassis, configured
with 12 3.5" LFF disks, is used as either a controller enclosure or drive enclosure.
Supported drive enclosures, used for adding storage, are available in LFF or SFF chassis. The MSA 2040
6 Gb 3.5" 12-drive enclosure is the large form factor drive enclosure used for storage expansion. The HP
D2700 6 Gb enclosure, configured with 25 2.5" SFF disks, is the small form factor drive enclosure used for
storage expansion. See "SFF drive enclosure" (page 16) for a description of the D2700.
MSA 1040 Array SFF enclosure
Figure 25 LEDs: MSA 1040 Array SFF enclosure front panel
LED | Description | Definition
1 | Enclosure ID | Green — On. Enables you to correlate the enclosure with logical views presented by management software. Sequential enclosure ID numbering of controller enclosures begins with the integer 1. The enclosure ID for an attached drive enclosure is nonzero.
5 | Heartbeat | Green — The enclosure is powered on with at least one power supply operating normally. Off — Both power supplies are off; the system is powered off.
6 | Fault ID | Amber — Fault condition exists. The event has been identified, but the problem needs attention. Off — No fault condition exists.
MSA 1040 Array LFF or supported 12-drive expansion enclosure
LED | Description | Definition
1 | Enclosure ID | Green — On. Enables you to correlate the enclosure with logical views presented by management software. Sequential enclosure ID numbering of controller enclosures begins with the integer 1. The enclosure ID for an attached drive enclosure is nonzero.
The diagram and table below display and identify important component items comprising the rear panel
layout of the MSA 1040 controller enclosure. Diagrams and tables on the following pages further describe
rear panel LED behavior for component field-replaceable units.
1 AC Power supplies [see Figure 31 (page 73)]
2 Controller module A [see Figure 29 (page 71)]
3 Controller module B [see Figure 29 (page 71)]
4 Host ports: used for host connection or replication
5 CLI port (USB - Type B)
6 Service port 2 (used by service personnel only)
7 Reserved for future use
8 Network port
9 Service port 1 (used by service personnel only)
10 Disabled button (used by engineering only)
(Stickers shown covering the openings)
11 SAS expansion port
12 DC Power supply (2) — (DC model only)
13 DC Power switch [see Figure 31 (page 73)]
Figure 28 MSA 1040 Array: rear panel
A controller enclosure accommodates two power supply FRUs of the same type—either both AC or both
DC—within the two power supply slots (see two instances of callout 1 above). The controller enclosure
accommodates two controller module FRUs of the same type within the I/O module slots (see callouts 2
and 3 above).
IMPORTANT: If the MSA 1040 controller enclosure is configured with a single controller module, the
controller module must be installed in the upper slot (see callout 2 above), and an I/O module blank must
be installed in the lower slot (see callout 3 above). This configuration is required to allow sufficient air flow
through the enclosure during operation.
The diagrams with tables that immediately follow provide descriptions of the different controller modules
and power supply modules that can be installed into the rear panel of an MSA 1040 controller enclosure.
The controller module for your product is pre-configured with the appropriate SFP for the selected host
interface protocol. Showing controller modules and power supply modules separately from the enclosure
provides improved clarity in identifying the component items called out in the diagrams and described in
the tables.
Descriptions are also provided for optional drive enclosures supported by MSA 1040 controller enclosures
for expanding storage capacity.
MSA 1040 controller module—rear panel LEDs
LED | Description | Definition
1 | Host 4/8 Gb FC Link Status/Link Activity (note 1) | Off — No link detected. Green — The port is connected and the link is up. Blinking green — The link has I/O or replication activity.
2 | Host 10GbE iSCSI Link Status/Link Activity (notes 2, 3) | Off — No link detected. Green — The port is connected and the link is up. Blinking green — The link has I/O or replication activity.
3 | Network Port Link Active Status (note 4) | Off — The Ethernet link is not established, or the link is down. Green — The Ethernet link is up (applies to all negotiated link speeds).
4 | Network Port Link Speed (note 4) | Off — Link is up at 10/100base-T negotiated speeds. Amber — Link is up and negotiated at 1000base-T.
5 | OK to Remove | Off — The controller module is not prepared for removal. Blue — The controller module is prepared for removal.
6 | Unit Locator | Off — Normal operation. Blinking white — Physically identifies the controller module.
7 | FRU OK | Off — Controller module is not OK. Blinking green — System is booting. Green — Controller module is operating normally.
8 | Fault/Service Required | Amber — A fault has been detected or a service action is required. Blinking amber — Hardware-controlled power-up or a cache flush or restore error.
9 | Cache Status | Green — Cache contains unwritten data and operation is normal. The unwritten information can be log or debug data that remains in the cache, so a Green cache status LED does not, by itself, indicate that any user data is at risk or that any action is necessary. Off — In a working controller, cache is clean (contains no unwritten data). This is an occasional condition that occurs while the system is booting. Blinking green — A CompactFlash flush or cache self-refresh is in progress, indicating cache activity. See also Cache Status LED details.
10 | Expansion Port Status | Off — The port is empty or the link is down. On — The port is connected and the link is up.
1 When in FC mode, the SFPs must be a qualified 8 Gb fibre optic option described in QuickSpecs. An 8 Gbit/s SFP can run at 8 Gbit/s, 4 Gbit/s, or auto-negotiate its link speed.
2 When in 10GbE iSCSI mode, the SFPs must be a qualified 10GbE iSCSI optic option as described in QuickSpecs.
3 When powering up and booting, iSCSI LEDs will be on/blinking momentarily, then they will switch to the mode of operation.
4 When port is down, both LEDs are off.
Figure 29 LEDs: MSA 1040 controller module (equipped with either FC or 10GbE iSCSI SFPs)
LED | Description | Definition
1 | Not used in example (note 1) |
2 | Host 1 Gb iSCSI Link Status/Link Activity (notes 2, 3) | Off — No link detected. Green — The port is connected and the link is up; or the link has I/O or replication activity.
3 | Network Port Link Active Status (note 4) | Off — The Ethernet link is not established, or the link is down. Green — The Ethernet link is up (applies to all negotiated link speeds).
4 | Network Port Link Speed (note 4) | Off — Link is up at 10/100base-T negotiated speeds. Amber — Link is up and negotiated at 1000base-T.
5 | OK to Remove | Off — The controller module is not prepared for removal. Blue — The controller module is prepared for removal.
6 | Unit Locator | Off — Normal operation. Blinking white — Physically identifies the controller module.
7 | FRU OK | Off — Controller module is not OK. Blinking green — System is booting. Green — Controller module is operating normally.
8 | Fault/Service Required | Amber — A fault has been detected or a service action is required. Blinking amber — Hardware-controlled power-up or a cache flush or restore error.
9 | Cache Status | Green — Cache contains unwritten data and operation is normal. The unwritten information can be log or debug data that remains in the cache, so a Green cache status LED does not, by itself, indicate that any user data is at risk or that any action is necessary. Off — In a working controller, cache is clean (contains no unwritten data). This is an occasional condition that occurs while the system is booting. Blinking green — A CompactFlash flush or cache self-refresh is in progress, indicating cache activity. See also Cache Status LED details.
10 | Expansion Port Status | Off — The port is empty or the link is down. On — The port is connected and the link is up.
1 The FC SFP is not shown in this example [see Figure 29 (page 71)]. When in FC mode, the SFPs must be a qualified 8 Gb fibre optic option described in QuickSpecs.
2 When in 1 Gb iSCSI mode, the SFPs must be a qualified RJ-45 iSCSI option as described in QuickSpecs. The 1 Gb iSCSI mode does not support an iSCSI optic option.
3 When powering up and booting, iSCSI LEDs will be on/blinking momentarily, then they will switch to the mode of operation.
4 When port is down, both LEDs are off.
NOTE: Once a Link Status LED is lit, it remains so, even if the controller is shutdown via the SMU or CLI. When a controller is shutdown or otherwise rendered inactive—its Link Status LED remains illuminated—falsely indicating that the controller can communicate with the host. Though a link exists between the host and the chip on the controller, the controller is not communicating with the chip. To reset the LED, the controller must be properly power-cycled [see "Powering on/powering off" (page 25)].
Cache Status LED details
If the LED is blinking evenly, a cache flush is in progress. When a controller module loses power and write
cache contains data that has not been written to disk, the supercapacitor pack provides backup power to
flush (copy) data from write cache to CompactFlash memory. When cache flush is complete, the cache
transitions into self-refresh mode.
If the LED is blinking slowly, the cache is in self-refresh mode. In self-refresh mode, if primary
power is restored before the backup power is depleted (3–30 minutes, depending on various factors), the
system boots, finds data preserved in cache, and writes it to disk. This means the system can be
operational within 30 seconds, and before the typical host I/O time-out of 60 seconds, at which point
system failure would cause host-application failure. If primary power is restored after the backup power is
depleted, the system boots and restores data to cache from CompactFlash, which can take about 90
seconds.
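To put these timings in perspective, the following sketch compares the two recovery paths against the typical 60-second host I/O time-out. It is illustrative only: the constants are the figures quoted in this section, while the function and variable names are our own and are not part of any HP tool.

# Illustrative sketch of the recovery timelines described above. The timing
# constants are the figures quoted in this section; the code is not part of
# any MSA firmware or management tool.

HOST_IO_TIMEOUT_S = 60               # typical host I/O time-out
RESTART_FROM_CACHE_S = 30            # system operational again while cache data survives
RESTORE_FROM_COMPACTFLASH_S = 90     # approximate restore time after backup power is depleted
BACKUP_POWER_WINDOW_S = 3 * 60       # conservative lower bound of the 3-30 minute self-refresh window


def recovery_time(outage_s: float) -> float:
    """Approximate time to resume I/O after primary power returns."""
    if outage_s <= BACKUP_POWER_WINDOW_S:
        # Cache contents survived in self-refresh mode: short restart.
        return RESTART_FROM_CACHE_S
    # Backup power assumed depleted: data is restored to cache from CompactFlash.
    return RESTORE_FROM_COMPACTFLASH_S


for outage in (30, 2 * 60, 45 * 60):
    t = recovery_time(outage)
    print(f"outage of {outage} s -> I/O resumes in about {t} s "
          f"(host time-out is {HOST_IO_TIMEOUT_S} s)")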
The cache flush and self-refresh mechanism is an important data protection feature; essentially, four copies of user data are preserved: one in controller cache and one in CompactFlash of each controller.
The Cache Status LED illuminates solid green during the boot-up process. This behavior indicates the cache is logging all POSTs, which will be flushed to the CompactFlash the next time the controller shuts down.
CAUTION: If the Cache Status LED illuminates solid green and you wish to shut down the controller, do so from the user interface so that unwritten data can be flushed to CompactFlash.
Power supply LEDs
Power redundancy is achieved through two independent load-sharing power supplies. In the event of a
power supply failure, or the failure of the power source, the storage system can operate continuously on a
single power supply. Greater redundancy can be achieved by connecting the power supplies to separate
circuits. DC power supplies are equipped with a power switch. AC power supplies may or may not have a
power switch (model shown below has no power switch). Whether a power supply has a power switch is
significant to powering on/off. Power supplies are used by controller and drive enclosures.
Figure 31 LEDs: MSA 1040 Storage system enclosure power supply modules (AC model and DC model)

LED  Description  Definition
1  Input Source Power Good
   Green — Power is on and input voltage is normal.
   Off — Power is off or input voltage is below the minimum threshold.
2  Voltage/Fan Fault/Service Required
   Amber — Output voltage is out of range or a fan is operating below the minimum required RPM.
   Off — Output voltage is normal.
NOTE: See "Powering on/powering off" (page 25) for information on power-cycling enclosures.
MSA 1040 controllers support the MSA 2040 6 Gb 3.5" 12-drive enclosure. The front panel of the drive
enclosure looks identical to that of an MSA 1040 Array LFF. The rear panel of the drive enclosure is shown
below.
The MSA 2040 6 Gb 3.5" 12-drive enclosure is identical in outward physical appearance to the legacy
P2000 G3 6 Gb 3.5" 12-drive enclosure. For information about upgrading P2000 G3 components for use
with MSA 1040 controllers, see "Upgrading to MSA 2040" (page 17).
The rear panel LEDs of the expansion module report the following states:
Unit Locator
   Blinking white — Physically identifies the expansion module.
Fault/Service Required
   Blinking amber — Hardware-controlled power-up or a cache flush or restore error.
FRU OK
   Blinking green — System is booting.
   Off — Expansion module is not OK.
SAS In Port Status
   Off — Port is empty or link is down.
SAS Out Port Status
   Off — Port is empty or link is down.

D2700 6Gb drive enclosure
MSA 1040 controllers support D2700 6Gb drive enclosures. For information about the D2700, visit http://www.hp.com/support. Pictorial representations of this drive enclosure are also provided in the MSA 1040 Quick Start Instructions and MSA 1040 Cable Configuration Guide.
B Environmental requirements and specifications
Safety requirements
Install the system in accordance with the local safety codes and regulations at the facility site. Follow all
cautions and instructions marked on the equipment. Also, refer to the documentation included with your
product ship kit.
Site requirements and guidelines
The following sections provide requirements and guidelines that you must address when preparing your site
for the installation.
When selecting an installation site for the system, choose a location not subject to excessive heat, direct
sunlight, dust, or chemical exposure. These conditions greatly reduce the system’s longevity and might void
your warranty.
Site wiring and AC power requirements
The following are required for all installations using AC power supplies:
• All AC mains and supply conductors to power distribution boxes for the rack-mounted system must be
enclosed in a metal conduit or raceway when specified by local, national, or other applicable
government codes and regulations.
• Ensure that the voltage and frequency of your power source match the voltage and frequency inscribed
on the equipment’s electrical rating label.
• To ensure redundancy, provide two separate power sources for the enclosures. These power sources
must be independent of each other, and each must be controlled by a separate circuit breaker at the
power distribution point.
• The system requires voltages with minimal fluctuation. The customer-supplied facilities’ voltage must maintain a voltage with not more than ± 5 percent fluctuation. The customer facilities must also provide suitable surge protection. (A worked example of the ± 5 percent window follows this list.)
• Site wiring must include an earth ground connection to the AC power source. The supply conductors and power distribution boxes (or equivalent metal enclosure) must be grounded at both ends.
• Power circuits and associated circuit breakers must provide sufficient power and overload protection. To prevent possible damage to the AC power distribution boxes and other components in the rack, use an external, independent power source that is isolated from large switching loads (such as air conditioning motors, elevator motors, and factory loads).
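As a quick way to apply the ± 5 percent fluctuation requirement above, the following sketch computes the acceptable voltage window for a nominal supply voltage. The nominal values used are examples only; the calculation itself is simple arithmetic.

# Sketch: allowed AC voltage window for the +/- 5 percent fluctuation
# requirement stated above. Nominal voltages below are examples only.

FLUCTUATION = 0.05  # +/- 5 percent


def allowed_window(nominal_v: float) -> tuple[float, float]:
    """Return the (minimum, maximum) voltage permitted at the facility."""
    return nominal_v * (1 - FLUCTUATION), nominal_v * (1 + FLUCTUATION)


for nominal in (120.0, 230.0):
    lo, hi = allowed_window(nominal)
    print(f"{nominal:.0f} VAC nominal -> acceptable range {lo:.1f}-{hi:.1f} VAC")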
NOTE: For power requirements, see QuickSpecs: http://www.hp.com/support/msa1040/QuickSpecs.
Site wiring and DC power requirements
The following are required for all installations using DC power supplies:
• All DC mains and supply conductors to power distribution boxes for the rack-mounted system must
comply with local, national, or other applicable government codes and regulations.
• Ensure that the voltage of your power source matches the voltage inscribed on the equipment’s
electrical label.
• To ensure redundancy, provide two separate power sources for the enclosures. These power sources
must be independent of each other, and each must be controlled by a separate circuit breaker at the
power distribution point.
• The system requires voltages within minimum fluctuation. The customer-supplied facilities’ voltage must
maintain a voltage within the range specified on the equipment’s electrical rating label. The customer
facilities must also provide suitable surge protection.
• Site wiring must include an earth ground connection to the DC power source. Grounding must comply
with local, national, or other applicable government codes and regulations.
• Power circuits and associated circuit breakers must provide sufficient power and overload protection.
Weight and placement guidelines
Refer to "Physical requirements" (page 77) for detailed size and weight specifications.
• The weight of an enclosure depends on the number and type of modules installed.
• Ideally, use two people to lift an enclosure. However, one person can safely lift an enclosure if its
weight is reduced by removing the power supply modules and disk drive modules.
• Do not place enclosures in a vertical position. Always install and operate the enclosures in a
horizontal/level orientation.
• When installing enclosures in a rack, make sure that any surfaces over which you might move the rack
can support the weight. To prevent accidents when moving equipment, especially on sloped loading
docks and up ramps to raised floors, ensure you have a sufficient number of helpers. Remove obstacles
such as cables and other objects from the floor.
• To prevent the rack from tipping, and to minimize personnel injury in the event of a seismic occurrence,
securely anchor the rack to a wall or other rigid structure that is attached to both the floor and to the
ceiling of the room.
Electrical guidelines
• These enclosures work with single-phase power systems having an earth ground connection. To reduce
the risk of electric shock, do not plug an enclosure into any other type of power system. Contact your
facilities manager or a qualified electrician if you are not sure what type of power is supplied to your
building.
• Enclosures are shipped with a grounding-type (three-wire) power cord. To reduce the risk of electric
shock, always plug the cord into a grounded power outlet.
• Do not use household extension cords with the enclosures. Not all power cords have the same current
ratings. Household extension cords do not have overload protection and are not meant for use with
computer systems.
Ventilation requirements
Refer to "Environmental requirements" (page 78) for detailed environmental requirements.
• Do not block or cover ventilation openings at the front and rear of an enclosure. Never place an
enclosure near a radiator or heating vent. Failure to follow these guidelines can cause overheating and
affect the reliability and warranty of your enclosure.
• Leave a minimum of 15 cm (6 inches) at the front and back of each enclosure to ensure adequate
airflow for cooling. No cooling clearance is required on the sides, top, or bottom of enclosures.
• Leave enough space in front and in back of an enclosure to allow access to enclosure components for
servicing. Removing a component requires a clearance of at least 37 cm (15 inches) in front of and
behind the enclosure.
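If it helps with rack and aisle planning, the following sketch combines the clearance figures above with a chassis depth from Table 27 (later in this appendix) to estimate the front-to-back space an enclosure occupies. The function name and the example depth are illustrative assumptions, not additional requirements.

# Sketch: front-to-back space needed for an enclosure, combining the chassis
# depth (see Table 27) with the clearances listed above.

AIRFLOW_CLEARANCE_CM = 15   # minimum at front and back for cooling airflow
SERVICE_CLEARANCE_CM = 37   # needed in front of and behind the enclosure for servicing


def footprint_depth(chassis_depth_cm: float, servicing: bool = True) -> float:
    """Total front-to-back depth including clearance on both faces."""
    clearance = SERVICE_CLEARANCE_CM if servicing else AIRFLOW_CLEARANCE_CM
    return chassis_depth_cm + 2 * clearance


# Example: LFF (2U12) chassis depth of 60.2 cm from Table 27.
print(f"Operating clearance: {footprint_depth(60.2, servicing=False):.1f} cm")
print(f"Servicing clearance: {footprint_depth(60.2):.1f} cm")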
Cabling requirements
• Keep power and interface cables clear of foot traffic. Route cables in locations that protect the cables
from damage.
• Route interface cables away from motors and other sources of magnetic or radio frequency
interference.
• Stay within the cable length limitations.
Management host requirements
A local management host with at least one USB Type B port connection is recommended for the initial
installation and configuration of a controller enclosure. After you configure one or both of the controller
modules with an Internet Protocol (IP) address, you then use a remote management host on an Ethernet
network to configure, manage, and monitor.
NOTE: Connections to this device must be made with shielded cables–grounded at both ends–with
metallic RFI/EMI connector hoods, in order to maintain compliance with NEBS and FCC Rules and
Regulations.
Physical requirements
The floor space at the installation site must be strong enough to support the combined weight of the rack,
controller enclosures, drive enclosures (expansion), and any additional equipment. The site also requires
sufficient space for installation, operation, and servicing of the enclosures, together with sufficient
ventilation to allow a free flow of air to all enclosures.
Table 27 and Table 28 list enclosure dimensions and weights. Weights are based on an enclosure having a full complement of disk drives, two controller or expansion modules, and two power supplies installed. “2U12” denotes the LFF enclosure (12 disks) and “2U24” denotes the SFF enclosure (24 disks).
Table 28 provides weight data for MSA 1040 controller enclosures and select drive enclosures. For information about other HP MSA drive enclosures that may be cabled to these systems (i.e., D2700), check QuickSpecs: http://www.hp.com/support/msa1040/QuickSpecs.

Table 27 Rackmount enclosure dimensions

Specifications                                    Rackmount
2U Height (y-axis)                                8.9 cm (3.5 inches)
Width (x-axis):
• Chassis only                                    44.7 cm (17.6 inches)
• Chassis with bezel ear caps                     47.9 cm (18.9 inches)
Depth (z-axis), SFF drive enclosure (2U24):
• Back of chassis ear to controller latch         50.5 cm (19.9 inches)
• Front of chassis ear to back of cable bend      57.9 cm (22.8 inches)
Depth (z-axis), LFF drive enclosure (2U12):
• Back of chassis ear to controller latch         60.2 cm (23.7 inches)
• Front of chassis ear to back of cable bend      67.1 cm (26.4 inches)

Table 28 Rackmount enclosure weights
Specifications                                    Rackmount
MSA 1040 Array SFF enclosure                      8.6 kg (19.0 lb) [chassis]
• Chassis with FRUs (no disks)¹,²                 19.9 kg (44.0 lb)
• Chassis with FRUs (including disks)¹,³          25.4 kg (56.0 lb)
MSA 1040 Array LFF enclosure                      9.9 kg (22.0 lb) [chassis]
• Chassis with FRUs (no disks)¹,²                 21.3 kg (47.0 lb)
• Chassis with FRUs (including disks)¹,³          30.8 kg (68.0 lb)
MSA 2040 or P2000 6 Gb 3.5" drive enclosure       9.9 kg (22.0 lb) [chassis]
• Chassis with FRUs (no disks)¹,²                 21.3 kg (47.0 lb)
• Chassis with FRUs (including disks)¹,³          30.8 kg (68.0 lb)

¹ Weights shown are nominal, and subject to variances.
² Weights may vary due to different power supplies, IOMs, and differing calibrations between scales.
³ Weights may vary due to actual number and type of disk drives (SAS or SSD) installed.
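For floor-loading estimates, the fully populated weights in Table 28 can be totaled for a planned configuration. The sketch below is illustrative only: the configuration shown and the rounded pound conversion are assumptions, and the weight of the rack itself and any other installed equipment must still be added.

# Sketch: estimating combined enclosure weight for floor-loading checks,
# using the fully populated weights from Table 28. The example configuration
# (one SFF controller enclosure plus two LFF drive enclosures) is illustrative.

WEIGHT_KG = {
    "MSA 1040 SFF controller enclosure (with disks)": 25.4,
    "MSA 1040 LFF controller enclosure (with disks)": 30.8,
    "MSA 2040/P2000 LFF drive enclosure (with disks)": 30.8,
}

rack = [
    ("MSA 1040 SFF controller enclosure (with disks)", 1),
    ("MSA 2040/P2000 LFF drive enclosure (with disks)", 2),
]

total_kg = sum(WEIGHT_KG[name] * qty for name, qty in rack)
print(f"Enclosure weight only: {total_kg:.1f} kg ({total_kg * 2.205:.0f} lb)")
# 2.205 lb per kg is an approximate conversion; add rack and other equipment weight.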
Environmental requirements
NOTE: For operating and non-operating environmental technical specifications, see QuickSpecs:
http://www.hp.com/support/msa1040/QuickSpecs.
Electrical requirements
Site wiring and power requirements
Each enclosure has two power supply modules for redundancy. If full redundancy is required, use a
separate power source for each module. The AC power supply unit in each power supply module is
auto-ranging and is automatically configured to an input voltage range from 88–264 VAC with an input
frequency of 47–63 Hz. The power supply modules meet standard voltage requirements for both U.S. and
international operation. The power supply modules use standard industrial wiring with line-to-neutral or
line-to-line power connections.
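As a simple sanity check against the auto-ranging input window stated above (88 to 264 VAC, 47 to 63 Hz), the following sketch tests example facility feeds. The function and example values are illustrative assumptions, not part of any HP tooling.

# Sketch: checking a facility power feed against the auto-ranging power
# supply input window stated above (88-264 VAC, 47-63 Hz).

VOLTAGE_RANGE_VAC = (88.0, 264.0)
FREQUENCY_RANGE_HZ = (47.0, 63.0)


def feed_is_acceptable(volts: float, hertz: float) -> bool:
    """Return True if the feed is inside the power supply's input window."""
    v_ok = VOLTAGE_RANGE_VAC[0] <= volts <= VOLTAGE_RANGE_VAC[1]
    f_ok = FREQUENCY_RANGE_HZ[0] <= hertz <= FREQUENCY_RANGE_HZ[1]
    return v_ok and f_ok


# Example feeds: a 230 V/50 Hz circuit and a 120 V/60 Hz circuit both qualify.
for v, f in ((230.0, 50.0), (120.0, 60.0), (277.0, 60.0)):
    print(f"{v:.0f} VAC @ {f:.0f} Hz -> acceptable: {feed_is_acceptable(v, f)}")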
Power cord requirements
Each enclosure is equipped with two power supplies of the same type (both AC or both DC). For
enclosures equipped with AC power supply modules, use two power cords that are appropriate for use in
a typical outlet in the destination country. Whether using AC or DC power supplies, each power cable
connects one of the power supplies to an independent, external power source. To ensure power
redundancy, connect the two suitable power cords to two separate circuits; for example, to one commercial
circuit and one uninterruptible power source (UPS).
IMPORTANT: See QuickSpecs for information about power cables provided with your MSA 1040
Storage product.
C Electrostatic discharge
Preventing electrostatic discharge
To prevent damaging the system, be aware of the precautions you need to follow when setting up the
system or handling parts. A discharge of static electricity from a finger or other conductor may damage
system boards or other static-sensitive devices. This type of damage may reduce the life expectancy of the
device.
To prevent electrostatic damage:
• Avoid hand contact by transporting and storing products in static-safe containers.
• Keep electrostatic-sensitive parts in their containers until they arrive at static-protected workstations.
• Place parts in a static-protected area before removing them from their containers.
• Avoid touching pins, leads, or circuitry.
• Always be properly grounded when touching a static-sensitive component or assembly.
Grounding methods to prevent electrostatic discharge
Several methods are used for grounding. Use one or more of the following methods when handling or
installing electrostatic-sensitive parts:
• Use a wrist strap connected by a ground cord to a grounded workstation or computer chassis. Wrist
straps are flexible straps with a minimum of 1 megohm (± 10 percent) resistance in the ground cords.
To provide proper ground, wear the strap snug against the skin.
• Use heel straps, toe straps, or boot straps at standing workstations. Wear the straps on both feet when
standing on conductive floors or dissipating floor mats.
• Use conductive field service tools.
• Use a portable field service kit with a folding static-dissipating work mat.
If you do not have any of the suggested equipment for proper grounding, have an authorized reseller
install the part. For more information on static electricity or assistance with product installation, contact an
authorized reseller.
Index

cables
   10GbE iSCSI
   1Gb iSCSI
   Ethernet
   FCC compliance statement
   Fibre Channel
   routing requirements
   shielded
   USB for CLI
cabling
   connecting controller and drive enclosures
   direct attach configurations
   switch attach configurations
   to enable Remote Snap replication
cache
   read ahead
   self-refresh mode
   write-through
clearance requirements
   service
   ventilation
command-line interface (CLI)
   connecting USB cable to CLI port
   using to set controller IP addresses
CompactFlash
   card 17
   transporting
components
   MSA 1040
      enclosure front panel
         LFF enclosure
         SFF enclosure
      enclosure rear panel
         AC power supply
         CLI port (USB - Type B)
         DC power supply
         DC power switch
         host ports
         mini-SAS expansion port
         network port
         reserved port
         SAS expansion port
         service port 1
         service port 2
      supported drive enclosures
         LFF drive enclosure
         SFF drive enclosure
configuring
   direct attach configurations
   switch attach configurations
connections
   verify 25
console requirement
controller enclosures
   connecting to data hosts
   connecting to remote management hosts

D
data hosts
   defined
   optional software
   system requirements
DHCP
   server
disk drive
   slot numbering
      LFF enclosure
      SFF enclosure

E
electromagnetic compatibility (EMC) 75
electrostatic discharge
   grounding methods
   precautions
enclosure
   cabling
   dimensions
   IDs, correcting
   input frequency requirement
   input voltage requirement
   installation checklist
   site requirements
   troubleshooting
   web-browser based configuring and provisioning
   weight
Ethernet cables
   requirements

F
faults
   isolating
      expansion port connection fault
      host-side connection
      methodology 45

H
host interface ports
   FC host interface protocol
      loop topology
      point-to-point protocol
   iSCSI host interface protocol
      1 Gb
      10GbE
      mutual CHAP
   SFP transceivers
hosts
   defined
   stopping I/O
HP
   customer self-repair (CSR)
   product warranty

I
IDs, correcting for enclosure 47
installing enclosures
   installation checklist
IP addresses
   setting using CLI
   setting using DHCP

L
LEDs
   disk drives
   enclosure front panel
      Enclosure ID
      Fault ID
      Heartbeat
      Unit Identification (UID)
   enclosure rear panel
      MSA 1040
         10GbE iSCSI Host Link Status/Link Activity 71
         1Gb iSCSI Host Link Status/Link Activity 72
         Cache Status
         Expansion Port Status
         Fault/Service Required
         FC Host Link Status/Link Activity
         FRU OK
         Network Port Link Active
         Network Port Link Speed
         OK to Remove
         Unit Locator
      power supply unit
         Input Source Power Good
         Voltage/Fan Fault/Service Required
      supported drive enclosures (expansion)
         LFF enclosure rear panel
            Fault/Service Required
            FRU OK
            OK to Remove
            power supply
            SAS In Port Status
            SAS Out Port Status
            Unit Locator
local management host requirement

P
physical requirements 77
power cord requirements
power cycle
   power off
   power on
power supply
   AC power requirements
   DC power requirements
   site wiring requirements

R
regulatory compliance
   notices
   shielded cables
requirements
   cabling
   clearance
   Ethernet cables
   host system
   physical
   ventilation
RFI/EMI connector hoods

S
safety precautions 75
sensors
   cooling fan
   locating
   power supply
   temperature
   voltage
site planning
   EMC
   local management host requirement
   physical requirements
   safety precautions
SMU
   accessing web-based management interface
   defined
   getting started
   Remote Snap replication
   storage system configuring and provisioning
storage system setup
   configuring
   provisioning
   replicating
supercapacitor pack

T
troubleshooting 45
   controller failure, single controller configuration 51
   correcting enclosure IDs 47
   enclosure does not initialize 47
   expansion port connection fault 53
   host-side connection fault
   Remote Snap replication faults
   using event notification
   using system LEDs
   using the CLI
   using the SMU

V
ventilation requirements 76

W
warnings
   rack stability
   voltage and temperature