This document describes initial hardware setup for HPE MSA 1050 controller enclosures, and is intended for use by
storage system administrators familiar with servers and computer networks, network administration, storage system
installation and configuration, storage area network management, and relevant protocols.
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not
be liable for technical or editorial errors or omissions contained herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial
Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial
license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is not responsible for information
outside the Hewlett Packard Enterprise website.
Acknowledgments
Microsoft® and Windows® are U.S. trademarks of the Microsoft group of companies.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Java and Oracle are registered trademarks of Oracle and/or its affiliates.
UNIX® is a registered trademark of The Open Group.
HPE MSA Storage models are high-performance storage solutions combining outstanding performance with high
reliability, availability, flexibility, and manageability. MSA 1050 enclosure models blend economy with utility for scalable
storage applications.
MSA 1050 Storage models
The MSA 1050 enclosures support large form factor (LFF 12-disk) and small form factor (SFF 24-disk) 2U chassis, using
AC power supplies. The MSA 1050 controllers are introduced below.
NOTE: For additional information about MSA 1050 controller modules, see the following subsections:
The MSA 1050 enclosures support virtual storage. For virtual storage, a group of disks with an assigned RAID level is
called a virtual disk group. This guide uses the term disk group for brevity.
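A disk group can also be created from the CLI. A minimal sketch, assuming the add disk-group command available in this CLI family (the disk list, RAID level, and pool shown are illustrative; see the CLI Reference Guide for exact syntax):
# add disk-group type virtual disks 1.1-1.4 level raid5 pool a    (illustrative parameters)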
MSA 1050 enclosure user interfaces
The MSA 1050 enclosures support the Storage Management Utility (SMU), which is a web-based application for
configuring, monitoring, and managing the storage system. Both the SMU and the command-line interface (CLI) are
briefly described.
•The SMU is the primary web interface to manage virtual storage.
•The CLI enables you to interact with the storage system using command syntax entered via the keyboard or
scripting.
NOTE: For more information about the SMU, see the SMU Reference Guide or online help. For more information about
the CLI, see the CLI Reference Guide. See also “Related MSA documentation” (page 9).
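For example, after logging in to the CLI you can inspect current settings with commands such as the following (both commands are covered in the CLI Reference Guide; output is omitted here):
# show network-parameters
# show ports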
MSA 1050 controllers
The MSA 1050 controller enclosures are pre-configured at the factory to support one of these host interface protocols:
•8 Gb FC
•1 GbE iSCSI
•10 GbE iSCSI
•12 Gb HD mini-SAS
For FC and iSCSI host interfaces, the small form-factor pluggable (SFP transceiver or SFP) connector supporting the
pre-configured host interface protocol is pre-installed in the controller module. MSA 1050 controller enclosures do not
allow you to change the host interface protocol or increase speeds. Always use qualified SFP connectors and cables for
supporting the host interface protocol as described in the QuickSpecs. See also “Product QuickSpecs” (page 9).
For the HD mini-SAS host interface, both standard and fan-out cables are supported for host connection. Always use
qualified SAS cable options for supporting the host interface protocol as described in QuickSpecs. Host connection for
this controller module is described by cabling diagrams in “Connecting hosts”. Connection information for the SAS
fan-out cable options is provided in “SAS fan-out cable option”.
TIP: See the topic about configuring host ports within the SMU Reference Guide.
Features and benefits
Product features and supported options are subject to change. Online documentation describes the latest product and
product family characteristics, including currently supported features, options, technical specifications, configuration
data, related optional software, and product warranty information.
Product QuickSpecs
Check the QuickSpecs for a complete list of supported servers, operating systems, disk drives, and options. See
www.hpe.com/support/MSA1050QuickSpecs. (If a website location has changed, an Internet search for
“HPE MSA 1050 quickspecs” will provide a link.)
Related MSA documentation
Related support information is provided in the “Support and other resources” chapter. Firmware-related MSA
documentation titles directly pertaining to this guide are provided in the table below.
Table 1 Related MSA firmware documentation
For information about using the Storage Management Utility (SMU) web interface to configure and manage the product, see the HPE MSA 1050/2050 SMU Reference Guide.
For information about using the command-line interface (CLI) to configure and manage the product, see the HPE MSA 1050/2050 CLI Reference Guide.
For information about event codes and recommended actions, see the HPE MSA Event Descriptions Reference Guide.
To access the above MSA documentation, see the Hewlett Packard Enterprise Information Library:
www.hpe.com/support/msa1050
NOTE: The table above provides complete titles of MSA firmware documents used with this guide. Within this guide,
references to the documents listed are abbreviated: the HPE MSA 1050/2050 SMU Reference Guide is referred to as the SMU Reference Guide, and the HPE MSA 1050/2050 CLI Reference Guide as the CLI Reference Guide.
Front panel components
HPE MSA 1050 models support small form factor (SFF) and large form factor (LFF) enclosures. The SFF chassis,
configured with 24 2.5" SFF disks, and the LFF chassis, configured with 12 3.5" LFF disks, are used as either controller
enclosures or drive enclosures.
Supported drive enclosures, used for adding storage, are available in LFF or SFF chassis. The MSA 2050 LFF Disk
Enclosure is the large form factor drive enclosure and the MSA 2050 SFF Disk Enclosure is the small form factor drive
enclosure used for storage expansion.
HPE MSA 1050 models use either an enclosure bezel or traditional ear covers. The 2U bezel assembly is comprised of left
and right ear covers connected to the bezel body subassembly. A sample bezel is shown below.
Figure 1 Bezel used with MSA 1050 enclosures: front panel
The front panel illustrations that follow show the enclosures with the bezel removed, revealing ear flanges and disk drive
modules. Two sleeves protruding from the backside of each ear cover component of the bezel assembly push-fit onto the
two ball studs shown on each ear flange to secure the bezel. Remove the bezel to access the front panel components.
TIP: See “Enclosure bezel” (page 58) for bezel attachment and removal instructions, and pictorial views.
MSA 1050 Array SFF or supported 24-drive expansion enclosure
Figure 2 MSA 1050 Array SFF or supported 24-drive expansion enclosure: front panel
1 Enclosure ID LED
2 Disk drive Online/Activity LED
3 Disk drive Fault/UID LED
4 Unit Identification (UID) LED
5 Heartbeat LED
6 Fault ID LED
MSA 1050 Array LFF or supported 12-drive expansion enclosure
Figure 3 MSA 1050 Array LFF or supported 12-drive expansion enclosure: front panel
1 Enclosure ID LED
2 Disk drive Online/Activity LED
3 Disk drive Fault/UID LED
4 Unit Identification (UID) LED
5 Heartbeat LED
6 Fault ID LED
Notes:
Integers on disks indicate drive slot numbering sequence.
The enlarged detail view at right shows LED icons from the bezel that correspond to the chassis LEDs; the detail view locator circle identifies the ear kit that connects to LED light pipes in the bezel (or ear cover).
Each ear flange has two ball studs for attaching the bezel or ear covers.
NOTE: Either the bezel or the ear covers should be attached to the enclosure front panel to protect ear circuitry.
You can attach either the enclosure bezel or traditional ear covers to the enclosure front panel to protect the ears, and
provide label identification for the chassis LEDs. The bezel and the ear covers use the same attachment mechanism,
consisting of mounting sleeves on the cover back face:
•The enclosure bezel is introduced in Figure 1 (page 10).
•The ear covers are introduced in Figure 22 (page 58).
•The ball studs to which the bezel or ear covers attach are labeled in Figure 2 (page 10) and Figure 3 (page 11).
•Enclosure bezel alignment for attachment to the enclosure front panel is shown in Figure 21 (page 58).
•The sleeves that push-fit onto the ball studs to secure the bezel or ear covers are shown in Figure 22 (page 58).
Disk drives used in MSA 1050 enclosures
MSA 1050 enclosures support LFF/SFF Midline SAS and LFF/SFF Enterprise SAS disks, and LFF/SFF SSDs. For information
about creating disk groups and adding spares using these different disk drive types, see the SMU Reference Guide.
NOTE: In addition to the front views of SFF and LFF disk modules shown in the figures above, see Figure 26 (page 61)
for pictorial views.
The diagram and table below display and identify important component items comprising the rear panel layout of the
MSA 1050 controller enclosure. An enclosure configured with SFPs is shown.
Figure 4 MSA 1050 Array: rear panel
1 AC power supplies
2 Controller module A (see face plate detail figures)
3 Controller module B (see face plate detail figures)
A controller enclosure accommodates a power supply FRU in each of the two power supply slots (see two instances of
callout 1 above). The controller enclosure accommodates two controller module FRUs of the same type within the I/O
module slots (see callouts 2 and 3 above).
IMPORTANT: MSA 1050 controller enclosures support dual-controller only. If a partner controller fails, the array will fail
over and run on a single controller until redundancy is restored. A controller module must be installed in each IOM slot to
ensure sufficient airflow through the enclosure during operation.
The diagrams with tables that immediately follow provide descriptions of the different controller modules and power
supply modules that can be installed into the rear panel of an MSA 1050 controller enclosure. Showing controller modules
and power supply modules separately from the enclosure provides improved clarity in identifying the component items
called out in the diagrams and described in the tables. Descriptions are also provided for optional drive enclosures
supported by MSA 1050 controller enclosures for expanding storage capacity.
NOTE: MSA 1050 controller enclosures support hot-plug replacement of redundant controller modules, fans, power
supplies, disk drives, and I/O modules. Hot-add of drive enclosures is also supported.
MSA 1050 controller module—rear panel components
Figure 5 shows an 8 Gb FC or 10GbE iSCSI controller module. The SFPs look identical. Refer to the LEDs that apply to the
specific configuration of your host interface ports (FC LEDs or 10GbE iSCSI LEDs).
Figure 5 MSA 1050 controller module face plate (FC or 10GbE iSCSI)
Figure 6 shows a 1 Gb iSCSI (RJ-45) controller module; all host ports use 1 Gb RJ-45 SFPs in that figure (1 Gb iSCSI LEDs).
The face plate callouts below apply to both figures:
1 Host ports: used for host connection or replication (for more information about host port configuration, see the topic about configuring host ports within the SMU Reference Guide)
2 CLI port (USB - Type B)
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network management port
6 Service port 1 (used by service personnel only)
7 Disabled button (used by engineering only; sticker shown covering the opening)
8 SAS expansion port
Figure 7 MSA 1050 SAS controller module face plate (HD mini-SAS)
1 HD mini-SAS ports: used for host connection
2 CLI port (USB - Type B)
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network management port
IMPORTANT: See Connecting to the controller CLI port for information about enabling the controller enclosure USB
Type–B CLI port for accessing the CLI to perform initial configuration tasks.
Drive enclosures
Drive enclosure expansion modules attach to MSA 1050 controller modules via the mini-SAS expansion port, allowing addition
of disk drives to the system. MSA 1050 controller enclosures support adding the 6 Gb drive enclosures described below.
LFF and SFF drive enclosure — rear panel layout
MSA 1050 controllers support the MSA 2050 LFF Disk Enclosure and the MSA 2050 SFF Disk Enclosure, which share the
same rear panel layout, as shown below.
Cache
To enable faster data access from disk storage, the following types of caching are performed:
•Write-back caching. The controller writes user data in the cache memory on the module rather than directly to the
drives. Later, when the storage system is either idle or aging—and continuing to receive new I/O data—the controller
writes the data to the drive array.
•Read-ahead caching. The controller detects sequential array access, reads ahead into the next sequence of data, and
stores the data in the read-ahead cache. Then, if the next read access is for cached data, the controller immediately
loads the data into the system memory, avoiding the latency of a disk access.
NOTE: See the SMU Reference Guide for more information about volume cache options.
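The cache settings can also be inspected from the CLI. A hedged sketch, assuming the show cache-parameters command carried over from related MSA CLI releases (confirm the command name in the CLI Reference Guide):
# show cache-parameters    (command name assumed from related MSA CLI releases)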
Transportable CompactFlash
During a power loss or array controller failure, data stored in cache is saved off to non-volatile memory (CompactFlash).
The data is then written to disk after the issue is corrected. To protect against writing incomplete data to disk, the image
stored on the CompactFlash is verified before committing to disk.
The CompactFlash memory card is located at the midplane-facing end of the controller module as shown below.
Figure 9 MSA 1050 CompactFlash memory card
If one controller fails, then later another controller fails or does not start, and the Cache Status LED is on or blinking, the
CompactFlash will need to be transported to a replacement controller to recover data not flushed to disk (see “Controller
failure” (page 45) for more information).
CAUTION: The CompactFlash memory card should only be removed for transportable purposes. To preserve the
existing data stored in the CompactFlash, you must transport the CompactFlash from the failed controller to the
replacement controller using a procedure outlined in the HPE MSA Controller Module Replacement Instructions shipped
with the replacement controller module. Failure to use this procedure will result in the loss of data stored in the cache
module.
IMPORTANT: In dual controller configurations featuring one healthy partner controller, there is no need to transport
failed controller cache to a replacement controller because the cache is duplicated between the controllers, provided that
volume cache is set to standard on all volumes in the pool owned by the failed controller.
Supercapacitor pack
To protect RAID controller cache in case of power failure, MSA 1050 controllers are equipped with supercapacitor
technology, in conjunction with CompactFlash memory, built into each controller module to provide extended cache
memory backup time. The supercapacitor pack provides energy for backing up unwritten data in the write cache to the
CompactFlash in the event of a power failure. Unwritten data in CompactFlash memory is automatically committed to
disk media when power is restored. While the cache is being maintained by the supercapacitor, the Cache Status LED
flashes at a rate of 1/10 second on and 9/10 second off.
Upgrading to the MSA 1050
For information about upgrading components for use with MSA controllers, see Upgrading to the HPE MSA
1050/2050/2052.
3 Installing the enclosures
Installation checklist
The following table outlines the steps required to install the enclosures and initially configure the system. To ensure a
successful installation, perform the tasks in the order they are presented.
Table 2 Installation checklist
Step / Task / Where to find procedure
1. Install the controller enclosure and optional drive enclosures in the rack, and attach the bezel or ear caps.
2. Connect the controller enclosure and LFF/SFF drive enclosures. See “Connecting controller and drive enclosures” (page 17).
3. Connect power cords. See the quick start instructions.
The remaining checklist steps point to these procedures:
•If using the optional Remote Snap feature, also see “Connecting two storage systems to replicate volumes” (page 28).
•See “Obtaining IP values” (page 33) and “Connecting to the controller CLI port” (page 32); with Linux and Windows topics.
•Sign in to the web-based Storage Management Utility (SMU).1 See “Getting Started” in the HPE MSA 1050/2050 SMU Reference Guide.
•Initially configure and provision the storage system using the SMU. See “Configuring the System” and “Provisioning the System” topics (SMU Reference Guide or online help).
1 The SMU is introduced in “Accessing the SMU” (page 38). See the SMU Reference Guide or online help for additional information.
Connecting controller and drive enclosures
MSA 1050 controller enclosures support up to four enclosures (including the controller enclosure). You can cable drive
enclosures of the same type or of mixed LFF/SFF model type.
The firmware supports both straight-through and fault-tolerant SAS cabling. Fault-tolerant cabling allows any drive
enclosure to fail—or be removed—while maintaining access to other enclosures. Straight-through cabling does not
provide the same level of fault-tolerance as fault-tolerant cabling, but does provide some performance benefits as well as
ensuring that all disks are visible to the array. Fault tolerance and performance requirements determine whether to
optimize the configuration for high availability or high performance when cabling. MSA 1050 controller enclosures
support 12 Gb/s disk drives downshifted to 6 Gb/s. Each enclosure has an expansion port using 6 Gb/s SAS lanes. When
connecting multiple drive enclosures, use fault-tolerant cabling to ensure the highest level of fault tolerance.
For example, the illustration on the left in Figure 11 (page 20) shows controller module 1A connected to expansion
module 2A, with a chain of connections cascading down (blue). Controller module 1B is connected to the lower
expansion module (4B) of the last drive enclosure, with connections moving in the opposite direction (green).
Connecting the MSA 1050 controller to the LFF or SFF drive enclosure
The MSA 2050 LFF Disk Enclosure and the MSA 2050 SFF Disk Enclosure can be attached to an MSA 1050 controller
enclosure using supported mini-SAS to mini-SAS cables of 0.5 m (1.64') to 2 m (6.56') length [see Figure 10 (page 20)].
Each drive enclosure provides two 0.5 m (1.64') mini-SAS to mini-SAS cables. Longer cables may be desired or required,
and can be purchased separately.
Cable requirements for MSA 1050 enclosures
IMPORTANT:
•When installing SAS cables to expansion modules, use only supported mini-SAS x4 cables with SFF-8088 connectors
supporting your 6 Gb application.
•See the QuickSpecs for information about which cables are provided with your MSA 1050 products.
www.hpe.com/support/MSA1050QuickSpecs
(If a website location has changed, an Internet search for “HPE MSA 1050 quickspecs” will provide a link.)
•The maximum expansion cable length allowed in any configuration is 2 m (6.56').
•When adding more than two drive enclosures, you may need to purchase additional 1 m or 2 m cables, depending
upon number of enclosures and cabling method used (see QuickSpecs for supported cables):
Spanning 3 or 4 drive enclosures requires 1 m (3.28') cables.
•See the QuickSpecs (link provided above) regarding information about cables supported for host connection:
Qualified Fibre Channel cable options
Qualified 10GbE iSCSI cable options or qualified 10GbE Direct Attach Copper (DAC) cables
Qualified 1 Gb RJ-45 cable options
Qualified HD mini-SAS standard cable and fan-out cable options supporting SFF-8644 and SFF-8088 host
connection [also see “12 Gb SAS protocol” (page 25)]:
–SFF-8644 to SFF-8644 cable option is used for connecting to a 12 Gb/s enabled host.
–SFF-8644 to SFF-8088 cable option is used for connecting to a 6 Gb/s enabled host/switch.
–A bifurcated SFF-8644 to SFF-8644 fan-out cable option is used for connecting to a 12Gb/s enabled host.
–A bifurcated SFF-8644 to SFF-8088 fan-out cable option is used for connecting to a 6Gb/s enabled
host/switch.
NOTE: Using fan-out cables instead of standard cables will double the number of hosts that can be attached to
a single system. Use of fan-out cables will halve the maximum bandwidth available to each host, but overall
bandwidth available to all hosts is unchanged.
See SAS fan-out cable option for more information about bifurcated SAS cables.
For additional information concerning cabling of MSA 1050 controllers, see the HPE MSA 1050 Cable Configuration Guide, available from the Hewlett Packard Enterprise Information Library: www.hpe.com/support/msa1050
NOTE: For clarity, the schematic illustrations of controller and expansion modules shown in this section provide only
relevant details such as expansion ports within the module face plate outline. For detailed illustrations showing all
components, see “Controller enclosure—rear panel layout” (page 12).
IMPORTANT: MSA 1050 controller enclosures support dual-controller only. If a partner controller fails, the array will fail
over and run on a single controller until the redundancy is restored. A controller module must be installed in each IOM
slot to ensure sufficient airflow through the enclosure during operation.
Figure 10 Cabling connections between the MSA 1050 controller and a single drive enclosure
(The cabling diagrams label controller modules A and B, expansion-module In and Out ports 1A/1B through 4A/4B, and LFF 12-drive or SFF 24-drive enclosures; Figure 11 shows fault-tolerant cabling at left and straight-through cabling at right.)
Figure 11 Cabling connections between MSA 1050 controllers and LFF and SFF drive enclosures
The diagram at left (above) shows fault-tolerant cabling of a dual-controller enclosure cabled to either the
MSA 2050 LFF Disk Enclosure or the MSA 2050 SFF Disk Enclosure featuring dual-expansion modules. Controller
module 1A is connected to expansion module 2A, with a chain of connections cascading down (blue). Controller module
1B is connected to the lower expansion module (4B), of the last drive enclosure, with connections moving in the opposite
direction (green). Fault-tolerant cabling allows any drive enclosure to fail—or be removed—while maintaining access to
other enclosures.
The diagram at right (above) shows the same storage components connected using straight-through cabling. Using this
method, if a drive enclosure fails, the enclosures that follow the failed enclosure in the chain are no longer accessible
until the failed enclosure is repaired or replaced.
Figure 11 (page 20) provides example diagrams reflecting fault-tolerant (left) and straight-through (right) cabling for the
maximum number of supported MSA 1050 enclosures (four enclosures including the controller enclosure).
IMPORTANT: For comprehensive configuration options and associated illustrations, refer to the HPE MSA 1050 Cable
Configuration Guide.
Testing enclosure connections
NOTE: Once the power-on sequence for enclosures succeeds, the storage system is ready to be connected to hosts, as
described in “Connecting the enclosure to data hosts” (page 23).
Powering on/powering off
Before powering on the enclosure for the first time:
•Install all disk drives in the enclosure so the controller can identify and configure them at power-up.
•Connect the cables and power cords to the enclosures as explained in the quick start instructions.
NOTE: Power supplies used in MSA 1050 enclosures
The MSA 1050 controller enclosures and drive enclosures are equipped with AC power supplies that do not have power
switches (they are switchless). They power on when connected to a power source, and they power off when disconnected.
•When powering up, make sure to power up the enclosures and associated host in the following order:
Drive enclosures first
This ensures that disks in each drive enclosure have enough time to completely spin up before being scanned by
the controller modules within the controller enclosure.
While enclosures power up, their LEDs blink. After the LEDs stop blinking—if no LEDs on the front and back of
the enclosure are amber—the power-on sequence is complete, and no faults have been detected. See “LED
descriptions” (page 58) for descriptions of LED behavior.
Controller enclosure next
Depending upon the number and type of disks in the system, it may take several minutes for the system to
become ready.
Hosts last (if powered down for maintenance purposes)
TIP: When powering off, you will reverse the order of steps used for powering on.
IMPORTANT: See “Power cord requirements” (page 72) and the QuickSpecs for more information about power cords
supported by MSA 1050 enclosures.
AC power supply
Enclosures equipped with switchless power supplies rely on the power cord for power cycling. Connecting the cord from
the power supply power cord connector to the appropriate power source facilitates power on, whereas disconnecting the
cord from the power source facilitates power off.
Figure 12 AC power supply
AC power cycle
To power on the system:
1. Obtain a suitable AC power cord for each AC power supply that will connect to a power source.
2. Plug the power cord into the power cord connector on the back of the drive enclosure (see Figure 12). Plug the other
end of the power cord into the rack power source. Wait several seconds to allow the disks to spin up.
Repeat this sequence for each power supply within each drive enclosure.
3. Plug the power cord into the power cord connector on the back of the controller enclosure (see Figure 12). Plug the
other end of the power cord into the rack power source.
Repeat the sequence for the controller enclosure’s other switchless power supply.
To power off the system:
1. Stop all I/O from hosts to the system [see “Stopping I/O” (page 41)].
2. Shut down both controllers using either method described below:
Use the SMU to shut down both controllers, as described in the online help and web-posted HPE MSA 1050/2050
SMU Reference Guide.
Use the CLI to shut down both controllers, as described in the HPE MSA 1050/2050 CLI Reference Guide.
Proceed to step 3.
3. Disconnect the power cord female plug from the power cord connector on the power supply module.
Perform this step for each power supply module (controller enclosure first, followed by drive enclosures).
4 Connecting hosts
Host system requirements
Data hosts connected to HPE MSA 1050 arrays must meet requirements described herein. Depending on your system
configuration, data host operating systems may require that multi-pathing is supported.
If fault-tolerance is required, then multi-pathing software may be required. Host-based multi-path software should be
used in any configuration where two logical paths between the host and any storage volume may exist at the same time.
This would include most configurations where there are multiple connections to the host or multiple connections
between a switch and the storage.
•Use native Microsoft MPIO DSM support with Windows Server 2016 and Windows Server 2012. Use either the
Server Manager or the command-line interface (mpclaim CLI tool) to perform the installation. Refer to the following
web sites for information about using Windows native MPIO DSM:
http://support.microsoft.com
http://technet.microsoft.com (search the site for “multipath I/O overview”)
•Use the HPE Multi-path Device Mapper for Linux Software with Linux servers. To download the appropriate device
mapper multi-path enablement kit for your specific enterprise Linux operating system, go to
www.hpe.com/storage/spock
Connecting the enclosure to data hosts
A host identifies an external port to which the storage system is attached. The external port may be a port in an I/O
adapter (such as an FC HBA) in a server. Cable connections vary depending on configuration. Common cable
configurations are shown in this section. A list of supported configurations is available on the Hewlett Packard Enterprise
site at: www.hpe.com/support/msa1050.
Also see these MSA 1050 documents:
•HPE MSA 1050 Quick Start Instructions
•HPE MSA 1050 Cable Configuration Guide
These documents provide installation details and describe supported direct attach, switch-connect, and storage
expansion configuration options for MSA 1050 products. For specific information about qualified host cabling options,
see “Cable requirements for MSA 1050 enclosures” (page 18).
MSA 1050 Storage host interface protocols
The small form-factor pluggable (SFP transceiver or SFP) connectors used in pre-configured host ports of FC and iSCSI
MSA 1050 models are further described in the subsections below. Also see “MSA 1050 Storage models” (page 8) for more
information concerning use of these host ports.
NOTE: MSA 1050 FC and iSCSI controllers support the optionally-licensed Remote Snap replication feature. Remote
Snap supports FC and iSCSI host interface protocols for replication. Use the SMU or CLI commands to create and view
replication sets.
MSA 1050 SAS models use high-density mini-SAS (Serial Attached SCSI) interface protocol for host connection. These
models do not support Remote Snap replication.
Fibre Channel protocol
The MSA 1050 controller enclosures support two controller modules using the Fibre Channel interface protocol for host
connection. Each controller module provides two host ports designed for use with an FC SFP supporting data rates up to
8 Gb/s. MSA 1050 FC controllers can also be cabled to support the optionally-licensed Remote Snap replication feature
via the FC ports.
The MSA 1050 FC controllers support Fibre Channel Arbitrated Loop (public or private) or point-to-point topologies.
Loop protocol can be used in a physical loop or in a direct connection between two devices. Point-to-point protocol is
used to connect to a fabric switch. Point-to-point protocol can also be used for direct connection. See the
set host-parameters command within the CLI Reference Guide for command syntax and details about connection mode
parameter settings relative to supported link speeds.
Fibre Channel ports are used in either of two capacities:
•To connect two storage systems through a Fibre Channel switch for use of Remote Snap replication.
•For attachment to FC hosts directly, or through a switch used for the FC traffic.
The first usage option requires valid licensing for the Remote Snap replication feature, whereas the second option
requires that the host computer supports FC and, optionally, multipath I/O.
TIP: Use the SMU Configuration Wizard to set FC port speed. Within the SMU Reference Guide, see “Using the
Configuration Wizard” and scroll to FC port options. Use the set host-parameters CLI command to set FC port options,
and use the show ports CLI command to view information about host ports.
10GbE iSCSI protocol
The MSA 1050 controller enclosures support two controller modules using the Internet SCSI interface protocol for host
connection. Each controller module provides two host ports designed for use with a 10GbE iSCSI SFP or approved DAC
cable supporting data rates up to 10 Gb/s, using either one-way or mutual CHAP (Challenge-Handshake Authentication
Protocol).
TIP: See the topics about configuring CHAP, and CHAP and replication in the SMU Reference Guide.
TIP: Use the SMU Configuration Wizard to set iSCSI port options. Within the SMU Reference Guide, see “Using the
Configuration Wizard” and scroll to iSCSI port options. Use the set host-parameters CLI command to set iSCSI port
options, and use the show ports CLI command to view information about host ports.
The 10GbE iSCSI ports are used in either of two capacities:
•To connect two storage systems through a switch for use of Remote Snap replication.
•For attachment to 10GbE iSCSI hosts directly, or through a switch used for the 10GbE iSCSI traffic.
The first usage option requires valid licensing for the Remote Snap replication feature, whereas the second option
requires that the host computer supports Ethernet, iSCSI, and, optionally, multipath I/O.
1 Gb iSCSI protocol
The MSA 1050 controller enclosures support two controller modules using the Internet SCSI interface protocol for host
port connection. Each controller module provides two iSCSI host ports configured with an RJ-45 SFP supporting data
rates up to 1 Gb/s, using either one-way or mutual CHAP.
TIP: See the topics about configuring CHAP, and CHAP and replication in the SMU Reference Guide.
TIP: Use the SMU Configuration Wizard to set iSCSI port options. Within the SMU Reference Guide, see “Using the
Configuration Wizard” and scroll to iSCSI port options. Use the set host-parameters CLI command to set iSCSI port
options, and use the show ports CLI command to view information about host ports.
The 1 Gb iSCSI ports are used in either of two capacities:
•To connect two storage systems through a switch for use of Remote Snap replication.
•For attachment to 1 Gb iSCSI hosts directly, or through a switch used for the 1 Gb iSCSI traffic.
The first usage option requires valid licensing for the Remote Snap replication feature, whereas the second option
requires that the host computer supports Ethernet, iSCSI, and, optionally, multipath I/O.
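To illustrate the show ports and set host-parameters commands referenced in the TIPs above, a hedged sketch follows; the parameter names and values shown for set host-parameters are illustrative assumptions, so confirm the exact syntax in the CLI Reference Guide:
# show ports
# set host-parameters speed 8g ports a1,a2    (illustrative parameters; see the CLI Reference Guide)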
12 Gb SAS protocol
The MSA 1050 SAS controller enclosures support two controller modules using the Serial Attached SCSI (Small Computer
System Interface) protocol for host connection. Each enclosure provides two controller modules using dual SFF-8644 HD
mini-SAS host ports supporting data rates up to 12 Gb/s. HD mini-SAS host interface ports connect to hosts; they are not
used for replication. Host ports can be configured via management interfaces to use standard cables or fan-out cables.
Host connection configurations
The MSA 1050 controller enclosures support up to four direct-connect server connections, two per controller module.
Connect appropriate cables from the server HBAs to the controller host ports as described below, and shown in the
following illustrations.
NOTE: Not all operating systems support direct-connect. For more information, see the Single Point of Connectivity
Knowledge (SPOCK) Storage compatibility matrix: www.hpe.com/storage/spock
To connect the MSA 1050 controller to a server or switch—using FC SFPs in controller ports—select Fibre Channel cables
supporting 8 Gb data rates, that are compatible with the host port SFP connector (see the QuickSpecs). Such cables are
also used for connecting a local storage system to a remote storage system via a switch, to facilitate use of the optional
Remote Snap replication feature.
To connect the MSA 1050 controller to a server or switch—using 10GbE iSCSI SFPs or approved DAC cables in controller
ports—select the appropriate qualified 10GbE SFP option (see the QuickSpecs). Such cables are also used for connecting
a local storage system to a remote storage system via a switch, to facilitate use of the optional Remote Snap replication
feature.
To connect the MSA 1050 controller to a server or switch—using the 1 Gb SFPs in controller ports—select the appropriate
qualified RJ-45 SFP option (see the QuickSpecs). Such cables are also used for connecting a local storage system to a
remote storage system via a switch, to facilitate use of the optional Remote Snap replication feature.
To connect the MSA 1050 SAS controller supporting HD mini-SAS host interface ports to a server HBA or switch—using
the controller’s SFF-8644 dual HD mini-SAS host ports—select a qualified HD mini-SAS cable option (see QuickSpecs). A
qualified SFF-8644 to SFF-8644 cable option is used for connecting to a 12 Gb/s host; whereas a qualified SFF-8644 to
SFF-8088 cable option is used for connecting to a 6 Gb/s host. Management interfaces distinguish between standard
(dual cable with single connector at each end) and fan-out SAS cables. The fan-out SAS cable is comprised of a single
SFF-8644 connector that branches into two cable segments, each of which is terminated by a connector. The terminating
connectors attach to the host or switch, and are either both of type SFF-8644 or both of type SFF-8088. The storage system must be
cabled using either standard cables or fan-out cables; a mixture of cable types is not supported.
IMPORTANT: Before attaching a fan-out cable, make sure to update firmware for the SAS HBA for devices that will be
attached to the fan-out cable.
See the SMU Reference Guide or CLI Reference Guide for more information about the fan-out setting and changing of
host-interface settings for MSA 1050 controller modules. See “SAS fan-out cable option” (page 74) for more information
about fan-out cable options.
Connecting direct attach configurations
MSA 1050 controller enclosures support dual-controller only. If a partner controller fails, the array will fail over and run on
a single controller until the redundancy is restored. A controller module must be installed in each IOM slot to ensure
sufficient airflow through the enclosure during operation.
NOTE: The MSA 1050 diagrams that follow use a single representation for FC or iSCSI host interface protocols. This is
due to the fact that the port locations and labeling are identical for each of the three possible SFPs supported by the
storage system. Within each host connection cabling category, the HD mini-SAS model is shown beneath the SFP model.
One server/one HBA/dual path
Figure 13 Connecting hosts: direct attach—one server/one HBA/dual path
Two servers/one HBA per server/dual path
Figure 14 Connecting hosts: direct attach—two servers/one HBA per server/dual path
Figure 14 (page 26) includes host connection of an enclosure using standard SAS cables (bottom diagram); whereas Figure 15 shows host connection using fan-out cables.
Figure 15 Connecting hosts: direct attach—four servers/one HBA per server/dual path (fan-out)
Connecting remote management hosts
The management host directly manages systems out-of-band over an Ethernet network.
1. Connect an RJ-45 Ethernet cable to the network management port on each MSA 1050 controller.
2. Connect the other end of each Ethernet cable to a network that your management host can access (preferably on the
same subnet).
NOTE: Connections to this device must be made with shielded cables—grounded at both ends—with metallic RFI/EMI
connector hoods, in order to maintain compliance with FCC Rules and Regulations.
NOTE: Access via HTTPS and SSH is enabled by default, and access via HTTP and Telnet is disabled by default.
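With SSH enabled, for example, you can reach the CLI over the management network using the default manage account and a controller management IP address (the address shown is the factory default for controller A; substitute your own values):
ssh manage@10.0.0.2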
Connecting two storage systems to replicate volumes
Remote Snap replication is a licensed feature for disaster-recovery. This feature performs asynchronous replication of
block-level data from a volume in a primary system to a volume in a secondary system by creating an internal snapshot of
the primary volume, and copying the changes to the data since the last replication to the secondary system via FC or
iSCSI links.
The two associated volumes form a replication set, and only the primary volume (source of data) can be mapped for
access by a server. Both systems must be licensed to use Remote Snap, and must be connected through switches to the
same fabric or network (no direct attach). The server accessing the replication set need only be connected to the primary
system. If the primary system goes offline, a connected server can access the replicated data from the secondary system.
Replication configuration possibilities are many, and can be cabled—in switch attach fashion—to support MSA 1050 FC
and iSCSI systems on the same network, or on different networks (MSA 1050 SAS systems do not support replication). As
you consider the physical connections of your system—specifically connections for replication—keep several important
points in mind:
•Ensure that controllers have connectivity between systems, whether the destination system is co-located or remotely
located.
•FC and iSCSI controller models can be used for host I/O or replication, or both.
•The storage system does not provide for specific assignment of ports for replication. However, this can be
accomplished using virtual LANs for iSCSI and zones for FC, or by using physically separate infrastructure.
See also paragraph above Figure 17 (page 29).
•For remote replication, ensure that all ports assigned for replication are able to communicate appropriately with the
remote replication system (see the CLI Reference Guide for more information) by using the query peer-connection CLI
command (see the example following this list).
•Allow a sufficient number of ports to perform replication. This permits the system to balance the load across those
ports as I/O demands rise and fall. If some of the volumes replicated are owned by controller A and others are owned
by controller B, then allow at least one port for replication on each controller module—and possibly more than one
port per controller module—depending on replication traffic load.
•For the sake of system security, do not unnecessarily expose the controller module network port to an external
network connection.
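A hedged sketch of checking connectivity to the peer system (the remote port address shown is illustrative; see the CLI Reference Guide for exact syntax):
# query peer-connection 192.168.200.22    (illustrative remote port address)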
Conceptual cabling examples address cabling on the same network and cabling relative to different networks.
IMPORTANT: Remote Snap must be licensed on all systems configured for replication, and the controller module
firmware version must be compatible on all systems licensed for replication.
NOTE: Systems must be correctly cabled before performing replication. See the following references for more
information about using Remote Snap to perform replication tasks:
•HPE MSA 1050 Best Practices: www.hpe.com/support/MSA1050BestPractices
•HPE MSA 1050/2050 SMU Reference Guide
•HPE MSA 1050/2050 CLI Reference Guide
•HPE MSA Event Descriptions Reference Guide
•HPE MSA 1050 Cable Configuration Guide
To access MSA 1050 documentation, see the Hewlett Packard Enterprise Information Library:
www.hpe.com/support/msa1050
Cabling for replication
This section shows example replication configurations for MSA 1050 FC or iSCSI controller enclosures. The following
illustrations provide conceptual examples of cabling to support Remote Snap replication.
NOTE: Simplified versions of controller enclosures are used in cabling illustrations to show host ports used for I/O or
replication, given that only the external connectors used in the host interface ports differ.
•Virtual replication supports FC and iSCSI host interface protocols.
•The 2U enclosure rear panel represents MSA 1050 models using FC or iSCSI SFPs.
•Host ports used for replication must use the same protocol (either FC or iSCSI).
•Blue cables show I/O traffic and green cables show replication traffic.
Once the MSA 1050 systems are physically cabled, see the SMU Reference Guide or online help for information about
configuring, provisioning, and using the optional Remote Snap feature.
Host ports and replication
IMPORTANT: MSA 1050 controller enclosures support dual-controller configuration only. A controller module must be
installed in each IOM slot to ensure sufficient airflow through the enclosure during operation.
Each of the following diagrams show the rear panel of two MSA 1050 FC or iSCSI controller enclosures equipped with
dual-controller modules.
IMPORTANT: MSA 1050 controllers support FC and iSCSI host interface protocols for host connection or for
performing replications.
Multiple servers/single network
The diagram below shows the rear panel of two MSA 1050 controller enclosures with both I/O and replication occurring
on the same physical network. With the replication configuration shown below, Virtual Local Area Network (VLAN) and
zoning could be employed to provide separate networks for iSCSI and FC, respectively. Create a VLAN or zone for I/O and
a VLAN or zone for replication to isolate I/O traffic from replication traffic. The configuration would appear physically as
a single network, while logically, it would function as multiple networks.
Figure 17 Connecting two storage systems for Remote Snap: multiple servers/one switch/one location
The diagram below shows the rear panel of two MSA 1050 controller enclosures with I/O and replication occurring on
different physical networks. Use three switches to enable host I/O and replication. Connect one port from each controller
module in the left storage enclosure to the left switch. Connect one port from each controller module in the right storage
enclosure to the right switch. Connect one port from each controller module in each enclosure to the middle switch. Use
multiple switches to avoid a single point of failure inherent to using a single switch, and to physically isolate replication
traffic from I/O traffic. (In Figure 18, a dedicated replication switch sits between the two I/O switches; in Figure 19, remote sites "A" and "B" connect across an Ethernet WAN as peer sites with failover, each with a local I/O switch.)
Figure 18 Connecting two storage systems for Remote Snap: multiple servers/switches/one location
The diagram below shows the rear panel of two MSA 1050 controller enclosures with both I/O and replication occurring
on different networks.
Figure 19 Connecting two storage systems for Remote Snap: multiple servers/switches/two locations
Although not shown in the preceding cabling examples, you can cable replication-enabled MSA 1050 systems and
compatible MSA 1040/2040 systems—via switch attach—for performing replication tasks limited to the Remote Snap
functionality of the MSA 1040/2040 storage system.
Updating firmware
After installing the hardware and powering on the storage system components for the first time, verify that the controller
modules, expansion modules, and disk drives are using the current firmware release.
NOTE: Update component firmware by installing a firmware file obtained from the HPE web download site as
described in “Accessing updates” (page 55). To install an HPE ROM Flash Component or firmware Smart Component,
follow the instructions on the HPE website.
Otherwise, to install a firmware binary file, follow the steps below.
Using the SMU, in the System topic, select Action > Update Firmware.
The Update Firmware panel opens. The Update Controller Modules tab shows versions of firmware components
currently installed in each controller.
NOTE: Partner Firmware Update using management interfaces:
•The SMU provides an option for enabling or disabling Partner Firmware Update for the partner controller.
•To enable or disable the setting via the CLI, use the set advanced-settings command, and set the
partner-firmware-upgrade parameter. See the CLI Reference Guide for more information about command parameter
syntax (a sketch follows this note).
•HPE recommends that Partner Firmware Update is enabled (the default setting).
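A minimal sketch of that CLI setting; the value keyword shown is an assumption, so confirm it against the CLI Reference Guide:
# set advanced-settings partner-firmware-upgrade enabled    (value keyword assumed; see the CLI Reference Guide)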
Optionally, you can update firmware using SFTP or FTP as described in the SMU Reference Guide.
IMPORTANT: See the topics about updating firmware within the SMU Reference Guide before performing a firmware
update.
NOTE: To locate and download the latest software and firmware updates for your product, go to
www.hpe.com/support/downloads.
5 Connecting to the controller CLI port
Device description
The MSA 1050 controllers feature a command-line interface port used to cable directly to the controller and initially set IP
addresses, or perform other configuration tasks. This port employs a mini-USB Type B form factor, requiring a cable that
is supplied with the controller, and additional support described herein, so that a server or other computer running a
Linux or Windows operating system can recognize the controller enclosure as a connected device. Without this support,
the computer might not recognize that a new device is connected, or might not be able to communicate with it. The USB
device driver is implemented using the abstract control model (ACM) to ensure broad support.
For Linux computers, no new driver files are needed, but depending on the version of operating system, a Linux
configuration file may need to be created or modified.
For Windows computers, if you are not using Windows 10/Server 2016, the Windows USB device driver must be
downloaded from the HPE website, and installed on the computer that will be cabled directly to the controller
command-line interface port (see also www.hpe.com/support/downloads).
NOTE: Directly cabling to the CLI port is an out-of-band connection because it communicates outside the data paths
used to transfer information from a computer or network to the controller enclosure.
Emulated serial port
Once attached to the controller module as shown in Figure 20 (page 34), the management computer should detect a new
USB device. Using the Emulated Serial Port interface, the controller presents a single serial port using a vendor ID and
product ID. Effective presentation of the emulated serial port assumes the management computer previously had a
terminal emulator installed (see Table 3). MSA 1050 controllers support the following applications to facilitate connection.
Table 3 Supported terminal emulator applications
Application                        Operating system
HyperTerminal, TeraTerm, PuTTY     Microsoft Windows (all versions)
Minicom                            Linux (all versions), Solaris, HP-UX
Certain operating systems require a device driver or special mode of operation. Vendor and product identification are
provided in Table 4.
Table 4 USB vendor and product identification codes
USB identification code type     Code
USB vendor identification        0x210c
USB product identification       0xa4a7
Preparing a Linux computer for cabling to the CLI port
You can determine if the operating system recognizes the USB (ACM) device by entering a command:
cat /proc/devices/ |grep -i "ttyACM"
If a device driver is discovered, the output will display:
ttyACM
(and a device number)
You can query information about USB buses and the devices connected to them by entering a command:
lsusb
If a USB device driver is discovered, the output will display:
ID 210c:a4a7
The ID above is comprised of vendor ID and product ID terms as shown in Table 4 (page 32).
IMPORTANT: Although Linux systems do not require installation of a device driver, on some operating system
versions, certain parameters must be provided during driver loading to enable recognition of the MSA 1050 controllers.
On these systems, load the Linux device driver with the USB vendor and product ID parameters shown in Table 4.
Optionally, the information can be incorporated into the /etc/modules.conf file.
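A minimal sketch of such a driver load, assuming the generic usbserial module and the IDs from Table 4 (the module name and parameters are assumptions; confirm them for your Linux distribution and the HPE-supplied instructions):
modprobe usbserial vendor=0x210c product=0xa4a7    (module name and parameters are assumptions)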
Preparing a Windows computer for cabling to the CLI port
A Windows USB device driver is used for communicating directly with the controller command-line interface port using a
USB cable to connect the controller enclosure and the computer.
IMPORTANT: If using Windows 10/Server 2016, the operating system provides a native USB serial driver that supports
the controller module’s USB CLI port. However, if using an older version of Windows, you should download and install the
USB device driver from your HPE MSA support page at www.hpe.com/support/downloads
Obtaining IP values
One method of obtaining IP values for your system is to use a network management utility to discover “HPE MSA
Storage” devices on the local LAN through SNMP. Alternative methods for obtaining IP values for your system are
described in the following subsections.
Setting network port IP addresses using DHCP
In DHCP mode, network port IP address, subnet mask, and gateway values are obtained from a DHCP server if one is
available. If a DHCP server is unavailable, current addressing is unchanged.
1. Look in the DHCP server’s pool of leased addresses for two IP addresses assigned to “HPE MSA Storage.”
2. Use a ping broadcast to try to identify the device through the ARP table of the host (see the example following these steps).
If you do not have a DHCP server, you will need to ask your system administrator to allocate two IP addresses, and
set them using the command-line interface during initial configuration (described below).
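A hedged illustration of step 2 from a Linux management host; the broadcast address shown is only an example for a 192.168.0.0/24 subnet:
ping -b 192.168.0.255    (broadcast address is an example)
arp -a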
NOTE: For more information, see the Configuration Wizard topic about network configuration within the SMU
Reference Guide.
Setting network port IP addresses using the CLI port and cable
You can set network port IP addresses manually using the command-line interface port and cable. If you have not done
so already, you need to enable your system for using the command-line interface port [also see “Using the CLI port and
cable—known issues on Windows” (page 37)].
NOTE: For Linux systems, see “Preparing a Linux computer for cabling to the CLI port” (page 32). For Windows
systems see “Preparing a Windows computer for cabling to the CLI port” (page 33).
Network ports on controller module A and controller module B are configured with the following factory-default IP settings:
•Management Port IP Address: 10.0.0.2 (controller A), 10.0.0.3 (controller B)
•IP Subnet Mask: 255.255.255.0
•Gateway IP Address: 10.0.0.1
If the default IP addresses are not compatible with your network, you must set an IP address for each network port using
the command-line interface.
Use the CLI commands described in the steps below to set the IP address for the network port on each controller module.
Once new IP addresses are set, you can change them as needed using the SMU.
NOTE: Changing IP settings can cause management hosts to lose access to the storage system.
1. From your network administrator, obtain an IP address, subnet mask, and gateway address for controller A, and
another for controller B.
Record these IP addresses so that you can specify them whenever you manage the controllers using the SMU or the CLI.
2. Use the provided USB cable to connect controller A to a USB port on a host computer. The 5-pin male mini-USB
connector plugs into the CLI port as shown in Figure 20 (generic controller module is shown).
Figure 20 Connecting a USB cable to the CLI port
3. Enable the CLI port for subsequent communication.
If the USB device is supported natively by the operating system, proceed to step 4.
Linux customers should enter the command syntax provided in “Preparing a Linux computer for cabling to the
CLI port” (page 32).
Windows customers should locate the downloaded device driver described in “Preparing a Windows computer for
cabling to the CLI port” (page 33), and follow the instructions provided for proper installation.
4. Start and configure a terminal emulator, such as HyperTerminal or VT-100, using the display settings in Table 5 and
the connection settings in Table 6 (also, see the note following this procedure).
Table 5 Terminal emulator display settings
Parameter                   Value
Terminal emulation mode     VT-100 or ANSI (for color support)
Font                        Terminal
Translations                None
Columns                     80
Table 6 Terminal emulator connection settings
Parameter       Value
Connector       COM3 (for example) 1,2
Baud rate       115,200
Data bits       8
Parity          None
Stop bits       1
Flow control    None
1 Your server or laptop configuration determines which COM port is used for Disk Array USB Port.
2 Verify the appropriate COM port for use with the CLI.
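If you prefer a command-line serial terminal to a graphical emulator, the same settings apply. A minimal sketch for a Linux host follows; the device name is an assumption and depends on how your system enumerates the USB CLI port (see “Preparing a Linux computer for cabling to the CLI port” (page 32)):
screen /dev/ttyUSB0 115200
Press Enter in the terminal window to display the CLI prompt, as described in the steps that follow.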
5. In the terminal emulator, connect to controller A.
6. Press Enter to display the CLI prompt (#).
The CLI displays the system version, MC version, and login prompt:
a. At the login prompt, enter the default user manage.
b. Enter the default password !manage.
If the default user or password—or both—have been changed for security reasons, enter the secure login credentials
instead of the defaults shown above.
7. At the prompt, type the following command to set the values you obtained in step 1 for each network port, first for
controller A and then for controller B:
set network-parameters ip address netmask netmask gateway gateway controller a|b
where:
address is the IP address of the controller
netmask is the subnet mask
gateway is the IP address of the subnet router
a|b specifies the controller whose network parameters you are setting
For example:
# set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway 192.168.0.1 controller a
# set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway 192.168.0.1 controller b
8. Type the following command to verify the new IP addresses:
show network-parameters
Network parameters, including the IP address, subnet mask, and gateway address, are displayed for each controller.
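For example, output resembling the following is displayed (abbreviated and illustrative; the actual output includes additional fields):
# show network-parameters
Network Parameters Controller A
  IP Address: 192.168.0.10
  Gateway: 192.168.0.1
  Subnet Mask: 255.255.255.0
Network Parameters Controller B
  IP Address: 192.168.0.11
  Gateway: 192.168.0.1
  Subnet Mask: 255.255.255.0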
9. Use the ping command to verify network connectivity.
For example, to ping the gateway in the examples above:
# ping 192.168.0.1
Info: Pinging 192.168.0.1 with 4 packets.
Success: Command completed successfully. - The remote computer responded with 4 packets.
10. In the host computer's command window, type the following command to verify connectivity, first for controller A and
then for controller B:
ping controller-IP-address
If you cannot access your system for at least three minutes after changing the IP address, your network might require
you to restart the Management Controller(s) using the CLI. When you restart a Management Controller,
communication with it is temporarily lost until it successfully restarts.
Type the following command to restart the management controller on both controllers:
restart mc both
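To complete step 10 with the addressing from the earlier example, the host-side commands would be as follows (ping output varies by operating system):
ping 192.168.0.10
ping 192.168.0.11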
11. When you are done using the CLI, exit the emulator.
12. Retain the new IP addresses to access and manage the controllers, using either the SMU or the CLI.
NOTE: Using HyperTerminal with the CLI on a Microsoft Windows host:
On a host computer connected to a controller module’s mini-USB CLI port, incorrect command syntax in a HyperTerminal
session can cause the CLI to hang. To avoid this problem, use correct syntax, use a different terminal emulator, or connect
to the CLI using SSH rather than the mini-USB cable (see the example following this note).
Be sure to close the HyperTerminal session before shutting down the controller or restarting its Management Controller.
Otherwise, the host’s CPU cycles may rise unacceptably.
If communication with the CLI is disrupted when using an out-of-band cable connection, communication can sometimes
be restored by disconnecting and reattaching the mini-USB cable as described in step 2 on page 34.
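Once network port IP addresses have been set, the SSH alternative mentioned in the note above avoids the mini-USB cable entirely. A minimal sketch, assuming the controller A address from the earlier example and the default manage account:
ssh manage@192.168.0.10
Password:        (enter the manage account password)
# show network-parameters
# exit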
NOTE: If using a Windows operating system version older than Windows 10/Server 2016, access the USB device driver
download from the HPE MSA support website at www.hpe.com/support/downloads.
Using the CLI port and cable—known issues on Windows
When using the CLI port and cable for setting controller IP addresses and other operations, be aware of the following
known issues on Microsoft Windows platforms.
Problem
On Windows operating systems, the USB CLI port may encounter issues preventing the terminal emulator from
reconnecting to storage after the Management Controller (MC) restarts or the USB cable is unplugged and reconnected.
Workaround
Follow these steps when using the mini-USB cable and USB Type B CLI port to communicate out-of-band between the
host and controller module for setting network port IP addresses.
To restore a hung connection when the MC is restarted (any supported terminal emulator):
1. If the connection hangs, disconnect and quit the terminal emulator program.
a. Using Device Manager, locate the COMn port assigned to the Disk Array Port.
b. Right-click on the hung Disk Array USB Port (COMn), and select Disable.
c. Wait for the port to disable.
2. Right-click on the previously hung—now disabled—Disk Array USB Port (COMn), and select Enable.
3. Start the terminal emulator and connect to the COM port.
4. Set network port IP addresses using the CLI (see procedure on page 33).
NOTE: When using Windows 10/Server 2016 with PuTTY, the XON/XOFF setting must be disabled, or the COM port will
not open.
6 Basic operation
Verify that you have completed the sequential “Installation Checklist” instructions in Table 2 (page 17). Once you have
successfully completed steps 1 through 8 therein, you can access the management interface using your web browser to
complete the system setup.
Accessing the SMU
Upon completing the hardware installation, you can access the web-based management interface—SMU (Storage
Management Utility)—from the controller module to monitor and manage the storage system. Invoke your web browser,
and enter the https://IP-address of the controller module’s network port in the address field (obtained during
completion of “Installation Checklist” step 8), then press Enter. To Sign In to the SMU, use the default user name manage
and password !manage. If the default user or password—or both—have been changed for security reasons, enter the
secure login credentials instead of the defaults. This brief Sign In discussion assumes proper web browser setup.
IMPORTANT: For detailed information about accessing and using the SMU, see the topic about getting started in the
SMU Reference Guide.
The Getting Started section provides instructions for signing-in to the SMU, introduces key concepts, addresses browser
setup, and provides tips for using the main window and the help window.
TIP: After signing in to the SMU, you can use online help as an alternative to consulting the reference guide.
Configuring and provisioning the storage system
Once you have familiarized yourself with the SMU, use it to configure and provision the storage system. If you are
licensed to use the optional Remote Snap feature, you may also need to set up storage systems for replication. Refer to
the following topics within the SMU Reference Guide or online help:
•Configuring the system
•Provisioning the system
•Using Remote Snap to replicate volumes
IMPORTANT: Some features within the storage system require a license. The license is specific to the controller
enclosure and firmware version. See the topic about installing a license within the SMU Reference Guide for instructions
about viewing the status of licensed features and installing a license.
IMPORTANT: If the system is used in a VMware environment, set the system Missing LUN Response option to use its
Illegal Request setting. To do so, see either the topic about changing the missing LUN response in the SMU Reference
Guide, or the topic about the set-advanced-settings command in the CLI Reference Guide.
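For reference, a hedged CLI sketch of this setting follows; the parameter name and value are based on the set advanced-settings topic, so verify the exact syntax in the CLI Reference Guide for your firmware before use:
# set advanced-settings missing-lun-response illegal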
7 Troubleshooting
These procedures are intended to be used only during initial configuration, for the purpose of verifying that hardware
setup is successful. They are not intended to be used as troubleshooting procedures for configured systems using
production data and I/O.
USB CLI port connection
MSA 1050 controllers feature a CLI port employing a mini-USB Type B form factor. If you encounter problems
communicating with the port after cabling your computer to the USB device, you may need to either download a device
driver (Windows), or set appropriate parameters via an operating system command (Linux). See “Connecting to the
controller CLI port” (page 32) for more information.
Fault isolation methodology
MSA 1050 controllers provide many ways to isolate faults. This section presents the basic methodology used to locate
faults within a storage system, and to identify the associated Field Replaceable Units (FRUs) affected.
As noted in “Basic operation” (page 38), use the SMU to configure and provision the system upon completing the
hardware installation. As part of this process, configure and enable event notification so the system will notify you when
a problem occurs that is at or above the configured severity (see “Using the Configuration Wizard > Configuring event
notification” within the SMU Reference Guide). With event notification configured and enabled, you can follow the
recommended actions in the notification message to resolve the problem, as further discussed in the options presented
below.
Basic steps
The basic fault isolation steps are listed below:
•Gather fault information, including using system LEDs [see “Gather fault information” (page 40)].
•Determine where in the system the fault is occurring [see “Determine where the fault is occurring” (page 40)].
•Review event logs [see “Review the event logs” (page 40)].
•If required, isolate the fault to a data path component or configuration [see “Isolate the fault” (page 41)].
Cabling systems to enable use of the licensed Remote Snap feature—to replicate volumes—is another important fault
isolation consideration pertaining to initial system installation. See “Isolating Remote Snap replication faults” (page 49)
for more information about troubleshooting during initial setup.
Options available for performing basic steps
When performing fault isolation and troubleshooting steps, select the option or options that best suit your site
environment. Use of any one option (four options are described below) does not preclude the use of another option.
You can use the SMU to check the health icons/values for the system and its components to ensure that everything is
okay, or to drill down to a problem component. If you discover a problem, both the SMU and the CLI provide
recommended action text online. Options for performing basic steps are listed according to frequency of use:
•Use the SMU.
•Use the CLI.
•Monitor event notification.
•View the enclosure LEDs.
Use the SMU
The SMU uses health icons to show OK, Degraded, Fault, or Unknown status for the system and its components. The
SMU enables you to monitor the health of the system and its components. If any component has a problem, the system
health will be Degraded, Fault, or Unknown. Use the SMU to drill down to find each component that has a problem, and
follow actions in the Recommendation field for the component to resolve the problem.
Use the CLI
As an alternative to using the SMU, you can run the show system command in the CLI to view the health of the system
and its components. If any component has a problem, the system health will be Degraded, Fault, or Unknown, and those
components will be listed as Unhealthy Components. Follow the recommended actions in the component Health
Recommendation field to resolve the problem.
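For example (output abbreviated and illustrative; see the CLI Reference Guide for the complete field list):
# show system
System Name: Storage-1
Health: OK
Health Reason:
If the health is Degraded, Fault, or Unknown, the unhealthy components and their recommended actions are listed beneath these fields.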
Monitor event notification
With event notification configured and enabled, you can view event logs to monitor the health of the system and its
components. If a message tells you to check whether an event has been logged, or to view information about an event in
the log, you can do so using either the SMU or the CLI. Using the SMU, you would view the event log and then click on the
event message to see detail about that event. Using the CLI, you would run the show events detail command (with
additional parameters to filter the output) to see the detail for an event.
View the enclosure LEDs
You can view the LEDs on the hardware (while referring to LED descriptions for your enclosure model) to identify
component status. If a problem prevents access to either the SMU or the CLI, this is the only option available. However,
monitoring/management is often done at a management console using storage management interfaces, rather than
relying on line-of-sight to LEDs of racked hardware components.
Performing basic steps
You can use any of the available options in performing the basic steps comprising the fault isolation methodology.
Gather fault information
When a fault occurs, it is important to gather as much information as possible. Doing so will help you determine the
correct action needed to remedy the fault.
Begin by reviewing the reported fault:
•Is the fault related to an internal data path or an external data path?
•Is the fault related to a hardware component such as a disk drive module, controller module, or power supply?
By isolating the fault to one of the components within the storage system, you will be able to determine the necessary
action more quickly.
Determine where the fault is occurring
Once you have an understanding of the reported fault, review the enclosure LEDs. The enclosure LEDs are designed to
alert users of any system faults, and might be what alerted the user to a fault in the first place.
When a fault occurs, the Fault ID status LED on the enclosure right ear [see “Front panel components” (page 10)]
illuminates. Check the LEDs on the back of the enclosure to narrow the fault to a FRU, connection, or both. The LEDs also
help you identify the location of a FRU reporting a fault.
Use the SMU to verify any faults found while viewing the LEDs. The SMU is also a good tool to use in determining where
the fault is occurring if the LEDs cannot be viewed due to the location of the system. The SMU provides you with a visual
representation of the system and where the fault is occurring. It can also provide more detailed information about FRUs,
data, and faults.
Review the event logs
The event logs record all system events. Each event has a numeric code that identifies the type of event that occurred,
and has one of the following severities:
•Critical. A failure occurred that may cause a controller to shut down. Correct the problem immediately.
•Error. A failure occurred that may affect data integrity or system stability. Correct the problem as soon as possible.
•Warning. A problem occurred that may affect system stability, but not data integrity. Evaluate the problem and
correct it if necessary.
•Informational. A configuration or state change occurred, or a problem occurred that the system corrected. No
immediate action is required.
For information about specific events, see the Event Descriptions Reference Guide, located on the Hewlett Packard
Enterprise Information Library at: www.hpe.com/support/msa
The event logs record all system events. It is very important to review the logs, not only to identify the fault, but also to
search for events that might have caused the fault to occur. For example, a host could lose connectivity to a disk group if
a user changes channel settings without taking the storage resources assigned to it into consideration. In addition, the
type of fault can help you isolate the problem to either hardware or software.
Isolate the fault
Occasionally it might become necessary to isolate a fault. This is particularly true with data paths, due to the number of
components comprising the data path. For example, if a host-side data error occurs, it could be caused by any of the
components in the data path: controller module, cable, connectors, switch, or data host.
If the enclosure does not initialize
It may take up to two minutes for the enclosures to initialize. If the enclosure does not initialize:
•Perform a rescan.
•Power cycle the system.
•Make sure the power cord is properly connected, and check the power source that it is connected to.
•Check the event log for errors.
Correcting enclosure IDs
When installing a system with drive enclosures attached, the enclosure IDs might not agree with the physical cabling
order. This is because the controller might have been previously attached to some of the same enclosures during factory
testing, and it attempts to preserve the previous enclosure IDs if possible. To correct this condition, make sure that both
controllers are up, and perform a rescan using the SMU or the CLI. This will reorder the enclosures, but can take up to two
minutes for the enclosure IDs to be corrected.
To perform a rescan using the CLI, type the following command:
rescan
To rescan using the SMU:
1. Verify that both controllers are operating normally.
2. Do one of the following:
•Point to the System tab and select Rescan Disk Channels.
•In the System topic, select Action > Rescan Disk Channels.
3. Click Rescan.
Stopping I/O
When troubleshooting disk drive and connectivity faults, stop I/O to the affected disk groups from all hosts and remote
systems as a data protection precaution. As an additional data protection precaution, it is recommended to conduct
regularly scheduled backups of your data.
IMPORTANT: Stopping I/O to a disk group is a host-side task, and falls outside the scope of this document.
When on-site, you can verify there is no I/O activity by briefly monitoring the system LEDs. When accessing the storage
system remotely, this is not possible. Remotely, you can use the show disk-group-statistics CLI command to
determine if input and output has stopped. Perform these steps (a worked example follows the steps):
1. Using the CLI, run the show disk-group-statistics command.
The Reads and Writes outputs show the number of these operations that have occurred since the statistic was last
reset, or since the controller was restarted. Record the numbers displayed.
2. Run the show disk-group-statistics command a second time.
This provides you a specific window of time (the interval between requesting the statistics) to determine if data is
being written to or read from the disk group. Record the numbers displayed.
3. To determine if any reads or writes occur during this interval, subtract the set of numbers you recorded in step 1 from
the numbers you recorded in step 2.
If the resulting difference is zero, then I/O has stopped.
If the resulting difference is not zero, a host is still reading from or writing to this disk group. Continue to stop I/O
from hosts, and repeat step 1 and step 2 until the difference in step 3 is zero.
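A worked example of this comparison follows; the values and the abbreviated output format are illustrative only:
# show disk-group-statistics
Name    Reads    Writes
dg01    12500    8200

(wait briefly, then repeat the command)

# show disk-group-statistics
Name    Reads    Writes
dg01    12500    8200

Reads: 12500 - 12500 = 0 and Writes: 8200 - 8200 = 0, so no I/O occurred during the interval. A nonzero difference in either column would indicate that a host is still reading from or writing to the disk group.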
See the CLI Reference Guide for additional information on the Hewlett Packard Enterprise Information Library at:
www.hpe.com/support/msa1050.
Diagnostic steps
This section describes possible reasons and actions to take when an LED indicates a fault condition during initial system
setup. See “LED descriptions” (page 58) for descriptions of all LED statuses.
NOTE: Once event notification is configured and enabled using the SMU, you can view event logs to monitor the health
of the system and its components using the GUI.
In addition to monitoring LEDs via line-of-sight observation of racked hardware components when performing diagnostic
steps, you can also monitor the health of the system and its components using the management interfaces. Be mindful of
this when reviewing the Actions column in the diagnostics tables, and when reviewing the step procedures provided in
this chapter.
Is the enclosure front panel Fault/Service Required LED amber?
Table 7 Diagnostics LED status: Front panel “Fault/Service Required”
Answer: No
Possible reasons: System functioning properly.
Actions: No action required.

Answer: Yes
Possible reasons: A fault condition exists/occurred. If installing an I/O module FRU, the module has not gone online and likely failed its self-test.
Actions:
•Check the LEDs on the back of the controller enclosure to narrow the fault to a FRU, connection, or both.
•Check the event log for specific information regarding the fault. Follow any recommended actions.
•If installing an IOM FRU, try removing and reinstalling the new IOM, and check the event log for errors.
•If the above actions do not resolve the fault, isolate the fault, and contact an authorized service provider for assistance. Replacement may be necessary.
Is the enclosure rear panel FRU OK LED off?
Table 8 Diagnostics LED status: Rear panel “FRU OK”

Answer: No
Possible reasons: System functioning properly.
Actions: No action required.

Answer: No (blinking)
Possible reasons: System is booting.
Actions: Wait for system to boot.

Answer: Yes
Possible reasons: The controller module is not powered on, or the controller module has failed.
Actions:
•Check that the controller module is fully inserted and latched in place, and that the enclosure is powered on.
•Check the event log for specific information regarding the failure.
Is the enclosure rear panel Fault/Service Required LED amber?
Table 9 Diagnostics LED status: Rear panel “Fault/Service Required”
Answer: No
Possible reasons: System functioning properly.
Actions: No action required.

Answer: Yes (blinking)
Possible reasons: One of the following errors occurred:
•Hardware-controlled power-up error
•Cache flush error
•Cache self-refresh error
Actions:
•Restart this controller from the other controller using the SMU or the CLI.
•If the above action does not resolve the fault, remove the controller and reinsert it.
•If the above action does not resolve the fault, contact an authorized service provider for assistance. It may be necessary to replace the controller.
Are both disk drive module LEDs off (Online/Activity and Fault/UID)?
Table 10 Diagnostics LED status: Front panel disks “Online/Activity” and “Fault/UID”
Answer: Yes
Possible reasons:
•There is no power.
•The disk is offline.
•The disk is not configured.
Actions:
•Check that the disk drive is fully inserted and latched in place, and that the enclosure is powered on.
NOTE: See “Disk drives used in MSA 1050 enclosures” (page 11).
Is the disk drive module Fault/UID LED blinking amber?
Table 11 Diagnostics LED status: Front panel disks “Fault/UID”
Answer: No, but the Online/Activity LED is blinking.
Possible reasons: The disk drive is rebuilding.
Actions: No action required.
CAUTION: Do not remove a disk drive that is reconstructing. Removing a reconstructing disk drive might terminate the current operation and cause data loss.

Answer: Yes, and the Online/Activity LED is off.
Possible reasons: The disk drive is offline. A predictive failure alert may have been received for this device.
Actions:
•Check the event log for specific information regarding the fault.
•Isolate the fault.
•Contact an authorized service provider for assistance.

Answer: Yes, and the Online/Activity LED is blinking.
Possible reasons: The disk drive is active, but a predictive failure alert may have been received for this device.
Actions:
•Check the event log for specific information regarding the fault.
•Isolate the fault.
•Contact an authorized service provider for assistance.
NOTE: See “Connecting controller and drive enclosures” (page 17) and Figure 26 (page 61).
Is a connected host port Host Link Status LED off?
Table 12 Diagnostics LED status: Rear panel “Host Link Status”
Answer: No
Possible reasons: System functioning properly.
Actions: No action required (see Link LED note: page 65).

Answer: Yes
Possible reasons: The link is down.
Actions:
•Check cable connections and reseat if necessary.
•Inspect cables for damage. Replace cable if necessary.
•Swap cables to determine if fault is caused by a defective cable. Replace cable if necessary.
•Verify that the switch, if any, is operating properly. If possible, test with another port.
•Verify that the HBA is fully seated, and that the PCI slot is powered on and operational.
•In the SMU, review event logs for indicators of a specific fault in a host data path component. Follow any recommended actions.
•Contact an authorized service provider for assistance.
•See “Isolating a host-side connection fault” (page 46).
Is a connected port Expansion Port Status LED off?
Table 13 Diagnostics LED status: Rear panel “Expansion Port Status”
Answer: No
Possible reasons: System functioning properly.
Actions: No action required.

Answer: Yes
Possible reasons: The link is down.
Actions:
•Check cable connections and reseat if necessary.
•Inspect cable for damage. Replace cable if necessary.
•Swap cables to determine if fault is caused by a defective cable. Replace cable if necessary.
•In the SMU, review event logs for indicators of a specific fault in a host data path component. Follow any recommended actions.
•Contact an authorized service provider for assistance.
•See “Isolating a controller module expansion port connection fault” (page 49).
Is a connected port Network Port Link Status LED off?
Table 14 Diagnostics LED status: Rear panel “Network Port Link Status”
Answer: No
Possible reasons: System functioning properly.
Actions: No action required.

Answer: Yes
Possible reasons: The link is down.
Actions: Use standard networking troubleshooting procedures to isolate faults on the network.
Is the power supply Input Power Source LED off?
Table 15 Diagnostics LED status: Rear panel power supply “Input Power Source”
Answer: No
Possible reasons: System functioning properly.
Actions: No action required.

Answer: Yes
Possible reasons: The power supply is not receiving adequate power.
Actions:
•Verify that the power cord is properly connected and check the power source to which it connects.
•Check that the power supply FRU is firmly locked into position.
•In the SMU, check the event log for specific information regarding the fault. Follow any recommended actions.
•If the above action does not resolve the fault, isolate the fault, and contact an authorized service provider for assistance.
Is the power supply Voltage/Fan Fault/Service Required LED amber?
Table 16 Diagnostics LED status: Rear panel power supply: “Voltage/Fan Fault/Service Required”
Answer: No
Possible reasons: System functioning properly.
Actions: No action required.

Answer: Yes
Possible reasons: The power supply unit or a fan is operating at an unacceptable voltage/RPM level, or has failed.
Actions:
When isolating faults in the power supply, remember that the fans in both modules receive power through a common bus on the midplane, so if a power supply unit fails, the fans continue to operate normally.
•Check that the power supply FRU is firmly locked into position.
•Check that the power cable is connected to a power source.
•Check that the power cable is connected to the power supply module.
Controller failure
Cache memory is flushed to CompactFlash in the case of a controller failure or power loss. During the write to
CompactFlash process, only the components needed to write the cache to the CompactFlash are powered by the
supercapacitor. This process typically takes 60 seconds per 1 Gbyte of cache. After the cache is copied to CompactFlash,
the remaining power left in the supercapacitor is used to refresh the cache memory. While the cache is being maintained
by the supercapacitor, the Cache Status LED flashes at a rate of 1/10 second on and 9/10 second off.
IMPORTANT: Transportable cache only applies to single-controller configurations. In dual controller configurations,
there is no need to transport cache from a failed controller to a replacement controller because the cache is duplicated
between the peer controllers (subject to volume cache optimization setting).
If the controller has failed or does not start, is the Cache Status LED on/blinking?
Table 17 Diagnostics LED status: Rear panel “Cache Status”
Answer: No, the Cache Status LED is off, and the controller does not boot.
Actions: If valid data is thought to be in Flash, see Transporting cache; otherwise, replace the controller module.

Answer: No, the Cache Status LED is off, and the controller boots.
Actions: The system has flushed data to disks. If the problem persists, replace the controller module.

Answer: Yes, at a strobe 1:10 rate - 1 Hz, and the controller does not boot.
Actions: See Transporting cache.

Answer: Yes, at a strobe 1:10 rate - 1 Hz, and the controller boots.
Actions: The system is flushing data to CompactFlash. If the problem persists, replace the controller module.

Answer: Yes, at a blink 1:1 rate - 1 Hz, and the controller does not boot.
Actions: See Transporting cache.

Answer: Yes, at a blink 1:1 rate - 1 Hz, and the controller boots.
Actions: The system is in self-refresh mode. If the problem persists, replace the controller module.
NOTE: See also “Cache Status LED details” (page 66).
Transporting cache
To preserve the existing data stored in the CompactFlash, you must transport the CompactFlash from the failed
controller to a replacement controller using the procedure outlined in HPE MSA Controller Module Replacement Instructions shipped with the replacement controller module. Failure to use this procedure will result in the loss of data
stored in the cache module.
CAUTION: Remove the controller module only after the copy process is complete, which is indicated by the Cache
Status LED being off, or blinking at 1:10 rate.
Isolating a host-side connection fault
During normal operation, when a controller module host port is connected to a data host, the port’s host link status/link
activity LED is green. If there is I/O activity, the LED blinks green. If data hosts are having trouble accessing the storage
system, and you cannot locate a specific fault or cannot access the event logs, use the following procedure. This
procedure requires scheduled downtime.
IMPORTANT: Do not perform more than one step at a time. Changing more than one variable at a time can complicate
the troubleshooting process.
Host-side connection troubleshooting featuring host ports with SFPs
The procedure below applies to MSA 1050 controller enclosures employing small form factor pluggable (SFP) transceiver
connectors in 8 Gb FC, 10GbE iSCSI, or 1 Gb iSCSI host interface ports. In the following procedure, “SFP and host cable” is
used to refer to FC or iSCSI controller ports used for I/O or replication.
NOTE: When experiencing difficulty diagnosing performance problems, consider swapping out one SFP at a time to see
if performance improves.
46Troubleshooting
1. Halt all I/O to the storage system as described in “Stopping I/O” (page 41).
2. Check the host link status/link activity LED.
If there is activity, halt all applications that access the storage system.
3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
Solid – Cache contains data yet to be written to the disk.
Blinking – Cache data is being written to CompactFlash.
Flashing at 1/10 second on and 9/10 second off – Cache is being refreshed by the supercapacitor.
Off – Cache is clean (no unwritten data).
4. Remove the SFP and host cable and inspect for damage.
5. Reseat the SFP and host cable.
Is the host link status/link activity LED on?
Yes – Monitor the status to ensure that there is no intermittent error present. If the fault occurs again, clean the
connections to ensure that a dirty connector is not interfering with the data path.
No – Proceed to the next step.
6. Move the SFP and host cable to a port with a known good link status.
This step isolates the problem to the external data path (SFP, host cable, and host-side devices) or to the controller
module port.
Is the host link status/link activity LED on?
Yes – You now know that the SFP, host cable, and host-side devices are functioning properly. Return the SFP and
cable to the original port. If the link status/link activity LED remains off, you have isolated the fault to the
controller module port. Replace the controller module.
No – Proceed to the next step.
7. Swap the SFP with the known good one.
Is the host link status/link activity LED on?
Yes – You have isolated the fault to the SFP. Replace the SFP.
No – Proceed to the next step.
8. Re-insert the original SFP and swap the cable with a known good one.
Is the host link status/link activity LED on?
Yes – You have isolated the fault to the cable. Replace the cable.
No – Proceed to the next step.
9. Verify that the switch, if any, is operating properly. If possible, test with another port.
10. Verify that the HBA is fully seated, and that the PCI slot is powered on and operational.
11. Replace the HBA with a known good HBA, or move the host side cable and SFP to a known good HBA.
Is the host link status/link activity LED on?
Yes – You have isolated the fault to the HBA. Replace the HBA.
No – It is likely that the controller module needs to be replaced.
12. Move the cable and SFP back to its original port.
Is the host link status/link activity LED on?
No – The controller module port has failed. Replace the controller module.
Yes – Monitor the connection for a period of time. It may be an intermittent problem, which can occur with
damaged SFPs, cables, and HBAs.
Host-side connection troubleshooting featuring SAS host ports
The procedure below applies to MSA 1050 SAS controller enclosures employing 12 Gb SFF-8644 connectors in the HD
mini-SAS host ports used for I/O.
1. Halt all I/O to the storage system as described in “Stopping I/O” (page 41).
2. Check the host activity LED.
If there is activity, halt all applications that access the storage system.
3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
Solid – Cache contains data yet to be written to the disk.
Blinking – Cache data is being written to CompactFlash.
Flashing at 1/10 second on and 9/10 second off – Cache is being refreshed by the supercapacitor.
Off – Cache is clean (no unwritten data).
4. Reseat the host cable and inspect for damage.
Is the host link status LED on?
Yes – Monitor the status to ensure that there is no intermittent error present. If the fault occurs again, clean the
connections to ensure that a dirty connector is not interfering with the data path.
No – Proceed to the next step.
5. Move the host cable to a port with a known good link status.
This step isolates the problem to the external data path (host cable and host-side devices) or to the controller
module port.
Is the host link status LED on?
Yes – You now know that the host cable and host-side devices are functioning properly. Return the cable to the
original port. If the link status LED remains off, you have isolated the fault to the controller module port. Replace
the controller module.
No – Proceed to the next step.
6. Verify that the switch, if any, is operating properly. If possible, test with another port.
7. Verify that the HBA is fully seated, and that the PCI slot is powered on and operational.
8. Replace the HBA with a known good HBA, or move the host side cable to a known good HBA.
Is the host link status LED on?
Yes – You have isolated the fault to the HBA. Replace the HBA.
No – It is likely that the controller module needs to be replaced.
9. Move the host cable back to its original port.
Is the host link status LED on?
No – The controller module port has failed. Replace the controller module.
Yes – Monitor the connection for a period of time. It may be an intermittent problem, which can occur with
damaged cables and HBAs.
Isolating a controller module expansion port connection fault
During normal operation, when a controller module expansion port is connected to a drive enclosure, the expansion port
status LED is green. If the connected port’s expansion port LED is off, the link is down. Use the following procedure to
isolate the fault.
This procedure requires scheduled downtime.
NOTE: Do not perform more than one step at a time. Changing more than one variable at a time can complicate the
troubleshooting process.
1. Halt all I/O to the storage system as described in “Stopping I/O” (page 41).
2. Check the host activity LED.
If there is activity, halt all applications that access the storage system.
3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
Solid – Cache contains data yet to be written to the disk.
Blinking – Cache data is being written to CompactFlash.
Flashing at 1/10 second on and 9/10 second off – Cache is being refreshed by the supercapacitor.
Off – Cache is clean (no unwritten data).
4. Reseat the expansion cable, and inspect it for damage.
Is the expansion port status LED on?
Yes – Monitor the status to ensure there is no intermittent error present. If the fault occurs again, clean the
connections to ensure that a dirty connector is not interfering with the data path.
No – Proceed to the next step.
5. Move the expansion cable to a port on the controller enclosure with a known good link status.
This step isolates the problem to the expansion cable or to the controller module expansion port.
Is the expansion port status LED on?
Yes – You now know that the expansion cable is good. Return the cable to the original port. If the expansion port
status LED remains off, you have isolated the fault to the controller module expansion port. Replace the
controller module.
No – Proceed to the next step.
6. Move the expansion cable back to the original port on the controller enclosure.
7. Move the expansion cable on the drive enclosure to a known good expansion port on the drive enclosure.
Is the expansion port status LED on?
Yes – You have isolated the problem to the drive enclosure port. Replace the expansion module.
No – Proceed to the next step.
8. Replace the cable with a known good cable, ensuring the cable is attached to the original ports used by the previous
cable.
Is the expansion port status LED on?
Yes – Replace the original cable. The fault has been isolated.
No – It is likely that the controller module must be replaced.
Isolating Remote Snap replication faults
Remote Snap replication is a licensed disaster-recovery feature that performs asynchronous replication of block-level
data from a volume in a primary storage system to a volume in a secondary system. Remote Snap creates an internal
snapshot of the primary volume, and copies changes to the data since the last replication to the secondary system via
iSCSI or FC links. The primary volume exists in a primary pool in the primary storage system. Replication can be
completed using either the SMU or CLI.
See “Connecting two storage systems to replicate volumes” (page 28) for host connection information concerning
Remote Snap.
Replication setup and verification
After storage systems and hosts are cabled for replication, you can use the SMU to prepare to use the Remote Snap
feature. Optionally, you can use SSH to access the IP address of the controller module and access the Remote Snap
feature using the CLI.
NOTE: Refer to the following references for more information about replication setup:
•See the HPE Remote Snap technical white paper for replication best practices: MSA Remote Snap Software
•See HPE MSA 1050/2050 SMU Reference Guide for procedures to set up and manage replications
•See HPE MSA 1050/2050 CLI Reference Guide for replication commands and syntax
Create a peer connection.
To create a peer connection, use the create peer-connection CLI command on the primary system, using a port address
obtained from the show ports command on the secondary system, or, in the SMU Replications topic, select Action >
Create Peer Connection.
Create a replication set.
To create a replication set, use the create replication-set CLI command or, in the SMU Replications topic, select
Action > Create Replication Set.
Replicate.
To initiate replication, use the replicate CLI command or, in the SMU Replications topic, select Action > Replicate.
•For descriptions of replication-related events, see the Event Descriptions Reference Guide.
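A minimal CLI sketch of this sequence follows. The peer, volume, and replication set names are illustrative, and the parameter ordering is an approximation; confirm the exact command syntax in the CLI Reference Guide before use:
# show ports
(run on the secondary system and note a host port address to use for the peer connection)
# create peer-connection remote-port-address 192.168.10.20 remote-username manage remote-password !manage Peer1
# create replication-set peer-connection Peer1 primary-volume Vol1 RS1
# replicate RS1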
Diagnostic steps for replication setup
The tables in this subsection show menu navigation for replication using the SMU.
IMPORTANT: Remote Snap must be licensed on all systems configured for replication, and the controller module
firmware must be compatible on all systems licensed for replication.
Can you successfully use the Remote Snap feature?

Table 18 Diagnostics for replication setup: Using Remote Snap feature

Answer: No
Possible reasons: Remote Snap is not licensed on each system used for replication.
Actions: Verify licensing of the optional feature per system:
•In the Home topic in the SMU, select Action > Install License.
•The License Settings panel opens and displays information about each licensed feature.
•If the Replication feature is not enabled, obtain and install a valid license for Remote Snap.
•For more information on licensing, see the “Installing a license” chapter in the SMU Reference Guide.

Answer: No
Possible reasons: A compatible firmware revision supporting Remote Snap is not running on each system used for replication.
Actions: Perform the following actions on each system used for replication:
•In the System topic, select Action > Update Firmware. The Update Firmware panel opens. The Update Controller Modules tab shows firmware versions installed in each controller.
•If necessary, update the controller module firmware to ensure compatibility with other systems.
•For more information on compatible firmware, see the “Updating firmware” chapter in the SMU Reference Guide.

Answer: No
Possible reasons: Invalid cabling connection. (If multiple controller enclosures are used, check the cabling for each system.)
Actions: Verify controller enclosure cabling.
•Verify use of proper cables.
•Verify proper cabling paths for host connections.
•Verify cabling paths between replication ports and switches are visible to one another.
•Verify that cable connections are securely fastened.
•Inspect cables for damage and replace if necessary.

Answer: No
Possible reasons: A system does not have a pool configured.
Actions: Configure each system to have a storage pool.

Can you create a replication set?

IMPORTANT: Remote Snap must be licensed on all systems configured for replication, and the controller module
firmware must be compatible on all systems licensed for replication.

Table 19 Diagnostics for replication setup: Creating a replication set

Answer: No
Possible reasons: On controller enclosures with iSCSI host interface ports, replication set creation fails due to use of CHAP.
Actions: If using CHAP (Challenge-Handshake Authentication Protocol), see the topics about configuring CHAP and working in replications within the SMU Reference Guide.

Answer: No
Possible reasons: Unable to create the secondary volume (the destination volume on the pool to which you will replicate data from the primary volume)?1
Actions:
•Review event logs (in the footer, click the events panel and select Show Event List) for indicators of a specific fault in a replication data path component. Follow any recommended actions.
•Verify valid specification of the secondary volume according to either of the following criteria:
  A conflicting volume does not already exist
  Available free space in the pool

Answer: No
Possible reasons: Communication link is down.
Actions: Review event logs for indicators of a specific fault in a host or replication data path component.

1 After ensuring valid licensing, valid cabling connections, and network availability, create the replication set using the Replications topic; select Action > Create Replication Set.

Can you replicate a volume?

IMPORTANT: Remote Snap must be licensed on all systems configured for replication, and the controller module
firmware must be compatible on all systems licensed for replication.

Table 20 Diagnostics for replication setup: Replicating a volume

Answer: No
Possible reasons: Nonexistent replication set.
Actions:
•Determine existence of primary or secondary volumes.
•See actions described in “Can you successfully use the Remote Snap feature?” (page 51).
•If a replication set has not been successfully created, use the Replications topic: select Action > Create Replication Set to create one.
•Review event logs (in the footer, click the events panel and select Show Event List) for indicators of a specific fault in a replication data path component. Follow any recommended actions.

Answer: No
Possible reasons: Network error occurred during in-progress replication.
Actions:
•Review event logs for indicators of a specific fault in a replication data path component. Follow any recommended actions.
•Click in the Volumes topic, then click on a volume name in the volumes list. Click the Replication Sets tab to display replications and associated metadata.
•Replications that enter the suspended state can be resumed manually (see the SMU Reference Guide for additional information).

Answer: No
Possible reasons: Communication link is down.
Actions: Review event logs for indicators of a specific fault in a host or replication data path component.

Has a replication run successfully?

Table 21 Diagnostics for replication setup: Checking for a successful replication

Answer: No
Possible reasons: Last Successful Run shows N/A.
Actions:
•In the Volumes topic, click on the volume that is a member of the replication set. Select the Replication Sets table. Check the Last Successful Run information.
•If a replication has not run successfully, use the SMU to replicate as described in the section about working in the Replications topic within the SMU Reference Guide.

Answer: No
Possible reasons: Communication link is down.
Actions: Review event logs for indicators of a specific fault in a host or replication data path component.
Resolving voltage and temperature warnings
1. Check that all of the fans are working by making sure the Voltage/Fan Fault/Service Required LED on each power
supply is off, or by using the SMU to check enclosure health status.
In the lower corner of the footer, overall health status of the enclosure is indicated by a health status icon. For
more information, point to the System tab and select View System to see the System panel. You can select from
Front, Rear, and Table views on the System panel. If you point to a component, its associated metadata and
health status displays onscreen.
See “Options available for performing basic steps” (page 39) for a description of health status icons and alternatives
for monitoring enclosure health.
2. Make sure that all modules are fully seated in their slots with latches locked.
3. Make sure that no slots are left open for more than two minutes.
If you need to replace a module, leave the old module in place until you have the replacement or use a blank module
to fill the slot. Leaving a slot open negatively affects the airflow and can cause the enclosure to overheat.
4. Make sure there is proper air flow, and no cables or other obstructions are blocking the front or rear of the array.
5. Try replacing each power supply module one at a time.
6. Replace the controller modules one at a time.
7. Replace SFPs one at a time (MSA 1050 FC or iSCSI storage systems).
Sensor locations
The storage system monitors conditions at different points within each enclosure to alert you to problems. Power, cooling
fan, temperature, and voltage sensors are located at key points in the enclosure. In each controller module and expansion
module, the enclosure management processor (EMP) monitors the status of these sensors to perform SCSI enclosure
services (SES) functions.
The following sections describe each element and its sensors.
Power supply sensors
Each enclosure has two fully redundant power supplies with load-sharing capabilities. The power supply sensors
described in the following table monitor the voltage, current, temperature, and fans in each power supply. If the power
supply sensors report a voltage that is under or over the threshold, check the input voltage.
Table 22 Power supply sensor descriptions
Description      Event/Fault ID LED condition
Power supply 1   Voltage, current, temperature, or fan fault
Power supply 2   Voltage, current, temperature, or fan fault
Cooling fan sensors
Each power supply includes two fans. The normal range for fan speed is 4,000 to 6,000 RPM. When a fan speed drops
below 4,000 RPM, the EMP considers it a failure and posts an alarm in the storage system event log. The following table
lists the description, location, and alarm condition for each fan. If the fan speed remains under the 4,000 RPM threshold,
the internal enclosure temperature may continue to rise. Replace the power supply reporting the fault.
Table 23 Cooling fan sensor descriptions
Description   Location         Event/Fault ID LED condition
Fan 1         Power supply 1   < 4,000 RPM
Fan 2         Power supply 1   < 4,000 RPM
Fan 3         Power supply 2   < 4,000 RPM
Fan 4         Power supply 2   < 4,000 RPM
During a shutdown, the cooling fans do not shut off. This allows the enclosure to continue cooling.
Temperature sensors
Extreme high and low temperatures can cause significant damage if they go unnoticed. When a temperature fault is
reported, it must be remedied as quickly as possible to avoid system damage. This can be done by warming or cooling
the installation location.
Table 24 Controller platform temperature sensor descriptions

Normal operating range: 2°C–98°C
Warning operating range: 0°C–1°C, 99°C–104°C; 113°C–115°C
Critical operating range: None
Shutdown values: 0°C, 104°C, 115°C

When a power supply sensor goes out of range, the Fault/ID LED illuminates amber and an event is logged.

Table 25 Power supply temperature sensor descriptions

Description                  Normal operating range
Power Supply 1 temperature   –10°C–80°C
Power Supply 2 temperature   –10°C–80°C

Power supply module voltage sensors
Power supply voltage sensors ensure that the enclosure power supply voltage is within normal ranges. There are three
voltage sensors per power supply.
Table 26 Voltage sensor descriptions
Sensor                         Event/Fault LED condition
Power supply 1 voltage, 12V    < 11.00V or > 13.00V
Power supply 1 voltage, 5V     < 4.00V or > 6.00V
Power supply 1 voltage, 3.3V   < 3.00V or > 3.80V
8 Support and other resources
Accessing Hewlett Packard Enterprise Support
•For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website:
www.hpe.com/assistance
•To access documentation and support services, go to the Hewlett Packard Enterprise Support Center website:
www.hpe.com/support/hpesc
Information to collect
•Technical support registration number (if applicable)
•Product name, model or version, and serial number
•Operating system name and version
•Firmware version
•Error messages
•Product-specific reports and logs
•Add-on products or components
•Third-party products or components
Accessing updates
•Some software products provide a mechanism for accessing software updates through the product interface.
Review your product documentation to identify the recommended software update method.
•To download product updates, go to either of the following:
Hewlett Packard Enterprise Support Center
www.hpe.com/support/hpesc
Hewlett Packard Enterprise Support Center: Software downloads
www.hpe.com/support/downloads
Software Depot
www.hpe.com/support/softwaredepot
•To subscribe to eNewsletters and alerts:
www.hpe.com/support/e-updates
•To view and update your entitlements, and to link your contracts and warranties with your profile, go to the
Hewlett Packard Enterprise Support Center More Information on Access to HP Support Materials page:
www.hpe.com/support/AccessToSupportMaterials
IMPORTANT: Access to some updates might require product entitlement when accessed through the Hewlett
Packard Enterprise Support Center. You must have an HPE Passport set up with relevant entitlements.
Customer self repair
Hewlett Packard Enterprise customer self repair (CSR) programs allow you to repair your product. If a CSR part needs to
be replaced, it will be shipped directly to you so that you can install it at your convenience. Some parts do not qualify for
CSR. Your Hewlett Packard Enterprise authorized service provider will determine whether a repair can be accomplished
by CSR.
For more information about CSR, contact your local service provider or go to the CSR website:
www.hpe.com/support/selfrepair
Remote support
Remote support is available with supported devices as part of your warranty or contractual support agreement. It
provides intelligent event diagnosis, and automatic, secure submission of hardware event notifications to Hewlett
Packard Enterprise, which will initiate a fast and accurate resolution based on your product’s service level. Hewlett
Packard Enterprise strongly recommends that you register your device for remote support.
If your product includes additional remote support details, use search to locate that information.
Remote support and Proactive Care information
HPE Get Connected
www.hpe.com/services/getconnected
HPE Proactive Care services
www.hpe.com/services/proactivecare
HPE Proactive Care service: Supported products list
Warranty information
To view the warranty for your product, see the Safety and Compliance Information for Server, Storage, Power,
Networking, and Rack Products document, available at the Hewlett Packard Enterprise Support Center:
Regulatory information
To view the regulatory information for your product, view the Safety and Compliance Information for Server, Storage,
Power, Networking, and Rack Products, available at the Hewlett Packard Enterprise Support Center:
Hewlett Packard Enterprise is committed to providing our customers with information about the chemical substances in
our products as needed to comply with legal requirements such as REACH (Regulation EC No 1907/2006 of the European
Parliament and the Council). A chemical information report for this product can be found at:
www.hpe.com/info/reach
For Hewlett Packard Enterprise product environmental and safety information and compliance data, including RoHS and
REACH, see:
www.hpe.com/info/ecodata
For Hewlett Packard Enterprise environmental information, including company programs, product recycling, and energy
efficiency, see:
www.hpe.com/info/environment
Documentation feedback
Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help us improve the
documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hpe.com).
When submitting your feedback, include the document title, part number, edition, and publication date located on the
front cover of the document. For online help content, include the product name, product version, help edition, and
publication date located on the legal notices page.
A LED descriptions
Front panel LEDs
HPE MSA 1050 models support small form factor (SFF) and large form factor (LFF) enclosures. The SFF chassis,
configured with 24 2.5" SFF disks, is used as a controller enclosure or drive enclosure. The LFF chassis, configured with 12
3.5" LFF disks, is used as either a controller enclosure or drive enclosure.
Enclosure bezel
The MSA 1050 enclosures are equipped with a removable bezel designed to cover the front panel during enclosure
operation. The bezel assembly consists of a main body subassembly and two ear flange subassemblies, which attach the
bezel to the left and right ear flanges of the 2U enclosure. See Figure 1 (page 10) for front view display of the bezel.
Enclosure bezel attachment
Orient the enclosure bezel to align its back side with the front face of the enclosure as shown in Figure 21. Face the front
of the enclosure, and while supporting the base of the bezel—while grasping the left and right ear covers—position the
bezel such that the mounting sleeves within the integrated ear covers align with the ball studs on the ear flanges (see
Figure 22). Gently push-fit the bezel onto the ball studs to securely attach it to the front of the enclosure.
Figure 22 Detail views of enclosure ear cover mounting sleeves
Enclosure bezel removal
TIP: Please refer to Figure 21 (bezel front) and Figure 22 (bezel back) on page 58 before removing the bezel from the
enclosure front panel.
You may need to remove the bezel to access front panel components such as disk drives and ear kits. Although disk drive
LEDs are not visible when the bezel is attached, you can monitor disk behavior from the management interfaces (see
“Fault isolation methodology” (page 39) for more information about using LEDs together with event notification, the CLI,
and the SMU for managing the storage system).
While facing the front of the enclosure, grasp the left and right ear covers, such that your fingers cup the bottom of each
ear cover, with thumb at the top of each cover. Gently pull the top of the bezel while applying slight inward pressure
below, to release the bezel from the ball studs.
NOTE: The bezel should be attached to the enclosure during operation to protect ear circuitry. To reattach the bezel to
the enclosure front panel, follow the instructions provided in “Enclosure bezel attachment” (page 58).
MSA 1050 Array SFF or supported 24-drive expansion enclosure

(Figure 23 annotations: disk slots are numbered 1–24; integers on disks indicate the drive slot numbering sequence. The enlarged detail view shows LED icons on the bezel that correspond to chassis LEDs; ball studs (two per ear flange) connect to the LED light pipes in the bezel or ear cover. Callout numbers correspond to the chassis LED descriptions in the table below.)

Figure 23 LEDs: MSA 1050 Array SFF or supported 24-drive expansion enclosure: front panel

LED | Description | Definition
1 | Enclosure ID | Green — On. Enables you to correlate the enclosure with logical views presented by management software. Sequential enclosure ID numbering of controller enclosures begins with the integer 1. The enclosure ID for an attached drive enclosure is nonzero. Off — Identity LED off.
5 | Heartbeat | Green — The enclosure is powered on with at least one power supply operating normally. Off — Both power supplies are off; the system is powered off.
6 | Fault ID | Amber — Fault condition exists. The event has been identified, but the problem needs attention. Off — No fault condition exists.
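As a hedged illustration of correlating this LED with the logical view (verify the command against the CLI Reference Guide for your firmware), the CLI can list the enclosures and the IDs that the management software has assigned:

# show enclosures

The enclosure ID reported for each enclosure should match the value indicated by the Enclosure ID LED on the front panel.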
MSA 1050 Array LFF or supported 12-drive expansion enclosure
(Figure annotations: disk slots are numbered 1–12; integers on disks indicate the drive slot numbering sequence. The enlarged detail view shows LED icons on the bezel that correspond to chassis LEDs; ball studs (two per ear flange) connect to the LED light pipes in the bezel or ear cover. A companion detail view compares the left ear cover, the LFF 12-drive right ear cover, and the SFF 24-drive right ear cover. Callout numbers correspond to the chassis LED descriptions in the table below.)
LED | Description | Definition
1 | Enclosure ID | Green — On. Enables you to correlate the enclosure with logical views presented by management software. Sequential enclosure ID numbering of controller enclosures begins with the integer 1. The enclosure ID for an attached drive enclosure is nonzero.
Figure 26 LEDs: Disk drive combinations — enclosure front panel

Online/Activity LED (green) | Fault/UID LED (amber/blue) | Description
On | Off | Normal operation. The disk drive is online, but it is not currently active.
Blinking irregularly | Off | The disk drive is active and operating normally.
Off | Amber; blinking regularly (1 Hz) | Offline: the disk is not being accessed. A predictive failure alert may have been received for this device. Further investigation is required.
On | Amber; blinking regularly (1 Hz) | Online: possible I/O activity. A predictive failure alert may have been received for this device. Further investigation is required.
Blinking irregularly | Amber; blinking regularly (1 Hz) | The disk drive is active, but a predictive failure alert may have been received for this disk. Further investigation is required.
Off | Amber; solid¹ | Offline: no activity. A failure or critical fault condition has been identified for this disk.
Off | Blue; solid | Offline: the disk drive has been selected by a management application such as the SMU.
On or blinking | Blue; solid | The controller is driving I/O to the disk, and it has been selected by a management application such as the SMU.
Blinking regularly (1 Hz) | Off | CAUTION: Do not remove the disk drive. Removing a disk may terminate the current operation and cause data loss. The disk is reconstructing.
Off | Off | Either there is no power, the drive is offline, or the drive is not configured.

¹ This Fault/UID state can indicate that the disk is a leftover. The fault may involve metadata on the disk rather than the disk itself. See the Clearing disk metadata topic in the SMU Reference Guide or online help.
The diagram and table below display and identify important component items comprising the rear panel layout of the
MSA 1050 controller enclosure. The example configuration shown in Figure 27 uses FC SFPs. Diagrams and tables on the
following pages further describe rear panel LED behavior for component field-replaceable units.
1 AC Power supplies [see Figure 31 (page 67)]
2 Controller module A [see Figure 28 (page 63)]
3 Controller module B [see Figure 28 (page 63)]
4 Host ports: used for host connection or replication
5 CLI port (USB - Type B)
6 Service port 2 (used by service personnel only)
7 Reserved for future use
8 Network management port
9 Service port 1 (used by service personnel only)
10 Disabled button (used by engineering only) (Stickers shown covering the openings)
11 SAS expansion port

Figure 27 MSA 1050 Array: rear panel
A controller enclosure accommodates two power supply FRUs within the two power supply slots (see two instances of
callout 1 above). The controller enclosure accommodates two controller module FRUs of the same type within the I/O
module slots (see callouts 2 and 3 above).
IMPORTANT: MSA 1050 controller enclosures support dual-controller configuration only. If a partner controller fails, the array
fails over and runs on a single controller until redundancy is restored. A controller module must be installed in each IOM
slot to ensure sufficient airflow through the enclosure during operation.
The diagrams with tables that immediately follow provide descriptions of the different controller modules and power
supply modules that can be installed into the rear panel of an MSA 1050 controller enclosure. The controller module for
your product is pre-configured with the appropriate external connector for the selected host interface protocol. Showing
controller modules and power supply modules separately from the enclosure provides improved clarity in identifying the
component items called out in the diagrams and described in the tables.
Descriptions are also provided for optional drive enclosures supported by MSA 1050 controller enclosures for expanding
storage capacity.
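As an illustrative cross-check only (the exact commands and output depend on your firmware; see the CLI Reference Guide), the field-replaceable units installed in these slots can also be inventoried from the CLI:

# show frus
# show controllers

show frus typically reports each FRU with its part and serial numbers, and show controllers reports the status of controller modules A and B.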
MSA 1050 controller module—rear panel LEDs
(Figure 28 identifies rear panel LED locations 1 through 10 on the controller module; separate callouts distinguish the FC LEDs from the iSCSI LEDs.)
LED | Description | Definition
1 | Host 8 Gb FC Link Status/Link Activity¹ | Off — No link detected. Green — The port is connected and the link is up. Blinking green — The link has I/O or replication activity.
2 | Host 10GbE iSCSI Link Status/Link Activity²,³ | Off — No link detected. Green — The port is connected and the link is up. Blinking green — The link has I/O or replication activity.
3 | Network Port Link Active Status | Off — The Ethernet link is not established, or the link is down. Green — The Ethernet link is up (applies to all negotiated link speeds).
4 | Network Port Link Speed | Off — Link is up at 10/100base-T negotiated speeds. Amber — Link is up and negotiated at 1000base-T.
5 | OK to Remove | Off — The controller module is not prepared for removal. Blue — The controller module is prepared for removal.
6 | Unit Locator | Off — Normal operation. Blinking white — Physically identifies the controller module.
7 | FRU OK | Off — Controller module is not OK. Blinking green — System is booting. Green — Controller module is operating normally.
8 | Fault/Service Required | Amber — A fault has been detected or a service action is required. Blinking amber — Hardware-controlled power-up or a cache flush or restore error.
9 | Cache Status | Green — Cache contains unwritten data and operation is normal. The unwritten information can be log or debug data that remains in the cache, so a Green cache status LED does not, by itself, indicate that any user data is at risk or that any action is necessary. Off — In a working controller, cache is clean (contains no unwritten data). This is an occasional condition that occurs while the system is booting. Blinking green — A CompactFlash flush or cache self-refresh is in progress, indicating cache activity. See also Cache Status LED details.
10 | Expansion Port Status | Off — The port is empty or the link is down. On — The port is connected and the link is up.

¹ FC SFPs must be qualified 8 Gb options as described in the QuickSpecs. An 8 Gb/s SFP can run at 8 Gb/s, 4 Gb/s, or auto-negotiate its link speed.
² 10GbE iSCSI SFPs must be qualified 10GbE options. These, and qualified 10GbE iSCSI DAC cables are described in the QuickSpecs.
³ When powering up and booting, iSCSI LEDs will be on/blinking momentarily, then they will switch to the mode of operation.
(Figure callouts: rear panel LED locations 1 through 10 on a controller module configured for 1 Gb iSCSI; separate callouts distinguish the FC LEDs from the iSCSI LEDs.)

LED | Description | Definition
1 | Not used in example¹,⁴
2 | Host 1 Gb iSCSI Link Status/Link Activity²,³ | Off — No link detected. Green — The port is connected and the link is up; or the link has I/O or replication activity.
3 | Network Port Link Active Status | Off — The Ethernet link is not established, or the link is down. Green — The Ethernet link is up (applies to all negotiated link speeds).
4 | Network Port Link Speed | Off — Link is up at 10/100base-T negotiated speeds. Amber — Link is up and negotiated at 1000base-T.
5 | OK to Remove | Off — The controller module is not prepared for removal. Blue — The controller module is prepared for removal.
6 | Unit Locator | Off — Normal operation. Blinking white — Physically identifies the controller module.
7 | FRU OK | Off — Controller module is not OK. Blinking green — System is booting. Green — Controller module is operating normally.
8 | Fault/Service Required | Amber — A fault has been detected or a service action is required. Blinking amber — Hardware-controlled power-up or a cache flush or restore error.
9 | Cache Status | Green — Cache contains unwritten data and operation is normal. The unwritten information can be log or debug data that remains in the cache, so a Green cache status LED does not, by itself, indicate that any user data is at risk or that any action is necessary. Off — In a working controller, cache is clean (contains no unwritten data). This is an occasional condition that occurs while the system is booting. Blinking green — A CompactFlash flush or cache self-refresh is in progress, indicating cache activity. See also Cache Status LED details.
10 | Expansion Port Status | Off — The port is empty or the link is down. On — The port is connected and the link is up.

¹ FC SFPs must be qualified 8 Gb options as described in the QuickSpecs. An 8 Gb/s SFP can run at 8 Gb/s, 4 Gb/s, or auto-negotiate its link speed.
² 1 Gb SFPs must be qualified RJ-45 iSCSI options as described in the QuickSpecs. The 1 Gb iSCSI mode does not support an iSCSI optic option.
³ When powering up and booting, iSCSI LEDs will be on/blinking momentarily, then they will switch to the mode of operation.
⁴ The FC SFP is not shown in this example [see Figure 28 (page 63)].
MSA 1050 SAS controller module—rear panel LEDs

LED | Description | Definition
1 | 6/12 Gb Host Link Status¹ | Off — No link detected. Green — The port is connected and the link is up.
2 | 6/12 Gb Host Link Activity¹ | Off — The link is idle. Blinking green — The link has I/O activity.
3 | Network Port Link Active Status | Off — The Ethernet link is not established, or the link is down. Green — The Ethernet link is up (applies to all negotiated link speeds).
4 | Network Port Link Speed | Off — Link is up at 10/100base-T negotiated speeds. Amber — Link is up and negotiated at 1000base-T.
5 | OK to Remove | Off — The controller module is not prepared for removal. Blue — The controller module is prepared for removal.
6 | Unit Locator | Off — Normal operation. Blinking white — Physically identifies the controller module.
7 | FRU OK | Off — Controller module is not OK. Blinking green — System is booting. Green — Controller module is operating normally.
8 | Fault/Service Required | Amber — A fault has been detected or a service action is required. Blinking amber — Hardware-controlled power-up or a cache flush or restore error.
9 | Cache Status | Green — Cache contains unwritten data and operation is normal. The unwritten information can be log or debug data that remains in the cache, so a Green cache status LED does not, by itself, indicate that any user data is at risk or that any action is necessary. Off — In a working controller, cache is clean (contains no unwritten data). This is an occasional condition that occurs while the system is booting. Blinking green — A CompactFlash flush or cache self-refresh is in progress, indicating cache activity. See also Cache Status LED details.
10 | Expansion Port Status | Off — The port is empty or the link is down. On — The port is connected and the link is up.

¹ See the qualified HD mini-SAS host cable options described in the QuickSpecs.
NOTE: Once a Link Status LED is lit, it remains so even if the controller is shut down via the SMU or CLI.
When a controller is shut down or otherwise rendered inactive, its Link Status LED remains illuminated, falsely
indicating that the controller can communicate with the host. Although a link exists between the host and the chip on the
controller, the controller is not communicating with the chip. To reset the LED, the controller must be properly
power-cycled [see "Powering on/powering off" (page 21)].
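For reference, the sketch below shows one way to prepare the controllers for the required power cycle by first shutting them down from the CLI; it is illustrative only, and the supported procedure is the one described in "Powering on/powering off" (page 21) and the CLI Reference Guide:

# shutdown both

After the shutdown completes, remove power from the enclosure and then restore it; the power cycle is what clears the stale Link Status LED.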
Cache Status LED details
Power on/off behavior
The storage enclosure's unified CPLD provides integrated Power Reset Management (PRM) functions. During power on,
the discrete power-on sequencing of internal components is reflected by the blinking patterns displayed by the
Cache Status LED (see Table 27).
Table 27 Cache Status LED – power on behavior

Display state | Component | Blink pattern
0 | VP | On 1/Off 7
1 | SC | On 2/Off 6
2 | SAS BE | On 3/Off 5
3 | ASIC | On 4/Off 4
4 | Host | On 5/Off 3
5 | Boot | On 6/Off 2
6 | Normal | Solid/On
7 | Reset | Steady
Once the enclosure has completed the power on sequence, the Cache Status LED displays Solid/On (Normal), before
assuming the operating state for cache purposes.
Cache status behavior
If the LED is blinking evenly, a cache flush is in progress. When a controller module loses power and write cache contains
data that has not been written to disk, the supercapacitor pack provides backup power to flush (copy) data from write
cache to CompactFlash memory. When cache flush is complete, the cache transitions into self-refresh mode.
If the LED is blinking slowly with brief flashes, the cache is in self-refresh mode. In self-refresh mode, if primary power is
restored before the backup power is depleted (3–30 minutes, depending on various factors), the system boots, finds data
preserved in cache, and writes it to disk. This means the system can be operational within 30 seconds, and before the
typical host I/O time-out of 60 seconds, at which point system failure would cause host-application failure. If primary
power is restored after the backup power is depleted, the system boots and restores data to cache from CompactFlash,
which can take about 90 seconds. The cache flush and self-refresh mechanism is an important data protection feature;
essentially four copies of user data are preserved: one in controller cache and one in CompactFlash of each controller.
The Cache Status LED illuminates solid green during the boot-up process. This behavior indicates the cache is logging all
POSTs, which will be flushed to the CompactFlash the next time the controller shuts down.
CAUTION: If the Cache Status LED illuminates solid green and you wish to shut down the controller, do so from the
user interface so that unwritten data can be flushed to CompactFlash.
Power supply LEDs
Power redundancy is achieved through two independent load-sharing power supplies. In the event of a power supply
failure, or the failure of the power source, the storage system can operate continuously on a single power supply. Greater
redundancy can be achieved by connecting the power supplies to separate circuits. AC power supplies do not have a
power switch. Power supplies are used by controller and drive enclosures.
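Power supply status can also be confirmed without inspecting the LEDs. As a hedged example (verify the command against the CLI Reference Guide for your firmware), the CLI reports each power supply and its health:

# show power-supplies

The status reported for each power supply should correspond to the LED states described in the table below.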
(Callouts 1 and 2 identify the LEDs on the AC power supply module.)

LED | Description | Definition
1 | Input Source Power Good | Green — Power is on and input voltage is normal. Off — Power is off or input voltage is below the minimum threshold.
2 | Voltage/Fan Fault/Service Required | Amber — Output voltage is out of range or a fan is operating below the minimum required RPM. Off — Output voltage is normal.
Figure 31 LEDs: MSA 1050 Storage system enclosure power supply modules
NOTE: See “Powering on/powering off” (page 21) for information about power-cycling enclosures.
LFF and SFF drive enclosures—rear panel layout
MSA 1050 controllers support the 3.5" 12-drive enclosure and the 2.5" 24-drive enclosure for adding storage. The front
panel of the 12-drive enclosure looks identical to the MSA 1050 Array LFF front panel. The front panel of the 24-drive
enclosure looks identical to the MSA 1050 Array SFF front panel. The rear panels of the MSA 2050 LFF Disk Enclosure
(12-drive) and the MSA 2050 SFF Disk Enclosure (24-drive) are identical.
B Specifications and requirements

Safety requirements

Install the system in accordance with the local safety codes and regulations at the facility site. Follow all cautions and
instructions marked on the equipment. Also, refer to the documentation included with your product ship kit.
Site requirements and guidelines
The following sections provide requirements and guidelines that you must address when preparing your site for the
installation.
When selecting an installation site for the system, choose a location not subject to excessive heat, direct sunlight, dust, or
chemical exposure. These conditions can greatly reduce the system's longevity and might void your warranty.
Site wiring and AC power requirements
The following are required for all installations using AC power supplies:
•All AC mains and supply conductors to power distribution boxes for the rack-mounted system must be enclosed in a
metal conduit or raceway when specified by local, national, or other applicable government codes and regulations.
•Ensure that the voltage and frequency of your power source match the voltage and frequency inscribed on the
equipment’s electrical rating label.
•To ensure redundancy, provide two separate power sources for the enclosures. These power sources must be
independent of each other, and each must be controlled by a separate circuit breaker at the power distribution point.
•The system requires a power source with minimal voltage fluctuation. The customer-supplied facility voltage must not
fluctuate by more than ±5 percent; for a nominal 230 VAC source, for example, that is a window of roughly 218.5–241.5 VAC.
The customer facilities must also provide suitable surge protection.
•Site wiring must include an earth ground connection to the AC power source. The supply conductors and power
distribution boxes (or equivalent metal enclosure) must be grounded at both ends.
•Power circuits and associated circuit breakers must provide sufficient power and overload protection. To prevent
possible damage to the AC power distribution boxes and other components in the rack, use an external, independent
power source that is isolated from large switching loads (such as air conditioning motors, elevator motors, and
factory loads).
NOTE: For power requirements, see the QuickSpecs: www.hpe.com/support/MSA1050QuickSpecs. If a website location
has changed, an Internet search for “HPE MSA 1050 quickspecs” will provide a link.
Weight and placement guidelines
Refer to “Physical requirements” (page 70) for detailed size and weight specifications.
•The weight of an enclosure depends on the number and type of modules installed.
•Ideally, use two people to lift an enclosure. However, one person can safely lift an enclosure if its weight is reduced by
removing the power supply modules and disk drive modules.
•Do not place enclosures in a vertical position. Always install and operate the enclosures in a horizontal/level
orientation.
•When installing enclosures in a rack, make sure that any surfaces over which you might move the rack can support
the weight. To prevent accidents when moving equipment, especially on sloped loading docks and up ramps to raised
floors, ensure you have a sufficient number of helpers. Remove obstacles such as cables and other objects from the
floor.
•To prevent the rack from tipping, and to minimize personnel injury in the event of a seismic occurrence, securely
anchor the rack to a wall or other rigid structure that is attached to both the floor and to the ceiling of the room.
Electrical guidelines
•These enclosures work with single-phase power systems having an earth ground connection. To reduce the risk of
electric shock, do not plug an enclosure into any other type of power system. Contact your facilities manager or a
qualified electrician if you are not sure what type of power is supplied to your building.
•Enclosures are shipped with a grounding-type (three-wire) power cord. To reduce the risk of electric shock, always
plug the cord into a grounded power outlet.
•Do not use household extension cords with the enclosures. Not all power cords have the same current ratings.
Household extension cords do not have overload protection and are not meant for use with computer systems.
Ventilation requirements
Refer to “Environmental requirements” (page 71) for detailed environmental requirements.
•Do not block or cover ventilation openings at the front and rear of an enclosure. Never place an enclosure near a
radiator or heating vent. Failure to follow these guidelines can cause overheating and affect the reliability and
warranty of your enclosure.
•Leave a minimum of 15 cm (6 inches) at the front and back of each enclosure to ensure adequate airflow for cooling.
No cooling clearance is required on the sides, top, or bottom of enclosures.
•Leave enough space in front and in back of an enclosure to allow access to enclosure components for servicing.
Removing a component requires a clearance of at least 37 cm (15 inches) in front of and behind the enclosure.
Cabling requirements
•Keep power and interface cables clear of foot traffic. Route cables in locations that protect the cables from damage.
•Route interface cables away from motors and other sources of magnetic or radio frequency interference.
•Stay within the cable length limitations.
Management host requirements
A local management host with at least one USB Type B port connection is recommended for the initial installation and
configuration of a controller enclosure. After you configure one or both of the controller modules with an Internet
Protocol (IP) address, you can use a remote management host on an Ethernet network to configure, manage, and
monitor the storage system.
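For example, a minimal sketch of the initial IP configuration over the USB CLI port is shown below; the addresses are hypothetical, and you should confirm the set network-parameters syntax in the CLI Reference Guide for your firmware:

# set network-parameters ip 10.1.0.101 netmask 255.255.255.0 gateway 10.1.0.1 controller a
# set network-parameters ip 10.1.0.102 netmask 255.255.255.0 gateway 10.1.0.1 controller b
# show network-parameters

Once addresses are assigned, the SMU and CLI become reachable from a remote management host on the Ethernet network.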
NOTE: Connections to this device must be made with shielded cables, grounded at both ends, with metallic RFI/EMI
connector hoods, in order to maintain compliance with FCC Rules and Regulations.
Physical requirements
The floor space at the installation site must be strong enough to support the combined weight of the rack, controller
enclosures, drive enclosures (expansion), and any additional equipment. The site also requires sufficient space for installation,
operation, and servicing of the enclosures, together with sufficient ventilation to allow a free flow of air to all enclosures.
Table 28 and Table 29 list enclosure dimensions and weights. Weights are based on an enclosure having a full
complement of disk drives, two controller or expansion modules, and two power supplies installed. “2U12” denotes the
LFF enclosure (12 disks) and “2U24” denotes the SFF enclosure (24 disks).
Table 29 provides weight data for MSA 1050 controller enclosures and drive enclosures. For information about other HPE
MSA drive enclosures that may be cabled to these systems, check the QuickSpecs at:
www.hpe.com/support/MSA1050QuickSpecs. If a website location has changed, an Internet search for
“HPE MSA 1050 quickspecs” will provide a link.

Table 28 Rackmount enclosure dimensions
Specifications | Rackmount
Height (y-axis), 2U | 8.9 cm (3.5 inches)
Width (x-axis):
•Chassis only | 44.7 cm (17.6 inches)
•Chassis with bezel ear caps | 47.9 cm (18.9 inches)
Depth (z-axis), SFF drive enclosure (2U24):
•Back of chassis ear to controller latch | 50.5 cm (19.9 inches)
•Front of chassis ear to back of cable bend | 57.9 cm (22.8 inches)
Depth (z-axis), LFF drive enclosure (2U12):
•Back of chassis ear to controller latch | 60.2 cm (23.7 inches)
•Front of chassis ear to back of cable bend | 67.1 cm (26.4 inches)
Table 29 Rackmount enclosure weights

Specifications | Rackmount
MSA 1050 Array SFF Enclosure
•Chassis only | 8.6 kg (19.0 lb)
•Chassis with FRUs (no disks)¹,² | 19.9 kg (44.0 lb)
•Chassis with FRUs (including disks)¹,³ | 25.4 kg (56.0 lb)
MSA 1050 Array LFF Enclosure
•Chassis only | 9.9 kg (22.0 lb)
•Chassis with FRUs (no disks)¹,² | 21.3 kg (47.0 lb)
•Chassis with FRUs (including disks)¹,³ | 30.8 kg (68.0 lb)
MSA 2050 SFF Disk Enclosure
•Chassis only | 8.6 kg (19.0 lb)
•Chassis with FRUs (no disks)¹,² | 19.9 kg (44.0 lb)
•Chassis with FRUs (including disks)¹,³ | 25.4 kg (56.0 lb)
MSA 2050 LFF Disk Enclosure
•Chassis only | 9.9 kg (22.0 lb)
•Chassis with FRUs (no disks)¹,² | 21.3 kg (47.0 lb)
•Chassis with FRUs (including disks)¹,³ | 30.8 kg (68.0 lb)

¹ Weights shown are nominal, and subject to variances.
² Weights may vary due to different power supplies, IOMs, and differing calibrations between scales.
³ Weights may vary due to actual number and type of disk drives (SAS or SSD) installed.
Environmental requirements
NOTE: For operating and non-operating environmental technical specifications, see the QuickSpecs at:
www.hpe.com/support/MSA1050QuickSpecs. If a website location has changed, an Internet search for
“HPE MSA 1050 quickspecs” will provide a link.
Electrical requirements
Site wiring and power requirements
Each enclosure has two power supply modules for redundancy. If full redundancy is required, use a separate power
source for each module. The AC power supply unit in each power supply module is auto-ranging and is automatically
configured to an input voltage range from 100–240 VAC with an input frequency of 50–60 Hz. The power supply
modules meet standard voltage requirements for both U.S. and international operation. The power supply modules use
standard industrial wiring with line-to-neutral or line-to-line power connections.
Power cord requirements
Each enclosure is equipped with two power supplies of the same type (both AC). Use two AC power cords that are
appropriate for use in a typical outlet in the destination country. Each power cable connects one of the power supplies to
an independent, external power source. To ensure power redundancy, connect the two suitable power cords to two
separate circuits: for example, to one commercial circuit and one uninterruptible power source (UPS).
IMPORTANT: See the QuickSpecs for information about power cables provided with your MSA 1050 Storage product. If
a website location has changed, an Internet search for “HPE MSA 1050 quickspecs” will provide a link.
C Electrostatic discharge
Preventing electrostatic discharge
To prevent damaging the system, be aware of the precautions you need to follow when setting up the system or handling
parts. A discharge of static electricity from a finger or other conductor may damage system boards or other
static-sensitive devices. This type of damage may reduce the life expectancy of the device.
To prevent electrostatic damage:
•Avoid hand contact by transporting and storing products in static-safe containers.
•Keep electrostatic-sensitive parts in their containers until they arrive at static-protected workstations.
•Place parts in a static-protected area before removing them from their containers.
•Avoid touching pins, leads, or circuitry.
•Always be properly grounded when touching a static-sensitive component or assembly.
Grounding methods to prevent electrostatic discharge
Several methods are used for grounding. Use one or more of the following methods when handling or installing
electrostatic-sensitive parts:
•Use a wrist strap connected by a ground cord to a grounded workstation or computer chassis. Wrist straps are
flexible straps with a minimum of 1 megohm (± 10 percent) resistance in the ground cords. To provide proper ground,
wear the strap snug against the skin.
•Use heel straps, toe straps, or boot straps at standing workstations. Wear the straps on both feet when standing on
conductive floors or dissipating floor mats.
•Use conductive field service tools.
•Use a portable field service kit with a folding static-dissipating work mat.
If you do not have any of the suggested equipment for proper grounding, have an authorized reseller install the part. For
more information on static electricity or assistance with product installation, contact an authorized reseller.
D SAS fan-out cable option
Locate the SAS fan-out cable
Locate the appropriate qualified SAS fan-out cable option for your MSA 1050 SAS controller module. Qualified fan-out
cable options are briefly described in “Cable requirements for MSA 1050 enclosures” (page 18).
NOTE: Qualified SAS fan-out cable options are labeled on the host connection end of each bifurcated cable. Hosts
should be connected to the same ports on both controller modules to align with the usage shown in the SMU.
Check the QuickSpecs for additional information such as supported cable lengths.
See www.hpe.com/support/MSA1050QuickSpecs. If a website location has changed, an Internet search for
“HPE MSA 1050 quickspecs” will provide a link.
A cabling example showing use of SAS fan-out cables is provided in “Connecting direct attach configurations” (page 26).
See also Figure 15 (page 27).
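After cabling, host-port link state can be verified from the CLI as well as from the LEDs. The following is a sketch only; confirm the command and its output format in the CLI Reference Guide:

# show ports

Each SAS host port should report the expected link state on both controller modules, confirming that corresponding fan-out cable legs are connected to the same ports on controllers A and B.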