This document describes initial hardware setup for HPE MSA 1050 controller enclosures, and is intended for use by
storage system administrators familiar with servers and computer networks, network administration, storage system
installation and configuration, storage area network management, and relevant protocols.
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not
be liable for technical or editorial errors or omissions contained herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial
Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial
license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is not responsible for information
outside the Hewlett Packard Enterprise website.
Acknowledgments
Microsoft® and Windows® are U.S. trademarks of the Microsoft group of companies.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Java and Oracle are registered trademarks of Oracle and/or its affiliates.
UNIX® is a registered trademark of The Open Group.
HPE MSA Storage models are high-performance storage solutions combining outstanding performance with high
reliability, availability, flexibility, and manageability. MSA 1050 enclosure models blend economy with utility for scalable
storage applications.
MSA 1050 Storage models
The MSA 1050 enclosures support large form factor (LFF 12-disk) and small form factor (SFF 24-disk) 2U chassis, using
AC power supplies. The MSA 1050 controllers are introduced below.
NOTE: For additional information about MSA 1050 controller modules, see the following subsections:
•“Controller enclosure—rear panel layout” (page 12)
•“MSA 1050 controller module—rear panel components” (page 13)
The MSA 1050 enclosures support virtual storage. For virtual storage, a group of disks with an assigned RAID level is
called a virtual disk group. This guide uses the term disk group for brevity.
MSA 1050 enclosure user interfaces
The MSA 1050 enclosures support the Storage Management Utility (SMU), which is a web-based application for
configuring, monitoring, and managing the storage system. Both the SMU and the command-line interface (CLI) are
briefly described.
•The SMU is the primary web interface to manage virtual storage.
•The CLI enables you to interact with the storage system using command syntax entered via the keyboard or
scripting.
NOTE: For more information about the SMU, see the SMU Reference Guide or online help. For more information about
the CLI, see the CLI Reference Guide. See also “Related MSA documentation” (page 9).
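Because the CLI is scriptable, routine checks can be run non-interactively from a management host. The following is a minimal sketch over SSH; the management IP address and the manage user name are illustrative placeholders, and command names are documented in the CLI Reference Guide:

  ssh manage@10.0.0.2 show versions    # display controller firmware versions (illustrative IP and user)
  ssh manage@10.0.0.2 show disks       # list installed disk drives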
MSA 1050 controllers
The MSA 1050 controller enclosures are pre-configured at the factory to support one of these host interface protocols:
•8 Gb FC
•1 GbE iSCSI
•10 GbE iSCSI
•12 Gb HD mini-SAS
For FC and iSCSI host interfaces, the small form-factor pluggable (SFP transceiver or SFP) connector supporting the
pre-configured host interface protocol is pre-installed in the controller module. MSA 1050 controller enclosures do not
allow you to change the host interface protocol or increase speeds. Always use qualified SFP connectors and cables that
support the host interface protocol as described in the QuickSpecs. See also “Product QuickSpecs” (page 9).
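To confirm the factory-configured protocol and link speed on a given enclosure, the host ports can be queried from the CLI. A minimal sketch (output fields vary by firmware release; see the CLI Reference Guide):

  show ports    # reports port type, configured speed, and link status for each host port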
For the HD mini-SAS host interface, both standard and fan-out cables are supported for host connection. Always use
qualified SAS cable options that support the host interface protocol as described in the QuickSpecs. Host connection for
this controller module is described by cabling diagrams in “Connecting hosts”. Connection information for the SAS
fan-out cable options is provided in “SAS fan-out cable option”.
TIP: See the topic about configuring host ports within the SMU Reference Guide.
Features and benefits
Product features and supported options are subject to change. Online documentation describes the latest product and
product family characteristics, including currently supported features, options, technical specifications, configuration
data, related optional software, and product warranty information.
Product QuickSpecs
Check the QuickSpecs for a complete list of supported servers, operating systems, disk drives, and options. See
www.hpe.com/support/MSA1050QuickSpecs. (If a website location has changed, an Internet search for
“HPE MSA 1050 quickspecs” will provide a link.)
Related MSA documentation
Related support information is provided in the “Support and other resources” chapter. Firmware-related MSA
documentation titles directly pertaining to this guide are provided in the table below.
Table 1 Related MSA firmware documentation

For information about using the Storage Management Utility (SMU) web interface to configure and manage the product, see the HPE MSA 1050/2050 SMU Reference Guide.
For information about using the command-line interface (CLI) to configure and manage the product, see the HPE MSA 1050/2050 CLI Reference Guide.
For information about event codes and recommended actions, see the HPE MSA Event Descriptions Reference Guide.

To access the above MSA documentation, see the Hewlett Packard Enterprise Information Library:
www.hpe.com/support/msa1050
(If a website location has changed, an Internet search for the document title will provide a link.)
NOTE: The table above provides complete titles of MSA firmware documents used with this guide. Within this guide,
references to these documents are abbreviated: the HPE MSA 1050/2050 SMU Reference Guide is cited as the SMU
Reference Guide, and the HPE MSA 1050/2050 CLI Reference Guide is cited as the CLI Reference Guide.
Front panel components
HPE MSA 1050 models support small form factor (SFF) and large form factor (LFF) enclosures. The SFF chassis,
configured with 24 2.5" SFF disks, and the LFF chassis, configured with 12 3.5" LFF disks, are used as either controller
enclosures or drive enclosures.
Supported drive enclosures, used for adding storage, are available in LFF or SFF chassis. The MSA 2050 LFF Disk
Enclosure is the large form factor drive enclosure and the MSA 2050 SFF Disk Enclosure is the small form factor drive
enclosure used for storage expansion.
HPE MSA 1050 models use either an enclosure bezel or traditional ear covers. The 2U bezel assembly comprises left
and right ear covers connected to the bezel body subassembly. A sample bezel is shown below.
Figure 1 Bezel used with MSA 1050 enclosures: front panel
The front panel illustrations that follow show the enclosures with the bezel removed, revealing ear flanges and disk drive
modules. Two sleeves protruding from the backside of each ear cover component of the bezel assembly push-fit onto the
two ball studs shown on each ear flange to secure the bezel. Remove the bezel to access the front panel components.
TIP: See “Enclosure bezel” (page 58) for bezel attachment and removal instructions, and pictorial views.
MSA 1050 Array SFF or supported 24-drive expansion enclosure
1 Enclosure ID LED
2 Disk drive Online/Activity LED
3 Disk drive Fault/UID LED
4 Unit Identification (UID) LED
5 Heartbeat LED
6 Fault ID LED
Figure 2 MSA 1050 Array SFF or supported 24-drive expansion enclosure: front panel
MSA 1050 Array LFF or supported 12-drive expansion enclosure
1 Enclosure ID LED
2 Disk drive Online/Activity LED
3 Disk drive Fault/UID LED
4 Unit Identification (UID) LED
5 Heartbeat LED
6 Fault ID LED
Figure 3 MSA 1050 Array LFF or supported 12-drive expansion enclosure: front panel
Notes:
Integers on disks indicate drive slot numbering sequence.
The enlarged detail view at right shows LED icons from the bezel (left and right ears) that correspond to the chassis LEDs.
Ball studs (two per ear flange) secure the bezel or ear covers to the enclosure.
The detail view locator circle (above right) identifies the ear kit that connects to LED light pipes in the bezel (or ear cover).
NOTE: Either the bezel or the ear covers should be attached to the enclosure front panel to protect ear circuitry.
You can attach either the enclosure bezel or traditional ear covers to the enclosure front panel to protect the ears, and
provide label identification for the chassis LEDs. The bezel and the ear covers use the same attachment mechanism,
consisting of mounting sleeves on the cover back face:
•The enclosure bezel is introduced in Figure 1 (page 10).
•The ear covers are introduced in Figure 22 (page 58).
•The ball studs to which the bezel or ear covers attach are labeled in Figure 2 (page 10) and Figure 3 (page 11).
•Enclosure bezel alignment for attachment to the enclosure front panel is shown in Figure 21 (page 58).
•The sleeves that push-fit onto the ball studs to secure the bezel or ear covers are shown in Figure 22 (page 58).
Disk drives used in MSA 1050 enclosures
MSA 1050 enclosures support LFF/SFF Midline SAS and LFF/SFF Enterprise SAS disks, and LFF/SFF SSDs. For information
about creating disk groups and adding spares using these different disk drive types, see the SMU Reference Guide.
NOTE: In addition to the front views of SFF and LFF disk modules shown in the figures above, see Figure 26 (page 61)
for pictorial views.
Controller enclosure—rear panel layout
The diagram and table below display and identify important component items comprising the rear panel layout of the
MSA 1050 controller enclosure. An enclosure configured with SFPs is shown.
1 AC power supplies
2 Controller module A (see face plate detail figures)
3 Controller module B (see face plate detail figures)
Figure 4 MSA 1050 Array: rear panel
A controller enclosure accommodates a power supply FRU in each of the two power supply slots (see two instances of
callout 1 above). The controller enclosure accommodates two controller module FRUs of the same type within the I/O
module slots (see callouts 2 and 3 above).
IMPORTANT: MSA 1050 controller enclosures support dual-controller only. If a partner controller fails, the array will fail
over and run on a single controller until redundancy is restored. A controller module must be installed in each IOM slot to
ensure sufficient airflow through the enclosure during operation.
The diagrams with tables that immediately follow provide descriptions of the different controller modules and power
supply modules that can be installed into the rear panel of an MSA 1050 controller enclosure. Showing controller modules
and power supply modules separately from the enclosure provides improved clarity in identifying the component items
called out in the diagrams and described in the tables. Descriptions are also provided for optional drive enclosures
supported by MSA 1050 controller enclosures for expanding storage capacity.
NOTE: MSA 1050 controller enclosures support hot-plug replacement of redundant controller modules, fans, power
supplies, disk drives, and I/O modules. Hot-add of drive enclosures is also supported.
MSA 1050 controller module—rear panel components
(Figure legend: callouts 1–8 identify the face plate components listed below; separate LED icon sets apply to FC, 10GbE iSCSI, and 1 Gb iSCSI configurations. In the 1 Gb iSCSI figure, all host ports use 1 Gb RJ-45 SFPs.)
Figure 5 shows an 8 Gb FC or 10GbE iSCSI controller module. The SFPs look identical. Refer to the LEDs that apply to the
specific configuration of your host interface ports.
1 Host ports: used for host connection or replication
2 CLI port (USB Type B)
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network management port
6 Service port 1 (used by service personnel only)
7 Disabled button (used by engineering only; sticker shown covering the opening)
8 SAS expansion port
Figure 5 MSA 1050 controller module face plate (FC or 10GbE iSCSI)
Figure 6 shows a 1 Gb iSCSI (RJ-45) controller module. Its face plate components match the callouts shown for
Figure 5. For more information about host port configuration, see the topic about configuring host ports within the
SMU Reference Guide.
Figure 6 MSA 1050 controller module face plate (1 Gb RJ-45 iSCSI)
1 HD mini-SAS ports: used for host connection
2 CLI port (USB Type B)
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network management port
Figure 7 MSA 1050 SAS controller module face plate (HD mini-SAS)
IMPORTANT: See “Connecting to the controller CLI port” (page 32) for information about enabling the controller
enclosure USB Type B CLI port for accessing the CLI to perform initial configuration tasks.
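One such initial configuration task is setting the management port IP values from the CLI session established through this port. A minimal sketch (all addresses are placeholders; substitute values obtained from your network administrator):

  set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway 192.168.0.1 controller a
  set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway 192.168.0.1 controller b
  show network-parameters    # verify the values before closing the session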
Drive enclosures
Drive enclosure expansion modules attach to MSA 1050 controller modules via the mini-SAS expansion port, allowing addition
of disk drives to the system. MSA 1050 controller enclosures support adding the 6 Gb drive enclosures described below.
LFF and SFF drive enclosure — rear panel layout
MSA 1050 controllers support the MSA 2050 LFF Disk Enclosure and the MSA 2050 SFF Disk Enclosure, which share the
same rear panel layout, as shown below.
Cache
To enable faster data access from disk storage, the following types of caching are performed:
•Write-back caching. The controller writes user data in the cache memory on the module rather than directly to the
drives. Later, when the storage system is either idle or aging—and continuing to receive new I/O data—the controller
writes the data to the drive array.
•Read-ahead caching. The controller detects sequential array access, reads ahead into the next sequence of data, and
stores the data in the read-ahead cache. Then, if the next read access is for cached data, the controller immediately
loads the data into the system memory, avoiding the latency of a disk access.
NOTE: See the SMU Reference Guide for more information about volume cache options.
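As a sketch of how these options are surfaced, per-volume cache settings can be viewed and changed from the CLI. The command name and parameters below follow the MSA CLI family but should be treated as assumptions; confirm the exact syntax for your firmware in the CLI Reference Guide (the volume name Vol0001 is hypothetical):

  set volume-cache-parameters write-policy write-back Vol0001    # assumed syntax: enable write-back caching on a volume
  show cache-parameters Vol0001                                  # assumed syntax: display the volume's cache settings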
Transportable CompactFlash
During a power loss or array controller failure, data stored in cache is saved off to non-volatile memory (CompactFlash).
The data is then written to disk after the issue is corrected. To protect against writing incomplete data to disk, the image
stored on the CompactFlash is verified before committing to disk.
The CompactFlash memory card is located at the midplane-facing end of the controller module as shown below.
Figure 9 MSA 1050 CompactFlash memory card
If one controller fails, then later another controller fails or does not start, and the Cache Status LED is on or blinking, the
CompactFlash will need to be transported to a replacement controller to recover data not flushed to disk (see “Controller
failure” (page 45) for more information).
CAUTION: The CompactFlash memory card should only be removed for transportable purposes. To preserve the
existing data stored in the CompactFlash, you must transport the CompactFlash from the failed controller to the
replacement controller using a procedure outlined in the HPE MSA Controller Module Replacement Instructions shipped
with the replacement controller module. Failure to use this procedure will result in the loss of data stored in the cache
module.
IMPORTANT: In dual controller configurations featuring one healthy partner controller, there is no need to transport
failed controller cache to a replacement controller because the cache is duplicated between the controllers, provided that
volume cache is set to standard on all volumes in the pool owned by the failed controller.
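To confirm that this mirrored-cache protection applies, you can check that each volume's cache optimization remains standard. A hedged sketch using an assumed command from the MSA CLI family; verify against the CLI Reference Guide:

  show cache-parameters    # review cache settings; optimization should read "standard" for mirrored cache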
Supercapacitor pack
To protect RAID controller cache in case of power failure, MSA 1050 controllers are equipped with supercapacitor
technology, in conjunction with CompactFlash memory, built into each controller module to provide extended cache
memory backup time. The supercapacitor pack provides energy for backing up unwritten data in the write cache to the
CompactFlash in the event of a power failure. Unwritten data in CompactFlash memory is automatically committed to
disk media when power is restored. While the cache is being maintained by the supercapacitor, the Cache Status LED
flashes at a rate of 1/10 second on and 9/10 second off.
Upgrading to the MSA 1050
For information about upgrading components for use with MSA controllers, see Upgrading to the HPE MSA
1050/2050/2052.
3 Installing the enclosures
Installation checklist
The following table outlines the steps required to install the enclosures and initially configure the system. To ensure a
successful installation, perform the tasks in the order they are presented.
Table 2 Installation checklist

1. Install the controller enclosure and optional drive enclosures in the rack, and attach the bezel or ear caps. See the
quick start instructions.
2. Connect the controller enclosure and LFF/SFF drive enclosures. See “Connecting controller and drive enclosures”
(page 17). If using the optional Remote Snap feature, also see “Connecting two storage systems to replicate volumes”
(page 28).
3. Connect power cords. See the quick start instructions.
4. Obtain IP values and set management port IP properties on the controller enclosure. See “Obtaining IP values”
(page 33) and “Connecting to the controller CLI port” (page 32), which includes Linux and Windows topics.
5. Sign in to the web-based Storage Management Utility (SMU). See “Getting Started” in the HPE MSA 1050/2050
SMU Reference Guide.
6. Initially configure and provision the storage system using the SMU. See the “Configuring the System” and
“Provisioning the System” topics (SMU Reference Guide or online help).

The SMU is introduced in “Accessing the SMU” (page 38). See the SMU Reference Guide or online help for additional information.
Connecting controller and drive enclosures
MSA 1050 controller enclosures support up to four enclosures (including the controller enclosure). You can cable drive
enclosures of the same type or of mixed LFF/SFF model type.
The firmware supports both straight-through and fault-tolerant SAS cabling. Fault-tolerant cabling allows any drive
enclosure to fail—or be removed—while maintaining access to other enclosures. Straight-through cabling does not
provide the same level of fault-tolerance as fault-tolerant cabling, but does provide some performance benefits as well as
ensuring that all disks are visible to the array. Fault tolerance and performance requirements determine whether to
optimize the configuration for high availability or high performance when cabling. MSA 1050 controller enclosures
support 12 Gb/s disk drives downshifted to 6 Gb/s. Each enclosure has an expansion port using 6 Gb/s SAS lanes. When
connecting multiple drive enclosures, use fault-tolerant cabling to ensure the highest level of fault tolerance.
For example, the illustration on the left in Figure 11 (page 20) shows controller module 1A connected to expansion
module 2A, with a chain of connections cascading down (blue). Controller module 1B is connected to the lower
expansion module (4B) of the last drive enclosure, with connections moving in the opposite direction (green).
Connecting the MSA 1050 controller to the LFF or SFF drive enclosure
The MSA 2050 LFF Disk Enclosure and the MSA 2050 SFF Disk Enclosure can be attached to an MSA 1050 controller
enclosure using supported mini-SAS to mini-SAS cables of 0.5 m (1.64') to 2 m (6.56') length [see Figure 10 (page 20)].
Each drive enclosure provides two 0.5 m (1.64') mini-SAS to mini-SAS cables. Longer cables may be desired or required,
and can be purchased separately.
Cable requirements for MSA 1050 enclosures
IMPORTANT:
•When installing SAS cables to expansion modules, use only supported mini-SAS x4 cables with SFF-8088 connectors
supporting your 6 Gb application.
•See the QuickSpecs for information about which cables are provided with your MSA 1050 products.
www.hpe.com/support/MSA1050QuickSpecs
(If a website location has changed, an Internet search for “HPE MSA 1050 quickspecs” will provide a link.)
•The maximum expansion cable length allowed in any configuration is 2 m (6.56').
•When adding more than two drive enclosures, you may need to purchase additional 1 m or 2 m cables, depending
upon number of enclosures and cabling method used (see QuickSpecs for supported cables):
Spanning 3 or 4 drive enclosures requires 1 m (3.28') cables.
•See the QuickSpecs (link provided above) regarding information about cables supported for host connection:
Qualified Fibre Channel cable options
Qualified 10GbE iSCSI cable options or qualified 10GbE Direct Attach Copper (DAC) cables
Qualified 1 Gb RJ-45 cable options
Qualified HD mini-SAS standard cable and fan-out cable options supporting SFF-8644 and SFF-8088 host
connection [also see “12 Gb SAS protocol” (page 25)]:
–SFF-8644 to SFF-8644 cable option is used for connecting to a 12 Gb/s enabled host.
–SFF-8644 to SFF-8088 cable option is used for connecting to a 6 Gb/s enabled host/switch.
–A bifurcated SFF-8644 to SFF-8644 fan-out cable option is used for connecting to a 12Gb/s enabled host.
–A bifurcated SFF-8644 to SFF-8088 fan-out cable option is used for connecting to a 6Gb/s enabled
host/switch.
NOTE: Using fan-out cables instead of standard cables will double the number of hosts that can be attached to
a single system. Use of fan-out cables will halve the maximum bandwidth available to each host, but overall
bandwidth available to all hosts is unchanged.
See “SAS fan-out cable option” for more information about bifurcated SAS cables.
For additional information concerning cabling of MSA 1050 controllers, visit: www.hpe.com/support/msa1050
NOTE: For clarity, the schematic illustrations of controller and expansion modules shown in this section provide only
relevant details such as expansion ports within the module face plate outline. For detailed illustrations showing all
components, see “Controller enclosure—rear panel layout” (page 12).
IMPORTANT: MSA 1050 controller enclosures support dual-controller only. If a partner controller fails, the array will fail
over and run on a single controller until the redundancy is restored. A controller module must be installed in each IOM
slot to ensure sufficient airflow through the enclosure during operation.
Figure 10 Cabling connections between the MSA 1050 controller and a single drive enclosure
(Figure labels: controller modules A and B with In and Out expansion ports; expansion modules 1A/1B through 4A/4B; enclosures are LFF 12-drive or SFF 24-drive models. In Figure 11, the left panel shows fault-tolerant cabling and the right panel shows straight-through cabling.)
Figure 11 Cabling connections between MSA 1050 controllers and LFF and SFF drive enclosures
The diagram at left (above) shows fault-tolerant cabling of a dual-controller enclosure cabled to either the
MSA 2050 LFF Disk Enclosure or the MSA 2050 SFF Disk Enclosure featuring dual-expansion modules. Controller
module 1A is connected to expansion module 2A, with a chain of connections cascading down (blue). Controller module
1B is connected to the lower expansion module (4B) of the last drive enclosure, with connections moving in the opposite
direction (green). Fault-tolerant cabling allows any drive enclosure to fail—or be removed—while maintaining access to
other enclosures.
The diagram at right (above) shows the same storage components connected using straight-through cabling. Using this
method, if a drive enclosure fails, the enclosures that follow the failed enclosure in the chain are no longer accessible
until the failed enclosure is repaired or replaced.
Figure 11 (page 20) provides example diagrams reflecting fault-tolerant (left) and straight-through (right) cabling for the
maximum number of supported MSA 1050 enclosures (four enclosures including the controller enclosure).
IMPORTANT: For comprehensive configuration options and associated illustrations, refer to the HPE MSA 1050 Cable
Configuration Guide.
Testing enclosure connections
NOTE: Once the power-on sequence for enclosures succeeds, the storage system is ready to be connected to hosts, as
described in “Connecting the enclosure to data hosts” (page 23).
Powering on/powering off
Before powering on the enclosure for the first time:
•Install all disk drives in the enclosure so the controller can identify and configure them at power-up.
•Connect the cables and power cords to the enclosures as explained in the quick start instructions.
NOTE: Power supplies used in MSA 1050 enclosures are switchless. The MSA 1050 controller enclosures and drive
enclosures are equipped with AC power supplies that do not have power switches; they power on when connected to a
power source, and they power off when disconnected.
•When powering up, make sure to power up the enclosures and associated host in the following order:
Drive enclosures first
This ensures that disks in each drive enclosure have enough time to completely spin up before being scanned by
the controller modules within the controller enclosure.
While enclosures power up, their LEDs blink. After the LEDs stop blinking—if no LEDs on the front and back of
the enclosure are amber—the power-on sequence is complete, and no faults have been detected. See “LED
descriptions” (page 58) for descriptions of LED behavior.
Controller enclosure next
Depending upon the number and type of disks in the system, it may take several minutes for the system to
become ready.
Hosts last (if powered down for maintenance purposes)
TIP: When powering off, you will reverse the order of steps used for powering on.
IMPORTANT: See “Power cord requirements” (page 72) and the QuickSpecs for more information about power cords
supported by MSA 1050 enclosures.
Enclosures equipped with switchless power supplies rely on the power cord for power cycling. Connecting the cord from
the power supply's power cord connector to the appropriate power source powers the unit on; disconnecting the cord
from the power source powers it off.
Figure 12 AC power supply
AC power cycle
To power on the system:
1. Obtain a suitable AC power cord for each AC power supply that will connect to a power source.
2. Plug the power cord into the power cord connector on the back of the drive enclosure (see Figure 12). Plug the other
end of the power cord into the rack power source. Wait several seconds to allow the disks to spin up.
Repeat this sequence for each power supply within each drive enclosure.
3. Plug the power cord into the power cord connector on the back of the controller enclosure (see Figure 12). Plug the
other end of the power cord into the rack power source.
Repeat the sequence for the controller enclosure’s other switchless power supply.
To power off the system:
1. Stop all I/O from hosts to the system [see “Stopping I/O” (page 41)].
2. Shut down both controllers using either method described below, then proceed to step 3:
Use the SMU to shut down both controllers, as described in the online help and web-posted HPE MSA 1050/2050
SMU Reference Guide.
Use the CLI to shut down both controllers, as described in the HPE MSA 1050/2050 CLI Reference Guide.
3. Disconnect the power cord female plug from the power cord connector on the power supply module.
Perform this step for each power supply module (controller enclosure first, followed by drive enclosures).
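For the CLI method in step 2, a minimal sketch of the shutdown command (issue it from the management host before removing power; see the CLI Reference Guide for options):

  shutdown both    # flush cache and halt both Storage Controllers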
4 Connecting hosts
Host system requirements
Data hosts connected to HPE MSA 1050 arrays must meet requirements described herein. Depending on your system
configuration, data host operating systems may require that multi-pathing is supported.
If fault-tolerance is required, then multi-pathing software may be required. Host-based multi-path software should be
used in any configuration where two logical paths between the host and any storage volume may exist at the same time.
This would include most configurations where there are multiple connections to the host or multiple connections
between a switch and the storage.
•Use native Microsoft MPIO DSM support with Windows Server 2016 and Windows Server 2012. Use either the
Server Manager or the command-line interface (mpclaim CLI tool) to perform the installation. Refer to the following
web sites for information about using Windows native MPIO DSM:
http://support.microsoft.com
http://technet.microsoft.com (search the site for “multipath I/O overview”)
•Use the HPE Multi-path Device Mapper for Linux Software with Linux servers. To download the appropriate device
mapper multi-path enablement kit for your specific enterprise Linux operating system, go to
www.hpe.com/storage/spock
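Once the multipath software is installed and the hosts are cabled, verify that each presented volume is reachable over more than one path. A minimal Linux sketch (device names will differ per host; Windows hosts can use the mpclaim tool mentioned above):

  multipath -ll    # list each multipath device with its active and enabled paths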
Connecting the enclosure to data hosts
A host identifies an external port to which the storage system is attached. The external port may be a port in an I/O
adapter (such as an FC HBA) in a server. Cable connections vary depending on configuration. Common cable
configurations are shown in this section. A list of supported configurations is available on the Hewlett Packard Enterprise
site at: www.hpe.com/support/msa1050
See also the following documents:
•HPE MSA 1050 Quick Start Instructions
•HPE MSA 1050 Cable Configuration Guide
These documents provide installation details and describe supported direct attach, switch-connect, and storage
expansion configuration options for MSA 1050 products. For specific information about qualified host cabling options,
see “Cable requirements for MSA 1050 enclosures” (page 18).
MSA 1050 Storage host interface protocols
The small form-factor pluggable (SFP transceiver or SFP) connectors used in pre-configured host ports of FC and iSCSI
MSA 1050 models are further described in the subsections below. Also see “MSA 1050 Storage models” (page 8) for more
information concerning use of these host ports.
NOTE: MSA 1050 FC and iSCSI controllers support the optionally-licensed Remote Snap replication feature. Remote
Snap supports FC and iSCSI host interface protocols for replication. Use the SMU or CLI commands to create and view
replication sets.
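As a sketch of the CLI side, replication sets can be listed once they are created (FC or iSCSI models with a valid Remote Snap license only; see the CLI Reference Guide for the create replication-set syntax):

  show replication-sets    # view configured replication sets and their status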
MSA 1050 SAS models use high-density mini-SAS (Serial Attached SCSI) interface protocol for host connection. These
models do not support Remote Snap replication.
Fibre Channel protocol
The MSA 1050 controller enclosures support two controller modules using the Fibre Channel interface protocol for host
connection. Each controller module provides two host ports designed for use with an FC SFP supporting data rates up to
8 Gb/s. MSA 1050 FC controllers can also be cabled to support the optionally-licensed Remote Snap replication feature
via the FC ports.
The MSA 1050 FC controllers support Fibre Channel Arbitrated Loop (public or private) or point-to-point topologies.
Loop protocol can be used in a physical loop or in a direct connection between two devices. Point-to-point protocol is
used to connect to a fabric switch. Point-to-point protocol can also be used for direct connection. See the
set host-parameters command within the CLI Reference Guide for command syntax and details about connection mode
parameter settings relative to supported link speeds.
Fibre Channel ports are used in either of two capacities:
•To connect two storage systems through a Fibre Channel switch for use of Remote Snap replication.
•For attachment to FC hosts directly, or through a switch used for the FC traffic.
The first usage option requires valid licensing for the Remote Snap replication feature, whereas the second option
requires that the host computer supports FC and optionally, multipath I/O.
TIP: Use the SMU Configuration Wizard to set FC port speed. Within the SMU Reference Guide, see “Using the
Configuration Wizard” and scroll to FC port options. Use the set host-parameters CLI command to set FC port options,
and use the show ports CLI command to view information about host ports.
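For example, a hedged sketch of these commands (the port identifiers and the 8g speed value are illustrative; confirm parameter names in the CLI Reference Guide):

  set host-parameters speed 8g ports a1,b1    # assumed syntax: set the FC link speed on ports A1 and B1
  show ports                                  # verify negotiated speed and topology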
10GbE iSCSI protocol
The MSA 1050 controller enclosures support two controller modules using the Internet SCSI interface protocol for host
connection. Each controller module provides two host ports designed for use with a 10GbE iSCSI SFP or approved DAC
cable supporting data rates up to 10 Gb/s, using either one-way or mutual CHAP (Challenge-Handshake Authentication
Protocol).
TIP: See the topics about configuring CHAP, and CHAP and replication in the SMU Reference Guide.
TIP: Use the SMU Configuration Wizard to set iSCSI port options. Within the SMU Reference Guide, see “Using the
Configuration Wizard” and scroll to iSCSI port options. Use the set host-parameters CLI command to set iSCSI port
options, and use the show ports CLI command to view information about host ports.
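For example, a hedged sketch of assigning an IP address to an iSCSI host port (the address, netmask, and port identifier are placeholders; confirm parameter names in the CLI Reference Guide):

  set host-parameters ip 10.10.10.100 netmask 255.255.255.0 ports a1    # assumed syntax: assign iSCSI port A1 its IP address
  show ports                                                            # verify iSCSI port addresses and link status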
The 10GbE iSCSI ports are used in either of two capacities:
•To connect two storage systems through a switch for use of Remote Snap replication.
•For attachment to 10GbE iSCSI hosts directly, or through a switch used for the 10GbE iSCSI traffic.
The first usage option requires valid licensing for the Remote Snap replication feature, whereas the second option
requires that the host computer supports Ethernet, iSCSI, and optionally, multipath I/O.
1 Gb iSCSI protocol
The MSA 1050 controller enclosures support two controller modules using the Internet SCSI interface protocol for host
port connection. Each controller module provides two iSCSI host ports configured with an RJ-45 SFP supporting data
rates up to 1 Gb/s, using either one-way or mutual CHAP.
TIP: See the topics about configuring CHAP, and CHAP and replication in the SMU Reference Guide.
TIP: Use the SMU Configuration Wizard to set iSCSI port options. Within the SMU Reference Guide, see “Using the
Configuration Wizard” and scroll to iSCSI port options. Use the set host-parameters CLI command to set iSCSI port
options, and use the show ports CLI command to view information about host ports.