Contents
1 Before you begin.......................................................................................................................... 6
Unpack the enclosure........................................................................................................................................................... 6
Rack system safety precautions....................................................................................................................................9
Planning for installation........................................................................................................................................................10
Preparing for installation..................................................................................................................................................... 10
Preparing the site and host server...............................................................................................................................10
Requirements for rackmount installation..................................................................................................................... 11
Disk drive module.................................................................................................................................................................. 11
Drive carrier module in 2U chassis................................................................................................................................ 11
Drive status indicators................................................................................................................................................... 12
DDIC in a 5U enclosure..................................................................................................................................................13
Populating drawers with DDICs..........................................................................................................................................14
2 Mount the enclosures in the rack................................................................................................. 15
Install the 2U enclosure....................................................................................................................................................... 15
Install the 2U enclosure front bezel............................................................................................................................. 16
Install the 5U84 enclosure...................................................................................................................................................16
Connecting the enclosure to hosts................................................................................................................................... 22
SAS protocol.................................................................................................................................................................. 24
Connecting direct attach configurations....................................................................................................................26
5 Connect power cables and power on the storage system............................................................... 30
Power cable connection..................................................................................................................................................... 30
6 Perform system and storage setup.............................................................................................. 32
Record storage system information..................................................................................................................................32
Using guided setup..............................................................................................................................................................32
Web browser requirements and setup........................................................................................................................32
Access the ME Storage Manager............................................................................................................................... 32
Host system requirements................................................................................................................................................. 40
About multipath configuration..................................................................................................................................... 40
Windows hosts.................................................................................................................................................................... 40
Fibre Channel host server configuration for Windows Server................................................................................ 40
iSCSI host server configuration for Windows Server............................................................................................... 42
SAS host server configuration for Windows Server................................................................................................. 45
Linux hosts........................................................................................................................................................................... 46
Fibre Channel host server configuration for Linux ...................................................................................................46
iSCSI host server configuration for Linux...................................................................................................................48
SAS host server configuration for Linux..................................................................................................................... 51
8 Troubleshooting and problem solving........................................................................................... 59
Locate the service tag........................................................................................................................................................59
Options available for performing basic steps.............................................................................................................. 71
If the enclosure does not initialize................................................................................................................................72
Dealing with hardware faults........................................................................................................................................73
A Cabling for replication................................................................................................................ 76
Connecting two storage systems to replicate volumes................................................................................................. 76
Host ports and replication.................................................................................................................................................. 76
Example cabling for replication...........................................................................................................................................77
Single-controller module configuration for replication.............................................................................................. 77
Dual-controller module configuration for replication................................................................................................. 77
Microsoft Windows drivers........................................................................................................................................... 91
Linux drivers....................................................................................................................................................................91
1
Before you begin
Unpack the enclosure
Examine the packaging for crushes, cuts, water damage, or any other evidence of mishandling during transit. If you suspect that damage has occurred, photograph the package before opening it, for possible future reference. Retain the original packaging materials for use with returns.
•Unpack the 2U storage system and identify the items in your shipment.
NOTE: The cables that are used with the enclosure are not shown in Figure 1. Unpacking the 2U12 and 2U24
enclosures. The rail kit and accessories box is located below the 2U enclosure shipping box lid.
Figure 1. Unpacking the 2U12 and 2U24 enclosures
1. Storage system enclosure  2. Rackmount left rail (2U)
3. Rackmount right rail (2U)  4. Documentation
5. Enclosure front-panel bezel option  6. Rack mount ears
•2U enclosures are shipped with the controller modules or input/output modules (IOMs) installed. Blank drive carrier modules must
be installed in the unused drive slots.
•For enclosures configured with CNC controller modules, locate the SFP+ transceivers included with the shipment. See SFP+
transceiver for FC/iSCSI ports.
•Unpack the 5U84 storage system and identify the items in your shipment.
NOTE: The cables that are used with the enclosure are not shown in Figure 2. Unpacking the 5U84 enclosure. The rail kit and accessories box is located below the 5U84 enclosure shipping box lid.
Figure 2. Unpacking the 5U84 enclosure
1. Storage system enclosure  2. DDICs (Disk Drive in Carriers)
3. Documentation  4. Rackmount left rail (5U84)
5. Rackmount right rail (5U84)  6. Drawers
•DDICs ship in a separate container and must be installed into the enclosure drawers during product installation. For rackmount
installations, DDICs are installed after the enclosure is mounted in the rack. See Populating drawers with DDICs.
•For enclosures configured with CNC controller modules, locate the SFP+ transceivers included with the shipment. See SFP+
transceiver for FC/iSCSI ports.
CAUTION:
• A 5U enclosure does not ship with DDICs installed, but the rear panel controller modules or IOMs are installed. This partially populated enclosure weighs approximately 64 kg (142 lb). You need a minimum of two people to remove the enclosure from the box.
• When lifting the enclosure, verify that the straps are securely wrapped and buckled and position one person at
each side of the enclosure. Then, grip the straps securely by the loops and lift the enclosure out of the box by
using proper lifting techniques. Place the enclosure in an electrostatically protected area.
Safety guidelines
Always follow these safety guidelines to avoid injury and damage to ME4 Series components.
If you use this equipment in a manner that is not specified by Dell EMC, the protection that is provided by the equipment could be impaired. For your safety, observe the rules that are described in the following sections:
NOTE: See the Dell EMC PowerVault ME4 Series Storage System Getting Started Guide for product safety and regulatory information. Warranty information is included as a separate document.
Safe handling
Dell EMC recommends that only individuals with rack-mounting experience install an enclosure into a rack.
CAUTION: Use this equipment in a manner specified by Dell EMC. Failure to do so may cancel the protection that is provided by the equipment.
• Unplug the enclosure before you move it or if you think that it has become damaged in any way.
• A safe lifting height is 20U.
• Always remove the power cooling modules (PCMs) to minimize weight before you move the enclosure.
• Do not lift the enclosures by the handles on the PCMs—they are not designed to take the weight.
CAUTION: Do not try to lift the enclosure by yourself:
• Fully configured 2U12 enclosures can weigh up to 32 kg (71 lb)
• Fully configured 2U24 enclosures can weigh up to 30 kg (66 lb)
• Fully configured 5U84 enclosures can weigh up to 135 kg (298 lb). An unpopulated enclosure weighs 46 kg (101 lb).
• Use a minimum of two people to lift the 5U84 enclosure from the shipping box and install it in the rack.
Before lifting the enclosure:
• Avoid lifting the enclosure using the handles on any of the CRUs because they are not designed to take the weight.
• Do not lift the enclosure higher than 20U. Use mechanical assistance to lift above this height.
• Observe the lifting hazard label affixed to the storage enclosure.
Safe operation
Operation of the enclosure with modules missing disrupts the airflow and prevents the enclosure from receiving sufficient cooling.
NOTE: For a 2U enclosure, all IOM and PCM slots must be populated. In addition, empty drive slots (bays) in 2U
enclosures must hold blank drive carrier modules. For a 5U enclosure, all controller module, IOM, FCM, and PSU slots
must be populated.
• Follow the instructions in the module bay caution label affixed to the module being replaced.
• Replace a defective PCM with a fully operational PCM within 24 hours. Do not remove a defective PCM unless you
have a replacement model of the correct type ready for insertion.
• Before removal/replacement of a PCM or PSU, disconnect supply power from the module to be replaced. See the
Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
• Follow the instructions in the hazardous voltage warning label affixed to power cooling modules.
CAUTION: 5U84 enclosures only
• To prevent a rack from tipping over, drawer interlocks stop users from opening both drawers simultaneously. Do not
attempt to force open a drawer when the other drawer in the enclosure is already open. In a rack containing more
than one 5U84 enclosure, do not open more than one drawer per rack at a time.
• Observe the hot surface label that is affixed to the drawer. Operating temperatures inside enclosure drawers can
reach 60°C (140°F). Take care when opening drawers and removing DDICs.
• Due to product acoustics, ear protection should be worn during prolonged exposure to the product in operation.
• Observe the drawer caution label. Do not use open drawers to support any other objects or equipment.
Electrical safety
•The 2U enclosure must be operated from a power supply input voltage range of 100–240 VAC, 50/60Hz.
•The 5U enclosure must be operated from a power supply input voltage range of 200–240 VAC, 50/60Hz.
•Provide a power source with electrical overload protection to meet the requirements in the technical specification.
•The power cord must have a safe electrical grounding connection. Check the grounding connection of the enclosure before you
switch on the power supply.
NOTE: The enclosure must be grounded before applying power.
• The plug on the power supply cord is used as the main disconnect device. Ensure that the socket outlets are located near the equipment and are accessible.
• 2U enclosures are intended to operate with two PCMs.
• 5U84 enclosures are intended to operate with two PSUs.
• Follow the instructions that are shown on the power-supply disconnection caution label that is affixed to power cooling modules.
CAUTION: Do not remove the covers from the enclosure or any of the modules as there is a danger of electric shock
inside.
Rack system safety precautions
The following safety requirements must be considered when the enclosure is mounted in a rack:
•The rack construction must support the total weight of the installed enclosures. The design should incorporate stabilizing features to
prevent the rack from tipping or being pushed over during installation or in normal use.
•When loading a rack with enclosures, fill the rack from the bottom up, and empty the rack from the top down.
•Always remove all power supply modules to minimize weight before loading the enclosure into the rack.
•Do not try to lift the enclosure by yourself.
CAUTION: To prevent the rack from falling over, never move more than one enclosure out of the cabinet at any one time.
•The system must be operated with low-pressure rear exhaust installation. The back pressure that is created by rack doors and
obstacles must not exceed 5 pascals (0.5 mm water gauge).
•The rack design should take into consideration the maximum operating ambient temperature for the enclosure. The maximum
operating temperature is 35ºC (95ºF) for controllers and 40ºC (104ºF) for expansion enclosures.
•The rack should have a safe electrical distribution system. It must provide overcurrent protection for the enclosure. Make sure that the
rack is not overloaded by the total number of enclosures that are installed in the rack. Consideration should be given to the electrical
power consumption rating shown on the nameplate.
•The electrical distribution system must provide a reliable connection for each enclosure in the rack.
•Each PSU or PCM in each enclosure has a grounding leakage current of 1.0 mA. The design of the electrical distribution system must
take into consideration the total grounding leakage current from all the PSUs/PCMs in all the enclosures. The rack requires labeling
with “High Leakage Current. Grounding connection essential before connecting supply.”
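For example, a rack containing four enclosures, each with two PSUs or PCMs, has a total grounding leakage current of 4 × 2 × 1.0 mA = 8 mA, and the electrical distribution design and rack labeling must account for that total.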
Installation checklist
This section shows how to plan for and successfully install your enclosure system into an industry standard 19-inch rack cabinet.
CAUTION: Use only the power cables supplied when installing the storage system.
The following table outlines the steps that are required to install the enclosures, and initially configure and provision the storage system:
NOTE: To ensure successful installation, perform the tasks in the order presented.
Table 1. Installation checklist
1. Unpack the enclosure. See Unpack the enclosure.
2. Install the controller enclosure and optional expansion enclosures in the rack. (1) See Required tools, Requirements for rackmount installation, Install the 2U enclosure, and Install the 5U84 enclosure.
3. Populate drawers with disks (DDICs) in the 5U84 enclosure; 2U enclosures ship with disks installed. See Populating drawers with DDICs.
4. Cable the optional expansion enclosures. See Connect optional expansion enclosures.
5. Connect the management ports. See Connect to the management network.
6. Cable the controller host ports. See Connecting the enclosure to hosts.
7. Connect the power cords and power on the system. See Power cable connection.
8. Perform system and storage setup. See Using guided setup.
9. Perform host setup: attach the host servers and install the required host software. (2) See Host system requirements, Attach host servers, Windows hosts and Linux hosts, and VMware ESXi hosts.
10. Perform the initial configuration tasks. (3) See Using guided setup.
(1) The environment in which the enclosure operates must be dust-free to ensure adequate airflow.
(2) For more information about hosts, see the About hosts topic in the Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide.
(3) The ME Storage Manager is introduced in Using guided setup. See the Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide or online help for additional information.
Planning for installation
Before beginning the enclosure installation, familiarize yourself with the system configuration requirements.
Table 2. System configuration
• Drive carrier modules (2U front panel): All drive slots must hold either a drive carrier or blank drive carrier module. Empty slots are not allowed. At least one disk must be installed.
• DDIC (5U front panel drawers): Maximum 84 disks are installed (42 disks per drawer). Minimum 28 disks are required. Follow the drawer population rules in Populating drawers with DDICs.
• Power cooling modules (2U rear panel): Two PCMs provide full power redundancy, allowing the system to continue to operate while a faulty PCM is replaced.
• Power supply unit modules (5U rear panel): Two PSUs provide full power redundancy, allowing the system to continue to operate while a faulty PSU is replaced.
• Fan cooling modules (5U rear panel): Five FCMs provide airflow circulation, maintaining all system components below the maximum temperature allowed.
• Controller modules and IOMs (rear panel):
  • One or two controller modules may be installed in 2U12 and 2U24 enclosures.
  • Two controller modules must be installed in 5U84 enclosures.
  • Two IOMs must be installed in 2U12, 2U24, and 5U84 enclosures.
Preparing for installation
NOTE:
• 2U enclosures are delivered with CRUs and all drive carrier modules installed.
• 5U84 enclosures are delivered with CRUs installed; however, DDICs must be installed during system setup.
• 5U84 enclosures require 200–240 VAC for operation. See the Environmental requirements topic in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual for detailed information.
CAUTION: Lifting enclosures:
• A 2U enclosure, including all its component parts, is too heavy for one person to lift and install into the rack cabinet. Two people are required to safely move a 2U enclosure.
• A 5U enclosure, which is delivered without DDICs installed, requires two people to lift it from the box. A mechanical lift is required to hoist the enclosure for positioning in the rack.
Make sure that you wear an effective antistatic wrist or ankle strap and follow conventional ESD precautions when touching modules and components. Do not touch the midplane, motherboard, or module connectors. See Safety guidelines for important preparation requirements and handling procedures to use during product installation.
Preparing the site and host server
Before beginning the enclosure installation, verify that the site where you plan to install your storage system has the following:
•Each redundant power supply module requires power from an independent source or a rack power distribution unit with
Uninterruptible Power Supply (UPS). 2U enclosures use standard AC power and the 5U84 enclosure requires high-line (high-voltage)
AC power.
•A host computer configured with the appropriate software, BIOS, and drives. Contact your supplier for the correct software
configurations.
Enclosure configurations:
Before installing the enclosure, verify the existence of the following:
•Depending upon the controller module: SAS, Fibre Channel (FC), or iSCSI HBA and appropriate switches (if used)
•Qualified cable options for host connection
•One power cord per PCM or PSU
•Rail kit (for rack installation)
Contact your supplier for a list of qualified accessories for use with the enclosure. The accessories box contains the power cords and other
accessories.
Required tools
The following tools are required to install an ME4 Series enclosure:
•Phillips screwdriver
•Torx T20 bit for locks and select CRU replacement
Requirements for rackmount installation
You can install the enclosure in an industry standard 19-inch cabinet capable of holding 2U form factors.
NOTE: See the Dell EMC PowerVault ME4 Series Owner's Manual for front and rear panel product views.
•Minimum depth: 707 mm (27.83") from rack posts to maximum extremity of enclosure (includes rear panel cabling and cable bend radii).
•Weight:
  •Up to 32 kg (71 lb), dependent upon configuration, per 2U enclosure.
  •Up to 128 kg (282 lb), dependent upon configuration, per 5U enclosure.
•The rack should cause a maximum back pressure of 5 pascals (0.5 mm water gauge).
•Before you begin, ensure that you have adequate clearance in front of the rack for installing the rails.
Disk drive module
The ME4 Series Storage System supports different disk drive modules for use in 2U and 5U84 enclosures.
•The disk drive modules that are used in 2U enclosures are referred to as drive carrier modules.
•The disk drive modules that are used in 5U84 enclosures are referred to as Disk Drive in Carrier (DDIC) modules.
Drive carrier module in 2U chassis
The drive carrier module consists of a disk drive that is installed in a carrier module.
•Each 2U12 drive slot holds a single low profile 1.0 in. high, 3.5 in. form factor disk drive in its carrier. The disk drives are horizontal. A
2.5" to 3.5" carrier adapter is available to accommodate 2.5" disk drives.
•Each 2U24 drive slot holds a single low profile 5/8 inch high, 2.5 in. form factor disk drive in its carrier. The disk drives are vertical.
The carriers have mounting locations for:
•Direct dock SAS drives.
A sheet steel carrier holds each drive; the carrier provides thermal conduction, protection from radio frequency and electromagnetic induction, and physical protection for the drive.
The front cap also has an ergonomic handle, which provides the following functions:
•Secure location of the carrier into and out of drive slots.
•Positive spring-loading of the drive/midplane connector.
The carrier can use this interface:
•Dual path direct dock Serial Attached SCSI.
The following figures display the supported drive carrier modules:
Figure 3. Dual path LFF 3.5" drive carrier module
Figure 4. Dual path SFF 2.5" drive carrier module
Figure 5. 2.5" to 3.5" hybrid drive carrier adapter
Drive status indicators
Green and amber LEDs on the front of each drive carrier module indicate disk drive status.
Blank drive carrier modules
Blank drive carrier modules, also known as drive blanks, are provided in 3.5" (2U12) and 2.5" (2U24) form factors. They must be installed
in empty disk slots to create a balanced air flow.
DDIC in a 5U enclosure
Each disk drive is installed in a DDIC that enables secure insertion of the disk drive into the drawer with the appropriate SAS carrier transition card.
The DDIC features a slide latch button with directional arrow. The slide latch enables you to install and secure the DDIC into the disk slot
within the drawer. The slide latch also enables you to disengage the DDIC from its slot, and remove it from the drawer. The DDIC has a
single Drive Fault LED, which illuminates amber when the disk drive has a fault.
The following figure shows a DDIC with a 3.5" disk drive:
Figure 7. 3.5" disk drive in a DDIC
The following figure shows a DDIC with a hybrid drive carrier adapter and a 2.5" disk drive:
Figure 8. 2.5" drive in a 3.5" DDIC with a hybrid drive carrier adapter
Populating drawers with DDICs
The 5U84 enclosure does not ship with DDICs installed. Before populating drawers with DDICs, ensure that you adhere to the following
guidelines:
•The minimum number of disks that are supported by the enclosure is 28, 14 in each drawer.
•DDICs must be added to disk slots in complete rows (14 disks at a time).
•Beginning at the front of each drawer, install DDICs consecutively by number, and alternately between the top drawer and the bottom
drawer. For example, install first at slots 0–13 in the top drawer, and then 42–55 in the bottom drawer. After that, install slots 14–27,
and so on.
•The number of populated rows must not differ by more than one row between the top and bottom drawers.
•Hard disk drives (HDD) and solid-state drives (SSD) can be mixed in the same drawer.
•HDDs installed in the same row should have the same rotational speed.
•DDICs holding 3.5" disks can be intermixed with DDICs holding 2.5" disks in the enclosure. However, each row should be populated
with disks of the same form factor (all 3.5" disks or 2.5" disks).
The following figure shows a drawer that is fully populated with DDICs:
•See Figure 7. 3.5" disk drive in a DDIC for the DDIC holding the 3.5" disk
•See Figure 8. 2.5" drive in a 3.5" DDIC with a hybrid drive carrier adapter for the DDIC holding the 2.5" disk with 3.5" adapter
Figure 9. 5U84 enclosure drawer fully populated with DDICs
2
Mount the enclosures in the rack
This section describes how to unpack the ME4 Series Storage System equipment, prepare for installation, and safely mount the enclosures
into the rack.
Topics:
•Rackmount rail kit
•Install the 2U enclosure
•Install the 5U84 enclosure
•Connect optional expansion enclosures
Rackmount rail kit
Rack mounting rails are available for use in 19-inch rack cabinets.
The rails have been designed and tested for the maximum enclosure weight. Multiple enclosures may be installed without loss of space in
the rack. Use of other mounting hardware may cause some loss of rack space. Contact Dell EMC to ensure that suitable mounting rails are
available for the rack you plan to use.
Install the 2U enclosure
The 2U enclosure is delivered with the disks installed.
Refer to Figure 10. Secure brackets to the rail (left hand rail shown for 2U) when following the rail installation instructions.
1. Remove the rack mounting rail kit from the accessories box, and examine for damage.
2. Use the following procedure to attach the rail kit brackets to the rack post:
a. Set each location pin at the rear of the rail into a rear rack post hole.
b. Use the supplied washers and screws to attach the bracket to the rear rack post. Leave the screws loose.
c. Extend the rail to fit between the front and rear rack posts.
d. Attach the bracket to the front rack post using the washers and screws supplied. Leave the screws loose.
e. Tighten the two clamping screws located along the rear section of the rack bracket.
f. Repeat the previous steps for the companion rail.
3. Install the enclosure into the rack:
a. Lift the enclosure and align it with the installed rack rails, taking care to ensure that the enclosure remains level.
b. Carefully insert the chassis slides into the rack rails and push fully in.
c. Tighten the mounting screws in the rear rail kit brackets.
d. Pull the enclosure out until it reaches the hard stops—approximately 400 mm (15.75")—and then tighten the mounting screws in the front rail kit bracket.
e. Return the enclosure to the home position.
Figure 10. Secure brackets to the rail (left hand rail shown for 2U)
1. Front rack post (square hole)  2. Rail location pins (quantity 2 per rail)
9. Left rail position locking screw  10. 2U enclosure fastening screw (C)
11. Key: Rail kit fasteners used in rack-mount installation
Install the 2U enclosure front bezel
Install the bezel if it was included with the enclosure.
While holding the bezel in your hands, face the front panel of the 2U12 or 2U24 enclosure.
1. Hook the right end of the bezel onto the right ear cover of the storage system.
Figure 11. Attach the bezel to the front of the 2U enclosure
2. Insert the left end of the bezel into the securing slot until the release latch snaps into place.
3. Secure the bezel with the keylock as shown in the detail view in Figure 11. Attach the bezel to the front of the 2U enclosure.
NOTE: To remove the bezel from the 2U enclosure front panel, reverse the order of the previous steps.
Install the 5U84 enclosure
The 5U84 enclosure is delivered without the disks installed.
NOTE: Due to the weight of the enclosure, install it into the rack without DDICs installed, and remove the rear panel CRUs to decrease the enclosure weight.
The adjustment range of the rail kit from the front post to the rear post is 660 mm–840 mm. This range suits a one-meter deep rack within Rack Specification IEC 60297.
1. To facilitate access, remove the door from the rack.
2. Ensure that the preassembled rails are at their shortest length.
NOTE: See the reference label on the rail.
3. Locate the rail location pins inside the front of the rack, and extend the length of the rail assembly to position the rear location pins. Ensure that the pins are fully inserted in the square or round holes in the rack posts.
Figure 12. Secure brackets to the rail (left hand rail shown for 5U84 enclosure)
1. Fastening screws (A)  8. Front rack post (square hole)
2. Left rail  9. Middle slide locking screws
3. Rear rack post (square hole)  10. 5U84 chassis section shown for reference
4. Clamping screw (B)  11. Fastening screw (C)
5. Clamping screw (B)  12. Key: Rail kit fasteners used in rackmount installation
4. Fully tighten all clamping screws and middle slide locking screws.
5. Ensure the four rear spacer clips (not shown) are fitted to the edge of the rack post.
6. Slide the enclosure until fully seated on its rails.
7. Fasten the front of the enclosure using the enclosure fastening screws (x4) as shown in Figure 12. Secure brackets to the rail (left
hand rail shown for 5U84 enclosure).
8. Fix the rear of the enclosure to the sliding bracket with the rear enclosure fastening screws.
CAUTION: Once the enclosure is installed in the rack, dispose of the lifting straps. The straps cannot be used to remove the enclosure from the rack.
Reinsert the rear panel modules and install the DDICs into the drawers. See the instructions in the Dell EMC PowerVault ME4 Series
Storage System Owner’s Manual.
•Installing a controller module
•Installing an IOM
•Installing a fan cooling module
•Installing a PSU
•Installing a DDIC
Connect optional expansion enclosures
ME4 Series controller enclosures support 2U12, 2U24, and 5U84 expansion enclosures. 2U12 and 2U24 expansion enclosures can be
intermixed, however 2U expansion enclosures cannot be intermixed with 5U84 expansion enclosures in the same storage system.
NOTE: To add expansion enclosures to an existing storage system, power down the controller enclosure before connecting the expansion enclosures.
•ME4 Series 2U controller enclosures support up to ten 2U enclosures (including the controller enclosure), or a maximum of 240 disk
drives.
•ME4 Series 5U controller enclosures support up to four 5U enclosures (including the controller enclosure), or a maximum of 336 disk
drives.
•ME4 Series expansion enclosures are equipped with dual-IOMs. These expansion enclosures cannot be cabled to a controller enclosure
equipped with a single IOM.
•The enclosures support reverse SAS cabling for adding expansion enclosures. Reverse cabling enables any drive enclosure to fail—or
be removed—while maintaining access to other enclosures. Fault tolerance and performance requirements determine whether to
optimize the configuration for high availability or high performance when cabling.
Cable requirements for expansion enclosures
ME4 Series supports 2U12, 2U24, and 5U84 form factors, each of which can be configured as a controller enclosure or an expansion
enclosure. Key enclosure characteristics include:
NOTE: To add expansion enclosures to an existing storage system, power down the controller enclosure before
connecting the expansion enclosures.
•When connecting SAS cables to IOMs, use only supported HD mini-SAS x4 cables.
•Qualified HD mini-SAS to HD mini-SAS 0.5 m (1.64 ft.) cables are used to connect cascaded enclosures in the rack.
•The maximum enclosure cable length that is allowed in any configuration is 2 m (6.56 ft.).
•When adding more than two expansion enclosures, you may need to purchase additional cables, depending upon the number of
enclosures and cabling method used.
•You may need to order additional or longer cables when reverse-cabling a fault-tolerant configuration.
Per common convention in cabling diagrams, the controller enclosure is shown atop the stack of connected expansion enclosures. In
reality, you can invert the order of the stack for optimal weight and placement stability in the rack. The schematic representation of
cabling remains unchanged. See Mount the enclosures in the rack for more detail.
When connecting multiple expansion enclosures to an expansion enclosure, use reverse cabling to ensure the highest level of fault
tolerance.
The ME4 Series identifies controller modules and IOMs by enclosure ID and IOM ID. In the following figure, the controller modules are
identified as 0A and 0B, the IOMs in the first expansion enclosure are identified as 1A and 1B, and so on. Controller module 0A is connected
to IOM 1A, with a chain of connections cascading down (blue). Controller module 0B is connected to the lower IOM (9B), of the last
expansion enclosure, with connections moving in the opposite direction (green). Reverse cabling enables any expansion enclosure to fail—
or be removed—while maintaining access to other enclosures.
NOTE: The cabling diagrams show only relevant details such as module face plate outlines and expansion ports.
Figure 13. Cabling connections between a 2U controller enclosure and 2U expansion enclosures shows the maximum cabling configuration for a 2U controller enclosure with 2U expansion enclosures.
Figure 13. Cabling connections between a 2U controller enclosure and 2U expansion enclosures
1. Controller module A (0A)  2. Controller module B (0B)
3. IOM (1A)  4. IOM (1B)
5. IOM (2A)  6. IOM (2B)
7. IOM (3A)  8. IOM (3B)
9. IOM (9A)  10. IOM (9B)
Figure 14. Cabling connections between a 5U controller enclosure and 5U expansion enclosures shows the maximum cabling configuration
for a 5U84 controller enclosure with 5U84 expansion enclosures (four enclosures including the controller enclosure).
Figure 14. Cabling connections between a 5U controller enclosure and 5U expansion enclosures
1. Controller module A (0A)  2. Controller module B (0B)
3. IOM (1A)  4. IOM (1B)
5. IOM (2A)  6. IOM (2B)
7. IOM (3A)  8. IOM (3B)
Figure 15. Cabling connections between a 2U controller enclosure and 5U84 expansion enclosures shows the maximum cabling
configuration for a 2U controller enclosure with 5U84 expansion enclosures (four enclosures including the controller enclosure).
Figure 15. Cabling connections between a 2U controller enclosure and 5U84 expansion enclosures
1. Controller module A (0A)  2. Controller module B (0B)
3. IOM (1A)  4. IOM (1B)
5. IOM (2A)  6. IOM (2B)
7. IOM (3A)  8. IOM (3B)
Label the back-end cables
Make sure to label the back-end SAS cables that connect the controller enclosure and the expansion enclosures.
3
Connect to the management network
Perform the following steps to connect a controller enclosure to the management network:
1. Connect an Ethernet cable to the network port on each controller module.
2. Connect the other end of each Ethernet cable to a network that your management host can access, preferably on the same subnet.
NOTE: If you connect the iSCSI and management ports to the same physical switches, Dell EMC recommends using
separate VLANs.
Figure 16. Connect a 2U controller enclosure to the management network
1. Controller module in slot A  2. Controller module in slot B
3. Switch  4. SAN
Figure 17. Connect a 5U controller enclosure to the management network
1. Controller module in slot A  2. Controller module in slot B
3. Switch  4. SAN
NOTE: See also the topic about configuring network ports on controller modules in the Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide.
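If you want to assign static addresses to the controller module network ports, you can do so from the CLI over a serial or SSH connection. The following commands are an illustrative sketch only; the addresses are example values, and the exact command syntax supported by your firmware is documented in the Dell EMC PowerVault ME4 Series Storage System CLI Guide:

  set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway 192.168.0.1 controller a
  set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway 192.168.0.1 controller b
  show network-parameters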
4
Cable host servers to the storage system
This section describes the different ways that host servers can be connected to a storage system.
Topics:
•Cabling considerations
•Connecting the enclosure to hosts
•Host connection
Cabling considerations
Host interface ports on ME4 Series controller enclosures can connect to respective hosts using direct-attach or switch-attach methods.
Another important cabling consideration is cabling controller enclosures to enable the replication feature. The FC and iSCSI product
models support replication, but SAS product models do not support replication. See Cabling for replication.
Use only Dell EMC cables for host connections:
•Qualified 16 Gb FC SFP+ transceivers and cable options
•Qualified 10 GbE iSCSI SFP+ transceivers and cable options
•Qualified 10Gbase-T cable options
•Qualified 12 Gb mini-SAS HD cable options
Connecting the enclosure to hosts
A host identifies an external port to which the storage system is attached. The external port may be a port in an I/O adapter (such as an
FC HBA) in a server. Cable connections vary depending on configuration. This section describes host interface protocols supported by
ME4 Series controller enclosures, while showing a few common cabling configurations. ME4 Series controllers use Unified LUN
Presentation (ULP), which enables a host to access mapped volumes through any controller host port.
ULP can show all LUNs through all host ports on both controllers, and the interconnect information is managed by the controller firmware.
ULP appears to the host as an active-active storage system, allowing the host to select any available path to access the LUN, regardless
of disk group ownership.
CNC technology
The ME4 Series FC/iSCSI models use Converged Network Controller (CNC) technology.
The CNC technology enables you to select the host interface protocols to use on the storage system. The small form-factor pluggable
(SFP+) connectors that are used in CNC ports are further described in the following subsections:
NOTE:
• Controller modules are not always shipped with preinstalled SFP+ transceivers. You might need to install SFP
transceivers into the controller modules. Within your product kit, locate the qualified SFP+ transceivers and install
them into the CNC ports. See SFP+ transceiver for FC/iSCSI ports.
• Use the ME Storage Manager to set the host interface protocol for CNC ports using qualified SFP+ transceivers.
ME4 Series models ship with CNC ports configured for FC. When connecting CNC ports to iSCSI hosts, you must
configure these ports for iSCSI.
CNC ports used for host connection
ME4 Series SFP+ based controllers ship with CNC ports that are configured for FC.
If you must change the CNC port mode, you can do so using the ME Storage Manager.
Alternatively, the ME4 Series enables you to set the CNC ports to use FC and iSCSI protocols in combination. When configuring a
combination of host interface protocols, host ports 0 and 1 must be configured for FC, and host ports 2 and 3 must be configured for
iSCSI. The CNC ports must use qualified SFP+ connectors and cables for the selected host interface protocol. For more information, see
SFP+ transceiver for FC/iSCSI ports.
Fibre Channel protocol
ME4 Series controller enclosures support controller modules with CNC host interface ports.
Using qualified FC SFP+ transceiver/cable options, these CNC ports can be configured to support Fibre Channel protocol in either four or
two CNC ports. Supported data rates are 8 Gb/s or 16 Gb/s.
The controllers support Fibre Channel Arbitrated Loop (public or private) or point-to-point topologies. Loop protocol can be used in a
physical loop or for direct connection between two devices. Point-to-point protocol is used to connect to a fabric switch. Point-to-point
protocol can also be used for direct connection, and it is the only option supporting direct connection at 16 Gb/s.
The Fibre Channel ports are used for:
•Connecting to FC hosts directly, or through a switch used for the FC traffic.
•Connecting two storage systems through a switch for replication. See Cabling for replication.
The first option requires that the host computer support FC and, optionally, multipath I/O.
Use the ME Storage Manager to set FC port speed and options. See the topic about configuring host ports in the Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide. You can also use CLI commands to perform these actions:
•Use the set host-parameters CLI command to set FC port options.
•Use the show ports CLI command to view information about host ports.
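For example, after logging in to the CLI over SSH, you could review the current port settings and then lock the FC host ports to a fixed speed. The speed value below is illustrative, and additional parameters (such as which ports to change) may be required; see the Dell EMC PowerVault ME4 Series Storage System CLI Guide for the full set of supported parameters:

  show ports
  set host-parameters speed 16g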
iSCSI protocol
ME4 Series controller enclosures support controller modules with CNC host interface ports.
CNC ports can be configured to support iSCSI protocol in either four or two CNC ports. The CNC ports support 10 GbE but do not
support 1 GbE.
The 10 GbE iSCSI ports are used for:
•Connecting to 10 GbE iSCSI hosts directly, or through a switch used for the 10 GbE iSCSI traffic.
•Connecting two storage systems through a switch for replication.
The first option requires that the host computer supports Ethernet, iSCSI, and optionally, multipath I/O.
See the topic about configuring CHAP in the Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide.
Use the ME Storage Manager to set iSCSI port options. See the topic about configuring host ports in the Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide. You can also use CLI commands to perform these actions:
•Use the set host-parameters CLI command to set iSCSI port options.
•Use the show ports CLI command to view information about host ports.
iSCSI settings
The host should be cabled to two different Ethernet switches for redundancy.
If you are using switches with mixed traffic (LAN/iSCSI), then a VLAN should be created to isolate iSCSI traffic from the rest of the
switch traffic.
Example iSCSI port address assignments
The following figure and the supporting tables provide example iSCSI port address assignments featuring two redundant switches and two
IPv4 subnets:
NOTE: For each callout number, read across the table row for the addresses in the data path.
Figure 18. Two subnet switch example (IPv4)
Table 3. Two subnet switch example
No.  Device                  IP Address      Subnet
1    A0                      192.68.10.200   10
2    A1                      192.68.11.210   11
3    A2                      192.68.10.220   10
4    A3                      192.68.11.230   11
5    B0                      192.68.10.205   10
6    B1                      192.68.11.215   11
7    B2                      192.68.10.225   10
8    B3                      192.68.11.235   11
9    Switch A                N/A             N/A
10   Switch B                N/A             N/A
11   Host server 1, Port 0   192.68.10.20    10
12   Host server 1, Port 1   192.68.11.20    11
13   Host server 2, Port 0   192.68.10.21    10
14   Host server 2, Port 1   192.68.11.21    11
To enable CHAP, see the topic about configuring CHAP in the Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide.
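As a host-side illustration of the addressing above, a Linux server running open-iscsi could discover and log in to the controller iSCSI ports on both subnets. This is a sketch only; the target portal addresses are taken from Table 3, and the qualified host configuration steps are covered in the Linux host sections of this guide:

  # Discover targets presented by controller A and controller B on subnet 10
  iscsiadm -m discovery -t sendtargets -p 192.68.10.200
  iscsiadm -m discovery -t sendtargets -p 192.68.10.205
  # Discover targets on subnet 11
  iscsiadm -m discovery -t sendtargets -p 192.68.11.210
  iscsiadm -m discovery -t sendtargets -p 192.68.11.215
  # Log in to all discovered targets
  iscsiadm -m node --login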
SAS protocol
ME4 Series SAS models use 12 Gb/s host interface protocol and qualified cable options for host connection.
12Gb HD mini-SAS host ports
ME4 Series 12 Gb SAS controller enclosures support two controller modules. The 12 Gb/s SAS controller module provides four SFF-8644
HD mini-SAS host ports. These host ports support data rates up to 12 Gb/s. HD mini-SAS host ports are used for attachment to SAS
hosts directly. The host computer must support SAS and optionally, multipath I/O. Use a qualified cable option when connecting to a host.
Host connection
ME4 Series controller enclosures support up to eight direct-connect server connections, four per controller module.
Connect appropriate cables from the server HBAs to the controller module host ports as described in the following sections.
16 Gb Fibre Channel host connection
To connect controller modules supporting FC host interface ports to a server HBA or switch, using the controller CNC ports, select a
qualified FC SFP+ transceiver. For information about configuring HBAs, see the Fibre Channel topics under Attach host servers.
Use the cabling diagrams to connect the host servers to the switches. See the Dell EMC Storage Support Matrix for supported Fibre
Channel HBAs.
•Install and connect each FC HBA to a switch that is connected to the host ports on the two controllers that are shown in Figure 26.
Connecting hosts: ME4 Series 2U switch-attached – two servers, two switches and Figure 27. Connecting hosts: ME4 Series 5U
switch-attached – two servers, two switches.
•In hybrid examples, one server and switch manage FC traffic, and the other server and switch manage iSCSI traffic.
•For FC, each initiator must be zoned with a single host port or multiple host ports only (single initiator, multi-target of the same kind).
Connecting host servers directly to the storage system is also supported.
Qualified options support cable lengths of 1 m (3.28'), 2 m (6.56'), 5 m (16.40'), 15 m (49.21'), 30 m (98.43'), and 50 m (164.04') for OM4
multimode optical cables and OM3 multimode FC cables. A 0.5 m (1.64') cable length is also supported for OM3. In addition to providing
host connection, these cables are used for connecting two storage systems through a switch, to facilitate use of the optional replication
feature.
iSCSI host connection
To connect controller modules supporting 10 GbE iSCSI host interface ports to a server HBA or switch, using the controller CNC ports,
select a qualified 10 GbE SFP+ transceiver. For information about configuring iSCSI initiators/HBAs, see the iSCSI topics under Attach host servers.
Use the cabling diagrams to connect the host servers to the switches.
•Install and connect each Ethernet NIC to a switch that is connected to the host ports on the two controllers that are shown in Figure
26. Connecting hosts: ME4 Series 2U switch-attached – two servers, two switches and Figure 27. Connecting hosts: ME4 Series 5U
switch-attached – two servers, two switches.
•In hybrid examples, one server and switch manage iSCSI traffic, and the other server and switch manage FC traffic.
Connecting host servers directly to the storage system is also supported.
12 Gb HD mini-SAS host connection
To connect controller modules supporting HD mini-SAS host interface ports to a server HBA, using the controller’s SFF-8644 dual HD
mini-SAS host ports, select a qualified HD mini-SAS cable option. For information about configuring SAS HBAs, see the SAS topics under
Attach host servers. Use the cabling diagrams to connect the host servers.
A qualified SFF-8644 to SFF-8644 cable option is used for connecting to a 12Gb/s enabled host; whereas a qualified SFF-8644 to
SFF-8088 cable option is used for connecting to a 6 Gb/s host. Qualified SFF-8644 to SFF-8644 options support cable lengths of 0.5 m
(1.64'), 1 m (3.28'), 2 m (6.56'), and 4 m (13.12'). Qualified SFF-8644 to SFF-8088 options support cable lengths of 1 m (3.28'), 2 m
(6.56'), 3 m (9.84'), and 4 m (13.12').
10Gbase-T host connection
To connect controller modules with 10Gbase-T iSCSI host interface ports to a server HBA or switch, select a qualified 10Gbase-T cable
option.
For information about configuring network adapters and iSCSI HBAs, see the iSCSI topics under Attach host servers. See also the cabling instructions in iSCSI host connection.
Connecting direct attach configurations
A dual-controller configuration improves application availability. If a controller failure occurs, the affected controller fails over to the healthy
partner controller with little interruption to data flow.
A failed controller can be replaced without the need to shut down the storage system.
NOTE: In the following examples, a single diagram represents CNC, SAS, and 10Gbase-T host connections for ME4
Series controller enclosures. The location and sizes of the host ports are similar. Blue cables show controller A paths
and green cables show controller B paths for host connection.
Single-controller module configurations
A single controller module configuration does not provide redundancy if a controller module fails.
This configuration is intended only for environments where high availability is not required. If the controller module fails, the host loses
access to the storage data until failure recovery actions are completed.
NOTE: Expansion enclosures are not supported in a single controller module configuration.
Figure 19. Connecting hosts: ME4 Series 2U direct attach – one server, one HBA, single path
1. Server  2. Controller module in slot A
3. Controller module blank in slot B
NOTE: If the ME4 Series 2U controller enclosure is configured with a single controller module, the controller module
must be installed in the upper slot. A controller module blank must be installed in the lower slot. This configuration is
required to enable sufficient air flow through the enclosure during operation.
Dual-controller module configurations
A dual-controller module configuration improves application availability.
If a controller module failure occurs, the affected controller module fails over to the partner controller module with little interruption to
data flow. A failed controller module can be replaced without the need to shut down the storage system.
In a dual-controller module system, hosts use LUN-identifying information from both controller modules to determine the data paths that are available to a volume. Assuming MPIO software is installed, a host can use any available data path to access a volume that is owned by either controller module. The path providing the best performance is through the host ports on the controller module that owns the volume. Both controller modules share one set of 1,024 LUNs (0–1,023) for use in mapping volumes to hosts.
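For example, on a Linux host the device-mapper-multipath service can serve as the MPIO layer, presenting a single multipath device per volume and routing I/O across the paths to both controller modules. The commands below are a sketch for a Red Hat style distribution and are an assumption rather than the supported procedure; see the Linux host sections of this guide and the Administrator's Guide for the qualified configuration:

  # Install and enable multipathing with a default configuration
  yum install -y device-mapper-multipath
  mpathconf --enable --with_multipathd y
  # After volumes are mapped to the host, list the paths reported through both controller modules
  multipath -ll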
About switch-attached configurations
A switch-attached solution—or SAN—places a switch between the servers and the controller enclosures within the storage system. Using switches, a SAN shares a storage system among multiple servers, reducing the number of storage systems required for a particular environment. Using switches increases the number of servers that can be connected to the storage system.
NOTE:
• See the recommended switch-attached examples for host connection in the Setting Up Your Dell EMC PowerVault ME4 Series Storage System document that is provided with your controller enclosure.
• See Figure 18. Two subnet switch example (IPv4) for an example showing host port and controller port addressing on an IPv4 network.
Figure 26. Connecting hosts: ME4 Series 2U switch-attached – two servers, two switches
1. Server 1  2. Server 2
3. Switch A  4. Switch B
5. Controller module A  6. Controller module B
Figure 27. Connecting hosts: ME4 Series 5U switch-attached – two servers, two switches
1. Server 1  2. Server 2
3. Switch A  4. Switch B
5. Controller module A  6. Controller module B
Label the front-end cables
Make sure to label the front-end cables to identify the controller module and host interface port to which each cable connects.
5
Connect power cables and power on the
storage system
Before powering on the enclosure system, ensure that all modules are firmly seated in their correct slots.
Verify that you have successfully completed the Installation checklist instructions. Once you have completed steps 1–7, you can access
the management interfaces using your web browser to complete the system setup.
Topics:
•Power cable connection
Power cable connection
Connect a power cable from each PCM or PSU on the enclosure rear panel to the PDU (power distribution unit) as shown in the following
figures:
Figure 28. Typical AC power cable connection from PDU to PCM (2U)
1. Controller enclosure with redundant PCMs  2. Redundant PCM to PDU (AC UPS shown) connection
Figure 29. Typical AC power cable connection from PDU to PSU (5U)
1. Controller enclosure with redundant PSUs  2. Redundant PSU to PDU (AC UPS shown) connection
NOTE: The power cables must be connected to at least two separate and independent power supplies to ensure
redundancy. When the storage system is ready for operation, ensure that each PCM or PSU power switch is set to the
On position. See also Powering on.
CAUTION: Always remove the power connections before you remove the PCM (2U) or PSU (5U84) from the enclosure.
Testing enclosure connections
See Powering on. Once the power-on sequence succeeds, the storage system is ready to be connected as described in Connecting the
enclosure to hosts.
Grounding checks
The enclosure system must be connected to a power source that has a safety electrical grounding connection.
CAUTION: If more than one enclosure goes in a rack, the importance of the grounding connection to the rack increases
because the rack has a larger Grounding Leakage Current (Touch Current). Examine the grounding connection to the
rack before power on. An electrical engineer who is qualified to the appropriate local and national standards must do the
examination.
Powering on
CAUTION: Do not operate the enclosure system until the ambient temperature is within the specified operating range that is described in the system specifications section of the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual. If the drive modules have been recently installed, ensure that they have had time to adjust to the environmental conditions before they are used with production data for I/O.
•With 2U enclosures, power on the storage system by connecting the power cables from the PCMs to the PDU, and moving the power switch on each PCM to the On position. See Figure 28. Typical AC power cable connection from PDU to PCM (2U). The System Power LED on the 2U Ops panel should be lit green when the enclosure power is activated.
•With 5U84 enclosures, power on the storage system by connecting the power cables from the PSUs to the PDU, and moving the power switch on each PSU to the On position. See Figure 29. Typical AC power cable connection from PDU to PSU (5U). The Power on/Standby LED on the 5U84 Ops panel should be lit green when the enclosure power is activated.
•When powering up, ensure that you power up the enclosures and the associated data host in the following order:
  •Drive enclosures first – Ensures that the disks in the drive enclosure have enough time to completely spin up before being scanned by the controller modules within the controller enclosure. The LEDs blink while the enclosures power up. After the LEDs stop blinking – if the LEDs on the front and back of the enclosure are not amber – the power-on sequence is complete, and no faults have been detected.
  •Controller enclosure next – Depending upon the number and type of disks in the system, it may take several minutes for the system to become ready.
  •Data host last (if powered off for maintenance purposes).
When powering off, reverse the order of steps that are used for powering on.
NOTE:
If main power is lost for any reason, the system automatically restarts when power is restored.
Enclosure Ops panels
•See 2U enclosure Ops panel for details pertaining to 2U Ops panel LEDs and related fault conditions.
•See 5U enclosure Ops panel for details pertaining to 5U84 Ops panel LEDs and related fault conditions.
Guidelines for powering enclosures on and off
•Remove the AC cord before inserting or removing a PCM (2U) or PSU (5U84).
•Move the PCM or PSU switch to the Off position before connecting or disconnecting the AC power cable.
•Allow 15 seconds between powering off and powering on the PCM or PSU.
•Allow 15 seconds before powering on one PSU or PCM in the system, and powering off another PCM or PSU.
•Never power off a PCM or PSU while any amber LED is lit on the partner PCM or PSU.
•A 5U84 enclosure must be left in a power on state for 30 seconds following resumption from standby before the enclosure can be
placed into standby again.
•Although the enclosure supports standby, the expansion module shuts off completely during standby and cannot receive a user
command to power back on. An AC power cycle is the only method to return the 5U84 to full power from standby.
6 Perform system and storage setup
Record storage system information
Use the System Information Worksheet to record the information that you need to install the ME4 Series storage system.
Using guided setup
Upon completing the hardware installation, use ME Storage Manager to configure, provision, monitor, and manage the storage system.
When first accessing the MESM, perform a firmware update before configuring your system. After the firmware update is complete, use
the guided setup to verify the web browser requirements and then access the MESM.
Web browser requirements and setup
The MESM web interface requires Mozilla Firefox 57 or later, Google Chrome 57 or later, Microsoft Internet Explorer 10 or 11, or Apple
Safari 10.1 or later.
NOTE:
•To see the help window, you must enable pop-up windows.
•To optimize the display, use a color monitor and set its color quality to the highest setting.
•Do not use the Back, Forward, Reload, or Refresh buttons in the browser. The MESM has a single page for which content changes as
you perform tasks and automatically updates to show current data.
•To navigate past the Sign In page (with a valid user account):
•Verify that cookies are allowed for the IP address of each controller network port.
•For Internet Explorer, set the local-intranet security option on the browser to medium or medium-low.
•For Internet Explorer, add each network IP address for each controller as a trusted site.
•For HTTPS, ensure that Internet Explorer is set to use TLS 1.2.
You cannot view MESM help content if you are using the Microsoft Edge browser that ships with Windows 10.
Access the ME Storage Manager
Do not turn on more than one unconfigured controller enclosure at a time to avoid IP conflicts.
1. Temporarily set the management host NIC to a 10.0.0.x address or to the same IPv6 subnet to enable communication with the storage
system.
2. In a supported web browser:
•Type https://10.0.0.2 to access controller module A on an IPv4 network.
•Type https://fd6e:23ce:fed3:19d1::1 to access controller module A on an IPv6 network.
3. If the storage system is running G275 firmware, sign in to the ME Storage Manager using the user name manage and password !manage.
If the storage system is running G280 firmware:
a. Click Get Started.
b. Read the Commercial Terms of Sale and End User License Agreement, and click Accept.
c. Specify a new user name and password for the system, and click Apply and Continue.
The Welcome panel that is displayed provides options to set up and provision your system.
NOTE: If you are unable to use the 10.0.0.x network to configure the system, see Setting network port IP addresses using the CLI port and serial cable.
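For reference, setting the management port addresses from a serial CLI session is typically done with the set network-parameters command. The following sketch is illustrative only; the addresses are placeholders and the exact syntax should be confirmed in the Dell EMC PowerVault ME4 Series Storage System CLI Reference Guide.
# Hedged sketch: assign static management addresses from the serial CLI (placeholder addresses)
set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway 192.168.0.1 controller a
set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway 192.168.0.1 controller b
show network-parameters
# Confirms the values that were applied to each controller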
Update firmware
After powering on the storage system for the first time, verify that the controller modules, expansion modules, and disk drives are using
the current firmware release.
NOTE: Expansion module firmware is updated automatically with controller module updates.
1. Using the ME Storage Manager, select Action > Update Firmware in the System topic.
The Update Firmware panel opens. The Update Controller Modules tab shows versions of firmware components that are installed in
each controller module.
2. Locate firmware updates at www.dell.com/support. If newer versions of the firmware are available, download the bundle file or
relevant firmware component files.
3. Click Browse, select the firmware bundle file or component file to install, and then click OK.
When the update is complete, the system restarts.
Use guided setup in the ME Storage Manager Welcome panel
The Welcome panel provides options for you to quickly set up your system by guiding you through the configuration and provisioning
process.
With guided setup, you must first configure your system settings by accessing the System Settings panel and completing all required
options. After these options are complete, you can provision your system by accessing the Storage Setup panel and the Host Setup panel
and completing the wizards.
The Welcome panel also displays the health of the system. If the health of the system is degraded or faulty, you can click System Information to access the System topic. In the System topic, you can view information about each enclosure, including its physical
components, in front, rear, and tabular views.
If the system detects that it has only one controller, its health shows as degraded. If you are operating the system with a single controller,
acknowledge this message in the panel.
If you installed two controllers, click System Information to diagnose the problem. If the system health is degraded, you can still configure and provision the system. However, if the health of the system is bad, you cannot configure and provision the system until you resolve the problem affecting system health.
To use guided setup:
1. From the Welcome panel, click System Settings.
2. Choose options to configure your system.
NOTE: Tabs with a red asterisk next to them contain required settings.
3. Save your settings and exit System Settings to return to the Welcome panel.
4. Click Storage Setup to access the Storage Setup wizard and follow the prompts to begin provisioning your system by creating disk
groups and pools. For more information about using the Storage Setup wizard, see Configuring storage setup.
5. Save your settings and exit Storage Setup to return to the Welcome panel.
6. Click Host Setup to access the Host Setup wizard and follow the prompts to continue provisioning your system by attaching hosts.
For more information, see Host system requirements.
Configuring system settings
The System Settings panel provides options for you to quickly configure your system.
Navigate the options by clicking the tabs on the left side of the panel. Tabs with a red asterisk next to them are required. To apply and
save changes, click Apply. To apply changes and close the panel, click Apply and Close.
At a minimum, Dell EMC recommends that you change the following settings:
•Network tab—Configure controller network ports, for example, management ports
•Notifications tab—Setting system notification settings
•Ports tab—Changing host port settings, if applicable
Network tab—Configure controller network ports
You can manually set static IP address parameters for network ports or you can specify that IP addresses be set automatically. IP
addresses can be set automatically using DHCP for IPv4 or Auto for IPv6, which uses DHCPv6 and/or SLAAC.
NOTE: If you used the default 10.0.0.2/10.0.0.3 addresses to access the guided setup, consider changing those IPv4
addresses to avoid an IP conflict if you have more than one ME4 Series array on your network.
When setting IP values, you can choose either IPv4 or IPv6 formatting for each controller. You can also set the addressing mode and IP
version differently for each controller and use them concurrently. For example, you could set IPv4 on controller A to Manual to enable
static IP address, and IPv6 on controller B to Auto to enable automatic IP address.
When using DHCP mode, the system obtains values for the network port IP address, subnet mask, and gateway from a DHCP server if one is available. If a DHCP server is unavailable, the current addresses remain unchanged. You must have some means of determining what addresses
have been assigned, such as the list of bindings on the DHCP server. When using Auto mode, addresses are retrieved from both DHCP
and Stateless address auto-configuration (SLAAC). DNS settings are also automatically retrieved from the network.
Each controller has the following factory-default IP settings:
•IP address source: Manual
•Controller A IP address: 10.0.0.2
•Controller B IP address: 10.0.0.3
•IP subnet mask: 255.255.255.0
•Gateway IP address: 10.0.0.1
When DHCP is enabled in the storage system, the following initial values are set and remain set until the system can contact a DHCP
server for new addresses:
•Controller IP addresses: 169.254.x.x (where the value of x.x is the lowest 16 bits of the controller serial number)
•IP subnet mask: 255.255.0.0
•Gateway IP address: 10.0.0.0
169.254.x.x addresses (including gateway 169.254.0.1) are on a private subnet that is reserved for unconfigured systems and the
addresses are not routable. This prevents the DHCP server from reassigning the addresses and possibly causing a conflict where two
controllers have the same IP address. As soon as possible, change these IP values to proper values for your network.
For IPv6, when Manual mode is enabled you can enter up to four static IP addresses for each controller. When Auto is enabled, the
following initial values are set and remain set until the system can contact a DHCPv6 and/or SLAAC server for new addresses:
•Controller A IP address: fd6e:23ce:fed3:19d1::1
•Controller B IP address: fd6e:23ce:fed3:19d1::2
•Gateway IP address: fd6e:23ce:fed3:19d1::3
CAUTION: Changing IP settings can cause management hosts to lose access to the storage system after the changes are applied in the confirmation step.
Set IPv4 addresses for network ports
Perform the following steps to set IPv4 addresses for the network ports:
1. Perform one of the following to access Network options:
•In the Home topic, select Action > System Settings, and then click the Network tab.
•In the System topic, select Action > System Settings, and then click the Network tab.
2. Select the IPv4 tab.
3. Select the type of IP address to use for each controller. Choose Source > manual to enter static IP addresses or choose Source > DHCP to have the system automatically obtain IP addresses from a DHCP server.
4. If you chose manual, enter the unique IP address, IP mask, and gateway address for each controller, and then record the IP addresses
that you entered.
NOTE: The following IP addresses are reserved for internal use by the storage system: 169.254.255.1, 169.254.255.2, 169.254.255.3, 169.254.255.4, and 127.0.0.1. Because these addresses are routable, do not use them anywhere in your network.
5. Perform one of the following:
•To save your settings and continue configuring your system, click Apply.
•To save your settings and close the panel, click Apply and Close.
A confirmation panel is displayed.
6. Click OK to continue.
If you chose DHCP and the controllers successfully obtained IP addresses from the DHCP server, the new IP addresses are displayed.
Record the new addresses and sign out to use the new IP address to access the MESM.
Set IPv6 values for network ports
Perform the following steps to set IPv6 addresses for the network ports:
1. Perform one of the following to access network options:
•In the Home topic, select Action > System Settings, and then click the Network tab.
•In the System topic, select Action > System Settings, and then click the Network tab.
2. Select the IPv6 tab. IPv6 uses 128-bit addresses.
3. Select the type of IP address to use for each controller. Choose Source > manual to enter up to four static IP addresses for each
controller, or choose Source > auto to have the system automatically obtain values.
4. If you chose manual, perform the following:
•Enter the unique IP address and gateway value for each controller.
•Record the IP address that you entered.
•Click Add.
5. Click Add Address to continue adding up to four IP addresses.
NOTE: The following IP addresses are reserved for internal use by the storage system: 169.254.255.1, 169.254.255.2,
169.254.255.3, 169.254.255.4, and 127.0.0.1. Because these addresses are routable, do not use them anywhere in
your network.
6. Perform one of the following:
•To save your settings and continue configuring your system, click Apply.
•To save your settings and close the panel, click Apply and Close.
A confirmation panel is displayed.
7. Click Yes to save your changes. Otherwise, click No.
8. Sign out to use the new IP addresses to access the MESM.
Notifications tab—Setting system notification settings
Dell EMC recommends enabling at least one notification service to monitor the system.
Send email notifications
Perform the following steps to enable email notifications:
1. In the Welcome panel, select System Settings, and then click the Notifications tab.
2. Select the Email tab and ensure that the SMTP Server and SMTP Domain options are set.
4. If email notification is enabled, select the minimum severity for which the system should send email notifications: Critical (only); Error
(and Critical); Warning (and Error and Critical); Resolved (and Error, Critical, and Warning); Informational (all).
5. If email notification is enabled, in one or more of the Email Address fields enter an email address to which the system should send
notifications. Each email address must use the format user-name@domain-name. Each email address can have a maximum of 320
bytes. For example: Admin@mydomain.com or IT-team@mydomain.com.
6. Perform one of the following:
•To save your settings and continue configuring your system, click Apply.
•To save your settings and close the panel, click Apply and Close.
A confirmation panel is displayed.
7. Click OK to save your changes. Otherwise, click Cancel.
Test notification settings
Perform the following steps to test notifications:
1. Configure your system to receive trap and email notifications.
2. Click Send Test Event. A test notification is sent to each configured trap host and email address.
3. Verify that the test notification reached each configured email address.
NOTE: If there was an error in sending a test notification, event 611 is displayed in the confirmation.
Ports tab—Changing host port settings
You can configure controller host-interface settings for ports except for systems with a 4-port SAS controller module or 10Gbase-T iSCSI
controller module.
To enable the system to communicate with hosts, you must configure the host-interface options on the system.
For a system with a 4-port SAS controller module or 10Gbase-T iSCSI controller module, there are no host-interface options.
For a system with 4-port SFP+ controller modules (CNC), all host ports ship from the factory in Fibre Channel (FC) mode. However, the
ports can be configured as a combination of FC or iSCSI ports. FC ports support use of qualified 16 Gb/s SFP transceivers. You can set
FC ports to auto-negotiate the link speed or to use a specific link speed. iSCSI ports support use of qualified 10 Gb/s SFP transceivers.
For information about setting host parameters such as FC port topology, and the host-port mode, see the Dell EMC PowerVault ME4 Series Storage System CLI Reference Guide.
NOTE:
If the current settings are correct, port configuration is optional.
Configure FC ports
Perform the following steps to configure FC ports:
1. Perform one of the following to access the options in the Ports tab:
•In the Home topic, select Action > System Settings, then click Ports.
•In the System topic, select Action > System Settings, then click Ports.
•In the Welcome panel, select System Settings, and then click the Ports tab
2. From the Host Port Mode list, select FC.
3. From the Port Settings tab, set the port-specific options:
•Set the Speed option to the proper value to communicate with the host, or to auto, which auto-negotiates the proper link speed. A
speed mismatch prevents communication between the port and host. Set a speed only if you want to force the port to use a
known speed.
•Set the FC Connection Mode to either point-to-point or auto:
•point-to-point: Fibre Channel point-to-point.
•auto: Automatically sets the mode that is based on the detected connection type.
NOTE:
4. Perform one of the following:
•To save your settings and continue configuring your system, click Apply.
•To save your settings and close the panel, click Apply and Close.
A confirmation panel is displayed.
5. Click Yes to save your changes. Otherwise, click No.
Configure iSCSI ports
Perform the following steps to configure iSCSI ports:
1. In the Welcome panel, select System Settings, and then click the Ports tab.
2. From the Host Port Mode list, select iSCSI.
3. From the Port Settings tab, set the port-specific options:
•IP Address. For IPv4 or IPv6, the port IP address. For corresponding ports in each controller, assign one port to one subnet and the
other port to a second subnet. Ensure that each iSCSI host port in the storage system is assigned a different IP address. For
example, in a system using IPv4:
•Controller A port 2: 10.10.10.100
•Controller A port 3: 10.11.10.120
•Controller B port 2: 10.10.10.110
•Controller B port 3: 10.11.10.130
•Netmask: For IPv4, subnet mask for assigned port IP address.
•Gateway: For IPv4, gateway IP address for assigned port IP address.
•Default Router: For IPv6, default router for assigned port IP address.
4. In the Advanced Settings section of the panel, set the options that apply to all iSCSI ports:
Table 4. Options for iSCSI ports
•Enable Authentication (CHAP). Enables or disables use of Challenge Handshake Authentication Protocol. Enabling or disabling CHAP in this panel updates the setting in the Configure CHAP panel (available in the Hosts topic by selecting Action > Configure CHAP). CHAP is disabled by default.
•Link Speed.
•auto—Auto-negotiates the proper speed.
•1 Gb/s—Forces the speed to 1 Gbit/sec, overriding a downshift that can occur during auto-negotiation with 1 Gb/sec HBAs. This setting does not apply to 10 Gb/sec SFPs.
•Enable Jumbo Frames. Enables or disables support for jumbo frames. Allowing for 100 bytes of overhead, a normal frame can contain a 1400-byte payload whereas a jumbo frame can contain a maximum 8900-byte payload for larger data transfers.
NOTE: Use of jumbo frames can succeed only if jumbo-frame support is enabled on all network components in the data path.
•iSCSI IP Version. Specifies whether IP values use Internet Protocol version 4 (IPv4) or version 6 (IPv6) format. IPv4 uses 32-bit addresses. IPv6 uses 128-bit addresses.
•Enable iSNS. Enables or disables registration with a specified Internet Storage Name Service server, which provides name-to-IP-address mapping.
•iSNS Address. Specifies the IP address of an iSNS server.
•Alternate iSNS Address. Specifies the IP address of an alternate iSNS server, which can be on a different subnet.
CAUTION: Changing IP settings can cause data hosts to lose access to the storage system.
5. Perform one of the following:
•To save your settings and continue configuring your system, click Apply.
•To save your settings and close the panel, click Apply and Close.
A confirmation panel is displayed.
6. Click Yes to save your changes. Otherwise, click No.
Configure two ports as FC and two ports as iSCSI per controller
Perform the following steps on each controller to configure two ports as FC and two ports as iSCSI:
1. In the Welcome panel, select System Settings, and then click the Ports tab.
2. From the Host Port Mode list, select FC-and-iSCSI.
NOTE: Ports 0 and 1 are FC ports. Ports 2 and 3 are iSCSI ports.
3. From the Port Settings tab, set the FC port-specific options:
•Set the Speed option to the proper value to communicate with the host, or to auto, which auto-negotiates the proper link speed.
A speed mismatch prevents communication between the port and host. Set a speed only if you want to force the port to use a
known speed.
•Set the FC Connection Mode to either point-to-point or auto:
•point-to-point: Fibre Channel point-to-point.
•auto: Automatically sets the mode that is based on the detected connection type.
4. Set the iSCSI port-specific options:
Table 5. Port-specific options
•IP Address. For IPv4 or IPv6, the port IP address. For corresponding ports in each controller, assign one port to one subnet and the other port to a second subnet. Ensure that each iSCSI host port in the storage system is assigned a different IP address. For example, in a system using IPv4:
•Controller A port 2: 10.10.10.100
•Controller A port 3: 10.11.10.120
•Controller B port 2: 10.10.10.110
•Controller B port 3: 10.11.10.130
•Netmask. For IPv4, subnet mask for assigned port IP address.
•Gateway. For IPv4, gateway IP address for assigned port IP address.
•Default Router. For IPv6, default router for assigned port IP address.
5. In the Advanced Settings section of the panel, set the options that apply to all iSCSI ports:
•Enable Authentication (CHAP). Enables or disables use of Challenge Handshake Authentication Protocol. Enabling or disabling CHAP in this panel updates the setting in the Configure CHAP panel (available in the Hosts topic by selecting Action > Configure CHAP). CHAP is disabled by default.
•Link Speed.
•auto—Auto-negotiates the proper speed.
•1 Gb/s—Forces the speed to 1 Gbit/sec, overriding a downshift that can occur during auto-negotiation with 1 Gb/sec HBAs. This setting does not apply to 10 Gb/sec SFPs.
•Enable Jumbo Frames. Enables or disables support for jumbo frames. Allowing for 100 bytes of overhead, a normal frame can contain a 1400-byte payload whereas a jumbo frame can contain a maximum 8900-byte payload for larger data transfers.
NOTE: Use of jumbo frames can succeed only if jumbo-frame support is enabled on all network components in the data path.
•iSCSI IP Version. Specifies whether IP values use Internet Protocol version 4 (IPv4) or version 6 (IPv6) format. IPv4 uses 32-bit addresses. IPv6 uses 128-bit addresses.
•Enable iSNS. Enables or disables registration with a specified Internet Storage Name Service server, which provides name-to-IP-address mapping.
•iSNS Address. Specifies the IP address of an iSNS server.
•Alternate iSNS Address. Specifies the IP address of an alternate iSNS server, which can be on a different subnet.
CAUTION: Changing IP settings can cause data hosts to lose access to the storage system.
6. Perform one of the following:
•To save your settings and continue configuring your system, click Apply.
•To save your settings and close the panel, click Apply and Close.
A confirmation panel is displayed.
7. Click Yes to save your changes. Otherwise, click No.
Configuring storage setup
The Storage Setup wizard guides you through each step of creating disk groups and pools in preparation for attaching hosts and volumes.
NOTE:
Access the Storage Setup wizard from the Welcome panel or by choosing Action > Storage Setup. When you access the wizard, you
must select the storage type for your environment. After selecting a storage type, you are guided through the steps to create disk groups
and pools. The panels that are displayed and the options within them are dependent upon:
•Whether you select a virtual or linear storage type
•Whether the system is brand new (all disks are empty and available and no pools have been created)
•Whether the system has any pools
•Whether you are experienced with storage provisioning and want to set up your disk groups in a certain way
On-screen directions guide you through the provisioning process.
Select the storage type
When you first access the wizard, you are prompted to select the type of storage to use for your environment.
Read through the options and make your selection, and then click Next to proceed.
You can cancel the wizard at any time, but the changes that are made in completed steps are saved.
•Virtual storage supports the following features:
•Tiering
•Snapshots
•Replication
•Thin provisioning
•One pool per installed RAID controller and up to 16 disk groups per pool
•Maximum 1 PB usable capacity per pool with large pools feature enabled
•RAID levels 1, 5, 6, 10, and ADAPT
•Adding individual disks to increase RAID capacity is only supported for ADAPT disk groups
•Capacity can be increased by adding additional RAID disk groups
•Page size is static (4 MB)
•SSD read cache
•Global and/or dynamic hot spares
•Linear storage supports the following features:
•Up to 32 pools per installed RAID controller and one disk group per pool
•Adding individual disks to increase RAID capacity is supported for RAID 0, 3, 5, 6, 10, 50, and ADAPT disk groups
•Configurable chunk size per disk group
•Global, dedicated, and/or dynamic hot spares
NOTE: Dell EMC recommends using virtual storage.
NOTE: After you create a disk group using one storage type, the system will use that storage type for additional disk
groups. To switch to the other storage type, you must first remove all disk groups.
Creating disk groups and pools
The panel that is displayed when creating disk groups and pools is dependent upon whether you are operating in a virtual storage
environment or a linear storage environment.
Virtual storage environments
If you are operating in a virtual storage environment, the system scans all available disks, recommends one optimal storage configuration,
and displays the suggested disk group layout within the panel.
In a virtual storage environment, the storage system automatically groups disk groups by pool and tier. The disk groups also include a
description of the total size and number of disks to be provisioned, including the configuration of spares and unused disks.
If the system is unable to determine a valid storage configuration, the wizard lists the reasons why and provides directions on how to
achieve a proper configuration. If the system is unhealthy, an error is displayed along with a description of how to fix it. Follow the
recommendations in the wizard to correct the errors, then click Rescan to view the optimized configuration.
For a system with no pools provisioned, if you are satisfied with the recommended configuration, click Create Pools to provision the
system as displayed in the panel and move on to attaching hosts. For a system that contains a pool, if you are satisfied with the
recommended configuration, click Expand Pools to provision the system as displayed in the panel.
If your environment requires a unique setup, click Go To Advanced Configuration to access the Create Advanced Pools panel. Select
Add Disk Group and follow the instructions to manually create disk groups one disk at a time. Select Manage Spares and follow the
instructions to manually select global spares.
Linear storage environments
If you are operating in a linear storage environment, the Create Advanced Pools panel opens.
Select Add Disk Groups and follow the instructions to manually create disk groups one at a time. Select Manage Spares and follow the
instructions to manually select global spares. Click the icon for more information about options presented.
Open the guided disk group and pool creation wizard
Perform the following steps to open the disk group and pool creation wizard:
1. Access Storage Setup by performing one of the following actions:
•From the Welcome panel, click Storage Setup.
•From the Home topic, click Action > Storage Setup.
2. Follow the on-screen directions to provision your system.
7 Perform host setup
Host system requirements
Hosts connected to ME4 Series controller enclosures must meet the following requirements:
Depending on your system configuration, host operating systems may require that multipathing is supported.
If fault tolerance is required, then multipathing software may be required. Host-based multipath software should be used in any
configuration where two logical paths between the host and any storage volume may exist simultaneously. This includes most
configurations where there are multiple connections to the host or multiple connections between a switch and the storage.
Use native Microsoft MPIO support with Windows Server 2012 R2, Windows Server 2016, and Windows Server 2019. Perform the
installation using either the Server Manager or the mpclaim CLI tool.
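As an illustration of that native MPIO support, the following PowerShell sketch installs the feature and claims MPIO-capable devices from an elevated prompt. It assumes Windows Server 2016 or 2019 and does not replace the Server Manager procedures later in this chapter.
# Minimal sketch (run PowerShell as Administrator); a reboot is required after installing the feature
Install-WindowsFeature -Name Multipath-IO      # install the MPIO feature
mpclaim -r -i -a ""                            # claim all MPIO-capable devices and reboot immediately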
About multipath configuration
ME4 Series storage systems comply with the SCSI-3 standard for Asymmetrical Logical Unit Access (ALUA).
ALUA-compliant storage systems provide optimal and non-optimal path information to the host during device discovery. To implement
ALUA, you must configure your servers to use multipath I/O (MPIO).
Attach host servers
NOTE:
• Refer to the Dell EMC Storage Support Matrix for a list of supported HBAs or iSCSI network adapters.
• Configure only one host at a time.
For more information, see the topics about initiators, hosts, and host groups, and attaching hosts and volumes in the Dell EMC
PowerVault ME4 Series Storage System Administrator’s Guide.
About these ME4 Series storage system setup tasks:
Windows hosts
Ensure that the HBAs or network adapters are installed, the drivers are installed, and the latest supported BIOS and firmware are installed.
Fibre Channel host server configuration for Windows Server
The following sections describe how to configure Fibre Channel host servers running Windows Server:
Prerequisites
•Complete the ME Storage Manager guided system and storage setup process.
•Refer to the cabling diagrams within this guide before attaching a host to the storage system; careful planning ensures a successful
deployment.
Attach FC hosts to the storage system
Perform the following steps to attach FC hosts to the storage system:
1. Ensure that all HBAs have the latest supported firmware and drivers as described on Dell.com/support. For a list of supported FC
HBAs, see the Dell EMC ME4 Series Storage System Support Matrix on Dell.com/support.
2. Use the FC cabling diagrams to cable the hosts to the storage system either by using switches or connecting the hosts directly to the
storage system.
3. Install MPIO on the FC hosts:
a) Open the Server Manager.
b) Click Add Roles and Features, then click Next until you reach the Features page.
c) Select Multipath IO.
d) Click Next, click Install, click Close, and then reboot the host server.
4. Identify and document FC HBA WWNs:
a) Open a Windows PowerShell console.
b) Type Get-InitiatorPort and press Enter.
c) Locate and record the FC HBA WWNs. The WWNs are needed to map volumes to the hosts.
5. If the hosts are connected to the storage system using FC switches, implement zoning to isolate traffic for each HBA:
NOTE: Skip this step if hosts are directly connected to the storage system.
a) Use the FC switch management interface to create a zone for each server HBA. Each zone must contain only one HBA WWN and
all the storage port WWNs.
b) Repeat for each FC switch.
NOTE: The ME4 Series storage systems support single initiator/multiple target zones.
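The WWNs recorded in step 4 can also be listed with a filtered PowerShell query. This is a sketch; the ConnectionType filter and property names assume the standard Get-InitiatorPort output and may vary by HBA driver.
# Sketch: list Fibre Channel initiator ports and their WWNs
Get-InitiatorPort | Where-Object { $_.ConnectionType -eq "Fibre Channel" } |
    Select-Object -Property InstanceName, NodeAddress, PortAddress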
Register hosts and create volumes
Perform the following steps to register hosts and create volumes using the ME Storage Manager:
1. Log in to the ME Storage Manager.
2. Access the Host Setup wizard:
•From the Welcome screen, click Host Setup.
•From the Home topic, click Action > Host Setup.
3. Confirm that you have met the listed prerequisites, then click Next.
4. Type a host name in the Host Name field.
5. Using the information documented in step 4 of Attach FC hosts to the storage system, select the FC initiators for the host you are
configuring, then click Next.
6. Group hosts together with other hosts in a cluster.
a) For cluster configurations, group hosts together so that all hosts within the group share the same storage.
•If this host is the first host in the cluster, select Create a new host group, type a name for the host group, and click Next.
•If this host is being added to a host group that exists, select Add to existing host group, select the group from the dropdown list, and click Next.
b) For stand-alone hosts, select the Do not group this host option, then click Next.
7. On the Attach Volumes page, specify the name, size, and pool for each volume, and click Next.
To add a volume, click Add Row. To remove a volume, click Remove.
NOTE:
Dell EMC recommends that you update the name with the hostname to better identify the volumes.
8. On the Summary page, review the host configuration settings, and click Configure Host.
Enable MPIO for the volumes on the Windows server
Perform the following steps to enable MPIO for the volumes on the Windows server:
1. Open the Server Manager.
2. Select Tools > MPIO.
3. Click the Discover Multi-Paths tab.
4. Select DellEMC ME4 in the Device Hardware Id list.
If DellEMC ME4 is not listed in the Device Hardware Id list:
a) Ensure that there is more than one connection to a volume for multipathing.
b) Ensure that Dell EMC ME4 is not already listed in the Devices list on the MPIO Devices tab.
5. Click Add and click Yes to reboot the Windows server.
Format volumes on the Windows server
Perform the following steps to format a volume on a Windows server:
1. Open Server Manager.
2. Select Tools > Computer Management.
3. Right-click on Disk Management and select Rescan Disks.
4. Right-click on the new disk and select Online.
5. Right-click on the new disk again and select Initialize Disk.
The Initialize Disk dialog box opens.
6. Select the partition style for the disk and click OK.
7. Right-click on the unallocated space, select the type of volume to create, and follow the steps in the wizard to create the volume.
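The same disk preparation can be scripted with the Storage module cmdlets that ship with Windows Server. In this sketch, the disk number 1, drive letter E, and volume label are placeholders for your environment.
# Sketch: bring a newly mapped volume online, initialize it, and format it (placeholder disk number and drive letter)
Set-Disk -Number 1 -IsOffline $false                        # bring the disk online
Initialize-Disk -Number 1 -PartitionStyle GPT               # write a GPT partition table
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter E  # create a partition that uses all space
Format-Volume -DriveLetter E -FileSystem NTFS -NewFileSystemLabel "ME4-Vol1"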
iSCSI host server configuration for Windows Server
These instructions document IPv4 configuration with dual switch subnet for network redundancy and failover. These instructions do not
cover IPv6 configuration:
Prerequisites
•Complete the ME Storage Manager guided setup process and storage setup process.
•Refer to the cabling diagrams within this guide before attaching a host to the storage system; careful planning ensures a successful
deployment.
•Complete a planning worksheet with the iSCSI network IP addresses to be used, per the example in the following table:
Table 6. Example worksheet for host server with dual port iSCSI NICs
Management
Server Management: 10.10.96.46
ME4024 Controller A Management: 10.10.96.128
ME4024 Controller B Management: 10.10.96.129
Subnet 1
Server iSCSI NIC 1: 172.1.96.46
ME4024 controller A port 0: 172.1.100.128
ME4024 controller B port 0: 172.1.200.129
ME4024 controller A port 2: 172.1.102.128
ME4024 controller B port 2: 172.1.202.129
Subnet Mask: 255.255.0.0
Subnet 2
Server iSCSI NIC 2: 172.2.96.46
ME4024 controller A port 1: 172.2.101.128
ME4024 controller B port 1: 172.2.201.129
ME4024 controller A port 3: 172.2.103.128
ME4024 controller B port 3: 172.3.203.129
Subnet Mask: 255.255.0.0
NOTE: The following instructions document IPv4 configuration with a dual switch subnet for network redundancy and failover. They do not cover IPv6 configuration.
Attach iSCSI hosts to the storage system
Perform the following steps to attach iSCSI hosts to the storage system:
1. Ensure that all network adapters have the latest supported firmware and drivers as described on Dell.com/support.
NOTE: The Dell EMC PowerVault ME4 Series storage system supports only software iSCSI adapters.
2. Use the iSCSI cabling diagrams to connect the hosts to the storage system either by using switches or connecting the hosts directly
to the storage system.
3. Install MPIO on the iSCSI hosts:
a. Open Server Manager.
b. Click Manage > Add Roles and Features.
c. Click Next until you reach the Features page.
d. Select Multipath IO.
e. Click Next, click Install, and click Close.
f. Reboot the Windows server.
Assign IP addresses for each network adapter connecting to the iSCSI network
Perform the following steps to assign IP addresses for the network adapter that connects to the iSCSI network:
CAUTION: IP addresses must match the subnets for each network. Make sure that you assign the correct IP addresses
to the NICs. Assigning IP addresses to the wrong ports can cause connectivity issues.
NOTE: If using jumbo frames, they must be enabled and configured on all devices in the data path, adapter ports,
switches, and storage system.
1. From the Network and Sharing Center, click Change adapter settings.
2. Right-click on the network adapter, then select Properties.
3. Select Internet Protocol Version 4, then click Properties.
4. Select the Use the following IP address radio button and type the corresponding IP addresses recorded in the planning worksheet
described in the Prerequisites section (ex: 172.1.96.46).
5. Set the netmask, using the subnet mask from the planning worksheet (for example, 255.255.0.0).
6. Configure a gateway if appropriate.
7. Click OK and Close. The settings are applied to the selected adapter.
8. Repeat steps 1-7 for each of the required iSCSI interfaces (NIC 1 and NIC 2 in Table 6. Example worksheet for host server with dual
port iSCSI NICs).
9. From the command prompt, ping each of the controller IP addresses to verify host connectivity before proceeding. If ping is not
successful, verify connections and the appropriate IP/subnet agreement between interfaces.
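The address assignment can also be performed from PowerShell. In the sketch below, the interface alias iSCSI-NIC1 and the addresses from the example worksheet are placeholders for your own values.
# Sketch: assign a static IPv4 address to an iSCSI NIC and verify reachability (placeholder alias and addresses)
New-NetIPAddress -InterfaceAlias "iSCSI-NIC1" -IPAddress 172.1.96.46 -PrefixLength 16
Test-Connection -ComputerName 172.1.100.128 -Count 2        # ping controller A port 0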
Configure the iSCSI Initiator
Perform the following steps to configure the iSCSI Initiator on the host:
1. Open Server Manager.
2. Click Tools > iSCSI Initiator. If you are running the iSCSI initiator for the first time, click Yes when prompted to have it start automatically when the server reboots.
3. Click the Discovery tab, then click Discover Portal. The Discover Target Portal dialog box opens.
4. Using the planning worksheet that you created in the Prerequisites section, type the IP address of a port on controller A that is on the first subnet and click OK.
5. Repeat steps 3-4 to add the IP address of a port on the second subnet that is from controller B.
6. Click the Targets tab, select a discovered target, and click Connect.
7. Select the Enable multi-path check box and click Advanced. The Advanced Settings dialog box opens.
•Select Microsoft iSCSI initiator from the Local adapter drop-down menu.
•Select the IP address of NIC 1 from the Initiator IP drop-down menu.
•Select the first IP listed in the same subnet from the Target portal IP drop-down menu.
•Click OK twice to return to the iSCSI Initiator Properties dialog box.
8. Repeat steps 6-7 for NIC 1 to establish a connection to each port on the subnet.
NOTE: Step 10 is required for multi-path configurations.
9. Repeat steps 3-8 for NIC 2, connecting it to the targets on the second subnet.
NOTE: After all connections are made, you can click the Favorite Targets tab to see each path. If you click Details, you can view specific information about the selected path.
10. Click the Configuration tab and record the initiator name in the Initiator Name field. The initiator name is needed to map volumes to the host.
11. Click OK to close the iSCSI Initiator Properties dialog box.
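The portal discovery and target login in steps 3-9 also have PowerShell equivalents in the iSCSI module. The following sketch assumes the example portal addresses from the planning worksheet and the MSiSCSI service; it connects all discovered targets with multipath enabled rather than binding each NIC individually as the GUI steps do.
# Sketch: discover ME4 targets and connect with multipath enabled (placeholder addresses)
Start-Service MSiSCSI                                        # ensure the iSCSI initiator service is running
New-IscsiTargetPortal -TargetPortalAddress 172.1.100.128     # discover through a controller A port on subnet 1
New-IscsiTargetPortal -TargetPortalAddress 172.2.201.129     # discover through a controller B port on subnet 2
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
(Get-IscsiSession).Count                                     # verify the number of active sessions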
Register hosts and create volumes
Perform the following steps to register hosts and create volumes using the ME Storage Manager:
1. Log in to the ME Storage Manager.
2. Access the Host Setup wizard:
•From the Welcome screen, click Host Setup.
•From the Home topic, select Action > Host Setup.
3. Confirm that you have met the listed prerequisites, then click Next.
4. Type a host name in the Host Name field.
5. Using the information from step 10 of Configure the iSCSI Initiator, select the iSCSI initiators for the host you are configuring, then click Next.
6. Group hosts together with other hosts in a cluster.
•For cluster configurations, group hosts together so that all hosts within the group share the same storage.
•If this host is the first host in the cluster, select Create a new host group, type a name for the host group, and click Next.
•If this host is being added to a host group that exists, select Add to existing host group, select the group from the dropdown list, and click Next.
•For stand-alone hosts, select the Do not group this host option, and click Next.
7. On the Attach Volumes page, specify the name, size, and pool for each volume, and click Next.
To add a volume, click Add Row. To remove a volume, click Remove.
NOTE: Dell EMC recommends that you update the volume name with the hostname to better identify the volumes.
8. On the Summary page, review the host configuration settings, and click Configure Host.
Enable MPIO for the volumes on the Windows server
Perform the following steps to enable MPIO for the volumes on the Windows server:
1. Open Server Manager.
2. Select Tools > MPIO.
3. Click the Discover Multi-Paths tab.
4. Select DellEMC ME4 in the Device Hardware Id list.
If DellEMC ME4 is not listed in the Device Hardware Id list:
a. Ensure that there is more than one connection to a volume for multipathing.
b. Ensure that Dell EMC ME4 is not already listed in the Devices list on the MPIO Devices tab.
5. Click Add and click Yes to reboot the Windows server.
Format volumes on the Windows server
Perform the following steps to format a volume on a Windows server:
1. Open Server Manager.
2. Select Tools > Computer Management.
3. Right-click Disk Management and select Rescan Disks.
4. Right-click on the new disk and select Online.
5. Right-click on the new disk again and select Initialize Disk. The Initialize Disk dialog box opens.
6. Select the partition style for the disk and click OK.
7. Right-click on the unallocated space, select the type of volume to create, and follow the steps in the wizard to create the volume.
Update the iSCSI initiator
Perform the following steps to configure all available volumes and devices on a Windows server:
1. Open Server Manager.
2. Click Tools > iSCSI initiator.
3. Click the Volumes and Devices tab.
4. Click Auto Configure.
5. Click OK to close the iSCSI Initiator Properties window.
SAS host server configuration for Windows Server
The following sections describe how to configure SAS host servers running Windows Server:
Prerequisites
•Complete the ME Storage Manager guided system and storage setup process.
•Refer to the cabling diagrams within this guide before attaching a host to the storage system; careful planning ensures a successful
deployment.
Attach SAS hosts to the storage system
Perform the following steps to attach SAS hosts to the storage system:
1. Ensure that all HBAs have the latest supported firmware and drivers as described on Dell.com/support. For a list of supported SAS
HBAs, see the Dell EMC ME4 Series Storage System Support Matrix on Dell.com/support.
2. Use the SAS cabling diagrams to cable the hosts directly to the storage system.
3. Install MPIO on the SAS hosts:
a. Open Server Manager.
b. Click Manage > Add Roles and Features.
c. Click Next until you reach the Features page.
d. Select Multipath I/O.
e. Click Next, click Install, and click Close.
f. Reboot the Windows server.
4. Identify and document the SAS HBA WWNs:
a. Open a Windows PowerShell console.
b. Type Get-InitiatorPort and press Enter.
c. Locate and record the SAS HBA WWNs. The WWNs are needed to map volumes to the server.
Register hosts and create volumes
Perform the following steps to register hosts and create volumes using the ME Storage Manager:
1. Log in to the ME Storage Manager.
2. Access the Host Setup wizard:
•From the Welcome screen, click Host Setup.
•From the Home topic, click Action > Host Setup.
3. Confirm that you have met the listed prerequisites, then click Next.
4. Type a host name in the Host Name field.
5. Using the information documented in step 4 of Attach SAS hosts to the storage system, select the SAS initiators for the host you are
configuring, then click Next.
6. Group hosts together with other hosts in a cluster.
•For cluster configurations, group hosts together so that all hosts within the group share the same storage.
•If this host is the first host in the cluster, select Create a new host group, type a name for the host group, and click Next.
•If this host is being added to a host group that exists, select Add to existing host group, select the group from the dropdown list, and click Next.
•For stand-alone hosts, select the Do not group this host option, and click Next.
7. On the Attach Volumes page, specify the name, size, and pool for each volume, and click Next.
To add a volume, click Add Row. To remove a volume, click Remove.
NOTE: Dell EMC recommends that you update the volume name with the hostname to better identify the volumes.
8. On the Summary page, review the host configuration settings, and click Configure Host.
Enable MPIO for the volumes on the Windows server
Perform the following steps to enable MPIO for the volumes on the Windows server:
1. Open Server Manager.
2. Select Tools > MPIO.
3. Click the Discover Multi-Paths tab.
4. Select DellEMC ME4 in the Device Hardware Id list.
If DellEMC ME4 is not listed in the Device Hardware Id list:
a. Ensure that there is more than one connection to a volume for multipathing.
b. Ensure that Dell EMC ME4 is not already listed in the Devices list on the MPIO Devices tab.
5. Click Add and click Yes to reboot the Windows server.
Format volumes on Windows server
Perform the following steps to format a volume on a Windows server:
1. Open Server Manager.
2. Select Tools > Computer Management.
3. Right-click on Disk Management and select Rescan Disks.
4. Right-click on the new disk and select Online.
5. Right-click on the new disk again and select Initialize Disk. The Initialize Disk dialog box opens.
6. Select the partition style for the disk and click OK.
7. Right-click on the unallocated space, select the type of volume to create, and follow the steps in the wizard to create the volume.
Linux hosts
Ensure that the HBAs or network adapters are installed, the drivers are installed, and the latest supported BIOS is installed.
Fibre Channel host server configuration for Linux
The following sections describe how to configure Fibre Channel host servers running Linux:
Prerequisites
•Complete the ME Storage Manager guided system and storage setup process.
•Refer to the cabling diagrams within this guide before attaching a host to the storage system; careful planning ensures a successful
deployment.
•Administrative or privileged user permissions are required to make system-level changes. These steps assume root level access and
that all required software packages are already installed (for example, DM Multipath).
Attach hosts to the storage system
Perform the following steps to attach Fibre Channel hosts to the storage system:
1. Ensure that all HBAs have the latest supported firmware and drivers as described on the Dell Support portal. For a list of supported standard FC HBAs, see the Dell EMC PowerVault ME4 Series Storage Support Matrix on the Dell website. For OEMs, contact your hardware provider.
2. Use the FC cabling diagrams to cable the host servers either by using switches or attaching them directly to the storage system.
3. Identify Fibre Channel WWNs to connect to the storage system by doing the following:
a) Open a terminal session.
b) Run the ls –l /sys/class/fc_host command.
c) Run the more /sys/class/fc_host/host?/port_name command and replace the ? with the host numbers that are
supplied in the data output.
d) Record the WWN numeric name.
4. If the hosts are connected to the storage system by FC switches, implement zoning to isolate traffic for each HBA. Skip this step if
hosts are directly connected to the storage system.
a) Use the FC switch management interface to create a zone for each server HBA. Each zone must contain only one HBA WWN and
all the storage port WWNs.
b) Repeat for each FC switch.
NOTE: The ME4 Series storage systems support single initiator/multiple target zones.
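The two commands in step 3 can be combined into a single loop that prints each FC host adapter with its WWN. This sketch assumes the same /sys/class/fc_host layout used above.
# Print each FC host together with its port WWN
for host in /sys/class/fc_host/host*; do
    echo "$(basename "$host"): $(cat "$host/port_name")"
done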
Register the host and create and map volumes
Perform the following steps to register the hosts, create volumes, and map volumes:
1. Log in to the ME Storage Manager.
2. Access the Host Setup wizard by doing one of the following:
•From the Welcome screen, click Host Setup.
•From the Home topic, click Action > Host Setup.
3. Confirm that you have met the listed prerequisites, then click Next.
4. Enter a hostname.
5. Using the information from step 3 of Attach hosts to the storage system to identify the correct initiators, select the FC initiators for
the host you are configuring, then click Next.
6. Group hosts together with other hosts.
a) For cluster configurations, use the “Host groups” setting to group hosts in a cluster.
•If this host is the first host in the cluster, select Create a new host group, then provide a name and click Next.
•If this host is being added to a host group that exists, select Add to existing host group. Select the group from the dropdown list, then click Next.
b) For stand-alone hosts, select the Do not group this host option, then click Next.
7. On the Attach volumes page, use the options to change the volume name and size, select the pool for the volume, and add or delete
volumes. Click Next.
NOTE: Dell EMC recommends that you update the name of the volume with the hostname to better identify the
volumes.
8. On the Summary page, review the changes, then click Configure Host.
•Click Yes to return to the Select Host page of the wizard, or choose No to close the wizard.
•Click Previous to go back and make changes to the settings.
Enable and configure DM Multipath
Perform the following steps to enable and configure DM multipath:
NOTE: Safeguard and blacklist internal server disk drives from multipath configuration files. These steps are meant as a basic setup to enable DM Multipath to the storage system. It is assumed that DM Multipath packages are installed.
For RHEL 7 / SLES 12:
1. Run the multipath –t command to list the DM Multipath status.
2. If no configuration exists, use the information that is listed from running the command in step 1 to copy a default template to the /etc directory.
3. If the DM multipath kernel driver is not loaded:
a) Run the systemctl enable multipathd command to enable the service to run automatically.
b) Run the systemctl start multipathd command to start the service.
4. Run the multipath command to load storage devices along with the configuration file.
5. Run the multipath –l command to list the Dell EMC PowerVault ME4 Series storage devices as configured under DM Multipath.
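If you edit the copied template, a minimal /etc/multipath.conf might look like the following sketch. The WWID shown for the internal boot drive is a placeholder, and the options are assumptions to validate against your distribution's DM Multipath documentation.
# Minimal /etc/multipath.conf sketch (placeholder WWID; adjust for your environment)
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
blacklist {
    # Exclude the internal server boot drive from multipath handling
    wwid 3600508b1001c1234567890abcdef000000
}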
Create a file system on ME4 Series volumes
Perform the following steps to configure a simple XFS file system to mount as a volume:
For RHEL 7 / SLES 12:
1. From the multipath -l command output, identify the device multipath to target creating a file system. In this example, the first
time that multipath is configured, the first device is /dev/mapper/mpatha, correlating to sg block devices /dev/sdb and /dev/sdd.
NOTE: Run the lsscsi command to list all SCSI devices from the Controller/Target/Bus/LUN map. This command also identifies block devices per controller.
2. Run the mkfs.xfs /dev/mapper/mpatha command to create an xfs type file system.
3. Run the mkdir /mnt/VolA command to create a mount point for this file system with a referenced name, such as VolA.
4. Run the mount /dev/mapper/mpatha /mnt/VolA command to mount the file system.
5. Begin using the file system as any other directory to host applications or file services.
6. Repeat steps 1–5 for each provisioned volume in the ME Storage Manager, for example for /dev/mapper/mpathb, correlating to sg block devices /dev/sdc and /dev/sde.
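Steps 1-5 condense into the following shell sketch for a hypothetical multipath device named mpatha; substitute the device and mount point reported by your own multipath -l output.
# Sketch: create and mount an XFS file system on a multipath device (placeholder names)
multipath -l                               # identify the multipath device, for example /dev/mapper/mpatha
mkfs.xfs /dev/mapper/mpatha                # create the XFS file system
mkdir -p /mnt/VolA                         # create the mount point
mount /dev/mapper/mpatha /mnt/VolA         # mount the file system
df -h /mnt/VolA                            # verify the mounted capacity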
iSCSI host server configuration for Linux
The following sections describe how to configure iSCSI host servers running Linux:
Prerequisites
•Complete the ME Storage Manager guided system and storage setup process.
•Refer to the cabling diagrams within this guide before attaching a host to the storage system; careful planning ensures a successful
deployment.
•Administrative or privileged user permissions are required to make system-level changes. The following sections assume root level
access and that all required software packages are already installed, for example, iSCSI-initiator and DM Multipath.
•Complete a planning worksheet with the iSCSI network IP addresses to be used, per the example in the following table.
Table 7. Example worksheet for single host server with dual port iSCSI NICs
Management
Server Management: 10.10.96.46
ME4024 Controller A Management: 10.10.96.128
ME4024 Controller B Management: 10.10.96.129
Subnet 1
Server iSCSI NIC 1: 172.1.96.46
ME4024 controller A port 0: 172.1.100.128
ME4024 controller B port 0: 172.1.200.129
ME4024 controller A port 2: 172.1.102.128
ME4024 controller B port 2: 172.1.202.129
Subnet Mask: 255.255.0.0
Subnet 2
Server iSCSI NIC 2: 172.2.96.46
ME4024 controller A port 1: 172.2.101.128
ME4024 controller B port 1: 172.2.201.129
ME4024 controller A port 3: 172.2.103.128
ME4024 controller B port 3: 172.3.203.129
Subnet Mask: 255.255.0.0
The following instructions document IPv4 configuration with a dual switch subnet for network redundancy and failover. They do not cover IPv6 configuration.
Attach hosts to the storage system
1. Ensure that all network adapters have the latest supported firmware and drivers as described on the Dell Support portal.
2. Use the iSCSI cabling diagrams to cable the host servers to the switches or directly to the storage system.
Assign IP addresses for each network adapter connecting to the iSCSI network
CAUTION: The IP addresses must match the subnets for each network, so ensure that you correctly assign IP addresses to the network adapters. Assigning IP addresses to the wrong ports can cause connectivity issues.
NOTE: If using jumbo frames, they must be enabled and configured on all devices in the data path, adapter ports,
switches, and storage system.
For RHEL 7
1. From the server terminal or console, run the nmtui command to access the NIC configuration tool (NetworkManager TUI).
2. Select Edit a connection to display a list of the Ethernet interfaces installed.
3. Select the iSCSI NIC that you want to assign an IP address to.
4. Change the IPv4 Configuration option to Manual.
5. Using the planning worksheet that you created in the “Prerequisites” section, provide the subnet mask by entering the NIC IP address
using the format x.x.x.x/16. For example: 172.1.96.46/16
6. Configure a gateway, if appropriate.
7. Select IGNORE for the IPv6 Configuration.
8. Check Automatically connect to start the NIC when the system boots.
9. Select OK to exit Edit connection.
10. Select Back to return to the main menu.
11. Select Quit to exit NetworkManager TUI.
12. Ping the new network interface and associated storage host ports to ensure IP connectivity.
13. Repeat steps 1-12 for each NIC you are assigning IP addresses to.
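If you prefer a command line to the nmtui interface, NetworkManager's nmcli tool can apply the same settings. The connection name iscsi-nic1 and the addresses below are placeholders taken from the example worksheet.
# Sketch: assign a static IPv4 address to an iSCSI NIC with nmcli (placeholder connection name and addresses)
nmcli connection modify "iscsi-nic1" ipv4.method manual ipv4.addresses 172.1.96.46/16
nmcli connection modify "iscsi-nic1" connection.autoconnect yes
nmcli connection up "iscsi-nic1"
ping -c 2 172.1.100.128                    # verify reachability of controller A port 0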
For SLES 12
1. From the server terminal or console, run the yast command to access the YaST Control Center.
2. Select System > Network Settings.
3. Select the iSCSI NIC that you want to assign an IP address to, then select Edit.
4. Select Statically Assigned IP Address.
5. Using the planning worksheet that you created in the “Prerequisites” section, enter the NIC IP address. For example: 172.1.96.46
6. Using the planning worksheet that you created in the “Prerequisites” section, enter the NIC subnet mask. For example: 255.255.0.0.
7. Select Next.
8. Ping the new network interface and associated storage host ports to ensure IP connectivity.
9. Repeat steps 1-8 for each NIC you are assigning IP addresses to (NIC1 and NIC2 in the planning worksheet you created in the
“Prerequisites” section).
10. Select OK to exit network settings.
11. Select OK to exit YaST.
Configure the iSCSI initiators to connect to the ME4 Series storage system
For RHEL 7
1. From the server terminal or console, run the iscsiadm command to discover targets (port A0) and log in to them:
a) Run the discovery command against controller A port 0, then log in to the reported targets (see the example sequence after these steps).
b) Repeat the login for each controller host port using the discovery command output in step 1.
c) Reboot the host to ensure that all targets are automatically connected.
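An illustrative open-iscsi sequence for the discovery and login above, using the worksheet address for controller A port 0; the target IQNs and addresses reported on your system will differ:
   iscsiadm -m discovery -t sendtargets -p 172.1.100.128
   iscsiadm -m node --portal 172.1.100.128 --login
   # repeat the login for each controller host port reported by the discovery command
   iscsiadm -m session        # verify the active iSCSI sessions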
For SLES 12
1. From the server terminal or console, use the yast command to access YaST Control Center.
2. Select Network Service > iSCSI Initiator.
3. On the Service tab, select When Booting.
4. Select the Connected Targets tab.
5. Select Add. The iSCSI Initiator Discovery screen displays.
6. Using the Example worksheet for single host server with dual port iSCSI NICs you created earlier, enter the IP address for port A0 in
the IP Address field, then click Next. For example: 172.1.100.128.
7. Select Connect.
8. On the iSCSI Initiator Discovery screen, select the next adapter and then select Connect.
9. When prompted, select Continue to bypass the warning message, “Warning target with TargetName is already connected”.
10. Select Startup to Automatic, then click Next.
11. Repeat steps 2-10 for all remaining adapters.
12. Once the targets are connected, click Next > Quit to exit YaST.
13. Reboot the host to ensure that all targets are automatically connected.
Register the host and create and map volumes
1. Log in to the ME Storage Manager.
2. Access the Host Setup wizard by doing one of the following:
•From the Welcome screen, click Host Setup.
•From the Home topic, click Action > Host Setup.
3. Confirm that you have met the listed prerequisites, then click Next.
4. Enter a hostname.
5. Using the information from step 12 of the For SLES 12 procedure to identify the correct initiators, select the iSCSI initiators for the host you are configuring, then click Next.
6. Group hosts together with other hosts.
a) For cluster configurations, use the Host groups setting to group hosts in a cluster.
•If this host is the first host in the cluster, select Create a new host group, then provide a name and click Next.
•If this host is being added to a host group that exists, select Add to existing host group. Select the group from the dropdown list, then click Next.
b) For stand-alone hosts, select the Do not group this host option, then click Next.
7. On the Attach volumes page, use the options to change the volume name and size (by default two 100GB volumes are created); select the pool where the volume will reside; and add or delete volumes. Click Next.
NOTE: Dell EMC recommends that you update the volume name with the hostname to better identify the volumes.
8. On the Summary page, review the changes made, then click Configure Host.
•Click Previous to go back and change the settings.
•Click Yes to return to the Select Host page of the wizard, or choose No to close the wizard.
Enable and configure DM Multipath
NOTE: Be sure to safeguard and blacklist internal server disk drives from multipath configuration files. These steps are meant as a basic setup to enable DM Multipath to the storage system. It is assumed that DM Multipath packages are installed.
For RHEL 7 / SLES 12:
1. Run the multipath -t command to list the DM Multipath status.
2. If no configuration currently exists, use the command information displayed in step 1 to copy a default template to the directory /etc.
3. If the DM multipath kernel driver is not loaded:
a) Run the systemctl enable multipathd command to enable the service to run automatically.
b) Run the systemctl start multipathd command to start the service.
4. Run the multipath command to load storage devices in conjunction with the configuration file.
5. Run the multipath -l command to list the Dell EMC PowerVault ME4 Series storage devices as configured under DM Multipath.
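For reference, a typical command sequence for this setup on RHEL 7 is shown below. The template location varies by distribution and package version, so treat the copy step as an illustration only (SLES ships a sample configuration under its multipath-tools documentation directory):
   cp /usr/share/doc/device-mapper-multipath-*/multipath.conf /etc/multipath.conf    # RHEL template location; adjust as needed
   systemctl enable multipathd
   systemctl start multipathd
   multipath
   multipath -l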
Create a file system on ME4 Series volumes
NOTE: The following steps are to configure a simple XFS file system to mount as a volume from the ME4 Series storage system.
For RHEL 7 / SLES 12:
1. From the multipath -l command output above, identify the multipath device on which to create a file system. In this example, the first time multipath is configured, the first device will be /dev/mapper/mpatha, correlating to sg block devices /dev/sdb and /dev/sdd.
NOTE: Run the lsscsi command to list all SCSI devices from the Controller/Target/Bus/LUN map. This also
identifies block devices per controller.
2. Run the mkfs.xfs /dev/mapper/mpatha command to create an xfs type file system.
3. Run the mkdir /mnt/VolA command to create a new mount point for this file system with a referenced name, such as VolA.
4. Run the mount /dev/mapper/mpatha /mnt/VolA command to mount the file system.
5. Begin using the file system as any other directory to host applications or file services.
6. Repeat steps 1-5 for other provisioned volumes from the ME Storage Manager. For example, /dev/mapper/mpathb, correlating to sg block devices /dev/sdc and /dev/sde.
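Under the example device and mount point names above, the complete sequence is as follows; adjust the multipath device and mount point to match your system:
   mkfs.xfs /dev/mapper/mpatha
   mkdir /mnt/VolA
   mount /dev/mapper/mpatha /mnt/VolA
   df -h /mnt/VolA            # confirm the new file system is mounted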
SAS host server configuration for Linux
The following sections describe how to configure SAS host servers running Linux:
•Complete the ME Storage Manager guided system and storage setup process.
•Refer to the cabling diagrams within this guide before attaching a host to the storage system; careful planning will ensure a successful
deployment.
•Administrative or privileged user permissions are required to make system-level changes. These steps assume root level access and
that all required software packages are already installed (for example, DM Multipath).
Attach SAS hosts to the storage system
Perform the following steps to attach SAS hosts to the storage system:
1. Ensure that all HBAs have the latest supported firmware and drivers as described on the Dell Support web site. For a list of supported
SAS HBAs, see the Dell EMC ME4 Series Storage System Support Matrix on the Dell Support web site.
2. Use the SAS cabling diagrams to cable the host servers directly to the storage system.
3. Identify SAS HBA initiators to connect to the storage system by doing the following:
a. Open a terminal session.
b. Run the dmesg|grep scsi|grep slot command.
c. Record the WWN numeric name.
Register the host and create and map volumes
1. Log in to the ME Storage Manager.
2. Access the Host Setup wizard by doing one of the following:
•From the Welcome screen, click Host Setup.
•From the Home topic, click Action > Host Setup.
3. Confirm that you have met the listed prerequisites, then click Next.
4. Enter a hostname.
5. Using the information from step 3 of Attach SAS hosts to the storage system to identify the correct initiators, select the SAS initiators
for the host you are configuring, then click Next.
6. Group hosts together with other hosts.
a. For cluster configurations, use the Host groups setting to group hosts in a cluster.
•If this host is the first host in the cluster, select create a new host group, then provide a name and click Next.
•If this host is being added to a host group that exists, select Add to existing host group. Select the group from the dropdown list, then click Next.
b. For stand-alone hosts, select the Do not group this host option, then click Next.
7. On the Attach volumes page, use the options to change the volume name and size (by default two 100GB volumes are created);
select the pool for the volume; and add or delete volumes. Click Next.
NOTE: Dell EMC recommends that you update the volume name with the hostname to better identify the volumes.
8. On the Summary page, review the changes, then click Configure Host.
•Click Previous to go back and change the settings.
•Click Yes to return to the Select Host page of the wizard, or choose No to close the wizard.
Enable and configure DM Multipathing
NOTE: Safeguard and blacklist internal server disk drives from multipathing configuration files. These steps are meant
as a basic setup to enable DM Multipathing to the storage system. It is assumed that DM Multipathing packages are
installed.
For RHEL 7 / SLES 12:
1. Run the multipath -t command to list the DM Multipathing status.
2. If no configuration exists, use the command information that is listed in step 1 to copy a default template to the directory /etc.
3. If the DM multipathing kernel driver is not loaded:
a. Run the systemctl enable multipathd command to enable the service to run automatically.
b. Run the systemctl start multipathd command to start the service.
4. Run the multipath command to load storage devices along with the configuration file.
5. Run the multipath -l command to list the ME4 Series storage devices as configured under DM Multipathing.
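As the NOTE above recommends, internal server disks should be excluded from multipathing. A minimal blacklist stanza for /etc/multipath.conf, assuming the internal boot disk is /dev/sda (adjust to your hardware), might look like:
   blacklist {
       devnode "^sda$"        # illustrative only; exclude internal disks, not the ME4 multipath devices
   }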
Create a file system on ME4 Series volumes
NOTE: The following steps are to configure a simple XFS file system to mount as a volume from the ME4 Series storage system.
For RHEL 7 / SLES 12:
1. From the multipath -l command output, identify the multipath device on which to create a file system. In this example, the first time that multipathing is configured, the first device is /dev/mapper/mpatha, correlating to sg block devices /dev/sdb and /dev/sdd.
NOTE: Run the lsscsi command to list all SCSI devices from the Controller/Target/Bus/LUN map. This command also identifies block devices per controller.
2. Run the mkfs.xfs /dev/mapper/mpatha command to create an xfs type file system.
3. Run the mkdir /mnt/VolA command to create a mount point for this file system with a referenced name, such as VolA.
4. Run the mount /dev/mapper/mpatha /mnt/VolA command to mount the file system.
5. Begin using the file system as any other directory to host applications or file services.
6. Repeat steps 1-5 for other provisioned volumes from the ME Storage Manager. For example, /dev/mapper/mpathb, correlating to sg block devices /dev/sdc and /dev/sde.
VMware ESXi hosts
Ensure that the HBAs or network adapters are installed and the latest supported BIOS is installed.
Fibre Channel host server configuration for VMware ESXi
The following sections describe how to configure Fibre Channel host servers running VMware ESXi:
Prerequisites
•Complete the ME Storage Manager guided system and storage setup process.
•Refer to the cabling diagrams within this guide before attaching a host to the storage system; careful planning ensures a successful
deployment.
•Install the required version of the VMware ESXi operating system and configure it on the host.
Attach hosts to the storage system
Perform the following steps to attach Fibre Channel hosts to the storage system:
1. Ensure that all HBAs have the latest supported firmware and drivers as described on the Dell Support portal. For a list of supported standard FC HBAs, see the Dell EMC ME4 Series Storage System Support Matrix on the Dell website. For OEMs, contact your hardware provider.
2. Use the FC cabling diagrams to cable the host servers either by using switches or attaching them directly to the storage system.
3. Log in to the VMware vCenter Server and add the newly configured ESXi host to the appropriate datacenter.
4. On the Configure tab, select Storage > Storage Adapters.
5. Verify that the required FC storage adapters are listed, then record the HBA’s WWN as listed under Properties.
6. If the hosts are connected to the storage system by FC switches, implement zoning to isolate traffic for each HBA by doing the
following (skip this step if hosts are directly connected to the storage system):
a) Use the FC switch management interface to create a zone for each server HBA. Each zone must contain only one HBA WWN and
all the storage port WWNs.
b) Repeat sub-step a for each FC switch.
NOTE: The Dell EMC PowerVault ME4 Series storage systems support single initiator/multiple target zones.
Register the host and create and map volumes
Perform the following steps to register a Fibre Channel host, create volumes, and map volumes storage system:
1. Log in to the ME Storage Manager.
2. Access the Host Setup wizard by doing one of the following:
•From the Welcome screen, click Host Setup.
•From the Home topic, click Action > Host Setup.
3. Confirm that you have met the listed prerequisites, then click Next.
4. Enter a hostname.
5. Using the information from step 5 of Attach hosts to the storage system to identify the correct initiators, select the FC initiators for
the host you are configuring, then click Next.
6. Group hosts together with other hosts.
a) For cluster configurations, use the Host groups setting to group hosts in a cluster.
•If this host is the first host in the cluster, select Create a new host group, then provide a name and click Next.
•If this host is being added to a host group that exists, select Add to existing host group. Select the group from the dropdown list, then click Next.
b) For stand-alone hosts, select the Do not group this host option, then click Next.
7. On the Attach volumes page, use the options to change the volume name and size, select the pool for the volume, and add or delete volumes. Click Next.
NOTE: Dell EMC recommends that you update the name of the volume with the hostname to better identify the volumes.
8. On the Summary page, review the changes made, then click Configure Host.
•Click Yes to return to the Select Host page of the wizard, or choose No to close the wizard.
•Click Previous to go back and make changes to the settings.
Enable Multipath on the FC volumes
1. Log in to the VMware vCenter Server, then click on the ESXi host added.
2. On the Configure tab, select Storage Devices.
3. Perform a rescan of the storage devices.
4. Select the FC Disk (Dell EMC Fibre Channel Disk) created in the Register the host and create and map volumes procedure, then select
the Properties tab below the screen.
5. Scroll down to select the Edit Multipathing option, then select Round Robin (VMware) from the drop down list.
6. Click OK.
7. Repeat steps 4-6 for all volumes presented from the Dell EMC PowerVault ME4 Series storage system to the ESXi host.
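The same policy change can also be made from the ESXi shell. The device identifier below is only a placeholder for the naa identifier of an ME4 volume on your host:
   esxcli storage nmp device list                                          # note the naa identifier and the current path selection policy
   esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR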
VMware Volume rescan and datastore creation
Perform the following steps to rescan storage and create a datastore:
1. Log in to the VMware vCenter Server, then click the configured ESXi host.
2. On the Configure tab, select Storage Adapters, then select the software FC adapter HBA and click the Rescan option.
3. Click OK on the Rescan Storage dialog box.
After a successful rescan, the volumes that are displayed in the Register the host and create and map volumes
section are visible.
4. Create a VMware datastore file system on the volume presented from the ME4 Series storage system:
a) On the Configure tab, select Datastore > Create new datastore (a cylinder with + sign).
b) Select VMFS as the type on New Datastore screen, then click Next.
c) Enter a name for the new datastore, select the right volume/LUN, and click Next.
d) Select VMFS6 as the VMFS version of the datastore, then click OK.
e) On the Partition configuration page, select the default value that is shown, then click Next.
f) Click Finish to complete the creation of the new datastore.
g) After the datastore is created, the new datastore is visible on the Datastores tab.
iSCSI host server configuration for VMware ESXi
The following sections describe how to configure iSCSI host servers running VMware ESXi:
Prerequisites
•Complete the ME Storage Manager guided system and storage setup process.
•Refer to the cabling diagrams within this guide before attaching a host to the storage system; careful planning ensures a successful
deployment.
•Install the required version of the VMware ESXi operating system and configure it on the host.
•Complete a planning worksheet with the iSCSI network IP addresses to be used, per the example in the following table.
Table 8. Example worksheet for single host server with dual port iSCSI NICs
Management                        IP
Server Management                 10.10.96.46
ME4024 Controller A Management    10.10.96.128
ME4024 Controller B Management    10.10.96.129
Subnet 1
Server iSCSI NIC 1                172.1.96.46
ME4024 controller A port 0        172.1.100.128
ME4024 controller B port 0        172.1.200.129
ME4024 controller A port 2        172.1.102.128
ME4024 controller B port 2        172.1.202.129
Subnet Mask                       255.255.0.0
Subnet 2
Server iSCSI NIC 2                172.2.96.46
ME4024 controller A port 1        172.2.101.128
ME4024 controller B port 1        172.2.201.129
ME4024 controller A port 3        172.2.103.128
ME4024 controller B port 3        172.2.203.129
Subnet Mask                       255.255.0.0
Attach hosts to the storage system
Perform the following steps to attach a host to the storage system:
1. Ensure that all network adapters have the latest supported firmware and drivers as described on the Dell Support portal.
NOTE: The Dell EMC PowerVault ME4 Series storage system supports only software iSCSI adapters.
2. Use the iSCSI cabling diagrams to cable the host servers either by using switches or attaching them directly to the storage system
using one-to-one mode. Record the two different IP address ranges for each storage system controller. For example: 172.2.15.x,
172.3.20.x .
3. If the host servers are connected to the storage system by iSCSI switches, configure the switches to use two different IP address ranges/subnets. Configuring the switches with two different IP address ranges/subnets enables high availability.
Configure the VMware ESXi VMkernel
Perform the following steps to configure the VMware ESXi VMkernel:
1. From the VMware vSphere web client, click Configure > Networking > Physical adapters.
2. Locate and document the device name for the NICs used for iSCSI traffic.
3. Click the VMkernel adapters, then click the plus (+) icon to create a VMkernel adapter.
4. On the Select Connection Type page, select VMkernel Network Adapter > Next.
5. On the Select Target Device page, select New standard switch > Next.
6. On the Create Standard Switch page, click the plus (+) icon, then select vmnic > OK to connect to the subnet defined in step 4 of
the “Attach hosts to the storage system” procedure.
7. Click Next.
8. Provide a network label, then update the port properties.
9. On the IPv4 settings page, select Static IP and assign an IP using your planning worksheet.
10. Click Next.
11. On the Ready to complete page, review the settings and then click Finish.
12. Repeat steps 1–11 for each NIC to use for iSCSI traffic.
NOTE: If you are using jumbo frames, they must be enabled and configured on all devices in the data path, adapter
ports, switches, and storage system.
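If jumbo frames are enabled, the MTU can be set and verified from the ESXi shell. The vSwitch name, VMkernel name, and address below are examples based on this procedure, not fixed values:
   esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
   esxcli network ip interface set --interface-name=vmk1 --mtu=9000
   vmkping -d -s 8972 172.1.100.128    # verify an end-to-end jumbo frame path to controller A port 0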
Configure the software iSCSI Adapter on the ESXi host
Perform the following steps to configure a software iSCSI adapter on the ESXI host:
NOTE: If you plan to use VMware ESXi with 10GBase-T controllers, you must perform one of the following tasks:
•Update the controller firmware to the latest version posted on Dell.com/support before connecting the ESXi host to the ME4 Series storage system.
OR
•Run the following ESX CLI command on every ESXi host before connecting it to the ME4 Series storage system:
esxcli system settings advanced set --int-value 0 --option /VMFS3/HardwareAcceleratedLocking
1. Log in to the VMware vCenter Server.
2. On the Configure tab, select Storage > Storage Adapters.
3. Click the plus (+) icon, then select software iSCSI adapter > OK. The adapter is added to the list of available storage adapters.
4. Select the newly added iSCSI adapter, then click Targets > Add.
5. Enter the iSCSI IP address that is assigned to the iSCSI host port of storage controller A, then click OK.
6. Repeat steps 4-5 for the iSCSI host port of storage controller B.
7. If multiple VMkernels are used on the same subnet, configure the network port binding:
a) On the software iSCSI adapter, click the Network Port Binding tab, then click the plus (+) icon to add the virtual network port to bind with the iSCSI adapter.
b) Select the VMkernel adapters that are created in the Configure the VMware ESXi VMkernel procedure, then click OK.
c) Select Rescan of storage adapters.
NOTE: This step is required to establish a link between the iSCSI adapter and the VMkernel adapters that are created in the Configure the VMware ESXi VMkernel procedure. If each of the VMkernels used for iSCSI are on separate subnets, skip this step.
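The same adapter and target configuration can be scripted from the ESXi shell. The adapter name vmhba64 and the target addresses below are illustrative only; use the adapter name reported on your host and the addresses from your planning worksheet:
   esxcli iscsi software set --enabled=true
   esxcli iscsi adapter list                                                        # note the software iSCSI adapter name
   esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=172.1.100.128:3260
   esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=172.2.201.129:3260
   esxcli storage core adapter rescan --adapter=vmhba64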
Register the host and create and map volumes
Perform the following steps to register the host, create volumes, and map volumes:
1. Log in to the ME Storage Manager.
2. Access the Host Setup wizard by doing one of the following:
•From the Welcome screen, click Host Setup.
•From the Home topic, click Action > Host Setup.
3. Confirm that you have met the listed prerequisites, then click Next.
4. Enter a hostname.
5. Using the information from the Attach hosts to the storage system procedure to identify the correct initiators, select the iSCSI initiators for the host you are configuring, then click Next.
6. Group hosts together with other hosts.
a) For cluster configurations, use the Host groups setting to group hosts in a cluster.
•If this host is the first host in the cluster, select Create a new host group, then provide a name and click Next.
•If this host is to be part of a host group that exists, select Add to existing host group. Select the group from the dropdown list, then click Next.
b) For stand-alone hosts, select the Do not group this host option, then click Next.
7. On the Attach volumes page, use the options to change the volume name and size. Select the pool for the volume and add or delete volumes. Click Next.
NOTE: Dell EMC recommends that you update the volume name with the hostname to better identify the volumes.
8. On the Summary page, review the changes, then click Configure Host.
•Click Yes to return to the Select Host page of the wizard, or choose No to close the wizard.
•Click Previous to go back and make changes to the settings.
VMware Volume rescan and datastore creation
Perform the following steps to rescan volumes and create datastores:
1. Log in to the VMware vCenter Server, then click the ESXi host that was configured in the Attach hosts to the storage system procedure.
2. On the Configure tab, select Storage > Storage Adapters, then select the software iSCSI adapter HBA and click the Rescan
option.
3. Click OK on the Rescan Storage dialog box.
After a successful rescan, the volumes that are displayed in the Register the host and create and map volumes section are visible.
4. Create a VMware datastore file system on the volume presented from the ME4 Series storage system:
a) On the Configure tab, select Datastore > Create new datastore (a cylinder with + sign).
b) Select VMFS as the type on New Datastore screen, then click Next.
c) Enter a name for the new datastore, select the right volume/LUN, and click Next.
d) Select VMFS6 as the VMFS version of the datastore, then click OK.
e) On the Partition configuration page, select the default value that is shown, then click Next.
f) Click Finish to complete the creation of the new datastore.
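The new datastore can also be confirmed from the ESXi shell, for example:
   esxcli storage filesystem list    # the new VMFS-6 datastore appears with its volume name and mount point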
SAS host server configuration for VMware ESXi
The following sections describe how to configure SAS host servers running VMware ESXi:
Prerequisites
•Complete the ME Storage Manager guided system and storage setup process.
•Refer to the cabling diagrams within this guide before attaching a host to the storage system; careful planning ensures a successful
deployment.
•Install the required version of the ESXi operating system and configure it on the host.
Attach SAS hosts to the storage system
Perform the following steps to attach a SAS host to the storage system:
1. Ensure that all HBAs have the latest supported firmware and drivers as described on the Dell Support portal. For a list of supported standard SAS HBAs, see the Dell EMC ME4 Support Matrix on the Dell website. For OEMs, contact your hardware provider.
2. Use the SAS cabling diagrams to cable the host servers either by using switches or attaching them directly to the storage system.
3. Log in to the VMware vCenter Server and add the newly configured ESXi host to the Datacenter.
4. On the Configure tab, select Storage > Storage Adapters.
5. Verify that the required SAS storage adapters are listed, then record the HBA WWNs as listed under Properties.
NOTE: SAS HBAs have two ports. The World Wide Port Name (WWPN) for port 0 ends in zero and the WWPN for
port 1 ends in one.
Register the host and create and map volumes
Perform the following steps to register a host, create volumes, and map volumes:
1. Log in to the ME Storage Manager.
2. Access the Host Setup wizard by doing one of the following:
•From the Welcome screen, click Host Setup.
•From the Home topic, click Action > Host Setup.
3. Confirm that you have met the listed prerequisites, then click Next.
4. Enter a hostname.
5. Select the SAS initiators for the host you are configuring, then click Next.
Use the information from step 5 of the Attach SAS hosts to the storage system procedure to identify the correct SAS initiators.
6. Group hosts together with other hosts.
a) For cluster configurations, use the “Host groups” setting to group hosts in a cluster.
•If this host is the first host in the cluster, select Create a new host group, then provide a name and click Next.
•If this host is being added to a host group that exists, select Add to existing host group. Select the group from the dropdown list, then click Next.
b) For stand-alone hosts, select the Do not group this host option, then click Next.
7. On the Attach volumes page, use the options to change the volume name and size, select the pool for the volume, and add or delete volumes. Click Next.
NOTE: Dell EMC recommends that you update the volume name with the hostname to better identify the volumes.
8. On the Summary page, review the changes, then click Configure Host.
•Click Yes to return to the Select Host page of the wizard, or choose No to close the wizard.
•Click Previous to go back and change the settings.
Enable multipathing on SAS volumes
Perform the following steps to enable multipathing on SAS volumes:
1. Log in to the VMware vCenter Server, then click the ESXi host.
2. On the Configure tab, select Storage > Storage Adapters.
3. Select the SAS HBA and click Rescan Storage.
The Rescan Storage dialog box opens.
4. Click OK.
5. Select the Dell EMC disk that was added to the ESXi host in Register the host and create and map volumes.
6. Click the Properties tab that is located below the selected disk.
7. Click the Edit Multipathing option.
8. Select a multipathing policy for the volume from the Path selection policy drop-down list and click OK.
NOTE: The VMware multipathing policy defaults to Most Recently Used (VMware). Use the default policy for a host with one SAS HBA that has a single path to both controllers. If the host has two SAS HBAs (for example, the host has two paths to each controller), Dell EMC recommends that you change the multipathing policy to Round Robin (VMware).
9. Repeat steps 5-8 for each SAS volume that is attached to the ESXi host.
VMware volume rescan and datastore creation
Perform the following steps to rescan volumes and create datastores:
1. Log in to the VMware vCenter Server, then click the ESXi host.
2. On the Configure tab, select Storage > Storage Adapters.
3. Select the SAS HBA and click Rescan Storage.
The Rescan Storage dialog box opens.
4. Click OK.
5. Create a VMware datastore file system on the ME4 Series volume.
a) From the Actions menu, select Datastore > New Datastore.
b) Select VMFS as the type on New Datastore screen, then click Next.
c) Enter a name for the datastore, select the right volume/LUN, and then click Next.
d) Select VMFS6 as the VMFS version of the datastore, then click OK.
e) Select a partition configuration, then click Next.
f) Click Finish.
Enable Multipathing on iSCSI volumes
Perform the following steps to enable Multipathing on iSCSI volumes:
1. Log in to the VMware vCenter Server, then click the ESXi host added.
2. On the Configure tab, select Storage Devices.
3. Perform a rescan of the storage devices.
4. Select the iSCSI disk (Dell EMC iSCSI disk) created in the Register the host and create and map volumes procedure, then select the
Properties tab below the screen.
5. Scroll down to select the Edit Multipathing option, then select Round Robin (VMware) from the drop-down list.
6. Click OK.
7. Repeat steps 4–6 for all the volumes that are presented from the Dell EMC PowerVault ME4 Series storage system to the ESXi host.
Citrix XenServer hosts
Ensure that the HBAs or network adapters are installed and the latest supported BIOS is installed.
To perform host setup on Citrix XenServer hosts connected to Dell EMC PowerVault ME4 Series storage systems, see the Citrix XenServer documentation at https://docs.citrix.com/en-us/xenserver.
8 Troubleshooting and problem solving
These procedures are intended to be used only during initial configuration for verifying that hardware setup is successful. They are not
intended to be used as troubleshooting procedures for configured systems using production data and I/O.
NOTE: For further troubleshooting help after setup, and when data is present, see Dell.com/support.
Topics:
•Locate the service tag
•Operators (Ops) panel LEDs
•Initial start-up problems
Locate the service tag
ME4 Series storage systems are identified by a unique Service Tag and Express Service Code.
The Service Tag and Express Service Code can be found on the front of the system by pulling out the information tag. Alternatively, the
information might be on a sticker on the back of the storage system chassis. This information is used to route support calls to appropriate
personnel.
Operators (Ops) panel LEDs
Each ME4 Series enclosure features an Operators (Ops) panel located on the chassis left ear flange. This section describes the Ops panel
for 2U and 5U enclosures.
2U enclosure Ops panel
The front of the enclosure has an Ops panel that is located on the left ear flange of the 2U chassis.
The Ops panel is a part of the enclosure chassis, but is not replaceable on-site.
The Ops panel provides the functions that are shown in the following figure and listed in Table 9. Ops panel functions—2U enclosure front
panel.
Figure 30. Ops panel LEDs—2U enclosure front panel
Table 9. Ops panel functions—2U enclosure front panel
No.  Indicator      Status
1    System power   Constant green: at least one PCM is supplying power
                    Off: system not operating regardless of AC present
2    Status/Health  Constant blue: system is powered on and controller is ready
                    Blinking blue (2 Hz): Enclosure management is busy
                    Blinking blue (0.25 Hz): system ID locator is activated
                    Off: Normal state
System power LED (green)
LED displays green when system power is available. LED is off when system is not operating.
Status/Health LED (blue/amber)
LED illuminates constant blue when the system is powered on and functioning normally. The LED blinks blue when enclosure management
is busy, for example, when booting or performing a firmware update. The LEDs help you identify which component is causing the fault.
LED illuminates constant amber when experiencing a system hardware fault which could be associated with a Fault LED on a controller
module, IOM, or PCM. LED illuminates blinking amber when experiencing a logical fault.
Unit identification display (green)
The UID is a dual seven-segment display that shows the numerical position of the enclosure in the cabling sequence. The UID is also called
the enclosure ID.
NOTE:
The controller enclosure ID is 0.
Identity LED (blue)
When activated, the Identity LED blinks at a rate of 1 second on, 1 second off to locate the chassis within a data center. The locate
function can be enabled or disabled through SES. Pressing the button changes the state of the LED.
NOTE:
The enclosure ID cannot be set using the Identity button.
5U enclosure Ops panel
The front of the enclosure has an Ops panel that is located on the left ear flange of the 5U chassis.
The Ops panel is part of the enclosure chassis, but is not replaceable on-site.
The Ops panel provides the functions that are shown in the following figure and listed in Table 10. Ops panel functions – 5U enclosure
front panel.
Figure 31. Ops panel LEDs—5U enclosure front panel
Table 10. Ops panel functions – 5U enclosure front panel
No.  Indicator                Status
2    System Power On/Standby  Constant green: positive indication
                              Constant amber: system in standby (not operational)
3    Module Fault             Constant or blinking amber: fault present
4    Logical Status           Constant or blinking amber: fault present
5    Top Drawer Fault         Constant or blinking amber: fault present in drive, cable, or sideplane
6    Bottom Drawer Fault      Constant or blinking amber: fault present in drive, cable, or sideplane
Unit identification display
The UID is a dual seven-segment display that shows the numerical position of the enclosure in the cabling sequence. The UID is also called
the enclosure ID.
NOTE:
The controller enclosure ID is 0.
System Power On/Standby LED (green/amber)
LED is amber when only the standby power is available (non-operational). LED is green when system power is available (operational).
Module Fault LED (amber)
LED turns amber when experiencing a system hardware fault. The module fault LED helps you identify the component causing the fault.
The module fault LED may be associated with a Fault LED on a controller module, IOM, PSU, FCM, DDIC, or drawer.
Logical Status LED (amber)
This LED indicates a change of status or fault from something other than the enclosure management system. The logical status LED may
be initiated from the controller module or an external HBA. The indication is typically associated with a DDIC and LEDs at each disk
position within the drawer, which help to identify the DDIC affected.
Drawer Fault LEDs (amber)
This LED indicates a disk, cable, or sideplane fault in the indicated drawer: Top (Drawer 0) or Bottom (Drawer 1).
Initial start-up problems
The following sections describe how to troubleshoot initial start-up problems:
LEDs
LED colors are used consistently throughout the enclosure and its components for indicating status:
•Green: good or positive indication
•Blinking green/amber: non-critical condition
•Amber: critical fault
Host-side connection troubleshooting featuring 10Gbase-T
and SAS host ports
The following procedure applies to ME4 Series controller enclosures employing external connectors in the host interface ports:
1. Stop all I/O to the storage system. See “Stopping I/O” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
2. Check the host activity LED.
If there is activity, stop all applications that access the storage system.
3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
•Solid – Cache contains data yet to be written to the disk.
•Blinking – Cache data is being written to CompactFlash.
•Flashing at 1/10 second on and 9/10 second off – Cache is being refreshed by the supercapacitor.
•Off – Cache is clean (no unwritten data).
4. Reseat the host cable and inspect for damage.
Is the host link status LED on?
•Yes – Monitor the status to ensure that there is no intermittent error present. If the fault occurs again, clean the connections to
ensure that a dirty connector is not interfering with the data path.
•No – Proceed to the next step.
5. Move the host cable to a port with a known good link status.
This step isolates the problem to the external data path (host cable and host-side devices) or to the controller module port.
Is the host link status LED on?
•Yes – You now know that the host cable and host-side devices are functioning properly. Return the cable to the original port. If the
link status LED remains off, you have isolated the fault to the controller module port. Replace the controller module.
•No – Proceed to the next step.
6. Verify that the switch, if any, is operating properly. If possible, test with another port.
7. Verify that the HBA is fully seated, and that the PCI slot is powered on and operational.
8. Replace the HBA with a known good HBA, or move the host side cable to a known good HBA.
Is the host link status LED on?
•Yes – You have isolated the fault to the HBA. Replace the HBA.
•No – It is likely that the controller module needs to be replaced.
9. Move the host cable back to its original port.
Is the host link status LED on?
•No – The controller module port has failed. Replace the controller module.
•Yes – Monitor the connection. It may be an intermittent problem, which can occur with damaged cables and HBAs.
Isolating a controller module expansion port connection
fault
During normal operation, when a controller module expansion port is connected to a drive enclosure, the expansion port status LED is
green. If the expansion port LED is off, the link is down. Use the following procedure to isolate the fault:
NOTE: Do not perform more than one step at a time. Changing more than one variable at a time can complicate the troubleshooting process.
1. Stop all I/O to the storage system. See “Stopping I/O” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
2. Check the host activity LED.
If there is activity, stop all applications that access the storage system.
3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
•Solid – Cache contains data yet to be written to the disk.
•Blinking – Cache data is being written to CompactFlash.
•Flashing at 1/10 second on and 9/10 second off – Cache is being refreshed by the supercapacitor.
•Off – Cache is clean (no unwritten data).
4. Reseat the expansion cable, and inspect it for damage.
Is the expansion port status LED on?
•Yes – Monitor the status to ensure that there is no intermittent error present. If the fault occurs again, clean the connections to ensure that a dirty connector is not interfering with the data path.
•No – Proceed to the next step.
5. Move the expansion cable to a port on the controller enclosure with a known good link status.
This step isolates the problem to the expansion cable or to the controller module expansion port.
Is the expansion port status LED on?
•Yes – You now know that the expansion cable is good. Return the cable to the original port. If the expansion port status LED remains off, you have isolated the fault to the controller module expansion port. Replace the controller module.
•No – Proceed to the next step.
6. Move the expansion cable back to the original port on the controller enclosure.
7. Move the expansion cable on the drive enclosure to a known good expansion port on the drive enclosure.
Is the expansion port status LED on?
•Yes – You have isolated the problem to the expansion enclosure port. Replace the expansion module.
•No – Proceed to the next step.
8. Replace the cable with a known good cable, ensuring the cable is attached to the original ports.
Is the expansion port status LED on?
•Yes – Replace the original cable. The fault has been isolated.
•No – It is likely that the controller module must be replaced.
2U enclosure LEDs
Use the LEDs on the 2U enclosure to help troubleshoot initial start-up problems.
PCM LEDs (580 W)
Under normal conditions, the PCM OK LEDs are a constant green.
Table 11. PCM LED states
PCM OK (Green)  Fan Fail (Amber)  AC Fail (Amber)  DC Fail (Amber)  Status
Off             Off               Off              Off              No AC power on any PCM
Off             Off               On               On               No AC power on this PCM only
On              Off               Off              Off              AC present; PCM working correctly
On              Off               Off              On               PCM fan speed is outside acceptable limits
Off             On                Off              Off              PCM fan has failed
Off             On                On               On               PCM fault (over temperature, over voltage, over current)
Off             Blinking          Blinking         Blinking         PCM firmware download is in progress
Ops panel LEDs
The Ops panel displays the aggregated status of all the modules. See also 2U enclosure Ops panel.
Table 12. Ops panel LED states
System Power    Module Fault  Identity  LED      Associated LEDs/Alarms           Status
(Green/Amber)   (Amber)       (Blue)    display
On              Off           Off       X                                         5 V standby power present, overall power has failed or switched off
On              On            On        On                                        Ops panel power on (5 s) test state
On              Off           Off       X                                         Power on, all functions good
On              On            X         X        PCM fault LEDs, fan fault LEDs   Any PCM fault, fan fault, over or under temperature
On              On            X         X        SBB module LEDs                  Any SBB module fault
On              On            X         X        No module LEDs                   Enclosure logical fault
On              Blink         X         X        Module status LED on SBB module  Unknown (invalid or mixed) SBB module type is installed, I2C bus failure (inter-SBB communications), EBOD VPD configuration error
On              Blink         X         X        PCM fault LEDs, fan fault LEDs   Unknown (invalid or mixed) PCM type is installed or I2C bus failure (PCM communications)
On              X             Blink     X                                         Enclosure identification or invalid ID selected
X = Disregard
Actions:
•If the Ops panel Module Fault LED is on, check the module LEDs on the enclosure rear panel to narrow the fault to a CRU, a
connection, or both.
•Check the event log for specific information regarding the fault, and follow any Recommended Actions.
•If installing a controller module or IOM CRU:
•Remove and reinstall the controller module or IOM per the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
•Check the event log for errors.
•If the CRU Fault LED is on, a fault condition is detected.
•Restart this controller from the partner controller using the ME Storage Manager or CLI.
•If the restart does not resolve the fault, remove the controller module or IOM and reinsert it.
•If the previous actions do not resolve the fault, contact Dell EMC for assistance.
Disk drive carrier module LEDs
A green LED and amber LED mounted on the front of each drive carrier module display the disk drive status.
•In normal operation, the green LED is on and flickers as the drive operates.
•In normal operation, the amber LED is:
•Off if there is no drive present.
•Off as the drive operates.
•On if there is a drive fault.
Figure 32. LEDs: Drive carrier LEDs (SFF and LFF modules)
5U enclosure LEDs
Use the LEDs on the 5U enclosure to help troubleshoot initial start-up problems.
NOTE: When the 5U84 enclosure is powered on, all LEDs are lit for a short period to ensure that they are working. This behavior does not indicate a fault unless LEDs remain lit after several seconds.
PSU LEDs
The following table describes the LED states for the PSU:
Table 13. PSU LED states
CRU Fail (Amber)  AC Missing (Amber)  Power (Green)  Status
On                Off                 Off            No AC power to either PSU
On                On                  Off            PSU present, but not supplying power or PSU alert state (typically due to critical temperature)
Off               Off                 On             Mains AC present, switch on. This PSU is providing power.
Off               Off                 Blinking       AC power present, PSU in standby (other PSU is providing power).
Blinking          Blinking            Off            PSU firmware download in progress
Off               On                  Off            AC power missing, PSU in standby (other PSU is providing power).
On                On                  On             Firmware has lost communication with the PSU module.
On                --                  Off            PSU has failed. Follow the procedure in “Replacing a PSU” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
Fan cooling module LEDs
The following table describes the LEDs on the Fan Cooling Module (FCM) faceplate:
Table 14. FCM LED descriptions
LED        Status/description
Module OK  Constant green indicates that the FCM is working correctly. Off indicates that the fan module has failed. Follow the procedure in “Replacing an FCM” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
Fan Fault  Amber indicates that the fan module has failed. Follow the procedure in “Replacing an FCM” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
Ops panel LEDs
The Ops panel displays the aggregated status of all the modules.
Table 15. Ops panel LED descriptions
LED               Status/description
Unit ID display   Usually shows the ID number for the enclosure, but can be used for other purposes, for example, blinking to locate the enclosure.
Power On/Standby  Amber if the system is in standby. Green if the system has full power.
Module Fault      Amber indicates a fault in a controller module, IOM, PSU, or FCM. Check the drawer LEDs for indication of a disk fault. See also Drawer Fault LEDs (amber).
Logical status    Amber indicates a fault from something other than firmware (usually a disk, an HBA, or an internal or external RAID controller). Check the drawer LEDs for indication of a disk fault. See also Drawer LEDs.
Drawer 0 Fault    Amber indicates a disk, cable, or sideplane fault in drawer 0. Open the drawer and check the DDICs for faults.
Drawer 1 Fault    Amber indicates a disk, cable, or sideplane fault in drawer 1. Open the drawer and check the DDICs for faults.
Drawer LEDs
The following table describes the LEDs on the drawers:
Table 16. Drawer LED descriptions
LED                      Status/description
Sideplane OK/Power Good  Green if the sideplane card is working and there are no power problems.
Drawer Fault             Amber if a drawer component has failed. If the failed component is a disk, the LED on the failed DDIC lights amber. Follow the procedure in “Replacing a DDIC” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual. If the disks are OK, contact your service provider to identify the cause of the failure and resolve the problem.
Logical Fault            Amber (solid) indicates a disk fault. Amber (blinking) indicates that one or more storage systems are in an impacted state.
Cable Fault              Amber indicates that the cabling between the drawer and the back of the enclosure has failed. Contact your service provider to resolve the problem.
Activity Bar Graph       Displays the amount of data I/O from zero segments lit (no I/O) to all six segments lit (maximum I/O).
DDIC LED
The DDIC supports LFF 3.5" and SFF 2.5" disks as shown in Figure 7. 3.5" disk drive in a DDIC and Figure 8. 2.5" drive in a 3.5" DDIC with a hybrid drive carrier adapter. The following figure shows the top panel of the DDIC as viewed when the disk is aligned for insertion into a drawer slot.
Figure 33. LEDs: DDIC – 5U enclosure disk slot in drawer
1. Slide latch (slides left)
2. Latch button (shown in the locked position)
3. Drive Fault LED
Table 17. DDIC LED descriptions
Fault LED (Amber)         Status/description*
Off                       Off (disk module/enclosure)
Off                       Not present
Blinking: 1 s on/1 s off  Identify
Any links down: On        Drive link (PHY lane) down
On                        Fault (leftover/failed/locked-out)
Off                       Available
Off                       Storage system: Initializing
Off                       Storage system: Fault-tolerant
Off                       Storage system: Degraded (non-critical)
Blinking: 3 s on/1 s off  Storage system: Degraded (critical)
Off                       Storage system: Quarantined
Blinking: 3 s on/1 s off  Storage system: Offline (dequarantined)
Off                       Storage system: Reconstruction
Off                       Processing I/O (whether from host or internal activity)
*If multiple conditions occur simultaneously, the LED state behaves as indicated in the previous table.
Each DDIC has a single Drive Fault LED. If the Drive Fault LED is lit amber, a disk fault is indicated. If a disk failure occurs, follow the
procedure in “Replacing a DDIC” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
Controller module or IOM LEDs
•For information about controller module LEDs, see Controller module LEDs.
•For information about expansion module LEDs, see IOM LEDs.
Temperature sensors
Temperature sensors throughout the enclosure and its components monitor the thermal health of the storage system. Exceeding the
limits of critical values causes a notification to occur.
Module LEDs
Module LEDs pertain to controller modules and IOMs.
Controller module LEDs
Use the controller module LEDs on the face plate to monitor the status of a controller module.
Table 18. Controller module LED states
CRU OK (Green)  CRU Fault (Amber)  External host port activity (Green)  Status
On              Off                                                     Controller module OK
Off             On                                                      Controller module fault – see “Replacing a controller module” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual
Blinking                                                                System is booting
                                   Off                                  No external host port connection
                                   On                                   External host port connection – no activity
                                   Blinking                             External host port connection – activity
Actions:
•If the CRU OK LED is blinking, wait for the system to boot.
•If the CRU OK LED is off, and the controller module is powered on, the module has failed.
•Check that the controller module is fully inserted and latched in place, and that the enclosure is powered on.
•Check the event log for specific information regarding the failure.
•If the CRU Fault LED is on, a fault condition is detected.
•Restart the controller module from the partner controller module using the ME Storage Manager or CLI.
•If the restart does not resolve the fault, remove the controller module and reinsert it.
•If the previous actions do not resolve the fault, contact your supplier for assistance. Controller module replacement may be necessary.
IOM LEDs
Use the IOM LEDs on the face plate to monitor the status of an IOM.
Table 19. IOM LED states
CRU OK (Green)  CRU Fault (Amber)  External host port activity (Green)  Status
On              Off                                                     IOM OK
Off             On                                                      IOM module fault – see “Replacing an IOM” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual
                Blinking                                                EBOD VPD error
                                   Off                                  No external host port connection
                                   On                                   HD mini-SAS port connection – no activity
                                   Blinking                             HD mini-SAS port connection – activity
Actions:
•If the CRU OK LED is off, and the IOM is powered on, the module has failed.
•Check that the IOM is fully inserted and latched in place, and that the enclosure is powered on.
•Check the event log for specific information regarding the failure.
•If the CRU Fault LED is on, a fault condition is detected.
•Restart this IOM using the ME Storage Manager or CLI.
•If the restart does not resolve the fault, remove the IOM and reinsert it.
•If the previous actions do not resolve the fault, contact your supplier for assistance. IOM replacement may be necessary.
Troubleshooting 2U enclosures
Common problems that may occur with your 2U enclosure system.
The Module Fault LED on the Ops panel, described in Figure 30. Ops panel LEDs—2U enclosure front panel, lights amber to indicate a
fault for the problems listed in the following table:
NOTE: All alarms also report through SES.
Table 20. 2U alarm conditions
Status                                          Severity                               Alarm
PCM alert – loss of DC power from a single PCM  Fault – loss of redundancy             S1
Ops panel communication error (I2C)             Fault – critical                       S1
RAID error                                      Fault – critical                       S1
SBB interface module fault                      Fault – critical                       S1
SBB interface module removed                    Warning                                None
Drive power control fault                       Warning – no loss of disk power        S1
Drive power control fault                       Fault – critical – loss of disk power  S1
Drive removed                                   Warning                                None
Insufficient power available                    Warning                                None
For details about replacing modules, see the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
NOTE: Use the ME Storage Manager to monitor the storage system event logs for information about enclosure-related events, and to determine any necessary recommended actions.
PCM faults
Symptom                                  Cause            Recommended action
Ops panel Module Fault LED is amber (1)  Any power fault  Verify that AC mains connections to the PCM are live
Fan Fail LED is illuminated on PCM (2)   Fan failure      Replace the PCM
1. See 2U enclosure Ops panel for visual reference of Ops panel LEDs.
2. See PCM LEDs (580 W) for visual reference of PCM LEDs.
Thermal monitoring and control
The storage enclosure uses extensive thermal monitoring and takes several actions to ensure that component temperatures are kept low,
and to also minimize acoustic noise. Air flow is from the front to back of the enclosure.
Symptom: If the ambient air is below 25°C (77°F), and the fans increase in speed, some restriction on airflow may be causing the internal temperature to rise. NOTE: This symptom is not a fault condition.
Cause: The first stage in the thermal control process is for the fans to automatically increase in speed when a thermal threshold is reached. This condition may be caused by higher ambient temperatures in the local environment, and may be a normal condition. NOTE: The threshold changes according to the number of disks and power supplies fitted.
Recommended action:
1. Check the installation for any airflow restrictions at either the front or back of the enclosure. A minimum gap of 25 mm (1") at the front and 50 mm (2") at the rear is recommended.
2. Check for restrictions due to dust build-up. Clean as appropriate.
3. Check for excessive recirculation of heated air from rear to front. Use of the enclosure in a fully enclosed rack is not recommended.
4. Verify that all blank modules are in place.
5. Reduce the ambient temperature.
Thermal alarm
Symptom:
1. Ops panel Module Fault LED is amber.
2. Fan Fail LED is illuminated on one or more PCMs.
Cause: Internal temperature exceeds a preset threshold for the enclosure.
Recommended action:
1. Verify that the local ambient environment temperature is within the acceptable range. See the technical specifications in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
2. Check the installation for any airflow restrictions at either the front or back of the enclosure. A minimum gap of 25 mm (1") at the front and 50 mm (2") at the rear is recommended.
3. Check for restrictions due to dust build-up. Clean as appropriate.
4. Check for excessive recirculation of heated air from rear to front. Use of the enclosure in a fully enclosed rack is not recommended.
5. If possible, shut down the enclosure and investigate the problem before continuing.
Troubleshooting 5U enclosures
Common problems that may occur with your 5U enclosure system.
The Module Fault LED on the Ops panel, described in Figure 31. Ops panel LEDs—5U enclosure front panel, lights amber to indicate a fault
for the problems listed in the following table:
NOTE: All alarms also report through SES.
Table 21. 5U alarm conditions
Status                                          Severity
PSU alert – loss of DC power from a single PSU  Fault – loss of redundancy
Cooling module fan failure                      Fault – loss of redundancy
Ops panel communication error (I2C)             Fault – critical
RAID error                                      Fault – critical
SBB I/O module fault                            Fault – critical
SBB I/O module removed                          Warning
Drive power control fault                       Warning – no loss of drive power
Drive power control fault                       Fault – critical – loss of drive power
Insufficient power available                    Warning
For details about replacing modules, see the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
NOTE: Use the ME Storage Manager to monitor the storage system event logs for information about enclosure-related events, and to determine any necessary recommended actions.
Thermal considerations
Thermal sensors in the 5U84 enclosure and its components monitor the thermal health of the storage system.
NOTE:
• Exceeding the limits of critical values activates the over-temperature alarm.
• For information about 5U84 enclosure alarm notification, see Table 21. 5U alarm conditions.
Fault isolation methodology
ME4 Series Storage Systems provide many ways to isolate faults. This section presents the basic methodology that is used to locate faults
within a storage system, and to identify the pertinent CRUs affected.
As noted in Using guided setup, use the ME Storage Manager to configure and provision the system upon completing the hardware
installation. Configure and enable event notification to be notified when a problem occurs that is at or above the configured severity. See
the Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide for more information.
When you receive an event notification, follow the recommended actions in the notification message to resolve the problem.
Fault isolation methodology basic steps
•Gather fault information, including using system LEDs as described in Gather fault information.
•Determine where in the system the fault is occurring as described in Determine where the fault is occurring.
•Review event logs as described in Review the event logs.
•If required, isolate the fault to a data path component or configuration as described in Isolate the fault.
Cabling systems to enable use of the replication feature—to replicate volumes—is another important fault isolation consideration
pertaining to initial system installation. See Host ports and replication and Isolating replication faults for more information about
troubleshooting during initial setup.
70
Troubleshooting and problem solving
Options available for performing basic steps
When performing fault isolation and troubleshooting steps, select the option or options that best suit your site environment.
Use of any option is not mutually exclusive to the use of another option. You can use the ME Storage Manager to check the health icons/
values for the system, or to examine a problem component. If you discover a problem, either the ME Storage Manager or the CLI provides
recommended-action text online. Options for performing basic steps are listed according to frequency of use:
•Use the ME Storage Manager
•Use the CLI
•Monitor event notification
•View the enclosure LEDs
Use the ME Storage Manager
The ME Storage Manager uses health icons to show OK, Degraded, Fault, or Unknown status for the system and its components. The ME
Storage Manager enables you to monitor the health of the system and its components. If any component has a problem, the system
health is in a Degraded, Fault, or Unknown state. Use the ME Storage Manager to find each component that has a problem. Follow actions
in the Recommendation field for the component to resolve the problem.
Use the CLI
As an alternative to using the ME Storage Manager, you can run the show system CLI command to view the health of the system and its
components. If any component has a problem, the system health is in a Degraded, Fault, or Unknown state, and those components are
listed as Unhealthy Components. Follow the recommended actions in the component Health Recommendation field to resolve the
problem.
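For example, a basic health check from the CLI consists of logging in and running:
show system
If the system health is Degraded, Fault, or Unknown, the output lists the Unhealthy Components; follow the Health Recommendation text shown for each component to resolve the problem.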
Monitor event notification
With event notification configured and enabled, you can view event logs to monitor the health of the system and its components. If a
message tells you to check whether an event has been logged, or to view information about an event, use the ME Storage Manager or the
CLI. Using the ME Storage Manager, view the event log and then click the event message to see detail about that event. Using the CLI,
run the show events detail command to see the detail for an event.
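For example, a quick CLI review of recent activity might look like the following; the filtering options that show events accepts can vary by firmware release, so confirm them in the CLI Reference Guide for your version:
show events
show events detail
Use the event codes and recommended actions in the detailed output to decide the next troubleshooting step.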
View the enclosure LEDs
You can view the LEDs on the hardware to identify component status. If a problem prevents access to the ME Storage Manager or the
CLI, viewing the enclosure LEDs is the only option available. However, monitoring/management is often done at a management console
using storage management interfaces, rather than relying on line-of-sight to LEDs of racked hardware components.
Performing basic steps
You can use any of the available options that are described in the previous sections to perform the basic steps comprising the fault
isolation methodology.
Gather fault information
When a fault occurs, gather as much information as possible. Doing so helps determine the correct action that is needed to remedy the
fault.
Begin by reviewing the reported fault:
•Is the fault related to an internal data path or an external data path?
•Is the fault related to a hardware component such as a disk drive module, controller module, or power supply unit?
By isolating the fault to one of the components within the storage system, you are able to determine the necessary corrective action more
quickly.
Determine where the fault is occurring
When a fault occurs, the Module Fault LED illuminates. Check the LEDs on the back of the enclosure to narrow the fault to a CRU,
connection, or both. The LEDs also help you identify the location of a CRU reporting a fault.
Use the ME Storage Manager to verify any faults found while viewing the LEDs. If the LEDs cannot be viewed due to the location of the
system, use the ME Storage Manager to determine where the fault is occurring. This web application provides you with a visual
representation of the system and where the fault is occurring. The ME Storage Manager also provides more detailed information about
CRUs, data, and faults.
Review the event logs
The event logs record all system events. Each event has a numeric code that identifies the type of event that occurred, and has one of
the following severities:
•Critical – A failure occurred that may cause a controller to shut down. Correct the problem immediately.
•Error – A failure occurred that may affect data integrity or system stability. Correct the problem as soon as possible.
•Warning – A problem occurred that may affect system stability, but not data integrity. Evaluate the problem and correct it if
necessary.
•Informational – A configuration or state change occurred, or a problem occurred that the system corrected. No immediate action is
required.
Review the logs to identify the fault and the cause of the failure. For example, a host could lose
connectivity to a disk group if a user changes channel settings without taking the storage resources that are assigned to it into
consideration. In addition, the type of fault can help you isolate the problem to either hardware or software.
Isolate the fault
Occasionally, it might become necessary to isolate a fault. This is particularly true of data paths, due to the number of components comprising the
data path. For example, if a host-side data error occurs, it could be caused by any of the components in the data path: controller module,
cable, or data host.
If the enclosure does not initialize
It may take up to two minutes for all enclosures to initialize.
If an enclosure does not initialize:
•Perform a rescan
•Power cycle the system
•Make sure that the power cord is properly connected, and check the power source to which it is connected
•Check the event log for errors
Correcting enclosure IDs
When installing a system with expansion enclosures attached, the enclosure IDs might not agree with the physical cabling order. This issue
occurs if the controller was previously attached to enclosures in a different configuration, and the controller attempts to preserve the
previous enclosure IDs.
To correct this condition, ensure that both controllers are up, and perform a rescan using the ME Storage Manager or the CLI. The rescan
reorders the enclosures, but it can take up to two minutes to correct the enclosure IDs.
NOTE: Reordering expansion enclosure IDs only applies to dual-controller mode. If only one controller is available, due to a controller failure, a manual rescan does not reorder the expansion enclosure IDs.
• To perform a rescan using the ME Storage Manager:
a) Verify that both controllers are operating normally.
b) In the System tab, click Action, and select Rescan Disk Channels.
• To perform a rescan using the CLI, type the following command:
rescan
Host I/O
When troubleshooting disk drive and connectivity faults, stop I/O to the affected disk groups from all hosts as a data protection
precaution.
As an extra data protection precaution, it is helpful to conduct regularly scheduled backups of your data. See “Stopping I/O” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
Dealing with hardware faults
Make sure that you have a replacement module of the same type before removing any faulty module. See “Module removal and
replacement” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
NOTE: If the enclosure system is powered up and you remove any module, replace it immediately. If the system is used
with any modules missing for more than a few seconds, the enclosures can overheat, causing power failure and potential
data loss. Such action can invalidate the product warranty.
NOTE: Observe applicable/conventional ESD precautions when handling modules and components, as described in
Electrical safety. Avoid contact with midplane components, module connectors, leads, pins, and exposed circuitry.
Isolating a host-side connection fault
During normal operation, when a controller module host port is connected to a data host, the port host link status/link activity LED is
green. If there is I/O activity, the host activity LED blinks green. If data hosts are having trouble accessing the storage system, but you
cannot locate a specific fault or access the event logs, use the following procedures. These procedures require scheduled downtime.
NOTE: Do not perform more than one step at a time. Changing more than one variable at a time can complicate the troubleshooting process.
The following procedure applies to controller enclosures with small form factor pluggable (SFP+) transceiver connectors in 8/16 Gb/s FC
or 10 GbE iSCSI host interface ports.
NOTE: In this procedure, SFP+ transceiver and host cable is used to refer to any qualified SFP+ transceiver supporting CNC ports used for I/O or replication.
NOTE: When experiencing difficulty diagnosing performance problems, consider swapping out one SFP+ transceiver at a time to see if performance improves.
1. Stop all I/O to the storage system. See “Stopping I/O” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
2. Check the host link status/link activity LED.
If there is activity, stop all applications that access the storage system.
3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
•Solid – Cache contains data yet to be written to the disk.
•Blinking – Cache data is being written to CompactFlash in the controller module.
•Flashing at 1/10 second on and 9/10 second off – Cache is being refreshed by the supercapacitor.
•Off – Cache is clean (no unwritten data).
4. Remove the SFP+ transceiver and host cable and inspect for damage.
5. Reseat the SFP+ transceiver and host cable.
Is the host link status/link activity LED on?
•Yes – Monitor the status to ensure that there is no intermittent error present. If the fault occurs again, clean the connections to
ensure that a dirty connector is not interfering with the data path.
•No – Proceed to the next step.
6. Move the SFP+ transceiver and host cable to a port with a known good link status.
This step isolates the problem to the external data path (SFP+ transceiver, host cable, and host-side devices) or to the controller
module port.
Is the host link status/link activity LED on?
•Yes – You now know that the SFP+ transceiver, host cable, and host-side devices are functioning properly. Return the cable to
the original port. If the link status LED remains off, you have isolated the fault to the controller module port. Replace the controller
module.
•No – Proceed to the next step.
7. Swap the SFP+ transceiver with the known good one.
Is the host link status/link activity LED on?
•Yes – You have isolated the fault to the SFP+ transceiver. Replace the SFP+ transceiver.
•No – Proceed to the next step.
8. Reinsert the original SFP+ transceiver and swap the cable with a known good one.
Is the host link status/link activity LED on?
•Yes – You have isolated the fault to the cable. Replace the cable.
•No – Proceed to the next step.
9. Verify that the switch, if any, is operating properly. If possible, test with another port.
10. Verify that the HBA is fully seated, and that the PCI slot is powered on and operational.
11. Replace the HBA with a known good HBA, or move the host side cable and SFP+ transceiver to a known good HBA.
Is the host link status/link activity LED on?
•Yes – You have isolated the fault to the HBA. Replace the HBA.
•No – It is likely that the controller module needs to be replaced.
12. Move the cable and SFP+ transceiver back to its original port.
Is the host link status/link activity LED on?
•Yes – Monitor the connection for a period of time. It may be an intermittent problem, which can occur with damaged SFP+
transceivers, cables, and HBAs.
•No – The controller module port has failed. Replace the controller module.
Host-side connection troubleshooting featuring 10Gbase-T and SAS
host ports
The following procedure applies to ME4 Series controller enclosures employing external connectors in the host interface ports.
The external connectors include 10Gbase-T connectors in iSCSI host ports and 12 Gb SFF-8644 connectors in the HD mini-SAS host
ports.
1. Halt all I/O to the storage system. See “Stopping I/O” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
2. Check the host activity LED.
If there is activity, stop all applications that access the storage system.
3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
•Solid – Cache contains data yet to be written to the disk.
•Blinking – Cache data is being written to CompactFlash in the controller module.
•Flashing at 1/10 second on and 9/10 second off – Cache is being refreshed by the supercapacitor.
•Off – Cache is clean (no unwritten data).
4. Remove the host cable and inspect for damage.
5. Reseat the host cable.
Is the host link status LED on?
•Yes – Monitor the status to ensure that there is no intermittent error present. If the fault occurs again, clean the connections to
ensure that a dirty connector is not interfering with the data path.
•No – Proceed to the next step.
6. Move the host cable to a port with a known good link status.
This step isolates the problem to the external data path (host cable and host-side devices) or to the controller module port.
Is the host link status LED on?
•Yes – You now know that the host cable and host-side devices are functioning properly. Return the cable to the original port. If the
link status LED remains off, you have isolated the fault to the controller module port. Replace the controller module.
•No – Proceed to the next step.
7. Verify that the switch, if any, is operating properly. If possible, test with another port.
8. Verify that the HBA is fully seated, and that the PCI slot is powered on and operational.
9. Replace the HBA with a known good HBA, or move the host side cable to a known good HBA.
Is the host link status LED on?
•Yes – You have isolated the fault to the HBA. Replace the HBA.
•No – It is likely that the controller module needs to be replaced.
10. Move the host cable back to its original port.
Is the host link status LED on?
•Yes – Monitor the connection for a period of time. It may be an intermittent problem, which can occur with damaged cables and
HBAs.
•No – The controller module port has failed. Replace the controller module.
Isolating a controller module expansion port connection fault
During normal operation, when a controller module expansion port is connected to an expansion enclosure, the expansion port status LED
is green. If the expansion port LED is off, the link is down.
Use the following procedure to isolate the fault. This procedure requires scheduled downtime.
NOTE: Do not perform more than one step at a time. Changing more than one variable at a time can complicate the
troubleshooting process.
1. Halt all I/O to the storage system. See “Stopping I/O” in the Dell EMC PowerVault ME4 Series Storage System Owner’s Manual.
2. Check the host activity LED.
If there is activity, stop all applications that access the storage system.
3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
•Solid – Cache contains data yet to be written to the disk.
•Blinking – Cache data is being written to CompactFlash in the controller module.
•Flashing at 1/10 second on and 9/10 second off – Cache is being refreshed by the supercapacitor.
•Off – Cache is clean (no unwritten data).
4. Remove expansion cable and inspect for damage.
5. Reseat the expansion cable.
Is the expansion port status LED on?
•Yes – Monitor the status to ensure that there is no intermittent error present. If the fault occurs again, clean the connections to
ensure that a dirty connector is not interfering with the data path.
•No – Proceed to the next step.
6. Move the expansion cable to a port on the controller enclosure with a known good link status.
This step isolates the problem to the expansion cable or to the controller module expansion port.
Is the expansion port status LED on?
•Yes – You now know that the expansion cable is good. Return the cable to the original port. If the expansion port status LED
remains off, you have isolated the fault to the controller module expansion port. Replace the controller module.
•No – Proceed to the next step.
7. Move the expansion cable back to the original port on the controller enclosure.
8. Move the expansion cable on the expansion enclosure to a known good port on the expansion enclosure.
Is the host link status LED on?
•Yes – You have isolated the problem to the expansion enclosure port. Replace the IOM in the expansion enclosure.
•No – Proceed to the next step.
9. Replace the cable with a known good cable, ensuring the cable is attached to the original ports used by the previous cable.
Is the host link status LED on?
•Yes – Replace the original cable. The fault has been isolated.
•No – It is likely that the controller module must be replaced.
A
Cabling for replication
Connecting two storage systems to replicate
volumes
The replication feature performs asynchronous replication of block-level data from a volume in a primary system to a volume in a
secondary system.
Replication creates an internal snapshot of the primary volume, and copies the changes to the data since the last replication to the
secondary system using FC or iSCSI links.
The two associated standard volumes form a replication set, and only the primary volume (source of data) can be mapped for access by a
server. Both systems must be connected through switches to the same fabric or network (no direct attach). The server accessing the
replication set is connected to the primary system. If the primary system goes offline, a connected server can access the replicated data
from the secondary system.
Systems can be cabled to support replication using CNC-based and 10Gbase-T systems on the same network, or on different networks.
NOTE: SAS systems do not support replication.
As you consider the physical connections of your system, keep several important points in mind:
• Ensure that controllers have connectivity between systems, whether the destination system is colocated or remotely located.
• Qualified Converged Network Controller options can be used for host I/O or replication, or both.
• The storage system does not provide for specific assignment of ports for replication. However, this configuration can be accomplished using virtual LANs for iSCSI and zones for FC, or by using physically separate infrastructure.
• For remote replication, ensure that all ports that are assigned for replication can communicate with the replication system by using the query peer-connection CLI command (see the example after this list). See the ME4 Series Storage System CLI Reference Guide for more information.
• Allow enough ports for replication so that the system can balance the load across those ports as I/O demands rise and fall. If controller A owns some of the volumes that are replicated and controller B owns other volumes that are replicated, then enable at least one port for replication on each controller module. You may need to enable more than one port per controller module depending on replication traffic load.
• For the sake of system security, do not unnecessarily expose the controller module network port to an external network connection.
Conceptual cabling examples are provided addressing cabling on the same network and cabling relative to different networks.
NOTE: The controller module firmware must be compatible on all systems that are used for replication.
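For example, to confirm that a local port enabled for replication can reach the peer system, you might run the following from the primary system, where 192.168.100.50 is a placeholder for a host-port address reported by show ports on the secondary system:
query peer-connection 192.168.100.50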
Host ports and replication
ME4 Series Storage System controller modules can use qualified 10Gbase-T connectors or CNC-based ports for replication.
CNC ports must use qualified SFP+ transceivers of the same type, or they can use a combination of qualified SFP+ transceivers
supporting different interface protocols. To use a combination of different protocols, configure host ports 0 and 1 to use FC, and configure
ports 2 and 3 to use iSCSI. FC and iSCSI ports can be used to perform host I/O or replication, or both.
NOTE: ME4 Series 5U84 enclosures support dual-controller configurations only. ME4 Series 2U controller enclosures support single-controller and dual-controller configurations.
• If a partner controller module fails, the storage system fails over and runs on a single controller module until the redundancy is restored.
• In dual-controller module configurations, a controller module must be installed in each slot to ensure sufficient airflow through the enclosure during operation. In single-controller module configurations, a controller module must be installed in slot A, and a controller module blank must be installed in slot B.
Example cabling for replication
Simplified versions of controller enclosures are used in the cabling figures to show the host ports that are used for I/O or replication.
•Replication supports FC and iSCSI host interface protocols.
•The 2U enclosure rear panel represents ME4 Series FC and iSCSI host interface ports.
•The 5U84 enclosure rear panel represents ME4 Series FC and iSCSI host interface ports.
•Host ports that are used for replication must use the same protocol (either FC or iSCSI).
•Blue cables show I/O traffic and green cables show replication traffic.
Once the CNC-based systems or 10Gbase-T systems are physically cabled, see the Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide or online help for information about configuring, provisioning, and using the replication feature.
Single-controller module configuration for replication
Cabling two ME4 Series controller enclosures that are equipped with a single controller module for replication.
Multiple servers, multiple switches, one network
The following diagram shows the rear panel of two controller enclosures with I/O and replication occurring on the same network:
Figure 34. Connecting two storage systems for replication – multiple servers, multiple switches, one network
1. 2U controller enclosures
2. Two switches (I/O)
3. Connection to host servers
4. Switch (Replication)
For optimal protection, use multiple switches for host I/O and replication.
•Connect two ports from the controller module in the left storage enclosure to the left switch.
•Connect two ports from the controller module in the right storage enclosure to the right switch.
•Connect two ports from the controller modules in each enclosure to the middle switch.
Use multiple switches to avoid a single point of failure inherent to using a single switch, and to physically isolate replication traffic from I/O
traffic.
Dual-controller module configuration for replication
Cabling two ME4 Series controller enclosures that are equipped with dual-controller modules for replication.
Multiple servers, one switch, one network
Figure 35. Connecting two ME4 Series 2U storage systems for replication – multiple servers, one switch, and one network shows the rear
panel of two 2U enclosures with I/O and replication occurring on the same network. Figure 36. Connecting two ME4 Series 5U storage
systems for replication – multiple servers, one switch, and one network shows the rear panel of two 5U84 enclosures with I/O and
replication occurring on the same network.
In the configuration, Virtual Local Area Network (VLAN) and zoning could be employed to provide separate networks for iSCSI and FC.
Create a VLAN or zone for I/O and a VLAN or zone for replication to isolate I/O traffic from replication traffic. Either configuration would
be displayed physically as a single network, while logically, either configuration would function as multiple networks.
Figure 35. Connecting two ME4 Series 2U storage systems for replication – multiple servers, one switch, and one network
Multiple servers, multiple switches, and one network
Figure 37. Connecting two ME4 Series 2U storage systems for replication – multiple servers, multiple switches, one network shows the
rear panel of two 2U enclosures with I/O and replication occurring on the same network. Figure 38. Connecting two ME4 Series 5U
storage systems for replication – multiple servers, multiple switches, one network shows the rear panel of two 5U enclosures with I/O and
replication occurring on the same network.
For optimal protection, use multiple switches for host I/O and replication.
•Connect two ports from each controller module in the left storage enclosure to the left switch.
•Connect two ports from each controller module in the right storage enclosure to the right switch.
•Connect two ports from the controller modules in each enclosure to the middle switch.
Use multiple switches to avoid a single point of failure inherent to using a single switch, and to physically isolate replication traffic from I/O
traffic.
Figure 37. Connecting two ME4 Series 2U storage systems for replication – multiple servers, multiple switches, one network
1. 2U controller enclosures
2. Two switches (I/O)
3. Connection to host servers
4. Switch (Replication)
Figure 38. Connecting two ME4 Series 5U storage systems for replication – multiple servers, multiple switches, one network
1. 5U controller enclosures
2. Two switches (I/O)
3. Connection to host servers
4. Switch (Replication)
Multiple servers, multiple switches, and two networks
Figure 39. Connecting two ME4 Series 2U storage systems for replication – multiple servers, multiple switches, two networks shows the rear panel of two 2U enclosures with I/O and replication occurring on different networks. Figure 40. Connecting two ME4 Series 5U storage systems for replication – multiple servers, multiple switches, two networks shows the rear panel of two 5U enclosures with I/O and replication occurring on different networks.
•The switch that is on the left supports I/O traffic to local network A.
•The switch that is on the right supports I/O traffic to remote network B.
•The Ethernet WAN in the middle supports replication traffic.
If there is a failure at either the local network or the remote network, you can fail over to the available network.
The following figures represent two branch offices that are cabled for disaster recovery and backup:
Figure 39. Connecting two ME4 Series 2U storage systems for replication – multiple servers, multiple switches, two networks
1. 2U controller enclosures
2. Two switches (I/O)
3. Connection to host servers (network A)
4. Connection to host servers (network B)
5. Ethernet WAN
Figure 40. Connecting two ME4 Series 5U storage systems for replication – multiple servers, multiple switches, two networks
1. 5U controller enclosures
2. Two switches (I/O)
3. Connection to host servers (network A)
4. Connection to host servers (network B)
5. Ethernet WAN
Isolating replication faults
Replication is a disaster-recovery feature that performs asynchronous replication of block-level data from a volume in a primary storage
system to a volume in a secondary storage system.
The replication feature creates an internal snapshot of the primary volume, and copies changes to the data since the last replication to the
secondary system using iSCSI or FC connections. The primary volume exists in a primary pool in the primary storage system. Replication
can be completed using either the ME Storage Manager or the CLI.
Replication setup and verification
After storage systems are cabled for replication, you can use the ME Storage Manager to prepare for using the replication feature.
Alternatively, you can use SSH or telnet to access the IP address of the controller module and access the replication feature using the CLI.
Basic information for enabling the ME4 Series Storage System controller enclosures for replication supplements the troubleshooting
procedures that follow.
• Familiarize yourself with replication content provided in the Dell EMC PowerVault ME4 Series Storage System Administrator's Guide.
• For virtual replication, perform the following steps to replicate an existing volume to a pool on the peer in the primary system or secondary system (a condensed CLI example follows this list):
1. Find the port address on the secondary system:
Using the CLI, run the show ports command on the secondary system.
2. Verify that ports on the secondary system can be reached from the primary system using either of the following methods:
•Run the query peer-connection CLI command on the primary system, using a port address obtained from the output of
the show ports command.
•In the ME Storage Manager Replications topic, select Action > Query Peer Connection.
3. Create a peer connection.
To create a peer connection, use the create peer-connection CLI command or in the ME Storage Manager Replications topic,
select Action > Create Peer Connection.
4. Create a virtual replication set.
To create a replication set, use the create replication-set CLI command or in the ME Storage Manager Replications
topic, select Action > Create Replication Set.
5. Replicate.
To initiate replication, use the replicate CLI command or in the ME Storage Manager Replications topic, select Action > Replicate.
• Using the ME Storage Manager, monitor the storage system event logs for information about enclosure-related events, and determine any necessary recommended actions.
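A condensed CLI sketch of the steps above is shown below. The address, peer-connection name, volume name, and replication-set name are placeholders, and the parameter forms shown for the create and replicate commands are assumptions; confirm the exact syntax in the Dell EMC PowerVault ME4 Series Storage System CLI Reference Guide.
show ports
query peer-connection 192.168.100.50
create peer-connection ip 192.168.100.50 Peer1
create replication-set peer-connection Peer1 primary-volume Vol1 RepSet1
replicate RepSet1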
NOTE: These steps are a general outline of the replication setup. Refer to the following manuals for more information
about replication setup:
• See the Dell EMC PowerVault ME4 Series Storage System Administrator's Guide for procedures to set up and manage replications.
• See the Dell EMC PowerVault ME4 Series Storage System CLI Guide for replication commands and syntax.
NOTE: Controller module firmware must be compatible on all systems that are used for replication.
Diagnostic steps for replication setup
The tables in the following section show menu navigation for virtual replication using the ME Storage Manager:
NOTE: SAS controller enclosures do not support replication.
Can you successfully use the replication feature?
Table 22. Diagnostics for replication setup: Using the replication feature
Answer: Yes
Possible reasons: System functioning properly.
Action: No action required.

Answer: No
Possible reasons: Compatible firmware revision supporting the replication feature is not running on each system that is used for replication.
Action: Perform the following actions on each system used for virtual replication:
• On the System topic, select Action > Update Firmware. The Update Firmware panel opens. The Update Controller Modules tab shows firmware versions that are installed in each controller.
• If necessary, update the controller module firmware to ensure compatibility with the other systems.
• See the topic about updating firmware in the Dell EMC PowerVault ME4 Series Storage System Administrator's Guide for more information about compatible firmware.

Answer: No
Possible reasons: Invalid cabling connection. (If multiple enclosures are used, check the cabling for each system.)
Action: Verify controller enclosure cabling:
• Verify use of proper cables.
• Verify proper cabling paths for host connections.
• Verify cabling paths between replication ports and switches are visible to one another.
• Verify that cable connections are securely fastened.
• Inspect cables for damage and replace if necessary.

Answer: No
Possible reasons: A system does not have a pool that is configured.
Action: Configure each system to have a storage pool.
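For the firmware-compatibility check in the table above, the installed firmware can also be inspected from the CLI on each system; the following command is expected to report bundle and component firmware versions on these systems, but confirm its availability and output fields in the CLI Reference Guide for your firmware release:
show versions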
Can you create a replication set?
After verifying cabling and network availability, create a replication set by selecting Action > Create Replication Set from the Replications topic.
Table 23. Diagnostics for replication setup – Creating a replication set
Answer: No
Possible reasons: With iSCSI host interface ports, replication set creation fails due to use of CHAP.
Action: If using CHAP, see the topics about configuring CHAP and working in replications within the Dell EMC PowerVault ME4 Series Storage System Administrator's Guide.

Answer: No
Possible reasons: Unable to create the secondary volume (the destination volume on the pool to which you replicate data from the primary volume).
Action:
• Review event logs for indicators of a specific fault in a replication data path component. Follow any Recommended Actions.
• Verify valid specification of the secondary volume according to either of the following criteria:
• A conflicting volume does not exist.
• Available free space in the pool.

Answer: No
Possible reasons: Communication link is down.
Action: Review event logs for indicators of a specific fault in a host or replication data path component.
Can you replicate a volume?
Table 24. Diagnostics for replication setup – Replicating a volume
Answer: No
Action:
• Determine existence of primary or secondary volumes.
• If a replication set has not been successfully created, use Action > Create Replication Set on the Replications topic to create one.
• Review event logs (in the footer, click the events panel and select Show Event List) for indicators of a specific fault in a replication data path component. Follow any Recommended Actions.
• Review event logs for indicators of a specific fault in a replication data path component. Follow any Recommended Actions.
• Click in the Volumes topic, and then click a volume name in the volumes list. Click the Replication Sets tab to display replications and associated metadata.
• Replications that enter the suspended state can be resumed manually (see the Dell EMC PowerVault ME4 Series Storage System Administrator's Guide for additional information).

Answer: No
Possible reasons: Communication link is down.
Action: Review event logs for indicators of a specific fault in a host or replication data path component.
Has a replication run successfully?
Table 25. Diagnostics for replication setup: Checking for a successful replication
Answer: Yes
Possible reasons: System functioning properly.
Action: No action required.

Answer: No
Possible reasons: Last Successful Run shows N/A.
Action:
• In the Volumes topic, click the volume that is a member of the replication set.
• Select the Replication Sets table.
• Check the Last Successful Run information.
• If the replication has not run successfully, use the ME Storage Manager to replicate as described in the topic about working in replications in the Dell EMC PowerVault ME4 Series Storage System Administrator's Guide.

Answer: No
Possible reasons: Communication link is down.
Action: Review event logs for indicators of a specific fault in a host or replication data path component.
B
SFP+ transceiver for FC/iSCSI ports
This section describes how to install the small form-factor pluggable (SFP+) transceivers ordered with the ME4 Series FC/iSCSI
controller module.
Locate the SFP+ transceivers
Locate the SFP+ transceivers that shipped with the controller enclosure, which look similar to the generic SFP+ transceiver that is shown
in the following figure:
Figure 41. Install an SFP+ transceiver into the ME4 Series FC/iSCSI controller module
1. CNC-based controller module face
2. CNC port
3. SFP+ transceiver (aligned)
4. Fiber-optic cable
5. SFP+ transceiver (installed)
NOTE: Refer to the label on the SFP+ transceiver to determine whether it supports the FC or iSCSI protocol.
Install an SFP+ transceiver
Perform the following steps to install an SFP+ transceiver:
NOTE: Follow the guidelines provided in Electrical safety when installing an SFP+ transceiver.
1. Orient the SFP+ transceiver with the port and align it for insertion.
For 2U controller enclosures, the transceiver is installed either right-side up or upside down, depending on whether it is installed into controller module A or B.
2. If the SFP+ transceiver has a plug, remove it before installing the transceiver. Retain the plug.
3. Flip the actuator open.
4. Slide the SFP+ transceiver into the port until it locks securely into place.
5. Flip the actuator closed.
NOTE: The actuator on your SFP+ transceiver may look slightly different than the one shown in Figure 41. Install an SFP+ transceiver into the ME4 Series FC/iSCSI controller module.
6. Connect a qualified fiber-optic interface cable into the duplex jack of the SFP+ transceiver.
If you do not plan to use the SFP+ transceiver immediately, reinsert the plug into the duplex jack of the SFP+ transceiver to keep its optics free of dust.
Verify component operation
View the port Link Status/Link Activity LED on the controller module face plate. A green LED indicates that the port is connected and the
link is up.
NOTE: To remove an SFP+ transceiver, perform the installation steps in reverse order relative to what is described in
Install an SFP+ transceiver.
C
System Information Worksheet
Use the system information worksheet to record the information that is needed to install the ME4 Series Storage System.
ME4 Series Storage System information
Gather and record the following information about the ME4 Series storage system network and the administrator user:
Table 26. ME4 Series Storage System network
Item | Information
Service tag |
Management IPv4 address (ME4 Series Storage System management address) | _____ . _____ . _____ . _____
Top controller module IPv4 address (Controller A MGMT port) | _____ . _____ . _____ . _____
Bottom controller module IPv4 address (Controller B MGMT port) | _____ . _____ . _____ . _____
Subnet mask | _____ . _____ . _____ . _____
Gateway IPv4 address | _____ . _____ . _____ . _____
Gateway IPv6 address | ________ :________ :________ :______::_____
Domain name |
DNS server address | _____ . _____ . _____ . _____
Secondary DNS server address | _____ . _____ . _____ . _____
Table 27. ME4 Series Storage System administrator
Item | Information
Password for the default ME4 Series Storage System Admin user |
Email address of the default ME4 Series Storage System Admin user |
iSCSI network information
For a storage system with iSCSI front-end ports, plan and record network information for the iSCSI network.
NOTE: For a storage system deployed with two Ethernet switches, Dell EMC recommends setting up separate subnets.
Table 28. iSCSI Subnet 1
Item | Information
Subnet mask | _____ . _____ . _____ . _____
Gateway IPv4 address | _____ . _____ . _____ . _____
IPv4 address for storage controller module A: port 0 | _____ . _____ . _____ . _____
IPv4 address for storage controller module B: port 0 | _____ . _____ . _____ . _____
IPv4 address for storage controller module A: port 2 | _____ . _____ . _____ . _____
IPv4 address for storage controller module B: port 2 | _____ . _____ . _____ . _____
Table 29. iSCSI Subnet 2
Item | Information
Subnet mask | _____ . _____ . _____ . _____
Gateway IPv4 address | _____ . _____ . _____ . _____
IPv4 address for storage controller module A: port 1 | _____ . _____ . _____ . _____
IPv4 address for storage controller module B: port 1 | _____ . _____ . _____ . _____
IPv4 address for storage controller module A: port 3 | _____ . _____ . _____ . _____
IPv4 address for storage controller module B: port 3 | _____ . _____ . _____ . _____
Gateway IPv6 address | ________ :________ :________ :______::_____
Additional ME4 Series Storage System information
The Network Time Protocol (NTP) and Simple Mail Transfer Protocol (SMTP) server information is optional. The proxy server information
is also optional, but it may be required to complete the Discover and Configure Uninitialized wizard.
Table 30. NTP, SMTP, and Proxy servers
Item | Information
NTP server IPv4 address | _____ . _____ . _____ . _____
SMTP server IPv4 address | _____ . _____ . _____ . _____
Backup NTP server IPv4 address | _____ . _____ . _____ . _____
SMTP server login ID |
SMTP server password |
Proxy server IPv4 address | _____ . _____ . _____ . _____
Fibre Channel zoning information
For a storage system with Fibre Channel front-end ports, record the physical and virtual WWNs of the Fibre Channel ports in fabric 1 and
fabric 2. This information is displayed on the Review Front-End page of the Discover and Configure Uninitialized wizard. Use this
information to configure zoning on each Fibre Channel switch.
Table 31. WWNs in fabric 1
Item | FC switch port | Information
WWN of storage controller A: port 0
WWN of storage controller B: port 0
WWN of storage controller A: port 2
WWN of storage controller B: port 2
WWNs of server HBAs:
Table 32. WWNs in fabric 2
Item | FC switch port | Information
WWN of storage controller A: port 1
WWN of storage controller B: port 1
WWN of storage controller A: port 3
WWN of storage controller B: port 3
D
Setting network port IP addresses using the
CLI port and serial cable
You can manually change the default static IP values for each controller. You can also specify that IP values should be set automatically for
both controllers through communication with a Dynamic Host Configuration Protocol (DHCP) server.
Network ports on controller module A and controller module B are configured with the following default values:
•Network port IP address: 10.0.0.2 (controller A), 10.0.0.3 (controller B)
•IP subnet mask: 255.255.255.0
•Gateway IP address : 10.0.0.1
If the default IP addresses are not compatible with your network, you must set an IP address for each network port using the CLI. The CLI
enables you to access the system using the 3.5mm stereo plug CLI port or USB CLI port and terminal emulation software.
NOTE:
• If you are using the mini-USB CLI port and cable, see Mini-USB Device Connection.
• Windows customers should download and install the device driver as described in Obtaining the USB driver, unless
they are using Windows 10 or Windows Server 2016 and later.
• Linux customers should prepare the USB port as described in Linux drivers.
Use the CLI commands described in the following steps to set the IP address for the network port on each controller module:
NOTE: When new IP addresses are set, you can change them as needed using the ME Storage Manager. Be sure to change the IP address before changing the network configuration.
1. From your network administrator, obtain an IP address, subnet mask, and gateway address for controller A and another for controller
B.
2. Record the IP addresses so you can specify them whenever you manage the controllers using the ME Storage Manager or the CLI.
3. Connect the 3.5mm/DB9 serial cable from a computer with a serial port to the 3.5mm stereo plug CLI port on controller A.
Alternatively, connect a generic mini-USB cable (not included) from a computer to the USB CLI port on controller A.
The mini-USB connector plugs into the USB CLI port as shown in the following figure:
Figure 42. Connecting a USB cable to the CLI port
4. If you are using a mini-USB cable, enable the USB CLI port for communication:
NOTE: Skip this step if you are using the 3.5mm/DB9 serial cable.
• Unless you are using Windows 10 or Windows Server 2016 and later, download and install the USB device driver for the CLI port, as described in Microsoft Windows drivers.
• On a Linux computer, enter the command syntax that is provided in Linux drivers.
5. Start a terminal emulator and configure it to use the display settings in Table 33. Terminal emulator display settings and the connection
settings in Table 34. Terminal emulator connection settings.
Table 33. Terminal emulator display settings
Parameter | Value
Terminal emulation mode | VT-100 or ANSI (for color support)
Font | Terminal
Translations | None
Columns | 80
Table 34. Terminal emulator connection settings
Parameter | Value
Connector | COM3 (for example); see notes 1 and 2
Baud rate | 115,200
Data bits | 8
Parity | None
Stop bits | 1
Flow control | None

Note 1: Your computer configuration determines which COM port is used for the Disk Array USB Port.
Note 2: Verify the appropriate COM port for use with the CLI.
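For example, on a Linux computer running Minicom, a connection matching these settings might be opened as follows; the device name /dev/ttyACM0 is an assumption and depends on how your distribution enumerates the emulated serial port:
minicom -b 115200 -8 -D /dev/ttyACM0
Disable hardware and software flow control in the Minicom serial port settings to match the connection settings above.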
6. In the terminal emulator, connect to controller A.
7. If necessary, press Enter to display the login prompt.
8. If you are connecting to a storage system with G275 firmware that has not been deployed:
a) Type manage at the login prompt and press Enter.
b) Type !manage at the Password prompt and press Enter.
If you are connecting to a storage system with G275 firmware that has been deployed:
a) Type the user name of a user with the manage role at the login prompt and press Enter.
b) Type the password for the user at the Password prompt and press Enter.
9. If you are connecting to a storage system with G280 firmware that has not been deployed:
a) Type setup at the login prompt and press Enter.
b) Do not type anything at the Password prompt and press Enter.
If you are connecting to a storage system with G280 firmware that has been deployed:
a) Type the user name of a user with the manage role at the login prompt and press Enter.
b) Type the password for the user at the Password prompt and press Enter.
10. If you want to use DHCP to set network port IP values, enter the following command at the prompt:
set network-parameters dhcp
If you want to use custom static IP addresses, enter the following CLI command to set the values you obtained in step 1:
NOTE:
Run the command for controller A first, and then run the command for controller B.
set network-parameters ip address netmask netmask gateway gateway controller a|b
where:
•address is the IP address of the controller
•netmask is the subnet mask
•gateway is the IP address of the subnet router
•a|b specifies the controller whose network parameters you are setting
For example:
set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway 192.168.0.1
controller a
set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway 192.168.0.1
controller b
11. Enter the following CLI command to verify the new IP addresses:
show network-parameters
Network parameters, including the IP address, subnet mask, and gateway address are displayed for each controller.
12. In the command window on the host computer, type the following command to verify connectivity, first for controller A and then for
controller B:
ping controller-IP-address
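For example, if you assigned the static addresses shown in step 10:
ping 192.168.0.10
ping 192.168.0.11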
If you cannot access your system for at least three minutes after changing the IP address, restart the controllers using the CLI.
When you restart a Management Controller, communication with it is temporarily lost until it successfully restarts. Enter the following
CLI command to restart the Management Controller in both controllers:
restart mc both
Topics:
•Mini-USB Device Connection
Mini-USB Device Connection
The following sections describe the connection to the mini-USB port:
Emulated serial port
When a computer is connected to a controller module using a mini-USB serial cable, the controller presents an emulated serial port to the
computer. The name of the emulated serial port is displayed using a customer vendor ID and product ID. Serial port configuration is
unnecessary.
NOTE: Certain operating systems require a device driver or special mode of operation to enable proper functioning of the USB CLI port. See also Device driver/special operation mode.
Supported host applications
The following terminal emulator applications can be used to communicate with an ME4 Series controller module:
Table 35. Supported terminal emulator applications
Application | Operating system
PuTTY | Microsoft Windows (all versions)
Minicom | Linux (all versions)
Command-line interface
When the computer detects a connection to the emulated serial port, the controller awaits input of characters from the computer using
the command-line interface. To see the CLI prompt, you must press Enter.
NOTE: Directly cabling to the mini-USB port is considered an out-of-band connection. The connection to the mini-USB port is outside of the normal data paths to the controller enclosure.
Device driver/special operation mode
Certain operating systems require a device driver or special mode of operation. The following table displays the product and vendor
identification information that is required for certain operating systems:
USB identification code type | Code
USB Vendor ID | 0x210c
USB Product ID | 0xa4a7
Microsoft Windows drivers
Dell EMC provides an ME4 Series USB driver for use in Windows environments.
Obtaining the USB driver
NOTE: If you are using Windows 10 or Windows Server 2016, the operating system provides a native USB serial driver
that supports the mini-USB port. However, if you are using an older version of Windows, you should download and
install the USB driver.
1. Go to Dell.com/support and search for ME4 Series USB driver.
2. Download the ME4 Series Storage Array USB Utility file from the Dell EMC support site.
3. Follow the instructions on the download page to install the ME4 Series USB driver.
Known issues with the CLI port and mini-USB cable on Microsoft
Windows
When using the CLI port and cable for setting network port IP addresses, be aware of the following known issue on Windows:
Problem
The computer might encounter issues that prevent the terminal emulator software from reconnecting after the controller module restarts
or the USB cable is unplugged and reconnected.
Workaround
To restore a connection that stopped responding when the controller module was restarted:
1. If the connection to the mini-USB port stops responding, disconnect and quit the terminal emulator program.
a. Using Device Manager, locate the COMn port that is assigned to the mini-USB port.
b. Right-click on the Disk Array USB Port (COMn) port, and select Disable device.
2. Right-click on the Disk Array USB Port (COMn) port, and select Enable device.
3. Start the terminal emulator software and connect to the COM port.
NOTE: On Windows 10 or Windows Server 2016, the XON/XOFF setting in the terminal emulator software must be disabled to use the COM port.
Linux drivers
Linux operating systems do not require the installation of an ME4 Series USB driver. However, certain parameters must be provided during
driver loading to enable recognition of the mini-USB port on an ME4 Series controller module.
•Type the following command to load the Linux device driver with the parameters that are required to recognize the mini-USB port: