HP 6400/8400, StorageWorks 6400, StorageWorks 8400 User Manual

HP 6400/8400 Enterprise Virtual Array User Guide
Abstract
This document describes the components and operation of the HP 6400/8400 Enterprise Virtual Array.
IMPORTANT: With the release of the P6300/P6500 EVA, the EVA family name has been rebranded to HP P6000 EVA. The
names for all existing EVA array models will not change. The rebranding change also affects related EVA software. The following product names have been rebranded:
HP P6000 Business Copy (formerly HP StorageWorks Business Copy EVA)
HP P6000 Continuous Access (formerly HP StorageWorks Continuous Access EVA)
HP P6000 Replication Solutions Manager (formerly HP StorageWorks Replication Solutions Manager)
HP P6000 SmartStart (formerly HP StorageWorks SmartStart EVA Storage)
All rebranded software continues to support all existing EVA models (EVA3000/5000, EVA4000/6000/8000, EVA4100/6100/8100, EVA4400, and EVA6400/8400).
HP Part Number: 5697-0977 Published: June 2011 Edition: 5
© Copyright 2009, 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Warranty
WARRANTY STATEMENT: To obtain a copy of the warranty for this product, see the warranty information website:
http://www.hp.com/go/storagewarranty
Acknowledgements
Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.
Java™ is a US trademark of Sun Microsystems, Inc.
UNIX® is a registered trademark of The Open Group.
Contents
1 EVA6400/8400 hardware..........................................................................9
M6412A disk enclosures............................................................................................................9
Enclosure layout...................................................................................................................9
I/O modules.....................................................................................................................10
I/O module status indicators..........................................................................................10
Fiber optic Fibre Channel cables..........................................................................................11
Copper Fibre Channel cables..............................................................................................12
Fibre Channel disk drives....................................................................................................12
Disk drive status indicators..............................................................................................12
Disk drive blank............................................................................................................13
Controller enclosures...............................................................................................................13
Operator control panel.......................................................................................................14
Status indicators............................................................................................................15
Navigation buttons........................................................................................................16
Alphanumeric display....................................................................................................16
Power supplies.......................................................................................................................16
Blower module.......................................................................................................................17
Battery module.......................................................................................................................17
HSV controller cabling............................................................................................................18
Storage system racks...............................................................................................................19
Rack configurations............................................................................................................19
Power distribution–Modular PDUs.............................................................................................20
PDUs................................................................................................................................21
PDU A.........................................................................................................................22
PDU B.........................................................................................................................22
PDMs...............................................................................................................................22
Rack AC power distribution.................................................................................................23
Rack System/E power distribution components.......................................................................24
Rack AC power distribution............................................................................................24
Moving and stabilizing a rack..................................................................................................25
2 Enterprise Virtual Array startup ..................................................................27
EVA8400 storage system connections........................................................................................27
EVA6400 storage system connections.......................................................................................28
Direct connect........................................................................................................................28
iSCSI connection configurations................................................................................................29
Fabric connect iSCSI..........................................................................................................29
Direct connect iSCSI...........................................................................................................29
Procedures for getting started...................................................................................................30
Gathering information........................................................................................................30
Host information...........................................................................................................30
Setting up a controller pair using the OCP............................................................................31
Entering the WWN.......................................................................................................31
Entering the WWN checksum.........................................................................................32
Entering the storage system password..............................................................................33
Installing HP P6000 Command View....................................................................................33
Installing optional EVA software licenses...............................................................................33
3 EVA6400/8400 operation........................................................................34
Best practices.........................................................................................................................34
Operating tips and information................................................................................................34
Reserving adequate free space............................................................................................34
Using FATA disk drives........................................................................................................34
Using solid state disk drives.................................................................................................34
QLogic HBA speed setting..................................................................................................34
EVA6400/8400 host port negotiates to incorrect speed.........................................................34
Creating 16 TB or greater virtual disks in Windows 2008.......................................................35
Importing Windows dynamic disk volumes............................................................................35
Losing a path to a dynamic disk..........................................................................................35
Microsoft Windows 2003 MSCS cluster installation................................................................35
Maximum LUN size............................................................................................................35
Managing unused ports......................................................................................................36
Changing the host port connectivity......................................................................................36
Failback preference setting for HSV controllers............................................................................37
Changing virtual disk failover/failback setting.......................................................................39
Implicit LUN transition.........................................................................................................39
Storage system shutdown and startup........................................................................................39
Shutting down the storage system.........................................................................................40
Starting the storage system..................................................................................................40
Saving storage system configuration data...................................................................................40
Adding disk drives to the storage system....................................................................................42
Creating disk groups..........................................................................................................43
Handling fiber optic cables......................................................................................................43
Using the OCP.......................................................................................................................43
Displaying the OCP menu tree.............................................................................................43
Displaying system information..............................................................................................45
Displaying versions system information..................................................................................45
Shutting down the system....................................................................................................45
Shutting the controller down................................................................................................46
Restarting the system..........................................................................................................46
Uninitializing the system......................................................................................................47
Password options...............................................................................................................47
Changing a password........................................................................................................47
Clearing a password..........................................................................................................48
4 Configuring application servers..................................................................49
Overview..............................................................................................................................49
Clustering..............................................................................................................................49
Multipathing..........................................................................................................................49
Installing Fibre Channel adapters..............................................................................................49
Testing connections to the EVA.................................................................................................50
Adding hosts..........................................................................................................................50
Creating and presenting virtual disks.........................................................................................50
Verifying virtual disk access from the host...................................................................................51
Configuring virtual disks from the host.......................................................................................51
HP-UX...................................................................................................................................51
Scanning the bus...............................................................................................................51
Creating volume groups on a virtual disk using vgcreate.........................................................52
IBM AIX................................................................................................................................52
Accessing IBM AIX utilities..................................................................................................52
Adding hosts.....................................................................................................................53
Creating and presenting virtual disks....................................................................................53
Verifying virtual disks from the host.......................................................................................53
Linux.....................................................................................................................................54
Driver failover mode...........................................................................................................54
Installing a Qlogic driver....................................................................................................54
Upgrading Linux components..............................................................................................55
Upgrading qla2x00 RPMs..............................................................................................55
Detecting third-party storage...........................................................................................55
Compiling the driver for multiple kernels...........................................................................56
Uninstalling the Linux components........................................................................................56
Using the source RPM.........................................................................................................56
Verifying virtual disks from the host.......................................................................................57
OpenVMS.............................................................................................................................57
Updating the AlphaServer console code, Integrity Server console code, and Fibre Channel FCA
firmware...........................................................................................................................57
Verifying the Fibre Channel adapter software installation........................................................57
Console LUN ID and OS unit ID...........................................................................................57
Adding OpenVMS hosts.....................................................................................................58
Scanning the bus...............................................................................................................59
Configuring virtual disks from the OpenVMS host...................................................................60
Setting preferred paths.......................................................................................................60
Oracle Solaris........................................................................................................................60
Loading the operating system and software...........................................................................60
Configuring FCAs with the Oracle SAN driver stack...............................................................60
Configuring Emulex FCAs with the lpfc driver....................................................................61
Configuring QLogic FCAs with the qla2300 driver.............................................................62
Fabric setup and zoning.....................................................................................................64
Oracle StorEdge Traffic Manager (MPxIO)/Oracle Storage Multipathing..................................64
Configuring with Veritas Volume Manager............................................................................64
Configuring virtual disks from the host...................................................................................65
Verifying virtual disks from the host..................................................................................67
Labeling and partitioning the devices...............................................................................67
VMware................................................................................................................................68
Installing or upgrading VMware .........................................................................................68
Configuring the EVA6400/8400 with VMware host servers....................................................68
Configuring an ESX server ..................................................................................................69
Loading the FCA NVRAM..............................................................................................69
Setting the multipathing policy........................................................................................69
Specifying DiskMaxLUN.................................................................................................70
Verifying connectivity.....................................................................................................70
Verifying virtual disks from the host.......................................................................................70
Configuring raw device mapping.........................................................................................71
Windows..............................................................................................................................71
Verifying virtual disk access from the host..............................................................................71
Setting the Pending Timeout value for large cluster configurations.............................................71
5 Customer replaceable units........................................................................72
Customer self repair (CSR).......................................................................................................72
Parts only warranty service..................................................................................................72
Best practices for replacing hardware components......................................................................72
Component replacement videos...........................................................................................72
Verifying component failure.................................................................................................72
Identifying the spare part....................................................................................................72
Replaceable parts...................................................................................................................73
Replacing the failed component................................................................................................75
Replacement instructions..........................................................................................................75
6 Support and other resources......................................................................76
Contacting HP........................................................................................................................76
Subscription service............................................................................................................76
Documentation feedback....................................................................................................76
Related information.................................................................................................................76
Documents........................................................................................................................76
HP websites......................................................................................................................76
Typographic conventions.........................................................................................................77
Rack stability..........................................................................................................................78
Customer self repair................................................................................................................78
A Regulatory compliance notices...................................................................79
Regulatory compliance identification numbers............................................................................79
Federal Communications Commission notice..............................................................................79
FCC rating label................................................................................................................79
Class A equipment........................................................................................................79
Class B equipment........................................................................................................79
Declaration of Conformity for products marked with the FCC logo, United States only.................80
Modification.....................................................................................................................80
Cables.............................................................................................................................80
Canadian notice (Avis Canadien).............................................................................................80
Class A equipment.............................................................................................................80
Class B equipment.............................................................................................................80
European Union notice............................................................................................................80
Japanese notices....................................................................................................................81
Japanese VCCI-A notice......................................................................................................81
Japanese VCCI-B notice......................................................................................................81
Japanese VCCI marking.....................................................................................................81
Japanese power cord statement...........................................................................................81
Korean notices.......................................................................................................................81
Class A equipment.............................................................................................................81
Class B equipment.............................................................................................................82
Taiwanese notices...................................................................................................................82
BSMI Class A notice...........................................................................................................82
Taiwan battery recycle statement..........................................................................................82
Turkish recycling notice............................................................................................................82
Vietnamese Information Technology and Communications compliance marking...............................82
Laser compliance notices.........................................................................................................83
English laser notice............................................................................................................83
Dutch laser notice..............................................................................................................83
French laser notice.............................................................................................................83
German laser notice...........................................................................................................84
Italian laser notice..............................................................................................................84
Japanese laser notice.........................................................................................................84
Spanish laser notice...........................................................................................................85
Recycling notices....................................................................................................................85
English recycling notice......................................................................................................85
Bulgarian recycling notice...................................................................................................86
Czech recycling notice........................................................................................................86
Danish recycling notice.......................................................................................................86
Dutch recycling notice.........................................................................................................86
Estonian recycling notice.....................................................................................................87
Finnish recycling notice.......................................................................................................87
French recycling notice.......................................................................................................87
German recycling notice.....................................................................................................87
Greek recycling notice........................................................................................................88
Hungarian recycling notice.................................................................................................88
Italian recycling notice........................................................................................................88
Latvian recycling notice.......................................................................................................88
Lithuanian recycling notice..................................................................................................89
Polish recycling notice.........................................................................................................89
Portuguese recycling notice.................................................................................................89
Romanian recycling notice..................................................................................................89
Slovak recycling notice.......................................................................................................90
Spanish recycling notice.....................................................................................................90
Swedish recycling notice.....................................................................................................90
Battery replacement notices.....................................................................................................90
Dutch battery notice...........................................................................................................90
French battery notice..........................................................................................................91
German battery notice........................................................................................................91
Italian battery notice..........................................................................................................92
Japanese battery notice......................................................................................................92
Spanish battery notice........................................................................................................93
B Error messages.........................................................................................94
C Controller fault management....................................................................103
Using HP P6000 Command View...........................................................................................103
GUI termination event display................................................................................................103
GUI event display............................................................................................................103
Fault management displays...............................................................................................104
Displaying Last Fault Information...................................................................................104
Displaying Detailed Information....................................................................................104
Interpreting fault management information......................................................................105
D Non-standard rack specifications..............................................................106
Rack specifications................................................................................................................106
Internal component envelope.............................................................................................106
EIA310-D standards..........................................................................................................106
EVA cabinet measures and tolerances.................................................................................106
Weights, dimensions and component CG measurements.......................................................106
Airflow and Recirculation..................................................................................................107
Component Airflow Requirements..................................................................................107
Rack Airflow Requirements...........................................................................................107
Configuration Standards...................................................................................................107
Environmental and operating specifications..............................................................................107
UPS Selection..................................................................................................................107
Shock and vibration specifications......................................................................................109
E Single Path Implementation......................................................................111
High-level solution overview...................................................................................................111
Benefits at a glance..............................................................................................................111
Installation requirements........................................................................................................112
Recommended mitigations.....................................................................................................112
Supported configurations.......................................................................................................112
General configuration components.....................................................................................112
Connecting a single path HBA server to a switch in a fabric zone..........................................112
HP-UX configuration.........................................................................................................114
Requirements..............................................................................................................114
HBA configuration.......................................................................................................114
Risks..........................................................................................................................115
Limitations..................................................................................................................115
Windows Server (32-bit) configuration................................................................................115
Requirements..............................................................................................................115
HBA configuration.......................................................................................................116
Risks..........................................................................................................................116
Limitations..................................................................................................................116
Windows Server (64-bit) configuration................................................................................117
Requirements..............................................................................................................117
HBA configuration.......................................................................................................117
Risks..........................................................................................................................117
Limitations..................................................................................................................117
Oracle Solaris configuration..............................................................................................118
Requirements..............................................................................................................118
HBA configuration.......................................................................................................118
Risks..........................................................................................................................119
Limitations..................................................................................................................119
Tru64 UNIX configuration.................................................................................................119
Requirements..............................................................................................................119
HBA configuration.......................................................................................................120
Risks..........................................................................................................................120
OpenVMS configuration...................................................................................................121
Requirements..............................................................................................................121
HBA configuration.......................................................................................................121
Risks..........................................................................................................................122
Limitations..................................................................................................................122
Linux (32-bit) configuration................................................................................................122
Requirements..............................................................................................................122
HBA configuration.......................................................................................................123
Risks..........................................................................................................................123
Limitations..................................................................................................................123
Linux (64-bit) configuration................................................................................................123
Requirements..............................................................................................................123
HBA configuration.......................................................................................................124
Risks..........................................................................................................................124
Limitations..................................................................................................................124
IBM AIX configuration......................................................................................................125
Requirements..............................................................................................................125
HBA configuration.......................................................................................................125
Risks..........................................................................................................................125
Limitations..................................................................................................................125
VMware configuration......................................................................................................126
Requirements..............................................................................................................126
HBA configuration.......................................................................................................126
Risks..........................................................................................................................127
Limitations..................................................................................................................127
Failure scenarios...................................................................................................................127
HP-UX.............................................................................................................................127
Windows Server .............................................................................................................128
Oracle Solaris.................................................................................................................128
OpenVMS and Tru64 UNIX..............................................................................................129
Linux..............................................................................................................................129
IBM AIX..........................................................................................................................130
VMware.........................................................................................................................130
Glossary..................................................................................................132
Index.......................................................................................................143
1 EVA6400/8400 hardware
The EVA6400/8400 contains the following hardware components:
HSV controllers—Contains power supplies, cache batteries, fans, and an operator control
panel (OCP)
Fibre Channel disk enclosure—Contains disk drives, power supplies, fans, midplane, and I/O
modules
Fibre Channel Arbitrated Loop cables—Provides connectivity to the HSV controllers and the
Fibre Channel disk enclosures
Rack—Several free standing racks are available
M6412A disk enclosures
The M6412A disk enclosure contains the disk drives used for data storage; a storage system contains multiple disk enclosures. The major components of the enclosure are:
12-bay enclosure
Dual-loop, Fibre Channel drive enclosure I/O modules
Copper Fibre Channel cables
Fibre Channel disk drives and drive blanks
Power supplies
Fan modules
Enclosure layout
The disk drives mount in bays in the front of the enclosure. The bays are numbered sequentially from top to bottom and left to right. A drive is referred to by its bay number (see Figure 1 (page
9)). Enclosure status indicators are located at the right of each disk. Figure 2 (page 9) shows
the front and Figure 3 (page 10) shows the rear view of the disk enclosure.
Figure 1 Disk drive bay numbering
Figure 2 Disk enclosure front view without bezel ears
1. Rack-mounting thumbscrew
2. Disk drive release
3. Drive LEDs
4. UID push button
5. Enclosure status LEDs
Figure 3 Disk enclosure rear view
1. Power supply 1
2. Power supply 1 status LED
3. Fan 1
4. Enclosure product number and serial number
5. Fan 1 status LED
6. I/O module A
7. I/O module B
8. Rear UID push button
9. Enclosure status LEDs
10. Fan 2
11. Power push button
12. Power supply 2
I/O modules
Two I/O modules provide the interface between the disk enclosure and the host controllers (Figure 4 (page 10)). For redundancy, only dual-controller, dual-loop operation is supported. Each controller is connected to both I/O modules in the disk enclosure.
Each I/O module has two ports that can transmit and receive data for bidirectional operation. Activating a port requires connecting a Fibre Channel cable to the port. The port function depends upon the loop.
Figure 4 I/O module detail
1. Double 7-segment display: enclosure ID
2. 4 Gb I/O ports
3. Port 1 (P1), Port 2 (P2) status LEDs
4. Manufacturing diagnostic port
5. I/O module status LEDs
I/O module status indicators
There are five status indicators on the I/O module. See Figure 4 (page 10). The status indicator states for an operational I/O module are shown in Table 1 (page 11). Table 2 (page 11) shows the status indicator states for a non-operational I/O module.
Table 1 Port status LEDs
Green (left) status LED:
Solid green—Active link
Flashing green—Locate, remotely asserted by application client
Amber (right) status LED:
Solid amber—Module fault, no synchronization
Flashing amber—Module fault
Table 2 I/O module status LEDs
Locate LED:
Flashing blue—Remotely asserted by application client
Module health indicator:
Flashing green—I/O module powering up
Solid green—Normal operation
Green off—Firmware malfunction
Fault indicator:
Flashing amber—Warning condition (not visible when solid amber is showing)
Solid amber—Replace FRU
Amber off—Normal operation
Fiber optic Fibre Channel cables
The Enterprise Virtual Array uses orange, 50-µm, multi-mode, fiber optic cables for connection to the SAN or, in a direct connect configuration, to the host. The fiber optic cable assembly consists of two 2-m fiber optic strands and small form-factor connectors on each end. See
Figure 5 (page 12).
To ensure optimum operation, the fiber optic cable components require protection from contamination and mechanical hazards. Failure to provide this protection can cause degraded operation. Observe the following precautions when using fiber optic cables.
To avoid breaking the fiber within the cable:
Do not kink the cable
Do not use a cable bend-radius of less than 30 mm (1.18 in)
To avoid deforming, or possibly breaking the fiber within the cable, do not place heavy objects
on the cable.
To avoid contaminating the optical connectors:
Do not touch the connectors
Never leave the connectors exposed to the air
Install a dust cover on each transceiver and fiber cable connector when they are disconnected
If an open connector is exposed to dust, or if there is any doubt about the cleanliness of the connector, clean the connector as described in “Handling fiber optic cables” (page 43).
Figure 5 Fiber Optic Fibre Channel cable
Copper Fibre Channel cables
The Enterprise Virtual Array uses copper Fibre Channel cables to interconnect disk shelves. The cables are available in 0.6-meter (1.97 ft.) and 2.0-meter (6.56 ft.) lengths. Copper cables provide performance comparable to fiber optic cables. Copper cable connectors differ from fiber optic small form-factor connectors (see Figure 6 (page 12)).
Figure 6 Copper Fibre Channel cable
Fibre Channel disk drives
The Fibre Channel disk drives are hot-pluggable and include the following features:
Dual-ported 4 Gbps Fibre Channel controller interface that allows up to 96 disk drives to be
supported per array controller enclosure
Compact, direct-connect design for maximum storage density and increased reliability and
signal integrity
Both online high-performance disk drives and FATA disk drives supported in a variety of
capacities and spindle speeds
Better vibration damping for improved performance
Up to 12 disk drives can be installed in a drive enclosure.
Disk drive status indicators
Two status indicators display drive operational status. Figure 7 (page 12) identifies the disk drive status indicators. Table 3 (page 13) describes them.
Figure 7 Disk status indicators
1. Bi-color (amber/blue)
2. Green
Table 3 Disk status indicator LED descriptions
Bi-color LED (top):
Slow flashing blue (0.5 Hz)—Used to locate drive
Solid amber—Drive fault
Green LED (bottom):
Flashing—Drive is spinning up or down and is not ready
Solid—Drive is ready to perform I/O operations
Flickering—Indicates drive activity
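Read together, the two drive LEDs behave like a small lookup table. The following Python sketch simply restates Table 3 as a function for clarity; it is not an HP tool or firmware, and the LED state labels are hypothetical names chosen for readability.

```python
# Illustrative only: restates Table 3 as a lookup; not HP firmware or an HP API.
# The LED state labels ("solid_amber", "flashing", ...) are hypothetical names.

def disk_drive_status(bicolor_led: str, green_led: str) -> str:
    """Interpret the two disk drive status LEDs described in Table 3."""
    if bicolor_led == "slow_flashing_blue":      # 0.5 Hz
        return "Locate: drive is being identified"
    if bicolor_led == "solid_amber":
        return "Drive fault"
    if green_led == "flashing":
        return "Drive is spinning up or down; not ready"
    if green_led == "flickering":
        return "Drive activity"
    if green_led == "solid":
        return "Drive is ready to perform I/O operations"
    return "Unknown state; check HP P6000 Command View"

# Example: a healthy, idle drive
print(disk_drive_status(bicolor_led="off", green_led="solid"))
```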
Disk drive blank
To maintain proper airflow within the disk enclosure, a disk drive or a disk drive blank must be installed in each drive bay.
Controller enclosures
This section describes the major features, purpose, and function of the HSV400 and HSV450 controllers. Each Enterprise Virtual Array has a pair of these controllers. Figure 8 (page 13) shows the HSV400 controller rear view and Figure 9 (page 14) shows the HSV450 controller rear view. The front of the HSV400 and HSV450 is shown in Figure 10 (page 14).
NOTE: Some controller enclosure modules have a cache battery located behind the OCP.
Figure 8 HSV400 controller rear view
1. Serial port
2. Unit ID
3. Controller health
4. Fault indicator
5. Power
6. DPI ports
7. Mirror ports
8. Fiber ports
9. Power supply 1
10. Power supply 2
Figure 9 HSV450 controller rear view
1. Serial port
2. Unit ID
3. Controller health
4. Fault indicator
5. Power
6. DPI ports
7. Mirror ports
8. Fiber ports
9. Power supply 1
10. Power supply 2
Figure 10 Controller front view
1. Battery 1
2. Battery 2
3. Blower 1
4. Blower 2
5. Operator Control Panel (OCP)
6. Status indicators
7. Unit ID
Operator control panel
The operator control panel (OCP) provides a direct interface to each controller. From the OCP you can display storage system status and configuration information, shut down the storage system, and manage the password.
The OCP includes a 40-character LCD alphanumeric display, six push-buttons, and five status indicators. See Figure 11 (page 15).
HP P6000 Command View is the tool you will typically use to display storage system status and configuration information or perform the tasks available from the OCP. However, if HP P6000 Command View is not available, the OCP can be used to perform these tasks.
Figure 11 Controller OCP
1. Status indicators (see Table 4 (page 15)) and UID button
2. 40-character alphanumeric display
3. Left, right, top, and bottom push-buttons
4. Esc
5. Enter
Status indicators
The status indicators display the operational status of the controller. The function of each indicator is described in Table 4 (page 15). During initial setup, the status indicators might not be fully operational.
The following sections define the alphanumeric display modes, including the possible displays, the valid status indicator displays, and the pushbutton functions.
Table 4 Controller status indicators
Fault: When the indicator is solid amber, there was a boot failure. When it flashes, the controller is inoperative. Check either HP P6000 Command View or the LCD Fault Management displays for a definition of the problem and recommended corrective action.
Controller: When the indicator is flashing green slowly, the controller is booting up. When the indicator turns to solid green, the boot was successful and the controller is operating normally.
Physical link to hosts established: When this indicator is green, there is at least one physical link between the storage system and hosts that is active and functioning normally. When this indicator is amber, there are no links between the storage system and hosts that are active and functioning normally.
Virtual disks presented to hosts: When this indicator is green, all virtual disks that are presented to hosts are healthy and functioning normally. When this indicator is amber, at least one virtual disk is not functioning normally. When this indicator is off, there are no virtual disks presented to hosts, which indicates a problem with the virtual disks on the array.
Battery: When this indicator is green, the battery is working properly. When this indicator is amber, there is a battery failure.
Unit ID: This indicator comes on in response to a Locate command issued by HP P6000 Command View. Press it to turn it on (solid blue); press it again to turn it off. This LED mimics the function of the UID on the back of the controller.
Each port on the rear of the controller has an associated status indicator located directly above it.
Table 5 (page 16) lists the port and its status description.
Table 5 Controller port status indicators
Fibre Channel host ports:
Green—Normal operation
Amber—No signal detected
Off—No SFP¹ detected or the Direct Connect OCP setting is incorrect
Fibre Channel device ports:
Green—Normal operation
Amber—No signal detected or the controller has failed the port
Off—No SFP¹ detected
Fibre Channel cache mirror ports:
Green—Normal operation
Amber—No signal detected or the controller has failed the port
Off—No SFP¹ detected
¹ On copper Fibre Channel cables, the SFP is integrated into the cable connector.
Navigation buttons
The operation of the navigation buttons is determined by the current display and location in the menu structure. Table 6 (page 16) defines the basic push button functions when navigating the menus and options.
To simplify presentation and to avoid confusion, the pushbutton reference names, regardless of labels, are left, right, top, and bottom.
Table 6 Navigation button functions
Bottom: Moves down through the available menus and options
Top: Moves up through the available menus and options
Right: Selects the displayed menu or option
Left: Returns to the previous menu
Esc: Used for "No" selections and to return to the default display
Enter: Used for "Yes" selections and to progress through menu items
Alphanumeric display
The alphanumeric display uses two LCD rows, each capable of displaying up to 20 alphanumeric characters. By default, the alphanumeric display alternates between displaying the Storage System Name and the World Wide Name. An active (flashing) display, an error condition message, or a user entry (pressing a push-button) overrides the default display. When none of these conditions exist, the default display returns after approximately 10 seconds.
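As a conceptual model of this behavior (not the actual controller firmware), the sketch below shows how a default display could alternate between the storage system name and the World Wide Name and yield to an override for roughly 10 seconds; the function name, the 5-second alternation period, and the sample name and WWN are assumptions.

```python
import time

# Conceptual sketch only; not HSV controller firmware. The ~10-second revert
# comes from the text above; the 5-second alternation period is an assumption.
DEFAULT_TIMEOUT_S = 10
ALTERNATE_PERIOD_S = 5

def ocp_display(now, last_override_time, system_name, wwn, override_text=None):
    """Return the text the OCP would show at time `now` (one 20-character row)."""
    if override_text and (now - last_override_time) < DEFAULT_TIMEOUT_S:
        return override_text[:20]          # error message or user entry wins
    # Default display alternates between the storage system name and the WWN
    if int(now // ALTERNATE_PERIOD_S) % 2 == 0:
        return system_name[:20]
    return wwn[:20]

# Example: no recent button press or error, so a default screen is shown
print(ocp_display(time.time(), last_override_time=0.0,
                  system_name="EVA8400-LAB-01", wwn="5000-1FE1-0027-0C80"))
```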
Power supplies
Two power supplies provide the necessary operating voltages to all controller enclosure components. If one power supply fails, the remaining supply is capable of operating the enclosure.
Figure 12 Power supply
1. Power supply
2. AC input connector
3. Latch
4. Status indicator (solid green on—normal operation; solid amber—failure or no power)
5. Handle
Blower module
Fan modules provide the cooling necessary to maintain the proper operating temperature within the controller enclosure. If one fan fails, the remaining fan is capable of cooling the enclosure.
Figure 13 Blower module pulled out
1. Blower 1
2. Blower 2
Table 7 Fan status indicators
Status indicator (green, on left):
Solid green—Normal operation
Blinking—Maintenance in progress
Off—Amber is on or blinking, or the enclosure is powered down
Fault indicator (amber, on right):
On—Fan failure. Green will be off. (Green and amber are not on simultaneously except for a few seconds after power-up.)
Battery module
Batteries provide backup power to maintain the contents of the controller cache when AC power is lost and the storage system has not been shut down properly. When fully charged, the batteries can sustain the cache contents for up to 96 hours. Three batteries are used on the EVA8400 and two batteries are used on the EVA6400. Figure 14 (page 18) illustrates the location of the cache batteries and the battery status indicators. See Table 8 (page 18) for additional information on the status indicators.
Figure 14 Battery module
1. Status indicator
2. Fault indicator
3. Battery 0
4. Battery 1
The table below describes the battery status indicators. When a battery is first installed, the fault indicator goes on (solid) for approximately 30 seconds while the system discovers the new battery. Then, the battery status indicators display the battery status as described in the table below.
Table 8 Battery status indicators
Status indicator: On; Fault indicator: Off
Normal operation. A maintenance charge process keeps the battery fully charged.
Status indicator: Flashing; Fault indicator: Off
Battery is undergoing a full charging process. This is the indication you typically see after installing a new battery.
Status indicator: Off; Fault indicator: On
Battery fault. The battery has failed and should be replaced.
Status indicator: Off; Fault indicator: Flashing
The battery has experienced an over temperature fault.
Status indicator: Flashing (fast); Fault indicator: Flashing (fast)
Battery code is being updated. When a new battery is installed, it may be necessary for the controllers to update the code on the battery to the correct version. Both indicators flash rapidly for approximately 30 seconds.
Status indicator: Flashing; Fault indicator: Flashing
Battery is undergoing a scheduled battery load test, during which the battery is discharged and then recharged to ensure it is working properly. During the discharge cycle, you will see this display. The load test occurs infrequently and takes several hours.
HSV controller cabling
All data cables and power cables attach to the rear of the controller. Adjacent to each data connector is a two-colored link status indicator. Table 5 (page 16) identifies the status conditions presented by these indicators.
NOTE: These indicators do not indicate whether there is communication on the link, only whether the link can transmit and receive data.
The data connections are the interfaces to the disk drive enclosures or loop switches (depending on your configuration), the other controller, and the fabric. Fiber optic cables link the controllers to the fabric and, if an expansion cabinet is part of the configuration, link the expansion cabinet drive enclosures to the loops in the main cabinet. Copper cables are used between the controllers (mirror port) and between the controllers and the drive enclosures or loop switches.
Storage system racks
All storage system components are mounted in a rack. Each configuration includes one enclosure holding both controllers (the controller pair) and FC cables connecting the controllers and the disk enclosures. Each controller pair and all the associated drive enclosures form a single storage system.
The rack provides the capability for mounting 483 mm (19 in) wide controller and drive enclosures.
NOTE: Racks and rack-mountable components are typically described using “U” measurements.
“U” measurements are used to designate panel or enclosure heights. The “U” measurement is a standard of 44.45 mm (1.75 in).
The racks provide the following:
Unique frame and rail design—Allows fast assembly, easy mounting, and outstanding structural
integrity.
Thermal integrity—Front-to-back natural convection cooling is greatly enhanced by the innovative
multi-angled design of the front door.
Security provisions—The front and rear doors are lockable, which prevents unauthorized entry.
Flexibility—Provides easy access to hardware components for operation monitoring.
Custom expandability—Several options allow for quick and easy expansion of the racks to
create a custom solution.
Rack configurations
Each system configuration contains several disk enclosures. See Figure 15 (page 19) for a typical EVA6400/8400 rack configuration. The standard rack is the 42U HP 10000 G2 Series rack. The EVA6400/8400 is also supported with 22U, 36U, 42U 5642, and 47U racks. The 42U 5642 is a field-installed option and the 47U rack must be assembled onsite because the cabinet height creates shipping difficulties.
For more information on HP rack offerings for the EVA6400/8400, see:
http://h18004.www1.hp.com/products/servers/proliantstorage/racks/index.html
Figure 15 Storage system hardware components – back view
Power distribution–Modular PDUs
NOTE: This section describes the most common power distribution system for EVA6400/8400s.
For information about other options, see the HP power distribution units website:
http://h18004.www1.hp.com/products/servers/proliantstorage/power-protection/pdu.html
AC power is distributed to the rack through a dual Power Distribution Unit (PDU) assembly mounted at the bottom rear of the rack. The characteristics of the fully-redundant rack power configuration are as follows:
Each PDU is connected to a separate circuit breaker-protected, 30-A AC site power source
(100–127 VAC or 220–240 VAC ±10%, 50 or 60-Hz, ±5%). The following figures illustrate the most common compatible 60-Hz and 50-Hz wall receptacles.
NEMA L5-30R receptacle, 3-wire, 30-A, 60-Hz
NEMA L6-30R receptacle, 3-wire, 30-A, 60-Hz
IEC 309 receptacle, 3-wire, 30-A, 50-Hz
The standard power configuration for any Enterprise Virtual Array rack is the fully redundant configuration. Implementing this configuration requires:
Two separate circuit breaker-protected, 30-A site power sources with a compatible wall receptacle.
One dual PDU assembly. Each PDU connects to a different wall receptacle.
Four to eight (depending on the rack) Power Distribution Modules (PDMs) per rack. PDMs are split evenly on both sides of the rack. Each set of PDMs connects to a different PDU:
Eight PDMs for 42U, 47U, and 42U 5642 racks
Six PDMs for 36U racks
Four PDMs for 22U racks
The drive enclosure power supplies on the left (PS 1) connect to the PDMs on the left with
a gray, 66 cm (26 in) power cord.
The drive enclosure power supplies on the right (PS 2) connect to the PDMs on the right
with a black, 66 cm (26 in) power cord.
Each controller has a left and right power supply. The left power supplies of each should
be connected to the left PDMs and the right power supplies should be connected to the right PDMs.
NOTE: Drive enclosures, when purchased separately, include one 50 cm black cable and one 50 cm gray cable.
The configuration provides complete power redundancy and eliminates all single points of failure for both the AC and DC power distribution.
CAUTION: Operating the array with a single PDU will result in the following conditions:
No redundancy
Louder controllers and disk enclosures due to increased fan speed
HP P6000 Command View will continuously display a warning condition, making issue
monitoring a labor-intensive task
Although the array is capable of doing so, HP strongly recommends that an array operating with a single PDU should not:
Be put into production
Remain in this state for more than 24 hours
PDUs
Each Enterprise Virtual Array rack has either a 50- or 60-Hz, dual PDU mounted at the bottom rear of the rack. The PDU placement is back-to-back, plugs facing toward the front (Figure 16 (page
21)), with circuit breaker switches facing the back (Figure 17 (page 22)).
The standard 50-Hz PDU cable has an IEC 309, 3-wire, 30-A, 50-Hz connector.
The standard 60-Hz PDU cable has a NEMA L6-30P, 3-wire, 30-A, 60-Hz connector.
If these connectors are not compatible with the site power distribution, you must replace the PDU power cord cable connector. One option is the NEMA L5-30R receptacle, 3-wire, 30-A, 60-Hz connector.
Each of the two PDU power cables has an AC power source specific connector. The circuit breaker-controlled PDU outputs are routed to a group of four AC receptacles. The voltages are then routed to PDMs, sometimes called AC power strips, mounted on the two vertical rails in the rear of the rack.
Figure 16 Dual PDU—front view
1. PDU B
2. PDU A
3. AC receptacles
4. Power cord
5. Power receptacle schematic
Figure 17 Dual PDU—rear view
1. PDU B
2. PDU A
3. Main circuit breaker
4. Circuit breakers
PDU A
PDU A connects to AC PDM A1–A4. A PDU A failure:
Disables the power distribution circuit
Removes power from the left side of the rack
Disables disk enclosure PS 1
Disables the left power supplies in the controllers
PDU B
PDU B connects to AC PDM B1–B4. A PDU B failure:
Disables the power distribution circuit
Removes power from the right side of the rack
Disables disk enclosure PS 2
Disables the right power supplies in the controllers
PDMs
Depending on the rack, there can be up to eight PDMs mounted in the rear of the rack:
The PDMs on the left vertical rail connect to PDU A
The PDMs on the right vertical rail connect to PDU B
Each PDM has seven AC receptacles. The PDMs distribute the AC power from the PDUs to the enclosures. Two power sources exist for each controller pair and disk enclosure. If a PDU fails, the system will remain operational.
CAUTION: The AC power distribution within a rack ensures a balanced load to each PDU and
reduces the possibility of an overload condition. Changing the cabling to or from a PDM could cause an overload condition. HP supports only the AC power distributions defined in this user guide.
Figure 18 Rack PDM
1. Power receptacles
2. AC power connector
Rack AC power distribution
The power distribution in an Enterprise Virtual Array rack is the same for all variants. The site AC input voltage is routed to the dual PDU assembly mounted in the rack lower rear. Each PDU distributes AC to a maximum of four PDMs mounted on the left and right vertical rails (see
Figure 19 (page 24)).
PDMs A1 through A4 connect to receptacles A through D on PDU A. Power cords connect
these PDMs to the left power supplies on the disk enclosures and to the left power supplies on the controllers.
PDMs B1 through B4 connect to receptacles A through D on PDU B. Power cords connect
these PDMs to the right power supplies on the disk enclosures and to the right power supplies on the controllers.
NOTE: The locations of the PDUs and the PDMs are the same in all racks.
Figure 19 Rack AC power distribution
1. PDM 1
2. PDM 2
3. PDM 3
4. PDM 4
5. PDU 1
6. PDM 5
7. PDM 6
8. PDM 7
9. PDM 8
10. PDU 2
Rack System/E power distribution components
AC power is distributed to the Rack System/E rack through Power Distribution Units (PDU) mounted on the two vertical rails in the rear of the rack. Up to four PDUs can be mounted in the rack—two mounted on the right side of the cabinet and two mounted on the left side.
Each of the PDU power cables has an AC power source specific connector. The circuit breaker-controlled PDU outputs are routed to a group of ten AC receptacles. The storage system components plug directly into the PDUs.
Rack AC power distribution
The power distribution configuration in a Rack System/E rack depends on the number of storage systems installed in the rack. If one storage system is installed, only two PDUs are required. If multiple storage systems are installed, four PDUs are required.
The site AC input voltage is routed to each PDU mounted in the rack. Each PDU distributes AC through ten receptacles directly to the storage system components.
PDUs 1 and 3 (optional) are mounted on the left side of the cabinet. Power cords connect
these PDUs to the number 1 disk enclosure power supplies and to the controllers.
PDUs 2 and 4 (optional) are mounted on the right side of the cabinet. Power cords connect
these PDUs to the number 2 disk enclosure power supplies and to the controllers.
For additional information on power distribution support, see the following website:
http://h18004.www1.hp.com/products/servers/proliantstorage/power-protection/pdu.html
Moving and stabilizing a rack
WARNING! The physical size and weight of the rack requires a minimum of two people to move.
If one person tries to move the rack, injury may occur. To ensure stability of the rack, always push on the lower half of the rack. Be especially careful
when moving the rack over any bump (e.g., door sills, ramp edges, carpet edges, or elevator openings). When the rack is moved over a bump, there is a potential for it to tip over.
Moving the rack requires a clear, uncarpeted pathway that is at least 80 cm (31.5 in) wide for the 60.3 cm (23.7 in) wide, 42U rack. A vertical clearance of 203.2 cm (80 in) should ensure sufficient clearance for the 200 cm (78.7 in) high, 42U rack.
CAUTION: Ensure that no vertical or horizontal restrictions exist that would prevent rack movement
without damaging the rack. Make sure that all four leveler feet are in the fully raised position. This process will ensure that the
casters support the rack weight and the feet do not impede movement.
Each rack requires an area 600 mm (23.62 in) wide and 1000 mm (39.37 in) deep (see
Figure 20 (page 25)).
Figure 20 Single rack configuration floor space requirements
1. Front door
2. Rear door
3. Rack width 600 mm
4. Service area width 813 mm
5. Rear service area depth 300 mm
6. Rack depth 1000 mm
7. Front service area depth 406 mm
8. Total rack depth 1706 mm
If the feet are not fully raised, complete the following procedure:
1. Raise one foot by turning the leveler foot hex nut counterclockwise until the weight of the rack is fully on the caster (see Figure 21 (page 26)).
2. Repeat Step 1 for the other feet.
Figure 21 Raising a leveler foot
1. Hex nut
2. Leveler foot
3. Carefully move the rack to the installation area and position it to provide the necessary service areas (see Figure 20 (page 25)).
To stabilize the rack when it is in the final installation location:
1. Use a wrench to lower the foot by turning the leveler foot hex nut clockwise until the caster does not touch the floor. Repeat for the other feet.
2. After lowering the feet, check the rack to ensure it is stable and level.
3. Adjust the feet as necessary to ensure the rack is stable and level.
2 Enterprise Virtual Array startup
This chapter describes the procedures to install and configure the Enterprise Virtual Array. When these procedures are complete, you can begin using your storage system.
NOTE: Installation of the Enterprise Virtual Array should be done only by an HP authorized
service representative. The information in this chapter provides an overview of the steps involved in the installation and configuration of the storage system.
EVA8400 storage system connections
Figure 22 (page 27) shows how the storage system is connected to other components of the storage
solution.
The HSV450 controllers connect via four host ports (FP1, FP2, FP3, and FP4) to the Fibre
Channel fabrics. The hosts that will access the storage system are connected to the same fabrics.
The HP P6000 Command View management server also connects to the fabric.
The controllers connect through two loop pairs to the drive enclosures. Each loop pair consists
of two independent loops, each capable of managing all the disks should one loop fail.
Figure 22 EVA8400 configuration
1 Network interconnection
2 Management server
3 Non-host
4 Host A
5 Host B
6 Fabric 1
7 Fabric 2
8 Controller A
9 Controller B
10 Cache mirror ports
11 Drive enclosure 1
12 Drive enclosure 2
13 Drive enclosure 3
EVA6400 storage system connections
Figure 23 (page 28) shows a typical EVA6400 SAN topology:
The HSV400 controllers connect via four host ports (FP1, FP2, FP3, and FP4) to the Fibre
Channel fabrics. The hosts that will access the storage system are connected to the same fabrics.
The HP P6000 Command View management server also connects to both fabrics.
The controllers connect through one loop pair to the drive enclosures. The loop pair consists
of two independent loops, each capable of managing all the disks should one loop fail.
Figure 23 EVA6400 configuration
1 Network interconnection
2 Management server
3 Non-host
4 Host A
5 Host B
6 Fabric 1
7 Fabric 2
8 Controller A
9 Controller B
10 Cache mirror ports
11 Drive enclosure 1
12 Drive enclosure 2
Direct connect
NOTE: Direct connect is currently supported on Microsoft Windows only.
Direct connect provides a lower cost solution for smaller configurations. When using direct connect, the storage system controllers are connected directly to the host(s), not to SAN Fibre Channel
switches. Make sure the following requirements are met when configuring your environment for direct connect:
A management server running HP P6000 Command View must be connected to one port on
each EVA controller. The management host must use dual HBAs for redundancy.
To provide redundancy, it is recommended that dual HBAs be used for each additional host
connected to the storage system. Using this configuration, up to four hosts (including the management host) can be connected to an EVA6400/8400.
The Host Port Configuration must be set to Direct Connect using the OCP.
HP P6000 Continuous Access cannot be used with direct connect configurations.
The HSV controller firmware cannot differentiate between an empty host port and a failed
host port in a direct connect configuration. As a result, the Connection state dialog box on the Controller Properties window displays Connection failed for an empty host port. To fix this problem, insert an optical loop-back connector into the empty host port; the Connection state will display Connected. For more information about optical loop-back connectors, contact your HP-authorized service provider.
iSCSI connection configurations
The EVA6400/8400 supports iSCSI attach configurations using the HP MPX100. Both fabric connect and direct connect are supported for iSCSI configurations. For complete information on iSCSI configurations, go to the following website:
http://h18006.www1.hp.com/products/storageworks/evaiscsiconnect/index.html
NOTE: An iSCSI connection configuration supports mixed direct connect and fabric connect.
Fabric connect iSCSI
Fabric connect provides an iSCSI solution for EVA Fibre Channel configurations in which all EVA ports remain in use on FC, or in which the EVA is also used for HP P6000 Continuous Access.
Make sure the following requirements are met when configuring your MPX100 environment for fabric connect:
A maximum of two MPX100s per storage system are supported
Each storage system port can connect to a maximum of two MPX100 FC ports.
Each MPX100 FC port can connect to a maximum of one storage system port.
In a single MPX100 configuration, if both MPX100 FC ports are used, each port must be
connected to one storage system controller.
In a dual MPX100 configuration, at least one FC port from each MPX100 must be connected
to one storage system controller.
The Host Port Configuration must be set to Fabric Connect using the OCP.
HP P6000 Continuous Access is supported on the same storage system connected in MPX100
fabric connect configurations.
Direct connect iSCSI
Direct connect provides a lower cost solution for configurations that want to dedicate controller ports to iSCSI I/O. When using direct connect, the storage system controllers are connected directly to the MPX100(s), not to SAN Fibre Channel switches.
Make sure the following requirements are met when configuring your MPX100 environment for direct connect:
A maximum of two MPX100s per storage system are supported.
In a single MPX100 configuration, if both MPX100 FC ports are used, each port must be
connected to one storage system controller.
In a dual MPX100 configuration, at least one FC port from each MPX100 must be connected
to one storage system controller.
The Host Port Configuration must be set to Direct Connect using the OCP.
HP P6000 Continuous Access cannot be used with direct connect configurations.
EVAs cannot be directly connected to each other to create an HP P6000 Continuous Access
configuration. However, hosts can be directly connected to the EVA in an HP P6000 Continuous Access configuration. At least one port from each array in an HP P6000 Continuous Access configuration must be connected to a fabric for remote array connectivity.
Procedures for getting started
Step | Responsibility
1. Gather information and identify all related storage documentation. | Customer
2. Contact an authorized service representative for hardware configuration information. | Customer
3. Enter the World Wide Name (WWN) into the OCP. | HP Service Engineer
4. Configure HP P6000 Command View. | HP Service Engineer
5. Prepare the hosts. | Customer
6. Configure the system through HP P6000 Command View. | HP Service Engineer
7. Make virtual disks available to their hosts. See the storage system software documentation for each host's operating system. | HP Service Engineer
Gathering information
The following items should be available when installing and configuring an Enterprise Virtual Array. They provide information necessary to set up the storage system successfully.
HP 6400/8400 Enterprise Virtual Array World Wide Name label (shipped with the storage system)
HP Enterprise Virtual Array Release Notes
Locate these items and keep them handy. You will need them for the procedures in this manual.
Host information
Make a list of information for each host computer that will be accessing the storage system. You will need the following information for each host:
The LAN name of the host
A list of World Wide Names of the FC adapters, also called host bus adapters, through which
the host will connect to the fabric that provides access to the storage system, or to the storage system directly if using direct connect.
Operating system type
Available LUN numbers
Setting up a controller pair using the OCP
NOTE: This procedure should be performed by an HP authorized service representative.
Two pieces of data must be entered during initial setup using the controller OCP:
World Wide Name (WWN) — Required to complete setup. This procedure should be
performed by an HP authorized service representative.
Storage system password — Optional. A password provides security allowing only specific
instances of HP P6000 Command View to access the storage system.
The OCP on either controller can be used to input the WWN and password data. For more information about the OCP, see “Operator Control Panel” (page 14).
Table 9 (page 31) lists the push-button functions when entering the WWN, WWN checksum, and
password data.
Table 9 Push button functions
Button | Function
Up arrow | Selects a character by scrolling up through the character list one character at a time.
Right arrow | Moves forward one character. If you accept an incorrect character, you can move through all 16 characters, one character at a time, until you display the incorrect character. You can then change the character.
Down arrow | Selects a character by scrolling down through the character list one character at a time.
Left arrow | Moves backward one character.
ESC | Returns to the default display.
ENTER | Accepts all the characters entered.
Entering the WWN
Fibre Channel protocol requires that each controller pair have a unique WWN. This 16-character alphanumeric name identifies the controller pair on the storage system. Two WWN labels attached to the rack identify the storage system WWN and checksum. See Figure 24 (page 32).
NOTE:
The WWN is unique to a controller pair and cannot be used for any other controller pair or
device anywhere on the network.
This is the only WWN applicable to any controller installed in a specific physical location,
even a replacement controller.
Once a WWN is assigned to a controller, you cannot change the WWN while the controller
is part of the same storage system.
Figure 24 Location of the World Wide Name labels
1. World Wide Name labels
Complete the following procedure to assign the WWN to each pair of controllers.
1. Turn the power switches on both controllers off.
2. Apply power to the rack.
3. Turn the power switch on both controllers on.
NOTE: Notifications of the startup test steps that have been executed are displayed while
the controller is booting. It may take up to two minutes for the steps to display. The default WWN entry display has a 0 in each of the 16 positions.
4. Press the up or down arrow button until the first character of the WWN is displayed. Press the right arrow button to accept this character and select the next.
5. Repeat Step 4 to enter the remaining characters.
6. Press Enter to accept the WWN and select the checksum entry mode.
Entering the WWN checksum
The second part of the WWN entry procedure is to enter the two-character checksum, as follows.
1. Verify that the initial WWN checksum displays 0 in both positions.
2. Press the up or down arrow button until the first checksum character is displayed. Press the right arrow button to accept this character and select the second character.
3. Press the up or down arrow button until the second character is displayed. Press Enter to accept the checksum and exit.
4. Verify that the default display is automatically selected. This indicates that the checksum is valid.
NOTE: If you enter an incorrect WWN or checksum, the system will reject the data and you must
repeat the procedure.
Entering the storage system password
The storage system password feature enables you to restrict management access to the storage system. The password must meet the following requirements:
8 to 16 characters in length
Can include upper or lower case letters
Can include numbers 0 - 9
Can include the following characters: ! “ # $ % & ‘ ( ) * + , - . / : ; < = > ? @ [ ] ^ _ ` { | }
Cannot include the following characters: space ~ \
Complete the following procedure to enter the password:
1. Select a unique password of 8 to 16 characters.
2. With the default menu displayed, press the down arrow button three times to display System Password.
3. Press the right arrow button to display Change Password?
4. Press Enter for yes. The default password, AAAAAAAA~~~~~~~~, is displayed.
5. Press the up or down arrow button to select the desired character.
6. Press the right arrow button to accept this character and select the next character.
7. Repeat the process to enter the remaining password characters.
8. Press Enter to enter the password and return to the default display.
Installing HP P6000 Command View
HP P6000 Command View is installed on a management server. Installation may be skipped if the latest version of HP P6000 Command View is running. Verify the latest version at the HP website:
http://h18006.www1.hp.com/products/storage/software/cmdvieweva/index.html
See the HP P6000 Command View Installation Guide for more information.
Installing optional EVA software licenses
If you purchased optional EVA software, it will be necessary to install the license. Optional software available for the Enterprise Virtual Array includes HP P6000 Business Copy and HP P6000 Continuous Access. Installation instructions are included with the license.
3 EVA6400/8400 operation
Best practices
For useful information on managing and configuring your storage system, see the HP 4400 and 6400/8400 Enterprise Virtual Array configuration best practices white paper available from
http://h18006.www1.hp.com/storage/arraywhitepapers.html
Operating tips and information
Reserving adequate free space
To ensure efficient storage system operation, a certain amount of unallocated capacity, or free space, should be reserved in each disk group. The recommended amount of free space is influenced by your system configuration. For guidance on how much free space to reserve, see the HP 4400 and 6400/8400 Enterprise Virtual Array configuration best practices white paper. See “Best
practices” (page 34).
Using FATA disk drives
FATA drives are designed for lower duty cycle applications such as near online data replication for backup. These drives should not be used as a replacement for EVA's high performance, standard duty cycle, Fibre Channel drives. Doing so could shorten the life of the drive.
For useful information on managing and configuring your storage system, see the HP 4400 and 6400/8400 Enterprise Virtual Array configuration best practices white paper. See “Best practices”
(page 34).
Using solid state disk drives
The following requirements apply to solid state disk (SSD) drives:
Supported in the EVA4400 and EVA6400/8400 only, running a minimum controller software
version of 09500000 for the 72 GB drive and 09534000 for the 200 GB and 400 GB drives
SSD drives must be in a separate disk group
The SSD disk group supports a minimum of 6 and a maximum of 8 drives per array
SSD drives can only be configured with Vraid5 or Vraid1 (Vraid1 requires controller software
version 09534000 or later)
Supported with HP P6000 Business Copy
Not supported with HP P6000 Continuous Access
Dynamic Capacity Management extend and shrink features are not supported
Use of these devices in unsupported configurations can lead to unpredictable results, including unstable array operation or data loss.
QLogic HBA speed setting
In a Linux direct connect environment with QLogic 4 Gb/s HBAs, auto speed negotiation is not supported. The QLogic HBA speed setting must be set to 4 Gb/s.
EVA6400/8400 host port negotiates to incorrect speed
The EVA6400/8400 might not correctly negotiate to 4 Gb/s when connected to an HP M-Series 4400, 4700, or 6140 switch with ports set to autonegotiate. The workaround is to set the switch port to 4 Gb/s.
Creating 16 TB or greater virtual disks in Windows 2008
When creating a virtual disk that is 16 TB or greater in Windows 2008, ensure that the Allocation unit size field is set to something other than Default in the Windows New Simple Volume wizard. The recommended setting is 16K. If this field is set to Default, you will receive the following error message:
The format operation did not complete because the cluster count is higher than expected.
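For example, the allocation unit size can be set explicitly when formatting the new volume from an elevated command prompt instead of the wizard. This is a minimal sketch only; the drive letter F: and the volume label are hypothetical placeholders for your own values:

    format F: /FS:NTFS /A:16K /V:EVA_VDISK /Q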
Importing Windows dynamic disk volumes
If you create a snapshot, snapclone, or mirrorclone with a Windows 2003 RAID-spanned dynamic volume on the source virtual disk, and then try to import the copy to a Windows 2003 x64 (64-bit) system, it will import with Dynamic Foreign status. The following message displays in the DiskPart utility:
The disk management services could not complete the operation.
This error occurs because the 64-bit version of DiskPart fails to import dynamic RAID sets on a new server.
To avoid this issue, use the 32-bit version of DiskPart instead of the 64-bit version. Copy DiskPart from a 32-bit x86 Windows system, located in C:\WINDOWS\system32. Place the DiskPart utility in a temporary folder on the 64-bit x64 Windows system.
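For example, the copied 32-bit utility can be run from the temporary folder to import the foreign disk group. This is a minimal sketch only; the folder C:\Temp and the disk number are hypothetical, so substitute the disk that LIST DISK reports as Dynamic Foreign:

    C:\Temp\diskpart.exe
    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> import
    DISKPART> exit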
Losing a path to a dynamic disk
If you are using Windows 2003 with dynamic disks and a path to the EVA virtual disk is temporarily lost, the Logical Disk Manager (LDM) will erroneously show a failed dynamic volume. For more information, see the following issue on the Microsoft knowledge base website:
http://support.microsoft.com/kb/816307
To resolve the issue, reboot the Windows 2003 server to restore the dynamic volume.
Microsoft Windows 2003 MSCS cluster installation
The MSCS cluster installation wizard on Windows 2003 can fail to find the shared quorum device and disk resources might not be auto-created by the cluster setup wizard. This is a known Windows Cluster Setup issue that has existed since Windows 2003 was released.
There are two possible workarounds for this problem:
Follow the workaround recommendation described in the Microsoft support article entitled
Shared disks are missing or are marked as "Failed" when you create a server cluster in Windows Server 2003 (ID 886807), available for download on the Microsoft website:
http://support.microsoft.com/default.aspx?scid=KB;EN-US;886807
Use the MPIO DSM CLI to set the load balancing policy for each LUN to NLB.
Microsoft is currently working on a resolution to address this issue.
Maximum LUN size
Table 10 (page 35) lists the maximum LUN size supported with each supported operating system.
Table 10 Maximum LUN size
Operating system | Maximum LUN size
HP OpenVMS 7.3-2, 8.2, and 8.3 with Alpha servers | 1 TB
HP OpenVMS 8.2-1, 8.3, and 8.3-1H1 with Integrity servers | 1 TB
HP-UX 11.11 | 2 TB
HP-UX 11.23 | 2 TB
HP-UX 11.31 | 16 ZB
IBM AIX 5.2 | 1 TB (AIX 5.2ML06 or earlier); 2 ZB (AIX 5.2ML07 or later)
IBM AIX 5.3 | 1 TB (AIX 5.3ML02 or earlier); 2 ZB (AIX 5.3ML03 or later)
IBM AIX 6.1 | 2 ZB
Mac OS X 10.x | Less than 32 TB
Mac OS X 11.x | Less than 32 TB
Microsoft Windows Server 2003 | 256 TB
Microsoft Windows Server 2008 | 256 TB
Red Hat Linux 3, 4, and 5 | Maximum supported block device is 16 TB as of RH5.1
SUSE Linux Enterprise Server 8, 9, and 10 | Maximum block device size is 16 TB for 32-bit systems and 8 EiB for 64-bit systems
Sun Solaris 8, 9, and 10 | 2 TB
VMware ESX 3.0.x and 3.5 | 2 TB
Citrix Xen | Maximum supported block device is 16 TB
Managing unused ports
When you have unused ports on an EVA, perform the following steps:
1. Place a loopback plug on all unused ports.
2. Change the mode on unused ports from fabric to direct connect.
Changing the host port connectivity
To change the host port connectivity:
1. Disconnect any connected cable.
NOTE: Failing to disconnect the cable prior to making the change will require a controller
restart to clear the condition.
2. Use the OCP and navigate to the host port to be changed.
3. Select fabric for an FC switch connection or direct for direct attachment to an HBA.
4. Reconnect cables.
Failback preference setting for HSV controllers
Table 11 (page 37) describes the failback preference behavior for the controllers.
Table 11 Failback preference behavior

Setting: No preference
At initial presentation: The units are alternately brought online to Controller A or to Controller B.
On dual boot or controller resynch: If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are alternately brought online to Controller A or to Controller B.
On controller failover: All LUNs are brought online to the surviving controller.
On controller failback: All LUNs remain on the surviving controller. There is no failback except if a host moves the LUN using SCSI commands.

Setting: Path A - Failover Only
At initial presentation: The units are brought online to Controller A.
On dual boot or controller resynch: If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are brought online to Controller A.
On controller failover: All LUNs are brought online to the surviving controller.
On controller failback: All LUNs remain on the surviving controller. There is no failback except if a host moves the LUN using SCSI commands.

Setting: Path B - Failover Only
At initial presentation: The units are brought online to Controller B.
On dual boot or controller resynch: If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are brought online to Controller B.
On controller failover: All LUNs are brought online to the surviving controller.
On controller failback: All LUNs remain on the surviving controller. There is no failback except if a host moves the LUN using SCSI commands.

Setting: Path A - Failover/Failback
At initial presentation: The units are brought online to Controller A.
On dual boot or controller resynch: If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are brought online to Controller A.
On controller failover: All LUNs are brought online to the surviving controller.
On controller failback: All LUNs remain on the surviving controller. After controller restoration, the units that are online to Controller B and set to Path A are brought online to Controller A. This is a one time occurrence. If the host then moves the LUN using SCSI commands, the LUN will remain where moved.

Setting: Path B - Failover/Failback
At initial presentation: The units are brought online to Controller B.
On dual boot or controller resynch: If cache data for a LUN exists on a particular controller, the unit will be brought online there. Otherwise, the units are brought online to Controller B.
On controller failover: All LUNs are brought online to the surviving controller.
On controller failback: All LUNs remain on the surviving controller. After controller restoration, the units that are online to Controller A and set to Path B are brought online to Controller B. This is a one time occurrence. If the host then moves the LUN using SCSI commands, the LUN will remain where moved.
Table 12 (page 38) describes the failback default behavior and supported settings when
ALUA-compliant multipath software is running with each operating system. Recommended settings may vary depending on your configuration or environment.
Table 12 Failback settings by operating system
Operating system | Default behavior | Supported settings
HP-UX | Host follows the unit (1) | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback
IBM AIX | Host follows the unit (1) | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback
Linux | Host follows the unit (1) | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback
OpenVMS | Host follows the unit | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback (recommended)
Sun Solaris | Host follows the unit (1) | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback
Tru64 UNIX | Host follows the unit | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback (recommended)
VMware | Host follows the unit (1) | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback
Windows | Failback performed on the host | No Preference; Path A/B – Failover Only; Path A/B – Failover/Failback
(1) If preference has been configured to ensure a more balanced controller configuration, the Path A/B – Failover/Failback setting is required to maintain the configuration after a single controller reboot.
Changing virtual disk failover/failback setting
Changing the failover/failback setting of a virtual disk may impact which controller presents the disk. Table 13 (page 39) identifies the presentation behavior that results when the failover/failback setting for a virtual disk is changed.
NOTE: If the new setting causes the presentation of the virtual disk to move to a new controller,
any snapshots or snapclones associated with the virtual disk will also be moved.
Table 13 Impact on virtual disk presentation when changing failover/failback setting
New setting | Impact on virtual disk presentation
No Preference | None. The disk maintains its original presentation.
Path A Failover | If the disk is currently presented on controller B, it is moved to controller A. If the disk is on controller A, it remains there.
Path B Failover | If the disk is currently presented on controller A, it is moved to controller B. If the disk is on controller B, it remains there.
Path A Failover/Failback | If the disk is currently presented on controller B, it is moved to controller A. If the disk is on controller A, it remains there.
Path B Failover/Failback | If the disk is currently presented on controller A, it is moved to controller B. If the disk is on controller B, it remains there.
Implicit LUN transition
Implicit LUN transition automatically transfers management of a virtual disk to the array controller that receives the most read requests for that virtual disk. This improves performance by reducing the overhead incurred when servicing read I/Os on the non-managing controller. Implicit LUN transition is enabled in VCS 4.x and all versions of XCS.
When creating a virtual disk, one controller is selected to manage the virtual disk. Only this managing controller can issue I/Os to a virtual disk in response to a host read or write request. If a read I/O request arrives on the non-managing controller, the read request must be transferred to the managing controller for servicing. The managing controller issues the I/O request, caches the read data, and mirrors that data to the cache on the non-managing controller, which then transfers the read data to the host. Because this type of transaction, called a proxy read, requires additional overhead, it provides less than optimal performance. (There is little impact on a write request because all writes are mirrored in both controllers’ caches for fault protection.)
With implicit LUN transition, when the array detects that a majority of read requests for a virtual disk are proxy reads, the array transitions management of the virtual disk to the non-managing controller. This improves performance because the controller receiving most of the read requests becomes the managing controller, reducing proxy read overhead for subsequent I/Os.
Implicit LUN transition is disabled for all members of an HP P6000 Continuous Access DR group. Because HP P6000 Continuous Access requires that all members of a DR group be managed by the same controller, it would be necessary to move all members of the DR group if excessive proxy reads were detected on any virtual disk in the group. This would impact performance and create a proxy read situation for the other virtual disks in the DR group. Not implementing implicit LUN transition on a DR group may cause a virtual disk in the DR group to have excessive proxy reads.
Storage system shutdown and startup
The storage system is shut down using HP P6000 Command View. The shutdown process performs the following functions in the indicated order:
1. Flushes cache
2. Removes power from the controllers
3. Disables cache battery power
4. Removes power from the drive enclosures
5. Disconnects the system from HP P6000 Command View
NOTE: The storage system may take a long time to complete the necessary cache flush during
controller shutdown when snapshots are being used. The delay may be particularly long if multiple child snapshots are used, or if there has been a large amount of write activity to the snapshot source virtual disk.
Shutting down the storage system
To shut the storage system down, perform the following steps:
1. Start HP P6000 Command View.
2. Select the appropriate storage system in the Navigation pane. The Initialized Storage System Properties window for the selected storage system opens.
3. Click Shut down. The Shutdown Options window opens.
4. Under System Shutdown click Power Down. If you want to delay the initiation of the shutdown, enter the number of minutes in the Shutdown delay field.
The controllers complete an orderly shutdown and then power off. The disk enclosures then power off. Wait for the shutdown to complete.
Starting the storage system
To start a storage system, perform the following steps:
1. Verify that each fabric Fibre Channel switch to which the HSV controllers are connected is powered up and fully booted. The power indicator on each switch should be on.
If you must power up the SAN switches, wait for them to complete their power-on boot process before proceeding. This may take several minutes.
2. Power on the circuit breakers on both EVA rack PDUs, which powers on the controller enclosures and disk enclosures. Verify that all enclosures are operating properly. The status indicator and the power indicator should be on (green).
3. Wait three minutes and then verify that all disk drives are ready. The drive ready indicator and the drive online indicator should be on (green).
4. Verify that the Operator Control Panel (OCP) display on each controller displays the storage system name and the EVA WWN.
5. Start HP P6000 Command View and verify connection to the storage system. If the storage system is not visible, click HSV Storage Network in the navigation pane, and then click Discover in the Content pane to discover the array.
NOTE: If the storage system is still not visible, reboot the management server to re-establish
the communication link.
6. Check the storage system status using HP P6000 Command View to ensure everything is operating properly. If any status indicator is not normal, check the log files or contact your HP-authorized service provider for assistance.
Saving storage system configuration data
As part of an overall data protection strategy, storage system configuration data should be saved during initial installation, and whenever major configuration changes are made to the storage
system. This includes adding or removing disk drives, creating or deleting disk groups, and adding or deleting virtual disks. The saved configuration data can save substantial time should it ever become necessary to re-initialize the storage system. The configuration data is saved to a series of files stored in a location other than on the storage system.
This procedure can be performed from the management server where HP P6000 Command View is installed, or any host that can run HP Storage System Scripting Utility (SSSU) to communicate with HP P6000 Command View.
NOTE: For more information about using HP SSSU, see the HP Storage System Scripting Utility
Reference. See “Documents” (page 76).
1. Double-click the HP SSSU desktop icon to run the application. When prompted, enter Manager (management server name or IP address), User name, and Password.
2. Enter LS SYSTEM to display the EVA storage systems managed by the management server.
3. Enter SELECT SYSTEM system name, where system name is the name of the storage system.
The storage system name is case sensitive. If there are spaces between the letters in the name, quotes must enclose the name: for example, SELECT SYSTEM Large EVA.
4. Enter CAPTURE CONFIGURATION, specifying the full path and filename of the output files for the configuration data.
The configuration data is stored in a series of from one to five files, which are SSSU scripts. The file names begin with the name you select, with the restore step appended. For example, if you specify a file name of LargeEVA.txt, the resulting configuration files would be LargeEVA_Step1A.txt, LargeEVA_Step1B.txt, and so on.
The contents of the configuration files can be viewed with a text editor.
NOTE: If the storage system contains disk drives of different capacities, the HP SSSU procedures
used do not guarantee that disk drives of the same capacity will be exclusively added to the same disk group. If you need to restore an array configuration that contains disks of different sizes and types, you must manually recreate these disk groups. The controller software and the CAPTURE CONFIGURATION command are not designed to automatically restore this type of configuration. For more information, see the HP Storage System Scripting Utility Reference.
Example 1 Saving configuration data using HP SSSU on a Windows host
To save the storage system configuration:
1. Double-click the HP SSSU desktop icon to run the application. When prompted, enter Manager
(management server name or IP address), User name, and Password.
2. Enter LS SYSTEM to display the EVA storage systems managed by the management server.
3. Enter SELECT SYSTEM system name, where system name is the name of the storage
system.
4. Enter CAPTURE CONFIGURATION pathname\filename, where pathname identifies the
location where the configuration files will be saved, and filename is the name used as the prefix for the configurations files: for example, CAPTURE CONFIGURATION
c:\EVAConfig\LargeEVA
5. Enter EXIT to close the command window.
Example 2 Restoring configuration data using HP SSSU on a Windows host
To restore the storage system configuration:
1. Double-click the HP SSSU desktop icon to run the application.
2. Enter FILE pathname\filename, where pathname identifies the location where the configuration files were saved and filename is the name of the first configuration file: for example, FILE c:\EVAConfig\LargeEVA_Step1A.txt
3. Repeat the preceding step for each configuration file.
Adding disk drives to the storage system
As your storage requirements grow, you may be adding disk drives to your storage system. Adding new disk drives is the easiest way to increase the storage capacity of the storage system. Disk drives can be added online without impacting storage system operation.
Consider the following best practices to improve availability when adding disks to an array:
Set the add disk option to manual.
Add disks one at a time, waiting a minimum of 60 seconds between disks.
Distribute disks vertically and as evenly as possible to all disk enclosures.
Unless otherwise indicated, use the SET DISK_GROUP command in the HP Storage System Scripting Utility to add new disks to existing disk groups, as shown in the example after this list.
Add disks in groups of eight.
For growing existing applications, if the operating system supports virtual disk growth, increase
virtual disk size. Otherwise, use a software volume manager to add new virtual disks to applications.
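The following HP SSSU fragment is a minimal sketch of scripting this step. The management server name, credentials, storage system name, disk group path, and the ADD= parameter form are assumptions; confirm the exact SET DISK_GROUP syntax for your controller software version in the HP Storage System Scripting Utility Reference.

    SELECT MANAGER mgmtserver USERNAME=administrator PASSWORD=password
    SELECT SYSTEM "Large EVA"
    SET DISK_GROUP "\Disk Groups\Default Disk Group" ADD=1

Run the command once per disk, waiting at least 60 seconds between disks, as recommended above.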
See the disk drive replacement instructions for the steps to add a disk drive. See “Replacement
instructions” (page 75) for a link to this document.
Creating disk groups
The new disks you add will typically be used to create new disk groups. Although you cannot select which disks will be part of a disk group, you can control this by building the disk groups sequentially.
Add the disk drives required for the first disk group, and then create a disk group using these disk drives. Now add the disk drives for the second disk group, and then create that disk group. This process gives you control over which disk drives are included in each disk group.
NOTE: Standard and FATA disk drives must be in separate disk groups. Disk drives of different
capacities and spindle speeds can be included in the same disk group, but you may want to consider separating them into separate disk groups.
Handling fiber optic cables
This section provides protection and cleaning methods for fiber optic connectors. Contamination of the fiber optic connectors on either a transceiver or a cable connector can impede
the transmission of data. Therefore, protecting the connector tips against contamination or damage is imperative. The tips can be contaminated by touching them, by dust, or by debris. They can be damaged when dropped. To protect the connectors against contamination or damage, use the dust covers or dust caps provided by the manufacturer. These covers are removed during installation, and are installed whenever the transceivers or cables are disconnected. Cleaning the connectors should remove contamination.
The transceiver dust caps protect the transceivers from contamination. Do not discard the dust
covers.
CAUTION: To avoid damage to the connectors, always install the dust covers or dust caps
whenever a transceiver or a fiber cable is disconnected. Remove the dust covers or dust caps from transceivers or fiber cable connectors only when they are connected. Do not discard the dust covers.
To minimize the risk of contamination or damage, do the following:
Dust covers — Remove and set aside the dust covers and dust caps when installing an I/O
module, a transceiver or a cable. Install the dust covers when disconnecting a transceiver or cable.
When to clean — If a connector may be contaminated, or if a connector has not been protected
by a dust cover for an extended period of time, clean it.
How to clean:
1. Wipe the connector with a lint-free tissue soaked with 100% isopropyl alcohol.
2. Wipe the connector with a dry, lint-free tissue.
3. Dry the connector with moisture-free compressed air.
One of the many sources for cleaning equipment specifically designed for fiber optic connectors is:
Alcoa Fujikura Ltd. 1-888-385-4587 (North America) 011-1-770-956-7200 (International)
Using the OCP
Displaying the OCP menu tree
The Storage System Menu Tree lets you select information to be displayed, configuration settings to change, or procedures to implement. To enter the menu tree, press any navigation push-button when the default display is active.
The menu tree is organized into the following major menus:
System Info—displays information and configuration settings.
Fault Management—displays fault information. Information about the Fault Management menu
is included in “Controller fault management” (page 103).
Shutdown Options—initiates the procedure for shutting down the system in a logical, sequential
manner. Using the shutdown procedures maintains data integrity and avoids the possibility of losing or corrupting data.
System Password—create a system password to ensure that only authorized personnel can
manage the storage system using HP P6000 Command View.
To enter and navigate the storage system menu tree:
1. Press any push-button while the default display is in view. System Information becomes the active display.
2. Press the down arrow button to sequence down through the menus. Press the up arrow button to sequence up through the menus. Press the right arrow button to select the displayed menu. Press the left arrow button to return to the previous menu.
NOTE: To exit any menu, press Esc or wait ten seconds for the OCP display to return to the default
display.
Table 14 (page 44) identifies all the menu options available within the OCP display.
CAUTION: Many of the configuration settings available through the OCP impact the operating
characteristics of the storage system. You should not change any setting unless you understand how it will impact system operation. For more information on the OCP settings, contact your HP-authorized service representative.
Table 14 Menu options within the OCP display
System Information: Versions; Host Port Config (Sets Fabric or Direct Connect); Device Port Config (Enables/disables device ports); I/O Module Config (Enables/disables auto-bypass); Loop Recovery Config (Enables/disables recoveries); Unbypass Devices; UUID Unique Half; Debug Flags; Print Flags; Mastership Status (Displays controller role — master or slave)
Fault Management: Last Fault; Detail View
Shutdown Options: Restart; Power Off; Uninitialize System
System Password: Change Password; Clear Password; Current Password (Set or not)
Displaying system information
NOTE: The purpose of this information is to assist the HP-authorized service representative when servicing your system.
The system information displays show the system configuration, including the XCS version, the OCP firmware and application programming interface (API) versions, and the enclosure address bus programmable integrated circuit (PIC) configuration. You can only view, not change, this information.
Displaying versions system information
When you press the right arrow button, the active display is Versions. From the Versions display you can determine the:
OCP firmware version
Controller version
XCS version
NOTE: The terms PPC, Sprite, Glue, SDC, CBIC, and Atlantis are for development purposes and
have no significance for normal operation.
NOTE: When viewing the software or firmware version information, pressing the right arrow button displays the Versions Menu tree.
To display System Information:
1. The default display alternates between the Storage System Name display and the World Wide Name display.
Press any push-button to display the Storage System Menu Tree.
2. Press the down arrow button until the desired Versions Menu option appears, and then press the right or down arrow button to move to submenu items.
Shutting down the system
CAUTION: To power off the system for more than 96 hours, use HP P6000 Command View.
You can use the Shutdown System function to implement the shutdown methods listed below. These shutdown methods are explained in Table 15 (page 46).
Shutting down the controller (see “Shutting the controller down” (page 46)).
Restarting the system (see “Restarting the system” (page 46)).
Uninitializing the system (see “Uninitializing the system” (page 47)).
To ensure that you do not mistakenly activate a shutdown procedure, the default state is always NO, indicating do not implement this procedure. As a safeguard, implementing any shutdown method requires you to complete at least two actions.
Table 15 Shutdown methods
LCD prompt | Description
Restart System? | Implementing this procedure establishes communications between the storage system and HP P6000 Command View. This procedure is used to restore the controller to an operational state where it can communicate with HP P6000 Command View.
Power off system? | Implementing this procedure initiates the sequential removal of controller power. This ensures no data is lost. The reasons for implementing this procedure include replacing a drive enclosure.
Uninitialize? | Implementing this procedure will cause the loss of all data. For a detailed discussion of this procedure, see “Uninitializing the system” (page 47).
Shutting the controller down
Use the following procedure to access the Shutdown System display and execute a shutdown procedure.
CAUTION: If you decide NOT to power off while working in the Power Off menu, Power Off
System NO must be displayed before you press Esc. This reduces the risk of accidentally powering
down.
NOTE: HP P6000 Command View is the preferred method for shutting down the controller. Shut
down the controller from the OCP only if HP P6000 Command View cannot communicate with the controller.
Shutting down the controller from the OCP removes power from the controller on which the procedure is performed only. To restore power, toggle the controller’s power.
1. Press the down arrow button three times to scroll to the Shutdown Options menu.
2. Press the right arrow button to display Restart.
3. Press the down arrow button to scroll to Power Off.
4. Press the right arrow button to select Power Off.
5. Power off system is displayed. Press Enter to power off the system.
Restarting the system
To restore the controller to an operational state, use the following procedure to restart the system.
1. Press the down arrow button three times to scroll to the Shutdown Options menu.
2. Press the right arrow button to select Restart.
3. Press the right arrow button to display Restart system?.
4. Press Enter to go to Startup. No user input is required. The system will automatically initiate the startup procedure and
proceed to load the Storage System Name and World Wide Name information from the operational controller.
Uninitializing the system
Uninitializing the system is another way to shut down the system. This action causes the loss of all storage system data. Because HP P6000 Command View cannot communicate with the disk drive enclosures, the stored data cannot be accessed.
CAUTION: Uninitializing the system destroys all user data. The WWN will remain in the controller
unless both controllers are powered off. The password will be lost. If the controllers remain powered on until you create another storage system (initialize via GUI), you will not have to re-enter the WWN.
Use the following procedure to uninitialize the system.
1. Press the down arrow button three times to scroll to the Shutdown Options menu.
2. Press the right arrow button to display Restart.
3. Press the down arrow button twice to display Uninitialize System.
4. Press the right arrow button to display Uninitialize?
5. Select Yes and press Enter. The system displays Delete all data? Enter DELETE:_______
6. Press the arrow keys to navigate to the open field and type DELETE and then press ENTER. The system uninitializes.
NOTE: If you do not enter the word DELETE or if you press ESC, the system does not
uninitialize. The bottom OCP line displays Uninit cancelled.
Password options
The password entry options are:
Entering a password during storage system initialization (see “Entering the storage system
password” (page 33)).
Displaying the current password.
Changing a password (see “Changing a password” (page 47)).
Removing password protection (see “Clearing a password” (page 48)).
Changing a password
For security reasons, you may need to change a storage system password. The password must contain eight to 16 characters, consisting of any combination of alphabetic, numeric, or special characters. See
“Entering the storage system password” (page 33) for more information on valid password
characters. Use the following procedure to change the password.
NOTE: Changing a system password on the controller requires changing the password on any
HP P6000 Command View with access to the storage system.
1. Select a unique password of 8 to 16 characters.
2. With the default menu displayed, press three times to display System Password.
3. Press to display Change Password?
4. Press Enter for yes. The default password, AAAAAAAA~~~~~~~~, is displayed.
5. Press or to select the desired character.
6. Press to accept this character and select the next character.
7. Repeat the process to enter the remaining password characters.
8. Press Enter to enter the password and return to the default display.
Clearing a password
Use the following procedure to remove storage system password protection.
NOTE: Changing a system password on the controller requires changing the password on any
HP P6000 Command View with access to the storage system.
1. Press four times to scroll to the System Password menu.
2. Press to display Change Password?
3. Press to scroll to Clear Password.
4. Press to display Clear Password.
5. Press Enter to clear the password. The Password cleared message will be displayed.
4 Configuring application servers
Overview
This chapter provides general connectivity information for all supported operating systems. Where applicable, an OS-specific section is included to provide more information.
Clustering
Clustering connects two or more computers so that they behave as a single computer. Clustering may also be used for parallel processing, load balancing, and fault tolerance.
See the Single Point of Connectivity Knowledge (SPOCK) website (http://www.hp.com/storage/
spock) for the clustering software supported on each operating system.
NOTE: For OpenVMS, you must make the Console LUN ID and OS unit IDs unique throughout
the entire SAN, not just the controller subsystem.
Multipathing
Multipathing software provides a multiple-path environment for your operating system. See the following website for more information:
http://h18006.www1.hp.com/products/sanworks/multipathoptions/index.html
See the Single Point of Connectivity Knowledge (SPOCK) website (http://www.hp.com/storage/
spock) for the multipathing software supported on each operating system.
Installing Fibre Channel adapters
For all operating systems, supported Fibre Channel adapters (FCAs) must be installed in the host server in order to communicate with the EVA.
NOTE: Traditionally, the adapter that connects the host server to the fabric is called a host bus
adapter (HBA). The server HBA used with the EVA6400/8400 is called a Fibre Channel adapter (FCA). You might also see the adapter called a Fibre Channel host bus adapter (Fibre Channel HBA) in other related documents.
Follow the hardware installation rules and conventions for your server type. The FCA is shipped with its own documentation for installation. See that documentation for complete instructions. You need the following items to begin:
FCA boards and the manufacturer’s installation instructions
Server hardware manual for instructions on installing adapters
Tools to service your server
The FCA board plugs into a compatible I/O slot (PCI, PCI-X, PCI-E) in the host system. For instructions on plugging in boards, see the hardware manual.
You can download the latest FCA firmware from the following website: http://www.hp.com/
support/downloads. Enter HBA in the Search Products box and then select your product. See the
Single Point of Connectivity Knowledge (SPOCK) website (http://www.hp.com/storage/spock) for supported FCAs by operating system.
Testing connections to the EVA
After installing the FCAs, you can create and test connections between the host server and the EVA. For all operating systems, you must:
Add hosts
Create and present virtual disks
Verify virtual disks from the hosts
The following sections provide information that applies to all operating systems. For OS-specific details, see the applicable operating system section.
Adding hosts
To add hosts using HP P6000 Command View:
1. Retrieve and note the worldwide names (WWNs) for each FCA on your host.
You need this information to select the host FCAs in HP P6000 Command View.
2. Use HP P6000 Command View to add the host and each FCA installed in the host system.
NOTE: To add hosts using HP P6000 Command View, you must add each FCA installed in
the host. Select Add Host to add the first adapter. To add subsequent adapters, select Add Port. Ensure that you add a port for each active FCA.
3. Select the applicable operating system for the host mode.

Table 16 Select the host mode for the applicable operating system

Operating System                             Host mode selection
HP-UX                                        HP-UX
IBM AIX                                      IBM AIX
Linux                                        Linux
Mac OS X                                     Linux
OpenVMS                                      OVMS
Oracle Solaris                               Sun Solaris
VMware                                       VMware
Microsoft Windows, Microsoft Windows 2008    Microsoft Windows
Citrix Xen Server                            Linux
4. Check the Host folder in the navigation pane of HP P6000 Command View to verify that the
host FCAs are added.
NOTE: More information about HP P6000 Command View is available at the following
website: http://www.hp.com/support/manuals. Click Storage Software under Storage, and then select HP P6000 Command View software under Storage Device Management Software.
Creating and presenting virtual disks
To create and present virtual disks to the host server:
1. From HP P6000 Command View, create a virtual disk on the EVA6400/8400.
2. Specify values for the following parameters:
Virtual disk name
Vraid level
Size
3. Present the virtual disk to the host you added.
4. If applicable (OpenVMS), select a LUN number if you chose a specific LUN on the Virtual
Disk Properties window.
Verifying virtual disk access from the host
To verify that the host can access the newly presented virtual disks, restart the host or scan the bus. If you are unable to access the virtual disk:
Verify that all cabling to the switch, EVA, and host is properly connected.
Verify all firmware levels. For more information, see the Enterprise Virtual Array QuickSpecs
and associated release notes.
Ensure that you are running a supported version of the host operating system. For more
information, see the HP P6000 Enterprise Virtual Array Compatibility Reference.
Ensure that the correct host is selected as the operating system for the virtual disk in HP P6000
Command View.
Ensure that the host WWN number is set correctly (to the host you selected).
Verify the FCA switch settings.
Verify that the virtual disk is presented to the host.
Verify zoning.
Configuring virtual disks from the host
After you create the virtual disks on the EVA6400/8400 and rescan or restart the host, follow the host-specific conventions for configuring these new disk resources. For instructions, see the documentation included with your server.
HP-UX
Scanning the bus
To scan the FCA bus and display information about the EVA6400/8400 devices:
1. Enter the # ioscan -fnCdisk command to start the rescan. All new virtual disks become visible to the host.
2. Assign device special files to the new virtual disks using the insf command.
# insf -e
NOTE: Uppercase E reassigns device special files to all devices. Lowercase e assigns device
special files only to the new devices—in this case, the virtual disks.
The following is a sample output from an ioscan command:
# ioscan -fnCdisk
Class     I  H/W Path                   Driver    S/W State  H/W Type   Description
========================================================================================
ba        3  0/6                        lba       CLAIMED    BUS_NEXUS  Local PCI Bus Adapter (782)
fc        2  0/6/0/0                    td        CLAIMED    INTERFACE  HP Tachyon XL@ 2 FC Mass Stor Adap  /dev/td2
fcp       0  0/6/0/0.39                 fcp       CLAIMED    INTERFACE  FCP Domain
ext_bus   4  0/6/00.39.13.0.0           fcparray  CLAIMED    INTERFACE  FCP Array Interface
target    5  0/6/0/0.39.13.0.0.0        tgt       CLAIMED    DEVICE
ctl       4  0/6/0/0.39.13.0.0.0.0      sctl      CLAIMED    DEVICE     HP HSV400  /dev/rscsi/c4t0d0
disk     22  0/6/0/0.39.13.0.0.0.1      sdisk     CLAIMED    DEVICE     HP HSV400  /dev/dsk/c4t0d1 /dev/rdsk/c4t0d
ext_bus   5  0/6/0/0.39.13.255.0        fcpdev    CLAIMED    INTERFACE  FCP Device Interface
target    8  0/6/0/0.39.13.255.0.0      tgt       CLAIMED    DEVICE
ctl      20  0/6/0/0.39.13.255.0.0.0    sctl      CLAIMED    DEVICE     HP HSV400  /dev/rscsi/c5t0d0
ext_bus  10  0/6/0/0.39.28.0.0          fcparray  CLAIMED    INTERFACE  FCP Array Interface
target    9  0/6/0/0.39.28.0.0.0        tgt       CLAIMED    DEVICE
ctl      40  0/6/0/0.39.28.0.0.0.0      sctl      CLAIMED    DEVICE     HP HSV400  /dev/rscsi/c10t0d0
disk     46  0/6/0/0.39.28.0.0.0.2      sdisk     CLAIMED    DEVICE     HP HSV400  /dev/dsk/c10t0d2 /dev/rdsk/c10t0d2
disk     47  0/6/0/0.39.28.0.0.0.3      sdisk     CLAIMED    DEVICE     HP HSV400  /dev/dsk/c10t0d3 /dev/rdsk/c10t0d3
disk     48  0/6/0/0.39.28.0.0.0.4      sdisk     CLAIMED    DEVICE     HP HSV400  /dev/dsk/c10t0d4 /dev/rdsk/c10t0d4
disk     49  0/6/0/0.39.28.0.0.0.5      sdisk     CLAIMED    DEVICE     HP HSV400  /dev/dsk/c10t0d5 /dev/rdsk/c10t0d5
disk     50  0/6/0/0.39.28.0.0.0.6      sdisk     CLAIMED    DEVICE     HP HSV400  /dev/dsk/c10t0d /dev/rdsk/c10t0d6
disk     51  0/6/0/0.39.28.0.0.0.7      sdisk     CLAIMED    DEVICE     HP HSV400  /dev/dsk/c10t0d7 /dev/rdsk/c10t0d7
Creating volume groups on a virtual disk using vgcreate
You can create a volume group on a virtual disk by issuing a vgcreate command. This builds the virtual group block data, allowing HP-UX to access the virtual disk. See the pvcreate, vgcreate, and lvcreate man pages for more information about creating disks and file systems. Use the following procedure to create a volume group on a virtual disk:
NOTE: Italicized text is for example only.
1. To create the physical volume on a virtual disk, enter a command similar to the following:
# pvcreate -f /dev/rdsk/c32t0d1
2. To create the volume group directory for a virtual disk, enter a command similar to the following:
# mkdir /dev/vg01
3. To create the volume group node for a virtual disk, enter a command similar to the following:
# mknod /dev/vg01/group c 64 0x010000
The designation 64 is the major number that equates to the 64-bit mode. The 0x01 is the minor number in hex, which must be unique for each volume group.
4. To create the volume group for a virtual disk, enter a command similar to the following:
# vgcreate -f /dev/vg01 /dev/dsk/c32t0d1
5. To create the logical volume for a virtual disk, enter a command similar to the following:
# lvcreate -L1000 /dev/vg01/lvol1
In this example, a 1-Gb logical volume (lvol1) is created.
6. Create a file system for the new logical volume by creating a mount point directory and adding an entry for the logical volume to /etc/fstab.
7. Run the mkfs command on the new logical volume. The new file system is ready to mount.
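For example, steps 6 and 7 might resemble the following commands. This is a sketch only: the mount point /mnt/eva_vol1 and the VxFS file system type are assumptions, not values taken from this guide. Substitute values appropriate for your environment.
# mkdir /mnt/eva_vol1
# mkfs -F vxfs /dev/vg01/rlvol1
# echo "/dev/vg01/lvol1 /mnt/eva_vol1 vxfs delaylog 0 2" >> /etc/fstab
# mount /mnt/eva_vol1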
IBM AIX
Accessing IBM AIX utilities
You can access IBM AIX utilities such as the Object Data Manager (ODM) on the following website:
http://www.hp.com/support/downloads
In the Search products box, enter MPIO, and then click AIX MPIO PCMA for HP Arrays. Select IBM AIX, and then select your software storage product.
Adding hosts
To determine the active FCAs on the IBM AIX host, enter:
# lsdev -Cc adapter |grep fcs
Output similar to the following appears:
fcs0 Available 1H-08 FC Adapter
fcs1 Available 1V-08 FC Adapter

# lscfg -vl fcs0
fcs0    U0.1-P1-I5/Q1    FC Adapter
Part Number.................80P4543
EC Level....................A
Serial Number...............1F4280A419
Manufacturer................001F
Feature Code/Marketing ID...280B
FRU Number.................. 80P4544
Device Specific.(ZM)........3
Network Address.............10000000C940F529
ROS Level and ID............02881914
Device Specific.(Z0)........1001206D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF801315
Device Specific.(Z5)........02881914
Device Specific.(Z6)........06831914
Device Specific.(Z7)........07831914
Device Specific.(Z8)........20000000C940F529
Device Specific.(Z9)........TS1.90A4
Device Specific.(ZA)........T1D1.90A4
Device Specific.(ZB)........T2D1.90A4
Device Specific.(YL)........U0.1-P1-I5/Q1b.
Creating and presenting virtual disks
When creating and presenting virtual disks to an IBM AIX host, be sure to:
1. Set the OS unit ID to 0.
2. Set Preferred path/mode to No Preference.
3. Select a LUN number if you chose a specific LUN on the Virtual Disk Properties window.
Verifying virtual disks from the host
To scan the IBM AIX bus, enter:
# cfgmgr -v
The -v switch (verbose output) requests a full output.
To list all EVA devices, enter:
# lsdev -Cc disk
Output similar to the following is displayed:
hdisk1 Available 1V-08-01 HP HSV400 Enterprise Virtual Array
hdisk2 Available 1V-08-01 HP HSV400 Enterprise Virtual Array
hdisk3 Available 1V-08-01 HP HSV400 Enterprise Virtual Array
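If MPIO is in use, you can also confirm that multiple paths to each EVA LUN are available. The following is a sketch only; hdisk2 is taken from the sample output above, and the adapter names will differ on your system. Enter:
# lspath -l hdisk2
Output similar to the following is displayed:
Enabled hdisk2 fscsi0
Enabled hdisk2 fscsi1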
Linux
Driver failover mode
If you use the INSTALL command without command options, the driver’s failover mode depends on whether a QLogic driver is already loaded in memory (listed in the output of the lsmod command). Possible driver failover mode scenarios include:
If an hp_qla2x00src driver RPM is already installed, the new driver RPM uses the failover of
the previous driver package.
If there is no QLogic driver module (qla2xxx module) loaded, the driver defaults to failover
mode. This is also true if an inbox driver is loaded that does not list output in the /proc/scsi/qla2xxx directory.
If there is a driver loaded in memory that lists the driver version in /proc/scsi/qla2xxx
but no driver RPM has been installed, then the driver RPM loads the driver in the failover mode that the driver in memory currently uses.
Installing a QLogic driver
NOTE: The HP Emulex driver kit performs in a similar manner; use ./INSTALL -h to list all
supported arguments.
1. Download the appropriate driver kit for your distribution. The driver kit file is in the format
hp_qla2x00-yyyy-mm-dd.tar.gz.
2. Copy the driver kit to the target system.
3. Uncompress and untar the driver kit using the following command:
# tar zxvf hp_qla2x00-yyyy-mm-dd.tar.gz
4. Change directory to the hp_qla2x00-yyyy-mm-dd directory.
5. Execute the INSTALL command.
The INSTALL command syntax varies depending on your configuration.
If a previous driver kit is installed, you can invoke the INSTALL command without any arguments. To use the currently loaded configuration:
# ./INSTALL
To force the installation to failover mode, use the -f flag:
# ./INSTALL -f
To force the installation to single-path mode, use the -s flag:
# ./INSTALL -s
To list all supported arguments, use the -h flag:
# ./INSTALL -h
6. The INSTALL script installs the appropriate driver RPM for your configuration, as well as the appropriate fibreutils RPM. Once the INSTALL script is finished, you must either reload the QLogic driver modules (qla2xxx, qla2300, qla2400, qla2xxx_conf) or reboot your server.
The commands to reload the driver are:
# /opt/hp/src/hp_qla2x00src/unload.sh
# modprobe qla2xxx_conf
# modprobe qla2xxx
# modprobe qla2300
# modprobe qla2400
The command to reboot the server is:
# reboot
CAUTION: If the boot device is attached to the SAN, you must reboot the host.
7. To verify which RPM versions are installed, use the rpm command with the -q option. For
example:
# rpm -q hp_qla2x00src
# rpm -q fibreutils
Upgrading Linux components
If you have any installed components from a previous solution kit or driver kit, such as the qla2x00 RPM, invoke the INSTALL script with no arguments, as shown in the following example:
# ./INSTALL
To manually upgrade the components, select one of the following kernel distributions:
For 2.4 kernel based distributions, use version 7.xx.
For 2.6 kernel based distributions, use version 8.xx.
Depending on the kernel version you are running, upgrade the driver RPM manually as follows:
For the hp_qla2x00src RPM:
# rpm -Uvh hp_qla2x00src-version-revision.linux.rpm
For fibreutils RPM, you have two options:
To upgrade the driver:
# rpm -Uvh fibreutils-version-revision.linux.architecture.rpm
To remove the existing driver, and install a new driver:
# rpm -e fibreutils
# rpm -ivh fibreutils-version-revision.linux.architecture.rpm
Upgrading qla2x00 RPMs
If you have a qla2x00 RPM from HP installed on your system, use the INSTALL script to upgrade from qla2x00 RPMs. The INSTALL script removes the old qla2x00 RPM and installs the new hp_qla2x00src while keeping the driver settings from the previous installation. The script takes no arguments. Use the following command to run the INSTALL script:
# ./INSTALL
NOTE: If you are going to use the failover functionality of the QLA driver, uninstall Secure Path
and reboot before you attempt to upgrade the driver. Failing to do so can cause a kernel panic.
Detecting third-party storage
The preinstallation portion of the RPM contains code to check for non-HP storage. This check prevents the RPM from overwriting any settings that another vendor may be using. You can skip the detection process by setting the environment variable HPQLA2X00FORCE to y by issuing the following commands:
# HPQLA2X00FORCE=y
# export HPQLA2X00FORCE
You can also use the -F option of the INSTALL script by entering the following command:
# ./INSTALL -F
Compiling the driver for multiple kernels
If your system has multiple kernels installed on it, you can compile the driver for all the installed kernels by setting the INSTALLALLKERNELS environment variable to y and exporting it by issuing the following commands:
# INSTALLALLKERNELS=y
# export INSTALLALLKERNELS
You can also use the -a option of the INSTALL script as follows:
# ./INSTALL -a
Uninstalling the Linux components
To uninstall the components, use the INSTALL script with the -u option as shown in the following example:
# ./INSTALL -u
To manually uninstall all components, or to uninstall just one of the components, use one or all of the following commands:
# rpm -e fibreutils
# rpm -e hp_qla2x00
# rpm -e hp_qla2x00src
Using the source RPM
In some cases, you may have to build a binary hp_qla2x00 RPM from the source RPM and use that manual binary build in place of the scripted hp_qla2x00src RPM. You need to do this if your production servers do not have the kernel sources and gcc installed.
If you need to build a binary RPM to install, you will need a development machine with the same kernel as your targeted production servers. You can then install the resulting binary RPM on your production servers.
NOTE: The binary RPM that you build works only for the kernel and configuration that you build
on (and possibly some errata kernels). Ensure that you use the 7.xx version of the hp_qla2x00 source RPM for 2.4 kernel-based distributions and the 8.xx version of the hp_qla2x00 source RPM for 2.6 kernel-based distributions.
Use the following procedure to create the binary RPM from the source RPM:
1. Select one of the following options:
Enter the # ./INSTALL -S command. The binary RPM creation is complete. You do not need to perform steps 2 through 4.
Install the source RPM by issuing the # rpm -ivh
hp_qla2x00-version-revision.src.rpm command. Continue with step 2.
2. Select one of the following directories:
For Red Hat distributions, use the /usr/src/redhat/SPECS directory.
For SUSE distributions, use the /usr/src/packages/SPECS directory.
3. Build the RPM by using the # rpmbuild -bb hp_qla2x00.spec command.
NOTE: In some of the older Linux distributions, the RPM command contains the RPM build functionality.

At the end of the command output, the following message appears:
"Wrote: ...rpm"
This line identifies the location of the binary RPM.
4. Copy the binary RPM to the production servers and install it using the following command:
# rpm -ivh hp_qla2x00-version-revision.architecture.rpm
Verifying virtual disks from the host
To ensure that the LUN is recognized after a virtual disk is presented to the host, do one of the following:
Reboot the host.
Enter the /opt/hp/hp_fibreutils/hp_rescan -a command.
To verify that the host can access the virtual disks, enter the # more /proc/scsi/scsi command. The output lists all SCSI devices detected by the server. An EVA6400/8400 LUN entry looks similar
to the following:
Host: scsi3 Channel: 00 ID: 00 Lun: 01
Vendor: HP Model: HSV400 Rev:
Type: Direct-Access ANSI SCSI revision: 02
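If the lsscsi utility is installed on your distribution (it is not part of the HP driver kit), you can display a similar summary with a single command. This is an illustration only; the SCSI address and device name will differ on your system:
# lsscsi | grep HSV
[3:0:0:1]    disk    HP    HSV400    0952    /dev/sdb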
OpenVMS
Updating the AlphaServer console code, Integrity Server console code, and Fibre Channel FCA firmware
The firmware update procedure varies for the different server types. To update firmware, follow the procedure described in the Installation instructions that accompany the firmware images.
Verifying the Fibre Channel adapter software installation
A supported FCA should already be installed in the host server. The procedure to verify that the console recognizes the installed FCA varies for the different server types. Follow the procedure described in the Installation instructions that accompany the firmware images.
Console LUN ID and OS unit ID
HP P6000 Command View software contains a box for the Console LUN ID on the Initialized Storage System Properties window.
It is important that you set the Console LUN ID to a number other than zero. If the Console LUN ID is not set or is set to zero, the OpenVMS host will not recognize the controller pair. The Console LUN ID for a controller pair must be unique within the SAN. Table 17 (page 58) shows an example of the Console LUN ID.
You can set the OS unit ID on the Virtual Disk Properties window. The default setting is 0, which disables the ID field. To enable the ID field, you must specify a value between 1 and 32767,
ensuring that the number you enter is unique within the SAN. An OS Unit ID greater than 9999 is not capable of being served by MSCP.
CAUTION: It is possible to enter a duplicate Console LUN ID or OS unit ID number. You must
ensure that you enter a Console LUN ID and OS Unit ID that is not already in use. A duplicate Console LUN ID or OS Unit ID can allow the OpenVMS host to corrupt data due to confusion about LUN identity. It can also prevent the host from recognizing the controllers.
Table 17 Comparing console LUN to OS unit ID
ID type                      System display
Console LUN ID set to 100    $1$GGA100:
OS unit ID set to 50         $1$DGA50:
Adding OpenVMS hosts
To obtain WWNs on AlphaServers, do one of the following:
Enter the show device fg/full OVMS command.
Use the WWIDMGR -SHOW PORT command at the SRM console.
To obtain WWNs on Integrity servers, do one of the following:
Enter the show device fg/full OVMS command.
Use the following procedure from the server console:
1. From the EFI boot Manager, select EFI Shell.
2. In the EFI Shell, enter Shell> drivers.
A list of EFI drivers loaded in the system is displayed.
3. In the listing, find the line for the FCA for which you want to get the WWN information.
For a QLogic HBA, look for HP 4 Gb Fibre Channel Driver or HP 2 Gb Fibre
Channel Driver as the driver name. For example:
            T   D
D           Y C I
R           P F A
V  VERSION  E G G #D #C DRIVER NAME                          IMAGE NAME
== ======== = = = == == ==================================== ===================
22 00000105 B X X  1  1 HP 4 Gb Fibre Channel Driver         PciROM:0F:01:01:002
4. Note the driver handle in the first column (22 in the example).
5. Using the driver handle, enter the drvcfg driver_handle command to find the Device
Handle (Ctrl). For example:
Shell> drvcfg 22
Configurable Components
  Drv[22] Ctrl[25] Lang[eng]
6. Using the driver and device handle, enter the drvcfg -s driver_handle
device_handle command to invoke the EFI Driver configuration utility. For example:
Shell> drvcfg -s 22 25
7. From the Fibre Channel Driver Configuration Utility list, select item 8
(Info) to find the WWN for that particular port. Output similar to the following appears:
Adapter Path: Acpi(PNP0002,0300)/Pci(01|01)
Adapter WWPN: 50060B00003B478A
Adapter WWNN: 50060B00003B478B
Adapter S/N:  3B478A
Scanning the bus
Enter the following command to scan the bus for the OpenVMS virtual disk:
$ MC SYSMAN IO AUTO/LOG
A listing of LUNs detected by the scan process is displayed. Verify that the new LUNs appear on the list.
NOTE: The EVA6400/8400 console LUN can be seen without any virtual disks presented. The
LUN appears as $1$GGAx (where x represents the console LUN ID on the controller).
After the system scans the fabric for devices, you can verify the devices with the SHOW DEVICE command:
$ SHOW DEVICE NAME-OF-VIRTUAL-DISK/FULL
For example, to display device information on a virtual disk named $1$DGA50, enter $ SHOW DEVICE $1$DGA50:/FULL.
The following output is displayed:
Disk $1$DGA50: (BRCK18), device type HSV210, is online, file-oriented device, shareable, device has multiple I/O paths, served to cluster via MSCP Server, error logging is enabled.
Error count                 2    Operations completed             4107
Owner process              ""    Owner UIC                    [SYSTEM]
Owner process ID     00000000    Dev Prot          S:RWPL,O:RWPL,G:R,W
Reference count             0    Default buffer size               512
Current preferred CPU Id    0    Fastpath                            1
WWID    01000010:6005-08B4-0010-70C7-0001-2000-2E3E-0000
Host name            "BRCK18"    Host type, avail    AlphaServer DS10 466 MHz, yes
Alternate host name   "VMS24"    Alt. type, avail    HP rx3600 (1.59GHz/9.0MB), yes
Allocation class            1

I/O paths to device         9
Path PGA0.5000-1FE1-0027-0A38 (BRCK18), primary path.
  Error count               0    Operations completed              145
Path PGA0.5000-1FE1-0027-0A3A (BRCK18).
  Error count               0    Operations completed              338
Path PGA0.5000-1FE1-0027-0A3E (BRCK18).
  Error count               0    Operations completed              276
Path PGA0.5000-1FE1-0027-0A3C (BRCK18).
  Error count               0    Operations completed              282
Path PGB0.5000-1FE1-0027-0A39 (BRCK18).
  Error count               0    Operations completed              683
Path PGB0.5000-1FE1-0027-0A3B (BRCK18).
  Error count               0    Operations completed              704
Path PGB0.5000-1FE1-0027-0A3D (BRCK18).
  Error count               0    Operations completed              853
Path PGB0.5000-1FE1-0027-0A3F (BRCK18), current path.
  Error count               2    Operations completed              826
Path MSCP (VMS24).
  Error count               0    Operations completed                0
You can also use the SHOW DEVICE DG command to display a list of all Fibre Channel disks presented to the OpenVMS host.
NOTE: Restarting the host system shows any newly presented virtual disks because a hardware
scan is performed as part of the startup. If you are unable to access the virtual disk, do the following:
Check the switch zoning database.
Use HP P6000 Command View to verify the host presentations.
Check the SRM console firmware on AlphaServers.
Ensure that the correct host is selected for this virtual disk and that a unique OS Unit ID is used
in HP P6000 Command View.
Configuring virtual disks from the OpenVMS host
To set up disk resources under OpenVMS, initialize and mount the virtual disk resource as follows:
1. Enter the following command to initialize the virtual disk:
$ INITIALIZE name-of-virtual-disk volume-label
2. Enter the following command to mount the disk:
$ MOUNT/SYSTEM name-of-virtual-disk volume-label
NOTE: The /SYSTEM switch is used for a single stand-alone system, or in clusters if you
want to mount the disk only to select nodes. You can use the /CLUSTER switch for OpenVMS clusters. However, if you encounter problems in a large cluster environment, HP recommends that you enter a MOUNT/SYSTEM command on each cluster node.
3. View the virtual disk’s information with the SHOW DEVICE command. For example, enter the following command sequence to configure a virtual disk named data1 in a stand-alone environment:
$ INIT $1$DGA1: data1
$ MOUNT/SYSTEM $1$DGA1: data1
$ SHOW DEV $1$DGA1: /FULL
Setting preferred paths
You can set or change the preferred path used for a virtual disk by using the SET DEVICE /PATH command. For example:
$ SET DEVICE $1$DGA83: /PATH=PGA0.5000-1FE1-0007-9772/SWITCH
This allows you to control which path each virtual disk uses. You can use the SHOW DEV/FULL command to display the path identifiers. For additional information on using OpenVMS commands, see the OpenVMS help file:
$ HELP TOPIC
For example, the following command displays help information for the MOUNT command:
$ HELP MOUNT
Oracle Solaris
NOTE: The information in this section applies to both SPARC and x86 versions of the Oracle
Solaris operating system.
Loading the operating system and software
Follow the manufacturer’s instructions for loading the operating system (OS) and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer.
Configuring FCAs with the Oracle SAN driver stack
Sun-branded FCAs are supported only with the Oracle SAN driver stack. The Oracle SAN driver stack is also compatible with current Emulex FCAs and QLogic FCAs. Support information is available on the Oracle website: http://www.oracle.com/technetwork/server-storage/solaris/
overview/index-136292.html
To determine which non-Oracle branded FCAs HP supports with the Oracle SAN driver stack, see the latest MPxIO application notes or contact your HP representative.
Update instructions depend on the version of your OS:
For Solaris 9, install the latest Oracle StorEdge SAN software with associated patches. To
locate the software, log in to My Oracle Support:
https://support.oracle.com/CSP/ui/flash.html
1. Select the Patches & Updates tab and then search for StorEdge SAN Foundation Software
4.4 (formerly called StorageTek SAN 4.4).
2. Reboot the host after the required software/patches have been installed. No further activity
is required after adding any new LUNs once the array ports have been configured with the cfgadm -c command for Solaris 9.
Examples for two FCAs:
cfgadm -c configure c3
cfgadm -c configure c4
3. Increase retry counts and reduce I/O time by adding the following entries to the
/etc/system file:
set ssd:ssd_retry_count=0xa
set ssd:ssd_io_time=0x1e
4. Reboot the system to load the newly added parameters.
For Solaris 10, go to the Oracle Software Downloads website (http://www.oracle.com/
technetwork/indexes/downloads/index.html) to install the latest patches. Under Servers and
Storage Systems, select Solaris 10. Reboot the host once the required software/patches have been installed. No further activity is required after adding new LUNs, as the controller and LUN recognition are automatic for Solaris 10.
1. For Solaris 10 x86/64, ensure patch 138889-03 or later is installed. For SPARC, ensure patch 138888-03 or later is installed.
2. Increase the retry counts by adding the following line to the /kernel/drv/sd.conf file:
sd-config-list="HP HSV","retries-timeout:10";
3. Reduce the I/O timeout value to 30 seconds by adding the following line to the /etc/system file:
set sd:sd_io_time=0x1e
4. Reboot the system to load the newly added parameters.
Configuring Emulex FCAs with the lpfc driver
To configure Emulex FCAs with the lpfc driver:
1. Ensure that you have the latest supported version of the lpfc driver (see http://www.hp.com/
storage/spock).
You must sign up for an HP Passport to enable access. For more information on how to use SPOCK, see the Getting Started Guide (http://h20272.www2.hp.com/Pages/spock_overview/
introduction.html).
2. Edit the following parameters in the /kernel/drv/lpfc.conf driver configuration file to set up the FCAs for a SAN infrastructure:
topology=2;
scan-down=0;
nodev-tmo=60;
linkdown-tmo=60;
3. If using a single FCA and no multipathing, edit the following parameter to reduce the risk of data loss in case of a controller reboot:
nodev-tmo=120;
4. If using Veritas Volume Manager (VxVM) DMP for multipathing (single or multiple FCAs), edit the following parameter to ensure proper VxVM behavior:
no-device-delay=0;
5. In a fabric topology, use persistent bindings to bind a SCSI target ID to the world wide port name (WWPN) of an array port. This ensures that the SCSI target IDs remain the same when the system reboots. Set persistent bindings by editing the configuration file or by using the lputil utility.
NOTE: HP recommends that you assign target IDs in sequence, and that the EVA has the
same target ID on each host in the SAN. The following example for an EVA6400/8400 illustrates the binding of targets 20 and 21
(lpfc instance 2) to WWPNs 50001fe100270938 and 50001fe100270939, and the binding of targets 30 and 31 (lpfc instance 0) to WWPNs 50001fe10027093a and 50001fe10027093b:
fcp-bind-WWPN="50001fe100270938:lpfc2t20", "50001fe100270939:lpfc2t21", "50001fe10027093a:lpfc0t30", "50001fe10027093b:lpfc0t31";
NOTE: Replace the WWPNs in the example with the WWPNs of your array ports.
6. For each LUN that will be accessed, add an entry to the /kernel/drv/sd.conf file. For example, if you want to access LUNs 1 and 2 through all four paths, add the following entries to the end of the file:
name="sd" parent="lpfc" target=20 lun=1;
name="sd" parent="lpfc" target=21 lun=1;
name="sd" parent="lpfc" target=30 lun=1;
name="sd" parent="lpfc" target=31 lun=1;
name="sd" parent="lpfc" target=20 lun=2;
name="sd" parent="lpfc" target=21 lun=2;
name="sd" parent="lpfc" target=30 lun=2;
name="sd" parent="lpfc" target=31 lun=2;
7. Reboot the server to implement the changes to the configuration files.
8. If LUNs have been preconfigured in the /kernel/drv/sd.conf file, use the devfsadm command to perform LUN rediscovery after configuring the file.
NOTE: The lpfc driver is not supported for Oracle StorEdge Traffic Manager/Sun Storage
Multipathing. To configure an Emulex FCA using the Oracle SAN driver stack, see “Configuring
FCAs with the Oracle SAN driver stack” (page 60).
Configuring QLogic FCAs with the qla2300 driver
See the latest Enterprise Virtual Array release notes or contact your HP representative to determine which QLogic FCAs and which driver version HP supports with the qla2300 driver. To configure QLogic FCAs with the qla2300 driver:
1. Ensure that you have the latest supported version of the qla2300 driver (see http://
www.qlogic.com).
2. You must sign up for an HP Passport to enable access. For more information on how to use SPOCK, see the Getting Started Guide (http://www.qlogic.com).
3. Edit the following parameters in the /kernel/drv/qla2300.conf driver configuration file to set up the FCAs for a SAN infrastructure (HBA0 is used in the example, but the parameter edits apply to all HBAs):
NOTE: If you are using a Sun-branded QLogic FCA, the configuration file is /kernel/drv/qlc.conf.
hba0-connection-options=1;
hba0-link-down-timeout=60;
hba0-persistent-binding-configuration=1;
NOTE: If you are using Solaris 10, editing the persistent binding parameter is not required.
4. If using a single FCA and no multipathing, edit the following parameters to reduce the risk of data loss in case of a controller reboot:
hba0-login-retry-count=60;
hba0-port-down-retry-count=60;
hba0-port-down-retry-delay=2;
The hba0-port-down-retry-delay parameter is not supported with the 4.13.01 driver; the time between retries is fixed at approximately 2 seconds.
5. In a fabric topology, use persistent bindings to bind a SCSI target ID to the world wide port name (WWPN) of an array port. This ensures that the SCSI target IDs remain the same when the system reboots. Set persistent bindings by editing the configuration file or by using the SANsurfer utility.
NOTE: Persistent binding is not required for QLogic FCAs if you are using Solaris 10.
The following example for an EVA6400/8400 illustrates the binding of targets 20 and 21 (hba instance 0) to WWPNs 50001fe100270938 and 50001fe100270939, and the binding of targets 30 and 31 (hba instance 1) to WWPNs 50001fe10027093a and 50001fe10027093b:
hba0-SCSI-target-id-20-fibre-channel-port-name="50001fe100270938";
hba0-SCSI-target-id-21-fibre-channel-port-name="50001fe10027093a";
hba1-SCSI-target-id-30-fibre-channel-port-name="50001fe100270939";
hba1-SCSI-target-id-31-fibre-channel-port-name="50001fe10027093b";
NOTE: Replace the WWPNs in the example with the WWPNs of your array ports.
6. If the qla2300 driver is version 4.13.01 or earlier, for each LUN that users will access add an entry to the /kernel/drv/sd.conf file:
name="sd" class="scsi" target=20 lun=1;
name="sd" class="scsi" target=21 lun=1;
name="sd" class="scsi" target=30 lun=1;
name="sd" class="scsi" target=31 lun=1;
If LUNs are preconfigured in the /kernel/drv/sd.conf file, use the devfsadm command to perform LUN rediscovery after changing the configuration file.
7. If the qla2300 driver is version 4.15 or later, verify that the following or a similar entry is present in the /kernel/drv/sd.conf file:
name="sd" parent="qla2300" target=2048;
To perform LUN rediscovery after configuring the LUNs, use the following command:
/opt/QLogic_Corporation/drvutil/qla2300/qlreconfig -d qla2300 -s
8. Reboot the server to implement the changes to the configuration files.
NOTE: The qla2300 driver is not supported for Oracle StorEdge Traffic Manager/Sun Storage
Multipathing. To configure a QLogic FCA using the Oracle SAN driver stack, see “Configuring
FCAs with the Oracle SAN driver stack” (page 60).
Fabric setup and zoning
To set up the fabric and zoning:
1. Verify that the Fibre Channel cable is connected and firmly inserted at the array ports, host ports, and SAN switch.
2. Through a Telnet connection to the switch or the switch utilities, verify that the WWNs of the EVA ports and FCAs are present and online.
3. Create a zone consisting of the WWNs of the EVA ports and FCAs, and then add the zone to the active switch configuration.
4. Enable and then save the new active switch configuration.
NOTE: There are variations in the steps required to configure the switch between different
vendors. For more information, see the HP SAN Design Reference Guide, available for downloading on the HP website: http://www.hp.com/go/sandesign.
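As an illustration only, on a Brocade switch running Fabric OS the zone in steps 3 and 4 could be created from the switch command line as follows. The zone name eva_host1_zone and configuration name san_cfg are assumptions; the WWNs shown are the example EVA port and FCA WWNs used elsewhere in this guide and must be replaced with your own:
zonecreate "eva_host1_zone", "50:00:1f:e1:00:27:09:38; 21:00:00:e0:8b:09:40:2b"
cfgadd "san_cfg", "eva_host1_zone"
cfgsave
cfgenable "san_cfg"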
Oracle StorEdge Traffic Manager (MPxIO)/Oracle Storage Multipathing
Oracle StorEdge Traffic Manager (MPxIO)/Sun Storage Multipathing can be used for FCAs configured with the Oracle SAN driver, depending on the operating system version, architecture (SPARC/x86), and patch level installed. For configuration details, see the HP MPxIO application notes, available on the HP support website: http://www.hp.com/support/manuals. In the Search products box, enter MPxIO, and then click the search symbol. Select the application notes from the search results.

NOTE: MPxIO is included in the SPARC and x86 Oracle SAN driver. A separate installation of MPxIO is not required.
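As a simple illustration (not a substitute for the application notes), MPxIO for Fibre Channel devices on Solaris 10 is typically enabled with the stmsboot utility, which updates the configuration and prompts for a reboot; multipathed LUNs can then be listed with mpathadm:
# stmsboot -e
# mpathadm list lu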
Configuring with Veritas Volume Manager
The Dynamic Multipathing (DMP) feature of Veritas Volume Manager (VxVM) can be used for all FCAs and all drivers. EVA disk arrays are certified for VxVM support. When you install FCAs, ensure that the driver parameters are set correctly. Failure to do so can result in a loss of path failover in DMP. For information about setting FCA parameters, see “Configuring FCAs with the
Oracle SAN driver stack” (page 60) and the FCA manufacturer’s instructions.
The DMP feature requires an Array Support Library (ASL) and an Array Policy Module (APM). The ASL/APM enables Asymmetric Logical Unit Access (ALUA). LUNs are accessed through the primary controller. After enablement, use the vxdisk list <device> command to determine the primary and secondary paths. For VxVM 4.1 (MP1 or later), you must download the ASL/APM from the Symantec/Veritas support site for installation on the host. This download and installation is not required for VxVM 5.0 or later.
To download and install the ASL/APM from the Symantec/Veritas support website:
1. Go to http://support.veritas.com.
2. Enter Storage Foundation for UNIX/Linux in the Product Lookup box.
3. Enter EVA in the Enter key words or phrase box, and then click the search symbol.
4. To further narrow the search, select Solaris in the Platform box.
5. Read TechNotes and follow the instructions to download and install the ASL/APM.
6. Run vxdctl enable to notify VxVM of the changes.
7. Verify the configuration of VxVM as shown in Example 3 “Verifying the VxVM configuration”
(the output may be slightly different depending on your VxVM version and the array configuration).
Example 3 Verifying the VxVM configuration
# vxddladm listsupport all | grep HP
libvxhpevale.so     HP      HSV300, HSV400, HSV450

# vxddladm listsupport libname=libvxhpevale.so
ATTR_NAME       ATTR_VALUE
=======================================================================
LIBNAME         libvxhpevale.so
VID             HP
PID             HSV300, HSV400, HSV450
ARRAY_TYPE      A/A-A-HP
ARRAY_NAME      EVA4400, EVA6400, EVA8400

# vxdmpadm listapm all | grep HP
dmphpalua       dmphpalua       1       A/A-A-HP        Active

# vxdmpadm listapm dmphpalua
Filename:       dmphpalua
APM name:       dmphpalua
APM version:    1
Feature:        VxVM
VxVM version:   41
Array Types Supported:  A/A-A-HP
Depending Array Types:  A/A-A
State:          Active

# vxdmpadm listenclosure all
ENCLR_NAME  ENCLR_TYPE  ENCLR_SNO         STATUS     ARRAY_TYPE
============================================================================
Disk        Disk        DISKS             CONNECTED  Disk
EVA84000    EVA8400     50001FE1002709E0  CONNECTED  A/A-A-HP
By default, the EVA I/O policy is set to Round-Robin. For VxVM 4.1 MP1, only one path is used for the I/Os with this policy. Therefore, HP recommends that you change the I/O policy to Adaptive in order to use all paths to the LUN on the primary controller. Example 4 “Setting the
iopolicy” shows the commands you can use to check and change the I/O policy.
Example 4 Setting the iopolicy
# vxdmpadm getattr arrayname EVA8400 iopolicy
ENCLR_NAME      DEFAULT         CURRENT
============================================
EVA84000        Round-Robin     Round-Robin

# vxdmpadm setattr arrayname EVA8400 iopolicy=adaptive

# vxdmpadm getattr arrayname EVA8400 iopolicy
ENCLR_NAME      DEFAULT         CURRENT
============================================
EVA84000        Round-Robin     Adaptive
Configuring virtual disks from the host
The procedure used to configure the LUN path to the array depends on the FCA driver. For more information, see “Installing Fibre Channel adapters” (page 49).
To identify the WWLUN ID assigned to the virtual disk and/or the LUN assigned by the storage administrator:
Oracle SAN driver, with MPxIO enabled:
You can use the luxadm probe command to display the array/node WWN and associated array for the devices.
The WWLUN ID is part of the device file name. For example:
/dev/rdsk/c5t600508B4001030E40000500000B20000d0s2
If you use luxadm display, the LUN is displayed after the device address. For
example:
50001fe1002709e9,5
Oracle SAN driver, without MPxIO:
The EVA WWPN is part of the file name (which helps you to identify the controller). For example:
/dev/rdsk/c3t50001FE1002709E8d5s2 /dev/rdsk/c3t50001FE1002709ECd5s2
/dev/rdsk/c4t50001FE1002709E9d5s2 /dev/rdsk/c4t50001FE1002709EDd5s2
If you use luxadm probe, the array/node WWN and the associated device files are displayed.
You can retrieve the WWLUN ID as part of the format -e (scsi, inquiry) output; however,
it is cumbersome and hard to read. For example:
09 e8 20 04 00 00 00 00 00 00 35 30 30 30 31 46 .........50001F
45 31 30 30 32 37 30 39 45 30 35 30 30 30 31 46  E1002709E050001F
45 31 30 30 32 37 30 39 45 38 36 30 30 35 30 38  E1002709E8600508
42 34 30 30 31 30 33 30 45 34 30 30 30 30 35 30  B4001030E4000050
30 30 30 30 42 32 30 30 30 30 00 00 00 00 00 00  0000B20000
The assigned LUN is part of the device file name. For example:
/dev/rdsk/c3t50001FE1002709E8d5s2
You can also retrieve the LUN with luxadm display. The LUN is displayed after the device address. For example:
50001fe1002709e9,5
Emulex (lpfc)/QLogic (qla2300) drivers:
You can retrieve the WWPN by checking the assignment in the driver configuration file (the easiest method, because you then know the assigned target) or by using HBAnyware/SANSurfer.
You can retrieve the WWLUN ID by using HBAnyware/SANSurfer.
You can also retrieve the WWLUN ID as part of the format -e (scsi, inquiry) output; however, it is cumbersome and difficult to read. For example:
09 e8 20 04 00 00 00 00 00 00 35 30 30 30 31 46 .........50001F
45 31 30 30 32 37 30 39 45 30 35 30 30 30 31 46  E1002709E050001F
45 31 30 30 32 37 30 39 45 38 36 30 30 35 30 38  E1002709E8600508
42 34 30 30 31 30 33 30 45 34 30 30 30 30 35 30  B4001030E4000050
30 30 30 30 42 32 30 30 30 30 00 00 00 00 00 00  0000B20000
The assigned LUN is part of the device file name. For example:
/dev/dsk/c4t20d5s2
Verifying virtual disks from the host
Verify that the host can access virtual disks by using the format command. See Example 5 “Format
command”.
Example 5 Format command
# format
Searching for disks...done
c2t50001FE1002709F8d1: configured with capacity of 1008.00MB
c2t50001FE1002709F8d2: configured with capacity of 1008.00MB
c2t50001FE1002709FCd1: configured with capacity of 1008.00MB
c2t50001FE1002709FCd2: configured with capacity of 1008.00MB
c3t50001FE1002709F9d1: configured with capacity of 1008.00MB
c3t50001FE1002709F9d2: configured with capacity of 1008.00MB
c3t50001FE1002709FDd1: configured with capacity of 1008.00MB
c3t50001FE1002709FDd2: configured with capacity of 1008.00MB
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248> /pci@1f,4000/scsi@3/sd@0,0
1. c2t50001FE1002709F8d1 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128> /pci@1f,4000/QLGC,qla@4/fp@0,0/ssd@w50001fe1002709f8,1
2. c2t50001FE1002709F8d2 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128> /pci@1f,4000/QLGC,qla@4/fp@0,0/ssd@w50001fe1002709f8,2
3. c2t50001FE1002709FCd1 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128> /pci@1f,4000/QLGC,qla@4/fp@0,0/ssd@w50001fe1002709fc,1
4. c2t50001FE1002709FCd2 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128> /pci@1f,4000/QLGC,qla@4/fp@0,0/ssd@w50001fe1002709fc,2
5. c3t50001FE1002709F9d1 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128> /pci@1f,4000/lpfc@5/fp@0,0/ssd@w50001fe1002709f9,1
6. c3t50001FE1002709F9d2 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128> /pci@1f,4000/lpfc@5/fp@0,0/ssd@w50001fe1002709f9,2
7. c3t50001FE1002709FDd1 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128> /pci@1f,4000/lpfc@5/fp@0,0/ssd@w50001fe1002709fd,1
8. c3t50001FE1002709FDd2 <HP-HSV400-0952 cyl 126 alt 2 hd 128 sec 128> /pci@1f,4000/lpfc@5/fp@0,0/ssd@w50001fe1002709fd,2
Specify disk (enter its number):
If you cannot access the virtual disks:
Verify the zoning.
For Oracle Solaris, verify that the correct WWPNs for the EVA (lpfc, qla2300 driver) have
been configured and the target assignment is matched in /kernel/drv/sd.conf (lpfc and qla2300 4.13.01).
Labeling and partitioning the devices
Label and partition the new devices using the Oracle format utility:
CAUTION: When selecting disk devices, be careful to select the correct disk because using the
label/partition commands on disks that have data can cause data loss.
1. Enter the format command at the root prompt to start the utility.
2. Verify that all new devices are displayed. If not, enter quit or press Ctrl+D to exit the format
utility and verify that the configuration is correct (see “Configuring virtual disks from the host”
(page 65)).
3. Record the character-type device file names (for example, c1t2d0) for all new disks.
You will use this data to create the file systems or to use the file system with the Solaris or Veritas Volume Manager.
4. When prompted to specify the disk, enter the number of the device to be labeled.
5. When prompted to label the disk, enter Y.
6. Because the virtual geometry of the presented volume varies with size, select autoconfigure
as the disk type.
7. If you are not using Veritas Volume Manager, use the partition command to create or
adjust the partitions.
8. For each new device, use the disk command to select another disk, and then repeat Step 1
through Step 5.
9. When you finish labeling the disks, enter quit or press Ctrl+D to exit the format utility.
For more information, see the System Administration Guide: Devices and File Systems for your operating system, available on the Oracle website:
http://www.oracle/com/technetwork/indexes/documentation/index.html
NOTE: Some format commands are not applicable to the EVA storage systems.
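After labeling and partitioning, you typically create and mount a file system on each new device unless the device will be placed under Veritas Volume Manager or another volume manager. The following UFS example is a sketch only; the device name is taken from the format output above, and the mount point /mnt/eva_vol1 is an assumption:
# newfs /dev/rdsk/c2t50001FE1002709F8d1s2
# mkdir /mnt/eva_vol1
# mount /dev/dsk/c2t50001FE1002709F8d1s2 /mnt/eva_vol1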
VMware
Installing or upgrading VMware
For installation instructions, see the VMware installation guide for your server. If you have already installed VMware, use the following procedure to patch or upgrade the system:
1. Extract the upgrade tarball on the system. A sample upgrade tarball name follows:
esx-n.n.n-14182-upgrade.tar.gz
2. Boot the system in Linux mode by selecting the Linux boot option from the boot menu selection window.
3. Extract the tar file and enter the following command:
upgrade.pl
4. Reboot the system using the default boot option (esx).
Configuring the EVA6400/8400 with VMware host servers
To configure an EVA6400/8400 on a VMware ESX server:
1. Using HP P6000 Command View, configure a host for one ESX server.
2. Verify that the Fibre Channel Adapters (FCAs) are populated in the world wide port name
(WWPN) list. Edit the WWPN, if necessary.
3. Set the connection type to VMware.
4. To configure additional ports for the ESX server:
a. Select a host (defined in Step 1). b. Select the Ports tab in the Host Properties window. c. Add additional ports for the ESX server.
5. Perform one of the following tasks to locate the WWPN:
From the service console, enter the wwpn.pl command.
Output similar to the following is displayed:
[root@gnome7 root]# wwpn.pl
vmhba0: 210000e08b09402b (QLogic) 6:1:0
vmhba1: 210000e08b0ace2d (QLogic) 6:2:0
[root@gnome7 root]#
Check the SCSI device information section of /proc/scsi/qla2300/X directory, where
X is a bus instance number.
Output similar to the following is displayed:
SCSI Device Information:
scsi-qla0-adapter-node=200000e08b0b0638;
scsi-qla0-adapter-port=210000e08b0b0638;
6. Repeat this procedure for each ESX server.
Configuring an ESX server
This section provides information about configuring the ESX server.
Loading the FCA NVRAM
The FCA stores configuration information in the non-volatile RAM (NVRAM) cache. You must download the configuration for HP Storage products.
Perform one of the following procedures to load the NVRAM:
If you have an HP ProLiant blade server:
1. Download the supported FCA BIOS update, available on http://www.hp.com/support/downloads, to a virtual floppy. For instructions on creating and using a virtual floppy, see the HP Integrated Lights-Out user guide.
2. Unzip the file.
3. Follow the instructions in the readme file to load the NVRAM configuration onto each FCA.
If you have a blade server other than a ProLiant blade server:
1. Download the supported FCA BIOS update, available on http://www.hp.com/support/downloads.
2. Unzip the file.
3. Follow the instructions in the readme file to load the NVRAM configuration onto each FCA.
Setting the multipathing policy
You can set the multipathing policy for each LUN or logical drive on the SAN to one of the following:
Most recently used (MRU)
Fixed
Preferred
ESX 2.5.x commands
The # vmkmultipath -s vmhba0:0:1 -p mru command sets vmhba0:0:1 with an
MRU multipathing policy for all LUNs on the SAN.
The # vmkmultipath -s vmhba1:0:1 -p fixed command sets vmhba1:0:1 with a
Fixed multipathing policy.
The # vmkmultipath -s vmhba1:0:1 -r vmhba2:0:1 -e vmhba2:0:1 command
sets and enables vmhba2:0:1 with a Preferred multipathing policy.
ESX 3.x commands
The # esxcfg-mpath --policy=mru --lun=vmhba0:0:1 command sets vmhba0:0:1
with an MRU multipathing policy.
The # esxcfg-mpath --policy=fixed --lun=vmhba0:0:1 command sets
vmhba0:0:1 with a Fixed multipathing policy.
The # esxcfg-mpath --preferred --path=vmhba2:0:1 --lun=vmhba2:0:1
command sets vmhba2:0:1 with a Preferred multipathing policy.
ESX 4.x commands
The # esxcli nmp device setpolicy --device
naa.6001438002a56f220001100000710000 --psp VMW_PSP_MRU command sets device naa.6001438002a56f220001100000710000 with an MRU multipathing policy.
The # esxcli nmp device setpolicy --device
naa.6001438002a56f220001100000710000 --psp VMW_PSP_FIXED command sets device naa.6001438002a56f220001100000710000 with a Fixed multipathing policy.
The # esxcli nmp fixed setpreferred --device
naa.6001438002a56f220001100000710000 --path vmhba1:C0:T2:L1 command
sets device naa.6001438002a56f220001100000710000 with a Preferred multipathing policy.
NOTE: Each LUN can be accessed through both EVA storage controllers at the same time;
however, each LUN path is optimized through one controller. To optimize performance, if the LUN multipathing policy is Fixed, all servers must use a path to the same controller.
You can also set the multipathing policy from the VMware Management User Interface (MUI) by clicking the Failover Paths tab in the Storage Management section and then selecting the Edit… link for each LUN whose policy you want to modify.
Specifying DiskMaxLUN
The DiskMaxLUN setting specifies the highest-numbered LUN that can be scanned by the ESX server.
For ESX 2.5.x, the default value is 8. If more than eight LUNs are presented, you must change
the setting to an appropriate value. To set DiskMaxLUN, select Options> Advanced Settings in the MUI, and then enter the highest-numbered LUN.
For ESX 3.x or ESX 4.x, the default value is the maximum value of 256. To set
DiskMaxLun to a different value, in Virtual Infrastructure Client, select Configuration > Advanced Settings > Disk > Disk.MaxLun, and then enter the new value.
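As an alternative illustration, the same value can typically be viewed and set from the ESX service console with the esxcfg-advcfg command (shown here as an example, not as the documented procedure):
# esxcfg-advcfg -g /Disk/MaxLUN
# esxcfg-advcfg -s 256 /Disk/MaxLUN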
Verifying connectivity
To verify proper configuration and connectivity to the SAN:
For ESX 2.5.x, enter the # vmkmultipath -q command.
For ESX 3.x, enter the # esxcfg-mpath -l command.
For ESX 4.x, enter the # esxcfg-mpath -b command.
For each LUN, verify that the multipathing policy is set correctly and that each path is marked on. If any paths are marked dead or are not listed, check the cable connections and perform a rescan on the appropriate FCA. For example:
For ESX 2.5.x, enter the # cos-rescan.sh vmhba0 command.
For ESX 3.x or ESX 4.x, enter the # esxcfg-rescan vmhba0 command.
If paths or LUNs are still missing, see the VMware or HP Storage documentation for troubleshooting information.
Verifying virtual disks from the host
To verify that the host can access the virtual disks, enter the # more /proc/scsi/scsi command. The output lists all SCSI devices detected by the server. An EVA6400/8400 LUN entry looks similar
to the following:
Host: scsi3 Channel: 00 ID: 00 Lun: 01
Vendor: HP Model: HSV400 Rev:
Type: Direct-Access ANSI SCSI revision: 02
Configuring raw device mapping
Raw Device Mapping is not supported with Windows 2003 Virtual Machine Guest Operating System on versions earlier than ESX 3.5.0.
To avoid issues with RDM LUNs, complete the following changes for the applicable ESX version.
For VMware ESX 3.0.1 or 3.0.2
Using the Virtual Infrastructure Client console, complete the following steps:
1. From the Configuration Tab, select Advanced Settings > Disk.
2. Add HSV3:HSV4 to the following setting so that it reads: HSV1:HSV2:DGC:MSA_VOLUME:HSV3:HSV4
3. Click OK to apply the changes.
Windows
Verifying virtual disk access from the host
With Windows, you must rescan for new virtual disks to be accessible. After you rescan, you must select the disk type, and then initialize (assign disk signature), partition, format, and assign drive letters or mount points according to standard Windows conventions.
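As an illustration, the rescan and partitioning steps can also be performed from the command line with the diskpart utility instead of Disk Management. The disk number (2) and drive letter (E) are assumptions; verify the correct disk with list disk before making changes:
C:\> diskpart
DISKPART> rescan
DISKPART> list disk
DISKPART> select disk 2
DISKPART> create partition primary
DISKPART> assign letter=E
DISKPART> exit
C:\> format E: /FS:NTFS /Q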
Setting the Pending Timeout value for large cluster configurations
For clusters, if disk resource counts are greater than 8, HP recommends that you increase the Pending Timeout value for each disk resource from 180 seconds to 360 seconds. Changing the Pending Timeout value will ensure continuous operation of disk resources across the SAN.
To set the Pending Timeout value:
1. Open Microsoft Cluster Administrator.
2. Select a Disk Group resource in the left pane.
3. Right-click a Disk Resource in the right pane and select Properties.
4. Click the Advanced tab.
5. Change the Pending Timeout value to 360.
6. Click OK.
7. Repeat steps 3-6 for each disk resource.
5 Customer replaceable units
Customer self repair (CSR)
Table 13 (page 73) and Table 19 (page 73) identify which hardware components are customer
replaceable. Using HP Insight Remote Support or other diagnostic tools, a support specialist will work with you to diagnose and assess whether a replacement component is required to address a system problem. The specialist will also help you determine whether you can perform the replacement.
Parts only warranty service
Your HP Limited Warranty may include a parts only warranty service. Under the terms of parts only warranty service, HP will provide replacement parts free of charge.
For parts only warranty service, CSR part replacement is mandatory. If you request HP to replace these parts, you will be charged for travel and labor costs.
Best practices for replacing hardware components
The following information will help you replace the hardware components on your storage system successfully.
CAUTION: Removing a component significantly changes the air flow within the enclosure. All
components must be installed for the enclosure to cool properly. If a component fails, leave it in place in the enclosure until a new component is available to install.
Component replacement videos
To assist you in replacing the components, videos of the procedures are available. To view the videos, go to the following website and navigate to your product:
http://www.hp.com/go/sml
Verifying component failure
• Consult HP technical support to verify that the hardware component has failed and that you are authorized to replace it yourself.
• Additional hardware failures can complicate component replacement. Check HP P6000 Command View and/or HP Insight Remote Support as follows to detect any additional hardware problems:
  • When you have confirmed that a component replacement is required, you may want to clear the Real Time Monitoring view. This makes it easier to identify additional hardware problems that may occur while waiting for the replacement part.
  • Before installing the replacement part, check the Real Time Monitoring view for any new hardware problems. If additional hardware problems have occurred, contact HP support before replacing the component.
See the HP Insight Remote Support documentation for additional information.
Identifying the spare part
Parts have a nine-character spare component number on their label (Figure 25 (page 73)). For some spare parts, the part number will be available in HP P6000 Command View. Alternatively, the HP call center will assist in identifying the correct spare part number.
Figure 25 Typical product label
1. Spare component number
Replaceable parts
This product contains the replaceable parts listed in Table 13 (page 73) and Table 19 (page 73). Parts that are available for customer self repair (CSR) are indicated as follows:
• Mandatory CSR where geography permits. Order the part directly from HP and repair the product yourself. On-site or return-to-depot repair is not provided under warranty.
• Optional CSR. You can order the part directly from HP and repair the product yourself, or you can request that HP repair the product. If you request repair from HP, you may be charged for the repair depending on the product warranty.
• No CSR (indicated by --). The replaceable part is not available for self repair. For assistance, contact an HP-authorized service provider.
Table 13 Controller enclosure replacement parts
Spare part number (non RoHS/RoHS) and description; -- indicates the part is not available for CSR.
512730-001   10 port controller, 4 GB total cache (HSV400)
512731-001   12 port controller, 7 GB total cache (HSV450)
512732-001   12 port controller, 11 GB total cache (HSV450)
512735-001   Array battery
489883-001   Array power supply
483017-001   Array fan module
508563-001   OCP module
512733-001   Memory board: cache line flush, 10 port   --
512734-001   Memory board: cache line flush, 12 port   --
Table 19 M6412-A disk enclosure replaceable parts
Spare part number (non RoHS/RoHS) and description:
461492-005   4 Gb FC disk shelf midplane
461493-005   4 Gb FC disk shelf backplane
399053-001   SPS-BD Front UID
399054-001   SPS-BD Power UID with cable
399055-001   SPS-BD Front UID Interconnect PCA with cable
461494-005   4 Gb FC disk shelf I/O module
468715-001   FC disk shelf fan module
405914-001   FC disk shelf power supply
537582-001   Disk drive 300 GB, 10K, EVA M6412-A Enclosure, Fibre Channel
518734-001   Disk drive 450 GB, 10K, EVA M6412-A Enclosure, Fibre Channel
518735-001   Disk drive 600 GB, 10K, EVA M6412-A Enclosure, Fibre Channel
454410-001   Disk drive 146 GB, 15K, EVA M6412-A Enclosure, Fibre Channel
454411-001   Disk drive 300 GB, 15K, EVA M6412-A Enclosure, Fibre Channel
466277-001   Disk drive 400 GB, 15K, EVA M6412-A Enclosure, Fibre Channel
454412-001   Disk drive 450 GB, 15K, EVA M6412-A Enclosure, Fibre Channel
495808-001   Disk drive 600 GB, 15K, EVA M6412-A Enclosure, Fibre Channel
454414-001   Disk drive 1 TB, 7.2K, EVA M6412-A Enclosure, FATA
515189-001   Disk drive 72 GB, EVA M6412-A Enclosure, SSD
595336-001   Disk drive 200 GB, EVA M6412-A Enclosure, SSD
595337-001   Disk drive 400 GB, EVA M6412-A Enclosure, SSD
432374-001   SPS-CABLE ASSY, 4Gb COPPER, FC, 2.0m
432375-001   SPS-CABLE ASSY, 4Gb COPPER, FC, 0.6m
496917-001   SPS-CABLE ASSY, 4Gb COPPER, FC, 0.41m
For more information about CSR, contact your local service provider. For North America, see the CSR website:
http://www.hp.com/go/selfrepair
To determine the warranty service provided for this product, see the warranty information website:
http://www.hp.com/go/storagewarranty
To order a replacement part, contact an HP-authorized service provider or see the HP Parts Store online:
http://www.hp.com/buy/parts
Replacing the failed component
CAUTION: Components can be damaged by electrostatic discharge. Use proper anti-static protection.
• Always transport and store CRUs in an ESD protective enclosure.
• Do not remove the CRU from the ESD protective enclosure until you are ready to install it.
• Always use ESD precautions, such as a wrist strap, heel straps on conductive flooring, and an ESD protective smock when handling ESD-sensitive equipment.
• Avoid touching the CRU connector pins, leads, or circuitry.
• Do not place ESD-generating material such as paper or non anti-static (pink) plastic in an ESD protective enclosure with ESD-sensitive equipment.
HP recommends waiting until periods of low storage system activity to replace a component.
When replacing components at the rear of the rack, cabling may obstruct access to the component. Carefully move any cables out of the way to avoid loosening any connections. In particular, avoid cable damage that may be caused by:
• Kinking or bending.
• Disconnecting cables without capping. If uncapped, cable performance may be impaired by contact with dust, metal, or other surfaces.
• Placing removed cables on the floor or other surfaces, where they may be walked on or otherwise compressed.
Replacement instructions
Printed instructions are shipped with the replacement part. Instructions for all replaceable components are also included on the documentation CD that ships with the EVA6400/8400 and are posted on the web. For the latest information, HP recommends that you obtain the instructions from the web.
Go to the following website: http://www.hp.com/support/manuals. Under Storage, select Disk Storage Systems, and then select HP 6400/8400 Enterprise Virtual Arrays under P6000/EVA Disk Arrays. The manuals page for the EVA6400/8400 appears. Scroll to the Service and maintenance information section, where the following replacement instructions are posted:
HP controller enclosure replacement instructions
HP cache battery replacement instructions
HP controller blower replacement instructions
HP power supply replacement instructions
HP operator control panel replacement instructions
HP disk enclosure backplane replacement instructions
HP disk enclosure fan module replacement instructions
HP disk enclosure front UID interconnect board (with cable) replacement instructions
HP disk enclosure front UID replacement instructions
HP disk enclosure I/O module replacement instructions
HP disk enclosure midplane replacement instructions
HP disk enclosure power supply replacement instructions
6 Support and other resources
Contacting HP
For worldwide technical support information, see the HP support website:
http://www.hp.com/support
Before contacting HP, collect the following information:
Product model names and numbers
Technical support registration number (if applicable)
Product serial numbers
Error messages
Operating system type and revision level
Detailed questions
Subscription service
HP recommends that you register your product at the Subscriber's Choice for Business website:
http://www.hp.com/go/e-updates
After registering, you will receive e-mail notification of product enhancements, new driver versions, firmware updates, and other product resources.
Documentation feedback
HP welcomes your feedback. To make comments and suggestions about product documentation, please send a message to
storagedocsFeedback@hp.com. All submissions become the property of HP.
Related information
Documents
You can find the documents referenced in this guide on the Manuals page of the Business Support Center website:
http://www.hp.com/support/manuals
In the Storage section, click Disk Storage Systems or Storage Software and then select your product.
HP websites
For additional information, see the following HP websites:
HP:
http://www.hp.com
HP Storage:
http://www.hp.com/go/storage
HP Partner Locator:
http://www.hp.com/service_locator
HP Software Downloads:
http://www.hp.com/support/downloads
HP Software Depot:
http://www.software.hp.com
HP Single Point of Connectivity Knowledge (SPOCK):
http://www.hp.com/storage/spock
HP SAN manuals:
http://www.hp.com/go/sdgmanuals
Typographic conventions
Table 20 Document conventions
Blue text (for example, Table 20 (page 77)): Cross-reference links and e-mail addresses
Blue, underlined text (for example, http://www.hp.com): Website addresses
Bold text: Keys that are pressed; text typed into a GUI element, such as a box; GUI elements that are clicked or selected, such as menu and list items, buttons, tabs, and check boxes
Italic text: Text emphasis
Monospace text: File and directory names; system output; code; commands, their arguments, and argument values
Monospace, italic text: Code variables; command variables
Monospace, bold text: Emphasized monospace text
. . . : Indication that the example continues
WARNING! An alert that calls attention to important information that if not understood or followed can result in personal injury.
CAUTION: An alert that calls attention to important information that if not understood or followed can result in data loss, data corruption, or damage to hardware or software.
IMPORTANT: An alert that calls attention to essential information.
NOTE: An alert that calls attention to additional or supplementary information.
TIP: An alert that calls attention to helpful hints and shortcuts.
Rack stability
Rack stability protects personnel and equipment.
WARNING! To reduce the risk of personal injury or damage to equipment:
• Extend leveling jacks to the floor.
• Ensure that the full weight of the rack rests on the leveling jacks.
• Install stabilizing feet on the rack.
• In multiple-rack installations, fasten racks together securely.
• Extend only one rack component at a time. Racks can become unstable if more than one component is extended.
Customer self repair
HP customer self repair (CSR) programs allow you to repair your product. If a CSR part needs replacing, HP ships the part directly to you so that you can install it at your convenience. Some parts do not qualify for CSR. Your HP-authorized service provider will determine whether a repair can be accomplished by CSR.
For more information about CSR, contact your local service provider, or see the CSR website:
http://www.hp.com/go/selfrepair
A Regulatory compliance notices
Regulatory compliance identification numbers
For the purpose of regulatory compliance certifications and identification, this product has been assigned a unique regulatory model number. The regulatory model number can be found on the product nameplate label, along with all required approval markings and information. When requesting compliance information for this product, always refer to this regulatory model number. The regulatory model number is not the marketing name or model number of the product.
Product specific information:
HP ________________
Regulatory model number: _____________
FCC and CISPR classification: _____________
These products contain laser components. See the Class 1 laser statement in the “Laser compliance notices” (page 83) section.
Federal Communications Commission notice
Part 15 of the Federal Communications Commission (FCC) Rules and Regulations has established Radio Frequency (RF) emission limits to provide an interference-free radio frequency spectrum. Many electronic devices, including computers, generate RF energy incidental to their intended function and are, therefore, covered by these rules. These rules place computers and related peripheral devices into two classes, A and B, depending upon their intended installation. Class A devices are those that may reasonably be expected to be installed in a business or commercial environment. Class B devices are those that may reasonably be expected to be installed in a residential environment (for example, personal computers). The FCC requires devices in both classes to bear a label indicating the interference potential of the device as well as additional operating instructions for the user.
FCC rating label
The FCC rating label on the device shows the classification (A or B) of the equipment. Class B devices have an FCC logo or ID on the label. Class A devices do not have an FCC logo or ID on the label. After you determine the class of the device, refer to the corresponding statement.
Class A equipment
This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to Part 15 of the FCC rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference, in which case the user will be required to correct the interference at personal expense.
Class B equipment
This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation. If this equipment does cause harmful interference to radio or television reception, which can be determined by turning the equipment
off and on, the user is encouraged to try to correct the interference by one or more of the following measures:
• Reorient or relocate the receiving antenna.
• Increase the separation between the equipment and receiver.
• Connect the equipment into an outlet on a circuit that is different from that to which the receiver is connected.
• Consult the dealer or an experienced radio or television technician for help.
Declaration of Conformity for products marked with the FCC logo, United States only
This device complies with Part 15 of the FCC Rules. Operation is subject to the following two conditions: (1) this device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation.
For questions regarding this FCC declaration, contact us by mail or telephone:
Hewlett-Packard Company
P.O. Box 692000, Mail Stop 510101
Houston, Texas 77269-2000
Or call 1-281-514-3333
Modification
The FCC requires the user to be notified that any changes or modifications made to this device that are not expressly approved by Hewlett-Packard Company may void the user's authority to operate the equipment.
Cables
When provided, connections to this device must be made with shielded cables with metallic RFI/EMI connector hoods in order to maintain compliance with FCC Rules and Regulations.
Canadian notice (Avis Canadien)
Class A equipment
This Class A digital apparatus meets all requirements of the Canadian Interference-Causing Equipment Regulations.
Cet appareil numérique de la class A respecte toutes les exigences du Règlement sur le matériel brouilleur du Canada.
Class B equipment
This Class B digital apparatus meets all requirements of the Canadian Interference-Causing Equipment Regulations.
Cet appareil numérique de la class B respecte toutes les exigences du Règlement sur le matériel brouilleur du Canada.
European Union notice
This product complies with the following EU directives:
Low Voltage Directive 2006/95/EC
EMC Directive 2004/108/EC
Compliance with these directives implies conformity to applicable harmonized European standards (European Norms) which are listed on the EU Declaration of Conformity issued by Hewlett-Packard for this product or product family.
This compliance is indicated by the following conformity marking placed on the product:
This marking is valid for non-Telecom products and EU harmonized Telecom products (e.g., Bluetooth).
Certificates can be obtained from http://www.hp.com/go/certificates. Hewlett-Packard GmbH, HQ-TRE, Herrenberger Strasse 140, 71034 Boeblingen, Germany
Japanese notices
Japanese VCCI-A notice
Japanese VCCI-B notice
Japanese VCCI marking
Japanese power cord statement
Korean notices
Class A equipment
Class B equipment
Taiwanese notices
BSMI Class A notice
Taiwan battery recycle statement
Turkish recycling notice
Türkiye Cumhuriyeti: EEE Yönetmeliğine Uygundur
Vietnamese Information Technology and Communications compliance marking
Laser compliance notices
English laser notice
This device may contain a laser that is classified as a Class 1 Laser Product in accordance with U.S. FDA regulations and the IEC 60825-1. The product does not emit hazardous laser radiation.
WARNING! Use of controls or adjustments or performance of procedures other than those
specified herein or in the laser product's installation guide may result in hazardous radiation exposure. To reduce the risk of exposure to hazardous radiation:
Do not try to open the module enclosure. There are no user-serviceable components inside.
Do not operate controls, make adjustments, or perform procedures to the laser device other
than those specified herein.
Allow only HP Authorized Service technicians to repair the unit.
The Center for Devices and Radiological Health (CDRH) of the U.S. Food and Drug Administration implemented regulations for laser products on August 2, 1976. These regulations apply to laser products manufactured from August 1, 1976. Compliance is mandatory for products marketed in the United States.
Dutch laser notice
French laser notice
German laser notice
Italian laser notice
Japanese laser notice
Spanish laser notice
Recycling notices
English recycling notice
Disposal of waste equipment by users in private household in the European Union
This symbol means do not dispose of your product with your other household waste. Instead, you should protect human health and the environment by handing over your waste equipment to a designated collection point for the recycling of waste electrical and electronic equipment. For more information, please contact your household waste disposal service.
Bulgarian recycling notice
Изхвърляне на отпадъчно оборудване от потребители в частни домакинства в Европейския съюз
Този символ върху продукта или опаковката му показва, че продуктът не трябва да се изхвърля заедно с другите битови отпадъци. Вместо това, трябва да предпазите човешкото здраве и околната среда, като предадете отпадъчното оборудване в предназначен за събирането му пункт за рециклиране на неизползваемо електрическо и електронно оборудване. За допълнителна информация се свържете с фирмата по чистота, чиито услуги използвате.
Czech recycling notice
Likvidace zařízení v domácnostech v Evropské unii
Tento symbol znamená, že nesmíte tento produkt likvidovat spolu s jiným domovním odpadem. Místo toho byste měli chránit lidské zdraví a životní prostředí tím, že jej předáte na k tomu určené sběrné pracoviště, kde se zabývají recyklací elektrického a elektronického vybavení. Pro více informací kontaktujte společnost zabývající se sběrem a svozem domovního odpadu.
Danish recycling notice
Bortskaffelse af brugt udstyr hos brugere i private hjem i EU
Dette symbol betyder, at produktet ikke må bortskaffes sammen med andet husholdningsaffald. Du skal i stedet beskytte den menneskelige sundhed og miljøet ved at aflevere dit brugte udstyr på et dertil beregnet indsamlingssted for genbrug af brugt, elektrisk og elektronisk udstyr. Kontakt nærmeste renovationsafdeling for yderligere oplysninger.
Dutch recycling notice
Inzameling van afgedankte apparatuur van particuliere huishoudens in de Europese Unie
Dit symbool betekent dat het product niet mag worden gedeponeerd bij het overige huishoudelijke afval. Bescherm de gezondheid en het milieu door afgedankte apparatuur in te leveren bij een hiervoor bestemd inzamelpunt voor recycling van afgedankte elektrische en elektronische apparatuur. Neem voor meer informatie contact op met uw gemeentereinigingsdienst.
Estonian recycling notice
Äravisatavate seadmete likvideerimine Euroopa Liidu eramajapidamistes
See märk näitab, et seadet ei tohi visata olmeprügi hulka. Inimeste tervise ja keskkonna säästmise nimel tuleb äravisatav toode tuua elektriliste ja elektrooniliste seadmete käitlemisega egelevasse kogumispunkti. Küsimuste korral pöörduge kohaliku prügikäitlusettevõtte poole.
Finnish recycling notice
Kotitalousjätteiden hävittäminen Euroopan unionin alueella
Tämä symboli merkitsee, että laitetta ei saa hävittää muiden kotitalousjätteiden mukana. Sen sijaan sinun on suojattava ihmisten terveyttä ja ympäristöä toimittamalla käytöstä poistettu laite sähkö- tai elektroniikkajätteen kierrätyspisteeseen. Lisätietoja saat jätehuoltoyhtiöltä.
French recycling notice
Mise au rebut d'équipement par les utilisateurs privés dans l'Union Européenne
Ce symbole indique que vous ne devez pas jeter votre produit avec les ordures ménagères. Il est de votre responsabilité de protéger la santé et l'environnement et de vous débarrasser de votre équipement en le remettant à une déchetterie effectuant le recyclage des équipements électriques et électroniques. Pour de plus amples informations, prenez contact avec votre service d'élimination des ordures ménagères.
German recycling notice
Entsorgung von Altgeräten von Benutzern in privaten Haushalten in der EU
Dieses Symbol besagt, dass dieses Produkt nicht mit dem Haushaltsmüll entsorgt werden darf. Zum Schutze der Gesundheit und der Umwelt sollten Sie stattdessen Ihre Altgeräte zur Entsorgung einer dafür vorgesehenen Recyclingstelle für elektrische und elektronische Geräte übergeben. Weitere Informationen erhalten Sie von Ihrem Entsorgungsunternehmen für Hausmüll.
Greek recycling notice
Απόρριψη άχρηστου εξοπλισμού από ιδιώτες χρήστες στην Ευρωπαϊκή Ένωση
Αυτό το σύμβολο σημαίνει ότι δεν πρέπει να απορρίψετε το προϊόν με τα λοιπά οικιακά απορρίμματα. Αντίθετα, πρέπει να προστατέψετε την ανθρώπινη υγεία και το περιβάλλον παραδίδοντας τον άχρηστο εξοπλισμό σας σε εξουσιοδοτημένο σημείο συλλογής για την ανακύκλωση άχρηστου ηλεκτρικού και ηλεκτρονικού εξοπλισμού. Για περισσότερες πληροφορίες, επικοινωνήστε με την υπηρεσία απόρριψης απορριμμάτων της περιοχής σας.
Hungarian recycling notice
A hulladék anyagok megsemmisítése az Európai Unió háztartásaiban
Ez a szimbólum azt jelzi, hogy a készüléket nem szabad a háztartási hulladékkal együtt kidobni. Ehelyett a leselejtezett berendezéseknek az elektromos vagy elektronikus hulladék átvételére kijelölt helyen történő beszolgáltatásával megóvja az emberi egészséget és a környezetet. További információt a helyi köztisztasági vállalattól kaphat.
Italian recycling notice
Smaltimento di apparecchiature usate da parte di utenti privati nell'Unione Europea
Questo simbolo avvisa di non smaltire il prodotto con i normali rifiuti domestici. Rispettare la salute umana e l'ambiente conferendo l'apparecchiatura dismessa a un centro di raccolta designato per il riciclo di apparecchiature elettroniche ed elettriche. Per ulteriori informazioni, rivolgersi al servizio per lo smaltimento dei rifiuti domestici.
Lithuanian recycling notice
Europos Sąjungos namų ūkio vartotojų įrangos atliekų šalinimas
Šis simbolis nurodo, kad gaminio negalima išmesti kartu su kitomis buitinėmis atliekomis. Kad apsaugotumėte žmonių sveikatą ir aplinką, pasenusią nenaudojamą įrangą turite nuvežti į elektrinių ir elektroninių atliekų surinkimo punktą. Daugiau informacijos teiraukitės buitinių atliekų surinkimo tarnybos.
Latvian recycling notice
Nolietotu iekārtu iznīcināšanas noteikumi lietotājiem Eiropas Savienības privātajās mājsaimniecībās
Šis simbols norāda, ka ierīci nedrīkst utilizēt kopā ar citiem mājsaimniecības atkritumiem. Jums jārūpējas par cilvēku veselības un vides aizsardzību, nododot lietoto aprīkojumu otrreizējai pārstrādei īpašā lietotu elektrisko un elektronisko ierīču savākšanas punktā. Lai iegūtu plašāku informāciju, lūdzu, sazinieties ar savu mājsaimniecības atkritumu likvidēšanas dienestu.
Polish recycling notice
Utylizacja zużytego sprzętu przez użytkowników w prywatnych gospodarstwach domowych w krajach Unii Europejskiej
Ten symbol oznacza, że nie wolno wyrzucać produktu wraz z innymi domowymi odpadkami. Obowiązkiem użytkownika jest ochrona zdrowa ludzkiego i środowiska przez przekazanie zużytego sprzętu do wyznaczonego punktu zajmującego się recyklingiem odpadów powstałych ze sprzętu elektrycznego i elektronicznego. Więcej informacji można uzyskać od lokalnej firmy zajmującej wywozem nieczystości.
Portuguese recycling notice
Descarte de equipamentos usados por utilizadores domésticos na União Europeia
Este símbolo indica que não deve descartar o seu produto juntamente com os outros lixos domiciliares. Ao invés disso, deve proteger a saúde humana e o meio ambiente levando o seu equipamento para descarte em um ponto de recolha destinado à reciclagem de resíduos de equipamentos eléctricos e electrónicos. Para obter mais informações, contacte o seu serviço de tratamento de resíduos domésticos.
Romanian recycling notice
Casarea echipamentului uzat de către utilizatorii casnici din Uniunea Europeană
Acest simbol înseamnă să nu se arunce produsul cu alte deşeuri menajere. În schimb, trebuie să protejaţi sănătatea umană şi mediul predând echipamentul uzat la un punct de colectare desemnat pentru reciclarea echipamentelor electrice şi electronice uzate. Pentru informaţii suplimentare, vă rugăm să contactaţi serviciul de eliminare a deşeurilor menajere local.
Slovak recycling notice
Likvidácia vyradených zariadení používateľmi v domácnostiach v Európskej únii
Tento symbol znamená, že tento produkt sa nemá likvidovať s ostatným domovým odpadom. Namiesto toho by ste mali chrániť ľudské zdravie a životné prostredie odovzdaním odpadového zariadenia na zbernom mieste, ktoré je určené na recykláciu odpadových elektrických a elektronických zariadení. Ďalšie informácie získate od spoločnosti zaoberajúcej sa likvidáciou domového odpadu.
Spanish recycling notice
Eliminación de los equipos que ya no se utilizan en entornos domésticos de la Unión Europea
Este símbolo indica que este producto no debe eliminarse con los residuos domésticos. En lugar de ello, debe evitar causar daños a la salud de las personas y al medio ambiente llevando los equipos que no utilice a un punto de recogida designado para el reciclaje de equipos eléctricos y electrónicos que ya no se utilizan. Para obtener más información, póngase en contacto con el servicio de recogida de residuos domésticos.
Swedish recycling notice
Hantering av elektroniskt avfall för hemanvändare inom EU
Den här symbolen innebär att du inte ska kasta din produkt i hushållsavfallet. Värna i stället om natur och miljö genom att lämna in uttjänt utrustning på anvisad insamlingsplats. Allt elektriskt och elektroniskt avfall går sedan vidare till återvinning. Kontakta ditt återvinningsföretag för mer information.
Battery replacement notices
Dutch battery notice
French battery notice
German battery notice
Italian battery notice
Japanese battery notice
Spanish battery notice
B Error messages
This list of error messages is in order by status code value, 0 to xxx.
Table 21 Error Messages
For each status code value and name, the entry gives the meaning of the message and how to correct it.
0 Successful Status
Meaning: The SCMI command completed successfully.
How to correct: No corrective action required.
1 Object Already Exists
Meaning: The object or relationship already exists.
How to correct: Delete the associated object and try the operation again. Several situations can cause this message:
• Presenting a LUN to a host: Delete the current association or specify a different LUN number.
• Storage cell initialize: Remove or erase disk volumes before the storage cell can be successfully created.
• Adding a port WWN to a host: Specify a different port WWN.
• Adding a disk to a disk group: Delete the specified disk volume before creating a new disk volume.
2 Supplied Buffer Too Small
Meaning: The command or response buffer is not large enough to hold the specified number of items. This can be caused by a user or program error.
How to correct: Report the error to product support.
3 Object Already Assigned
Meaning: The handle is already assigned to an existing object. This can be caused by a user or program error.
How to correct: Report the error to product support.
4 Insufficient Available Data Storage
Meaning: There is insufficient storage available to perform the request.
How to correct: Reclaim some logical space or add physical hardware.
5 Internal Error
Meaning: An unexpected condition was encountered while processing a request.
How to correct: Report the error to product support.
6 Invalid status for logical disk
Meaning: This error is no longer supported.
How to correct: Report the error to product support.
7 Invalid Class
Meaning: The supplied class code is of an unknown type. This can be caused by a user or program error.
How to correct: Report the error to product support.
8 Invalid Function
Meaning: The function code specified with the class code is of an unknown type.
How to correct: Report the error to product support.
9 Invalid Logical Disk Block State
Meaning: The specified command supplied unrecognized values. This can indicate a user or program error.
How to correct: Report the error to product support.
10 Invalid Loop Configuration
Meaning: The specified request supplied an invalid loop configuration.
How to correct: Verify the hardware configuration and retry the request.
11 Invalid parameter
Meaning: There are insufficient resources to fulfill the request, the requested value is not supported, or the parameters supplied are invalid. This can indicate a user or program error.
How to correct: Report the error to product support.
12 Invalid Parameter handle
Meaning: The supplied handle is invalid. This can indicate a user error, program error, or a storage cell in an uninitialized state. In the following cases, the storage cell is in an uninitialized state, but no action is required:
• Storage cell discard (informational message)
• Storage cell look up object count (informational message)
• Storage cell look up object (informational message)
How to correct: In the following cases, the message can occur because the operation is not allowed when the storage cell is in an uninitialized state. If you see these messages, initialize the storage cell and retry the operation:
• Storage cell set device addition policy
• Storage cell set name
• Storage cell set time
• Storage cell set volume replacement delay
• Storage cell free command lock
• Storage cell set console lun id
13 Invalid Parameter Id
Meaning: The supplied identifier is invalid. This can indicate a user or program error.
How to correct: Report the error to product support.
14 Invalid Quorum Configuration
Meaning: Quorum disks from multiple storage systems are present.
How to correct: Report the error to product support.
15 Invalid Target Handle
Meaning: Case 1: The supplied target handle is invalid. This can indicate a user or program error. Case 2 (Volume set requested usage): The operation could not be completed because the disk has never belonged to a disk group and therefore cannot be added to a disk group.
How to correct: Case 1: Report the error to product support. Case 2: To add additional capacity to the disk group, use the management software to add disks by count or capacity.
16 Invalid Target Id
Meaning: The supplied target identifier is invalid. This can indicate a user or program error.
How to correct: Report the error to product support.
17 Invalid Time
Meaning: The time value specified is invalid. This can indicate a user or program error.
How to correct: Report the error to product support.
18 Media is Inaccessible
Meaning: The operation could not be completed because one or more of the disk media was inaccessible.
How to correct: Report the error to product support.
19 No Fibre Channel Port
Meaning: The Fibre Channel port specified is not valid. This can indicate a user or program error.
How to correct: Report the error to product support.
20 No Image
Meaning: There is no firmware image stored for the specified image number.
How to correct: Report the error to product support.
21 No Permission
Meaning: The disk device is not in a state to allow the specified operation.
How to correct: The disk device must be in either maintenance mode or in a reserved state for the specified operation to proceed.
22 Storage system not initialized
Meaning: The operation requires a storage cell to exist.
How to correct: Create a storage cell and retry the operation.
23 Not a Loop Port
Meaning: The Fibre Channel port specified is either not a loop port or is invalid. This can indicate a user or program error.
How to correct: Report the error to product support.
24 Not a Participating Controller
Meaning: The controller must be participating in the storage cell to perform the operation.
How to correct: Verify that the controller is a participating member of the storage cell.
25 Objects in your system are in use, and their state prevents the operation you wish to perform.
Meaning: Several states can cause this message:
Case 1: The operation cannot be performed because an association exists with a related object, or the object is in an in-progress state.
Case 2: Derived unit create: The supplied virtual disk handle is already an attribute of another derived unit. This may indicate a programming error.
Case 3: Derived unit discard: One or more LUNs are presented to EVA hosts that are based on this virtual disk.
Case 4: Logical disk clear data lost: The virtual disk is in the non-mirrored delay window.
Case 5: LDAD discard: The operation cannot be performed because one or more virtual disks still exist, the disk group may still be recovering its capacity, or this is the last disk group that exists.
Case 6: LDAD resolve condition: The disk group contains a disk volume that is in a data-lost state. This condition cannot be resolved.
Case 7: Physical Store erase volume: The disk is a part of a disk group and cannot be erased.
Case 8: Storage cell discard: The storage cell contains one or more virtual disks or LUN presentations.
Case 9: Storage cell client discard: The EVA host contains one or more LUN presentations.
Case 10: SCVD discard: The virtual disk contains one or more derived units and cannot be discarded. This may indicate a programming error.
Case 11: SCVD set capacity: The capacity cannot be modified because the virtual disk has a dependency on either a snapshot or snapclone.
Case 12: SCVD set disk cache policy: The virtual disk cache policy cannot be modified while the virtual disk is presented and enabled.
Case 13: SCVD set logical disk: The logical disk attribute is already set, or the supplied logical disk is already a member of another virtual disk.
Case 14: VOLUME set requested usage: The disk volume is already a member of a disk group or is in the state of being removed from a disk group.
Case 15: GROUP discard: The Continuous Access group cannot be discarded as one or more virtual disk members exist.
How to correct:
Case 1: Either delete the associated object or resolve the in-progress state.
Case 2: Report the error to product support.
Case 3: Unpresent the LUNs before deleting this virtual disk.
Case 4: Resolve the delay before performing the operation.
Case 5: Delete any remaining virtual disks or wait for the used capacity to reach zero before the disk group can be deleted. If this is the last remaining disk group, uninitialize the storage cell to remove it.
Case 6: Report the error to product support.
Case 7: The disk must be in a reserved state before it can be erased.
Case 8: Delete the virtual disks or LUN presentations before uninitializing the storage cell.
Case 9: Delete the LUN presentations before deleting the EVA host.
Case 10: Report the error to product support.
Case 11: Resolve the situation before attempting the operation again.
Case 12: Resolve the situation before attempting the operation again.
Case 13: This may indicate a programming error. Report the error to product support.
Case 14: Select another disk or remove the disk from the disk group before making it a member of a different disk group.
Case 15: Remove the virtual disks from the group and retry the operation.
26 Parameter Object Does Not Exist
Meaning: The operation cannot be performed because the object does not exist. This can indicate a user or program error. VOLUME set requested usage: The disk volume set requested usage cannot be performed because the disk group does not exist. This can indicate a user or program error.
How to correct: Report the error to product support.
27 Target Object Does Not Exist
Meaning:
Case 1: The operation cannot be performed because the object does not exist. This can indicate a user or program error.
Case 2: DERIVED UNIT discard: The operation cannot be performed because the virtual disk, snapshot, or snapclone does not exist or is still being created.
Case 3: VOLUME set requested usage: The operation cannot be performed because the target disk volume does not exist. This can indicate a user or program error.
Case 4: GROUP get name: The operation cannot be performed because the Continuous Access group does not exist. This can indicate a user or program error.
How to correct:
Case 1: Report the error to product support.
Case 2: Retry the request at a later time.
Case 3: Report the error to product support.
Case 4: Report the error to product support.
28 Timeout
Meaning: A timeout has occurred in processing the request.
How to correct: Verify the hardware connections and that communication to the device is successful.
29 Unknown Id
Meaning: The supplied storage cell identifier is invalid. This can indicate a user or program error.
How to correct: Report the error to product support.
30 Unknown Parameter Handle
Meaning: The supplied parameter handle is unknown. This can indicate a user or program error.
How to correct: Report the error to product support.
31 Unrecoverable Media Error
Meaning: The operation could not be completed because one or more of the disk media had an unrecoverable error.
How to correct: Report the error to product support.
32 Invalid State
Meaning: This error is no longer supported.
How to correct: Report the error to product support.
33 Transport Error
Meaning: A SCMI transport error has occurred.
How to correct: Verify the hardware connections, communication to the device, and that the management software is operating successfully.
34 Volume is Missing
Meaning: The operation could not be completed because the drive volume is in a missing state.
How to correct: Resolve the condition and retry the request. Report the error to product support.
35 Invalid Cursor
Meaning: The supplied cursor or sequence number is invalid. This may indicate a user or program error.
How to correct: Report the error to product support.
36 Invalid Target for the Operation
Meaning: The specified target logical disk already has an existing data sharing relationship. This can indicate a user or program error.
How to correct: Report the error to product support.
37 No More Events
Meaning: There are no more events to retrieve. (This message is informational only.)
How to correct: No action required.
38 Lock Busy
Meaning: The command lock is busy and being held by another process.
How to correct: Retry the request at a later time.
39 Time Not Set
Meaning: The storage system time is not set. The storage system time is set automatically by the management software.
How to correct: Report the error to product support.
40 Not a Supported Version
Meaning: The requested operation is not supported by this firmware version. This can indicate a user or program error.
How to correct: Report the error to product support.
41 No Logical Disk for Vdisk
Meaning: The specified SCVD does not have a logical disk associated with it. This can indicate a user or program error.
How to correct: Report the error to product support.
42 Logical disk Presented
Meaning: The virtual disk specified is already presented to the client and the requested operation is not allowed.
How to correct: Delete the associated presentation(s) and retry the request.
43 Operation Denied On Slave
Meaning: The request is not allowed on the slave controller. This can indicate a user or program error.
How to correct: Report the error to product support.
44 Not licensed for data replication
Meaning: This error is no longer supported.
How to correct: Report the error to product support.
45 Not DR group member
Meaning: The operation cannot be performed because the virtual disk is not a member of a Continuous Access group.
How to correct: Configure the virtual disk to be a member of a Continuous Access group and retry the request.
46 Invalid DR mode
Meaning: The operation cannot be performed because the Continuous Access group is not in the required mode.
How to correct: Configure the Continuous Access group correctly and retry the request.
47 The target DR member is in full copy, operation rejected
Meaning: The operation cannot be performed because at least one of the virtual disk members is in a copying state.
How to correct: Wait for the copying state to complete and retry the request.
48 Security credentials needed. Please update your system's ID and password in the Storage System Access menu.
Meaning: The management software is unable to log in to the storage system. The storage system password has been configured.
How to correct: Use the management software to save the password specified so communication can proceed.
49 Security credentials supplied were invalid. Please update your system's ID and password in the Storage System Access menu.
Meaning: The management software is unable to log in to the device. The storage system password may have been re-configured or removed.
How to correct: Use the management software to set the password to match the device so communication can proceed.
50 Security credentials supplied were invalid. Please update your system's ID and password in the Storage System Access menu.
Meaning: The management software is already logged in to the device. (This message is informational only.)
How to correct: No action required.
51 Storage system connection down
Meaning: The Continuous Access group is not functioning.
How to correct: Verify that devices are powered on and that device hardware connections are functioning correctly.
52 DR group empty
Meaning: No virtual disks are members of the Continuous Access group.
How to correct: Add one or more virtual disks as members and retry the request.
53 Incompatible attribute
Meaning: The request cannot be performed because one or more of the attributes specified is incompatible.
How to correct: Retry the request with valid attributes for the operation.
54 Vdisk is a DR group member
Meaning: The requested operation cannot be performed on a virtual disk that is already a member of a data replication group.
How to correct: Remove the virtual disk as a member of a data replication group and retry the request.
55 Vdisk is a DR log unit
Meaning: The requested operation cannot be performed on a virtual disk that is a log unit.
How to correct: No action required.
56 Cache batteries failed or missing.
Meaning: The battery system is missing or discharged.
How to correct: Report the error to product support.
57 Vdisk is not presented
Meaning: The virtual disk member is not presented to a client.
How to correct: The virtual disk member must be presented to a client before this operation can be performed.
58 Other controller failed
Meaning: Invalid status for logical disk. This error is no longer supported.
How to correct: Report the error to product support.
59 Maximum Number of Objects Exceeded.
Meaning:
Case 1: The maximum number of items allowed has been reached.
Case 2: The maximum number of EVA hosts has been reached.
Case 3: The maximum number of port WWNs has been reached.
How to correct:
Case 1: If this operation is still desired, delete one or more of the items and retry the operation.
Case 2: If this operation is still desired, delete one or more of the EVA hosts and retry the operation.
Case 3: If this operation is still desired, delete one or more of the port WWNs and retry the operation.
60 Max size exceeded
Meaning:
Case 1: The maximum number of items already exist on the destination storage cell.
Case 2: The size specified exceeds the maximum size allowed.
Case 3: The presented user space exceeds the maximum size allowed.
Case 4: The presented user space exceeds the maximum size allowed.
Case 5: The size specified exceeds the maximum size allowed.
Case 6: The maximum number of EVA hosts already exist on the destination storage cell.
Case 7: The maximum number of EVA hosts already exist on the destination storage cell.
Case 8: The maximum number of Continuous Access groups already exist.
How to correct:
Case 1: If this operation is still desired, delete one or more of the items on the destination storage cell and retry the operation.
Case 2: Use a smaller size and retry the operation.
Case 3: No action required.
Case 4: No action required.
Case 5: Use a smaller size and try this operation again.
Case 6: If this operation is still desired, delete one or more of the EVA hosts and retry the operation.
Case 7: If this operation is still desired, delete one or more of the virtual disks on the destination storage cell and retry the operation.
Case 8: If this operation is still desired, delete one or more of the groups and retry the operation.
61 Password mismatch. Please update your system's password in the Storage System Access menu. Continued attempts to access this storage system with an incorrect password will disable management of this storage system.
Meaning: The login password entered on the controllers does not match.
How to correct: Reconfigure one of the storage system controller passwords, then use the management software to set the password to match the device so communication can proceed.
62 DR group is merging
Meaning: The operation cannot be performed because the Continuous Access connection is currently merging.
How to correct: Wait for the merge operation to complete and retry the request.
63 DR group is logging
Meaning: The operation cannot be performed because the Continuous Access connection is currently logging.
How to correct: Wait for the logging operation to complete and retry the request.
64 Connection is suspended
Meaning: The operation cannot be performed because the Continuous Access connection is currently suspended.
How to correct: Resolve the suspended mode and retry the request.
65 Bad image header
Meaning: The firmware image file has a header checksum error.
How to correct: Retrieve a valid firmware image file and retry the request.
66 Bad image
Meaning: The firmware image file has a checksum error.
How to correct: Retrieve a valid firmware image file and retry the request.
67 Image too large
Meaning: The firmware image file is too large.
How to correct: Retrieve a valid firmware image file and retry the request.
70 Image incompatible with system configuration. Version conflict in upgrade or downgrade not allowed.
Meaning: The firmware image file is incompatible with the current firmware.
How to correct: Retrieve a valid firmware image file and retry the request.
71 Bad image segment
Meaning: The firmware image download process has failed because of a corrupted image segment.
How to correct: Verify that the firmware image is not corrupted and retry the firmware download process.
72 Image already loaded
Meaning: The firmware version already exists on the device.
How to correct: No action required.
73 Image Write Error
Meaning: The firmware image download process has failed because of a failed write operation.
How to correct: Verify that the firmware image is not corrupted and retry the firmware download process.
74 Logical Disk Sharing
Meaning:
Case 1: The operation cannot be performed because the virtual disk or snapshot is part of a snapshot group.
Case 2: The operation may be prevented because a snapclone or snapshot operation is in progress. If a snapclone operation is in progress, the parent virtual disk should be discarded automatically after the operation completes. If the parent virtual disk has snapshots, then you must delete the snapshots before the parent virtual disk can be deleted.
Case 3: The operation cannot be performed because either the previous snapclone operation is still in progress, or the virtual disk is already part of a snapshot group.
Case 4: A capacity change is not allowed on a virtual disk or snapshot that is a part of a snapshot group.
Case 5: The operation cannot be performed because the virtual disk or snapshot is a part of a snapshot group.
How to correct:
Case 1: No action required.
Case 2: No action required.
Case 3: If a snapclone operation is in progress, wait until the snapclone operation has completed and retry the operation. Otherwise, the operation cannot be performed on this virtual disk.
Case 4: No action required.
Case 5: No action required.
75 Bad Image Size
Meaning: The firmware image file is not the correct size.
How to correct: Retrieve a valid firmware image file and retry the request.
76 The controller is temporarily busy and it cannot process the request. Retry the request later.
Meaning: The controller is currently processing a firmware download.
How to correct: Retry the request once the firmware download process is complete.
77 Volume Failure Predicted
Meaning: The disk volume specified is in a predictive failed state.
How to correct: Report the error to product support.