HP X5000 G2 Network Storage System Administrator Guide
Abstract
This document explains how to install, configure, and maintain all models of the HP X5000 G2 Network Storage System and is intended for system administrators. For the latest version of this guide, go to www.hp.com/support/manuals. Select NAS Systems in the storage group, and then select an X5000 G2 product.
HP Part Number: QW919-96035 Published: July 2013 Edition: 4
© Copyright 2011, 2012 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Acknowledgments
Microsoft®, Windows®, and Windows Server® are registered trademarks of Microsoft Corporation in the United States and other countries.
Revision History
First edition (software version 2.01.0a, November 2011): First release.
Second edition (software version 2.02.0a, March 2012): Corrected network addresses for iLO connections; clarified that network connections to both server blades are required during initial configuration; added details to the Known Issues section; corrected minimum supported Insight Remote Support software version; added details about the 1210m Online Volume Tool; revised content in the System recovery chapter; added SHOW CSR commands for EMU CLI and corrected instructions for using the Enclosure Manager physical interface.
Third edition (software version 2.03.0a, May 2012): Added details to the Known issues section; documented new Storage Viewer feature; documented Alerts and Email enhancements; revised instructions for using the Enclosure Manager physical interface; corrected syntax for SHOW SERVER BOOT and POWEROFF SERVER Enclosure Manager commands.
Contents
1 HP X5000 G2 Network Storage System........................................................7
Features..................................................................................................................................7
Hardware components..............................................................................................................7
Software components................................................................................................................8
2 Installing the storage system.........................................................................9
Verify the kit contents................................................................................................................9
Locate and record the product number, serial number, and SAID number.........................................9
Unpack and rack the storage system hardware.............................................................................9
Cable disk enclosures.............................................................................................................10
Network connections..............................................................................................................12
Power on the storage system.....................................................................................................14
Configure the EMU and iLO management processors..................................................................15
3 Configuring the storage system...................................................................19
Accessing the storage system...................................................................................................19
Set up Windows and discover the second node..........................................................................19
Completing initial configuration................................................................................................20
Networking options................................................................................................................21
Network teaming...............................................................................................................21
Multi-home........................................................................................................................21
Dedicated networks...........................................................................................................21
10 GbE versus 1 GbE........................................................................................................21
4 Monitoring and troubleshooting the storage system.......................................22
Using notification alerts...........................................................................................................22
Configuring Alerts and Email...................................................................................................22
HP System Management Homepage.........................................................................................26
Starting the System Management Homepage application........................................................26
System Management Homepage main page.........................................................................26
Using the System Manager......................................................................................................30
Component LEDs....................................................................................................................34
EMU CLI SHOW commands....................................................................................................42
Known issues.........................................................................................................................43
Using Storage Viewer..............................................................................................................47
HP Support websites...............................................................................................................49
HP Insight Remote Support software..........................................................................................50
Microsoft Systems Center Operations Manager...........................................................................51
Windows Recovery Environment ..............................................................................................51
Startup Repair...................................................................................................................51
Memory Diagnostic............................................................................................................53
HP 1210m Volume Online Tool.................................................................................................53
Obtaining the Service Agreement ID.........................................................................................54
Locating the storage system warranty entitlement label.................................................................54
5 Upgrading the storage system....................................................................55
Maintaining your storage system...............................................................................................55
Determining the current storage system software version...............................................................55
Upgrading X5000 G2 software...............................................................................................56
Upgrading a component's firmware..........................................................................................56
Resolving errors after the HP 1210m controller upgrade...........................................................59
Resolving errors after a disk drive firmware upgrade...............................................................59
Resolving an EMU upgrade issue.........................................................................................60
Upgrading hardware components.............................................................................................60
Powering the storage system off and on.....................................................................................60
6 Removing and replacing hardware components............................................61
Customer self repair................................................................................................................61
Best practices for replacing components....................................................................................61
During replacement of the failed component..........................................................................61
Accessing component replacement videos.............................................................................61
Identifying the spare part....................................................................................................62
Replaceable parts...................................................................................................................62
Hot, warm, and cold swap components.....................................................................................65
Preventing electrostatic discharge..............................................................................................65
Verifying component failure......................................................................................................66
Verifying proper operation.......................................................................................................66
Wait times for hard disks.........................................................................................................66
Removing the system enclosure from the rack..............................................................................67
Inserting the system enclosure into the rack.................................................................................68
Removing and replacing the server interposer board...................................................................68
Removing and replacing the midplane board.............................................................................70
Removing and replacing a SAS cable .......................................................................................73
Removing and replacing the SAS I/O module............................................................................73
Removing and replacing the fan module....................................................................................75
Removing and replacing the power UID button assembly.............................................................76
Removing and replacing the power supply.................................................................................77
Removing and replacing the HP Ethernet I/O module..................................................................78
Removing and replacing the PCIe module (with card)..................................................................79
Removing and replacing the EMU module.................................................................................81
Removing and replacing the server blade backplane...................................................................82
Removing and replacing the server airflow baffle........................................................................84
Removing and replacing the front bezel (standard)......................................................................85
Removing and replacing the front bezel (full)..............................................................................87
Removing and replacing the front LED display board in the rack (standard)....................................88
Removing and replacing the front LED display board (full)............................................................89
Removing and replacing a drive drawer....................................................................................91
Removing and replacing the drive drawer hard drive...................................................................96
Removing and replacing the drive drawer rails (side or bottom)....................................................98
Removing and replacing the enclosure rails..............................................................................103
Removing and replacing the rack rails.....................................................................................108
Removing and replacing server blades....................................................................................108
Removing and replacing the server blade hard drive.................................................................109
Removing and replacing the 1210m controller board components...............................................111
Removing and replacing the 1210m cache module...............................................................113
Removing and replacing the capacitor pack........................................................................116
Removing and replacing the Mezzanine NIC...........................................................................118
7 Storage system recovery..........................................................................120
System Recovery DVD...........................................................................................................120
Using a downloaded version of the System Recovery DVD.....................................................120
Drive letters are not assigned after a restore........................................................................121
Restoring the factory image with a DVD or USB flash device.......................................................121
Using a USB flash drive for storage system recovery..................................................................121
Recovering both servers.........................................................................................................122
Recovering a single server.....................................................................................................122
Restoring the system with Windows Recovery Environment..........................................................125
8 Support and other resources....................................................................128
Contacting HP......................................................................................................................128
HP technical support........................................................................................................128
Subscription service..........................................................................................................128
Related information...............................................................................................................128
HP websites....................................................................................................................128
Rack stability........................................................................................................................129
9 Documentation feedback.........................................................................130
A Managing the EMU................................................................................131
CLI reference........................................................................................................................131
Command line conventions....................................................................................................131
Operational groups..............................................................................................................131
Authentication......................................................................................................................132
Time functions......................................................................................................................135
Inventory and status..............................................................................................................138
Internet control.....................................................................................................................143
Server management..............................................................................................................146
Enclosure control..................................................................................................................149
Forensic...............................................................................................................................153
Session...............................................................................................................................155
Using the Enclosure Manager physical interface.......................................................................157
Activate Button Menu............................................................................................................158
Reboot EM (bE)....................................................................................................................158
Restore Factory Defaults (Fd)..................................................................................................158
Recover Lost Password (Fp).....................................................................................................159
Set DHCP IP Address (dH).....................................................................................................159
Set Link Local IP Address (LL)..................................................................................................159
Display Current IP Address (IP)...............................................................................................159
Exit Button Menu..................................................................................................................160
B Regulatory compliance notices.................................................................161
Regulatory compliance identification numbers..........................................................................161
Federal Communications Commission notice............................................................................161
FCC rating label..............................................................................................................161
Class A equipment......................................................................................................161
Class B equipment......................................................................................................161
Modification...................................................................................................................162
Cables...........................................................................................................................162
Canadian notice (Avis Canadien)...........................................................................................162
Class A equipment...........................................................................................................162
Class B equipment...........................................................................................................162
European Union notice..........................................................................................................162
Japanese notices..................................................................................................................163
Japanese VCCI-A notice....................................................................................................163
Japanese VCCI-B notice....................................................................................................163
Japanese VCCI marking...................................................................................................163
Japanese power cord statement.........................................................................................163
Korean notices.....................................................................................................................163
Class A equipment...........................................................................................................163
Class B equipment...........................................................................................................163
Taiwanese notices.................................................................................................................164
BSMI Class A notice.........................................................................................................164
Taiwan battery recycle statement........................................................................................164
Vietnamese notice............................................................................................................164
Laser compliance notices.......................................................................................................165
English laser notice..........................................................................................................165
Dutch laser notice............................................................................................................165
French laser notice...........................................................................................................165
German laser notice.........................................................................................................166
Italian laser notice............................................................................................................166
Japanese laser notice.......................................................................................................166
Spanish laser notice.........................................................................................................167
Recycling notices..................................................................................................................167
English recycling notice....................................................................................................167
Bulgarian recycling notice.................................................................................................168
Czech recycling notice......................................................................................................168
Danish recycling notice.....................................................................................................168
Dutch recycling notice.......................................................................................................168
Estonian recycling notice...................................................................................................169
Finnish recycling notice.....................................................................................................169
French recycling notice.....................................................................................................169
German recycling notice...................................................................................................169
Greek recycling notice......................................................................................................170
Hungarian recycling notice...............................................................................................170
Italian recycling notice......................................................................................................170
Latvian recycling notice.....................................................................................................170
Lithuanian recycling notice................................................................................................171
Polish recycling notice.......................................................................................................171
Portuguese recycling notice...............................................................................................171
Romanian recycling notice................................................................................................171
Slovak recycling notice.....................................................................................................172
Spanish recycling notice...................................................................................................172
Swedish recycling notice...................................................................................................172
Turkish recycling notice.....................................................................................................172
Battery replacement notices...................................................................................................173
Dutch battery notice.........................................................................................................173
French battery notice........................................................................................................173
German battery notice......................................................................................................174
Italian battery notice........................................................................................................174
Japanese battery notice....................................................................................................175
Spanish battery notice......................................................................................................175
Glossary..................................................................................................176
Index.......................................................................................................177
1 HP X5000 G2 Network Storage System
The HP X5000 G2 Network Storage System (“storage system”) is an integrated hardware-software solution that provides highly available file and block storage on a Windows failover cluster. Each storage system features HP server blades and dense disk storage in a single 3U enclosure (Figure 1 (page 7)).
Features
The HP X5000 G2 Network Storage System provides the following advantages:
•	Each system ships from the factory with preintegrated hardware and preloaded software, significantly reducing the time and complexity of deploying clusters.
•	Built on the HP converged application platform, which combines two server blades and a dense storage drawer into a single enclosure.
•	Lower overall TCO with a reduced footprint and lower energy consumption.
•	Specially developed setup tools (setup wizards) provide guided setup assistance, performing many of the complex and time-consuming tasks needed to configure and deploy a high-availability storage system. The setup tools make it easy to get both Windows and a two-node cluster configured and running quickly.
•	HP and Microsoft management integration, including Microsoft Server Manager and System Center, and HP System Insight Manager and Integrated Lights-Out (iLO).
For more information about X5000 G2 Network Storage System features, go to:
http://www.hp.com/go/X5000-G2
Hardware components
Figure 1 (page 7) and Figure 2 (page 8) show front and rear views of the storage system.
Figure 1 Front view
1. Disk drawer
2. Server blade 1, OS drives
3. Server blade 1, Bay 1
4. Server blade 2, Bay 2
5. Server blade 2, OS drives
6. Chassis fault LED
Figure 2 Rear view
1. System fan
2. HP 2-port 10 Gb I/O module (2). These modules connect to the NIC located on the server blade motherboard.
3. Intraconnect (internal switch connecting servers and EMU)
4. Drive fan
5. SAS I/O module (2)
6. Power button
7. Power supply (2)
8. HP 4-port, 1 Gb Ethernet I/O PCIe module (2)
9. HP 2-port, 1 Gb Ethernet I/O module (connects to the mezzanine NIC in each server blade)
10. Management port for iLO (servers 1 and 2), and Enclosure Manager Unit (EMU)
Software components
Windows Storage Server 2008 R2 SP1 comes preinstalled and activated on the HP X5000 G2 Network Storage System. The operating system software contains the Microsoft iSCSI Software Target and a Microsoft Cluster Service license. The storage system configuration also includes the HP Initial Configuration Tasks window and HP Server Manager, which are used to set up and manage your storage system.
The Initial Configuration Tasks window assists during the initial out of box setup by configuring the network, configuring two nodes from a single node, and deploying the cluster. Use HP Server Manager to further customize the storage system, such as managing volumes and spare drives.
To provide ongoing monitoring and facilitate management, the storage system includes the System Manager, which provides a snapshot view of the health and status of the storage system and tools to manage firmware updates.
2 Installing the storage system
This chapter explains how to install the storage system hardware.
Verify the kit contents
Remove the contents, ensuring that you have all of the following components. If components are missing, contact HP technical support.
Hardware
•	HP X5000 G2 Network Storage System

	NOTE: External disk enclosures are not included with the storage system, but up to four D2600 or D2700 disk enclosures may be connected to the storage system.

•	Rail kit
•	Power cords
Media and documentation
•	HP X5000 G2 Network Storage System Quick Start Guide
•	HP ProLiant Essentials Integrated Lights-Out Advanced Pack
•	End User License Agreement
•	HP X5000 G2 System Recovery DVD
•	Certificate of Authenticity Card
•	Safety and Disposal Documentation CD
Locate and record the product number, serial number, and SAID number
Before you begin installation, locate and record the storage system's product number, serial number, and support contract service agreement ID (SAID) number.
The product number and serial number are located in three places:
•	On top of the storage system
•	On the back of the storage system, on a pull-out tab
•	On the storage system shipping box
The SAID number is listed on your service contract agreement (see “Obtaining the Service Agreement ID” (page 54)).
Unpack and rack the storage system hardware
WARNING! The storage system enclosure is heavy. Always use at least two people to move the
storage system into the rack.
1. If your storage system is delivered in a rack, proceed to Step 2. If you ordered the storage system without the rack, install the rail kit and enclosure in the rack using the installation instructions that are included with the rail kit.
IMPORTANT: Ensure that cabling in the back of the rack system does not interfere with
system operation or maintenance. Bind cables loosely with cable ties and route the excess out of the way, along the side of the rack, to keep system components and indicators visible and accessible.
Figure 3 Storage system installed in a rack
1. Storage system enclosure
2–5. Disk enclosures (optional)
6–7. Cable connection, with no bend radius smaller than 5 cm
2. If you purchased disk enclosures, rack and cable the disk enclosures before moving to the next step.
3. Cable the storage system to your network and attach the power cords. See “Rear view”
(page 8) for connecting the power cables.
Cable disk enclosures
The following figures show the correct cabling of disk enclosures to the storage system chassis. Numbers represent the order of attachment. Figure 4 (page 11) shows an HP X5000 G2 Network Storage System with two disk enclosures.
NOTE: Up to four HP D2600 or HP D2700 disk enclosures are supported. A mix of HP D2600
or HP D2700 disk enclosures is not supported.
Figure 4 X5000 G2 with two disk enclosures
1. X5000 G2
2–3. Disk enclosures
4. SAS cable connecting disk enclosure 1 (green cable)
5. Green color code for upper SAS I/O module
6. Red color code for lower SAS I/O module
7. SAS cable connecting disk enclosure 2 (red cable)
Figure 5 (page 12) shows an X5000 G2 Network Storage System with four disk enclosures.
Figure 5 X5000 G2 with four disk enclosures
1. X5000 G2
2–5. Disk enclosures
6. SAS cable connecting disk enclosure 1 (green cable)
7. Green color code for upper SAS I/O module
8. Red color code for lower SAS I/O module
9. SAS cable connecting disk enclosure 2 (red cable)
Network connections
Each of the two servers has eight network adapters. One of the adapters, Cluster Internal, is already connected to the corresponding adapter on the second node. This is done through an internal switch located in the Mezz B slot in the rear of the enclosure (5, Figure 6).
Figure 6 Network ports
1. 10 GbE Public 1 (Blade 1)
2. 10 GbE Public 1 (Blade 2)
3. 10 GbE Public 2 (Blade 1)
4. 10 GbE Public 2 (Blade 2)
5. Cluster Internal
6. Enclosure Manager, iLO (Blades 1 and 2)
7. Server Management (Blade 1)
8. Server Management (Blade 2)
9. 1 GbE Public 4 (Blade 1)
10. 1 GbE Public 3 (Blade 1)
11. 1 GbE Public 2 (Blade 1)
12. 1 GbE Public 1 (Blade 1)
13. 1 GbE Public 4 (Blade 2)
14. 1 GbE Public 3 (Blade 2)
15. 1 GbE Public 2 (Blade 2)
16. 1 GbE Public 1 (Blade 2)
Because the two Cluster Internal adapters are connected, they are automatically assigned an IPv4 link-local address from the address block 169.254.0.0/16. This network will be used in a later step for configuration of the second node from the first node, and it also is used as a private cluster heartbeat network when the cluster is deployed. HP recommends that you do not make changes to the configuration of the Cluster Internal network adapter.
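For example, you can confirm the automatic assignment by running ipconfig on either node. The specific address within 169.254.0.0/16 will vary, so 169.254.x.x below is a placeholder (output abbreviated):

C:\> ipconfig

Ethernet adapter Cluster Internal:

   Autoconfiguration IPv4 Address. . : 169.254.x.x
   Subnet Mask . . . . . . . . . . . : 255.255.0.0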
The remaining network adapters are intended for use in your network infrastructure. Each adapter is labeled according to a suggested use (for example, 1 GbE Public 1), but you may rename the adapters in later configuration steps and use them in a way best suited to your environment.
In the network infrastructure that connects the cluster nodes, avoid having single points of failure. One way to do this is to have at least two distinct networks. The HP X5000 G2 already provides one network between the nodes—the Cluster Internal network. You must add at least one more network. As you connect the HP X5000 G2 to your network infrastructure, consider the following requirements:
•	Since deploying the cluster requires that both servers be joined to an Active Directory domain, you must have a route to the domain controller from each server on the storage system.
•	Servers in a cluster must use DNS for name resolution, so you must have a route to a DNS server from each server on the storage system. (A quick way to verify both routes is shown in the example following this list.)
•	If you are adding more than one adapter per server to your network infrastructure, each adapter should be on a different subnet.
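For example, you can verify the DNS and domain controller routes from a command prompt on each server; example.com and dc1 below are placeholders for your own domain and domain controller names:

C:\> nslookup example.com
C:\> ping dc1.example.com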
Figure 7 (page 14) shows two possibilities for adding network cables for an additional network.
Figure 7 Cabling an additional network
1. Connect 10 GbE Public 1 (Blade 1) and 10 GbE Public 2 (Blade 2) to the same subnet in
your network infrastructure. Note that adapters were chosen on different pass-through modules. This prevents the pass-through module from becoming a single point of failure for the connection between the two nodes.
or
2. Connect 1 GbE Public 4 (Blade 1) and 1 GbE Public 4 (Blade 2) to the same subnet in your
network infrastructure.
In later configuration steps you can configure the adapters you have connected to your network. If you have connected to a DHCP-enabled network, no further configuration is necessary. Otherwise, you must assign static addresses to the adapters. You may also want to rename the adapters to reflect their use in your environment. Also note that these are only two examples out of many networking possibilities. NIC teaming may also be used. It is not necessary to make all these decisions now, because you can always add more networks after the system has been deployed.
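If you prefer the command line, an adapter can be renamed and given a static address with netsh; the adapter name, new name, address, mask, and gateway below are placeholders for your own values:

C:\> netsh interface set interface name="1 GbE Public 4" newname="Data-Net"
C:\> netsh interface ip set address name="Data-Net" static 192.168.10.21 255.255.255.0 192.168.10.1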
The Enclosure Manager and iLO port (6, Figure 6 (page 13)) provides for a connection from your network infrastructure to the Enclosure Manager Unit (EMU) and to the iLO on each blade. For ease of setup, the EMU and each iLO processor have been assigned static IP addresses in the factory. You use these addresses to make an initial connection, and then configure each to connect to your network. The factory configured addresses are as follows:
Table 1 Factory configured EMU and iLO addresses
Component       IP address      Subnet mask
EMU             10.0.0.10       255.255.255.0
Server 1 iLO    10.0.0.11       255.255.255.0
Server 2 iLO    10.0.0.12       255.255.255.0
“Configure the EMU and iLO management processors” (page 15) describes how you can directly
connect a laptop or other local system to reconfigure these addresses.
Power on the storage system
1. Power on disk enclosures, if any.
2. Power on the storage system by pushing the power button on the back of the chassis. Once the storage system power is on, power on the server blades if they do not automatically
power on.
Configure the EMU and iLO management processors
Before configuring the management processors, verify the following:
•	You have determined whether the network ports on the server are to use DHCP or static addresses. If the network ports are to use static addresses, you must provide the addresses.
•	For this step, the EMU port should not be connected to a switch. You can connect the EMU port to a switch after the EMU and iLO NICs are configured.
Configure the EMU and iLO management processors for both servers as follows:
1. Connect a system (the configuration system) in the environment or a laptop to the EMU port (Figure 8 (page 15)). You can use either a crossover or a regular Ethernet cable.
Figure 8 EMU NIC port connection
2. Configure the networking properties for the local system:
   a. Open Control Panel, select Network Sharing Center or Network Connections, and navigate to Local Area Connections.
   b. Select Properties→Internet Protocol, and then select Properties.
   c. If Use the following IP address: is selected, record values for the following items and restore them after completing the EMU and iLO setup:
      •	IP address
      •	Subnet mask
      •	Default gateway
   d. Enter the following values:
      •	IP address: 10.0.0.20
      •	Subnet mask: 255.255.255.0
   e. Before continuing, ping the following IP addresses to test connectivity to the EMU and the iLO located in each of the servers: 10.0.0.10, 10.0.0.11, and 10.0.0.12. The EMU and iLO interfaces have been assigned IP addresses during factory setup. You must either update the factory values with site-specific static IP addresses or configure the management processors to use DHCP IP addressing.
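NOTE: As an alternative to the Control Panel steps in step 2, the same settings can be applied from an elevated command prompt; Local Area Connection is assumed to be the name of the configuration system's wired connection:

C:\> netsh interface ip set address name="Local Area Connection" static 10.0.0.20 255.255.255.0
C:\> ping 10.0.0.10
C:\> ping 10.0.0.11
C:\> ping 10.0.0.12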
3. Configure iLO on the server blades:
   a. Open a web browser and log in to iLO using the address http://10.0.0.11. You are prompted to enter the user name and password. The password for the Administrator account is located on a pull-out tag on the front of the server blade.
      After you have logged in to iLO, HP recommends that you change the administrator password. To do so, select User Administration under Administration in the iLO management interface.
   b. Configure the network as required for your environment. Select Network under Administration in the iLO management interface. You can either enable DHCP or edit the IP address details and enter site-specific network settings. Click Apply to save your settings.
   c. Repeat the process on the other server blade. Open a web browser and log in to iLO using the address http://10.0.0.12.
4. Configure the EMU:
   a. Connect to the Enclosure Manager software using an SSH-compatible tool such as PuTTY. In the PuTTY session basic options, enter the EMU IP address (10.0.0.10) and port (22), and select SSH for the connection type (Figure 9 (page 17)).

NOTE: See “Managing the EMU” (page 131) for information on using CLI commands.

Figure 9 Connecting to the Enclosure Manager software

   b. After you have connected to the EMU port, set the following attributes:
      •	EMU (DNS) name
      •	Rack name
      •	EMU password (located on the tear-away label on the back of the server blade; see Figure 10 (page 17))
      •	IP addressing method
         To change the static IP address, type the command set ipconfig static at the command line prompt and follow the instructions.
         To change the EMU addressing to DHCP, type set ipconfig dhcp at the command line prompt.
Figure 10 Tear-away label location
Example 1 Setting attributes
CustomerEMU-dnsName> set em name CustomerEMU-dnsName
CSP Enclosure Manager name changed to CustomerEMU-dnsName.
CustomerEMU-dnsName> set rack name CustomerRackName
Changed rack name to "CustomerRackName".
CustomerEMU-dnsName> set password
New Password: ********
Confirm : ********
Changed password for the "Administrator" user account.
CustomerEMU-dnsName>
NOTE: You will not be able to connect to iLO or the EMU from the configuration system until
you change the network settings on the configuration system.
5. Complete the configuration:
   a. Connect the EMU port to the appropriate switch/VLAN/subnet.
   b. Log in to the EMU using SSH and the newly assigned EMU name, and validate connectivity. It is assumed that the EMU name is in the DNS.
Example 2 Verifying connectivity
CustomerEMU-dnsName> show server list all
Bay  iLO Name        iLO IP Address  Status  Power  UID
---  --------------  --------------  ------  -----  ---
1    ILOMXQ0110FJ9   16.78.90.51     OK      On     Off
2    ILOMXQ0110FHU   16.78.90.113    OK      On     Off
Totals: 2 server blades installed, 2 powered on.
3 Configuring the storage system
This chapter explains the out of box experience that occurs when you first power on the storage system. This includes setup tasks, such as the selection of language and regional settings for the OS, network configuration, time zone, provisioning storage required for the cluster, and deploying the two-node cluster. All configuration may be done from a single server. There is no need to log on to the second server.
Accessing the storage system
For initial configuration of the storage system, you must have console access to one of the server blades. You can use either a local I/O diagnostic (SUV) cable or an iLO connection. The iLO connection is the preferred method because it allows for remote access. If you are using the direct connect method, connect the supplied SUV cable to the front of the storage system server blades in the following sequence: keyboard, mouse, monitor cable, and monitor power cable. Regardless of which access method you use, perform the configuration from only one of the server blades. The server blade you choose for configuration will be designated the first node, and the other server blade will be designated the second node.
Figure 11 Keyboard, mouse, and monitor
1. Storage system enclosure
2. Monitor
3. Keyboard (USB)
4. Mouse (USB)
NOTE: The keyboard, mouse, and monitor are not provided with the storage system.
For remote access, open a web browser and enter the iLO name or IP address for a server blade located in either bay. Log in using the iLO administrator name and newly created password for that blade.
For instructions on using iLO, see the Integrated Lights-Out user guide available from http://www.hp.com/go/ilo. On the iLO web page, select More iLO Documentation.
Set up Windows and discover the second node
When the storage system starts, the servers will begin a first time setup procedure that takes approximately 10 to 15 minutes, including the Set Up Windows wizard. Use only one node to complete the setup procedure.
In the Set Up Windows wizard, you are asked to choose a language, regional settings, and keyboard layout. After you accept the EULA, the server you are connected to attempts to discover the second server. This is done over the internal switch (5, Figure 6 (page 13)). If the second node is not ready, you may see a message stating Cannot establish communication with the second node. Click Retry to attempt discovery, and repeat the retry until the second node is discovered. After the second node is discovered, there will be a few more installation steps that occur automatically on each server, and then both servers will reboot.
NOTE: If you click Cancel instead of Retry, you must access the second node from iLO or a direct
(SUV) connection and manually perform the Set Up Windows wizard on the second node. Because the discovery process has not completed, there will also be an extra step later to establish a connection between the two nodes. You will find instructions for this, if needed, in the online help of the Initial Configuration Tasks (ICT).
Completing initial configuration
After the servers reboot, continue the configuration using the first node. A default administrator password (HPinvent!) has been set and this is used to log on automatically. Leave this administrator password unchanged until you are prompted for a new password in a later configuration step. After logon, the HP ICT window is launched automatically.
Figure 12 ICT window
Use the HP ICT to perform setup tasks in the order they appear. See the online help provided for each group of tasks for more information. After completing the “Provide cluster name and domain” task, both nodes will reboot. After allowing time for the reboot, log on once again to the first node. This time, rather than logging on as the local Administrator, log on using the domain account that was specified in the “Provide cluster name and domain” task. You may now complete the remaining tasks, which include creation of the two-node cluster. The final task is an optional
step to deploy one or more file servers on the cluster. You may also wait and create file servers later using Server Manager.
The ICT is intended for initial setup, so once it is complete, you may select the Do not show this window at the next logon box. If you do want to launch the ICT at a later time, you may do so from Server Manager or by typing oobe from a Windows command prompt.
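For example, to reopen the ICT from a command prompt:

C:\> oobe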
When the HP ICT window is closed, Server Manager is launched automatically. Use Server Manager for further customization of the storage system, such as adding roles and features, and share and storage management. See the Getting Started node in the navigation tree of Server Manager for more help on using the storage system.
NOTE: Although BitLocker is supported by the Windows operating system, it is not supported on
the X5000 G2 Network Storage System because BitLocker is not supported on clustered volumes. For more information, see the following Microsoft article:
http://support.microsoft.com/kb/947302
If encryption is required, the Encrypting File System (EFS) is supported on clustered volumes. For more information on EFS, see the following Microsoft article:
http://support.microsoft.com/kb/223316
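As an illustration, EFS can be enabled on a folder and its contents with the built-in cipher utility; the path below is a placeholder for one of your own data volumes:

C:\> cipher /e /s:E:\Shares\Finance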
Networking options
The large number of network adapters on each server in the X5000 G2 provides a number of different options for networking. The network adapter named "Cluster Internal" is pre-configured as a private cluster heartbeat and should be left as is, but all other adapters are available for use. Use the guidelines below as an aid in making configuration choices.
Network teaming
Network teaming is a common use of multiple network adapters. Teaming is used to increase available network bandwidth and provide fault tolerance. Teaming can be across multiple ports in the same network adapter or across network adapters.
Multi-home
Distributing network workload across multiple network adapters is also commonly used. Placing each network interface on a different subnet allows the workload on each subnet to be serviced in parallel rather than through a single interface.
Dedicated networks
Implementing a unified storage solution requires that different protocols be used to access a storage system. In one instance, a block protocol like iSCSI is used to present storage to a virtual machine host. At the same time, a file protocol like SMB is used for sharing files for department or user home directories. A dedicated storage network for each protocol allows the network traffic to be kept separate to maximize performance. Similarly, one network interface can be used for system management and monitoring while another interface can be used for data traffic.
10 GbE versus 1 GbE
Other than the obvious difference in speed, 10 GbE provides an order of magnitude lower latency. Lower latency is ideal for transactional database applications and virtualization. Combining a dedicated 10 GbE storage network for a virtual machine infrastructure with a 1 GbE network for shared folders takes best advantage of the network offerings. A classic example is thin clients whose resources are hosted on virtual machines (for example, the Citrix model).
4 Monitoring and troubleshooting the storage system
The storage system provides several monitoring and troubleshooting options. You can use the following tools to monitor system health and troubleshoot issues:
•	Notification alerts
•	System Management Homepage (SMH)
•	System Manager
•	Hardware component LEDs
•	EMU CLI SHOW commands
•	HP and Microsoft support websites
•	HP Insight Remote Support software
•	Microsoft Systems Center Operations Manager (SCOM) and Microsoft websites
•	HP SIM 6.3 or later, which is required for proper storage system/HP SIM integration
NOTE: Integration with HP SIM is only supported using the WBEM/WMI interfaces. Do not
attempt to configure HP SIM to use the ProLiant SNMP agents, because the configuration is untested and unsupported. The ProLiant SNMP agents are enabled on the storage system by default and should not be disabled as they are used for internal management functions. If they are enabled for external client consumption, HP SIM must be configured so it does not attempt to communicate with these agents.
NOTE: WBEM events for storage are logged into Windows Application logs, and WBEM events for Server and Enclosure are logged into Windows System logs.

If you are unable to resolve a storage system operation issue after using the various options, contact HP Support. You must provide your SAID and your warranty and entitlement labels. See “Obtaining the Service Agreement ID” (page 54) and “Locating the storage system warranty entitlement label” (page 54).
Using notification alerts
When you receive an alert, open the System Manager (described in “Using the System Manager”
(page 30)) to view a high-level description of the issue. You may then choose to open the System
Management Homepage or HP SIM to obtain more detailed information.
IMPORTANT: While the notification alerts report issues as they arise, it is still important to monitor
the storage system regularly to ensure optimal operation.
Configuring Alerts and Email
Configure Alerts and Email in the System Manager to send email notification of system events.
IMPORTANT: HP recommends that you configure Alerts and Email (and also install HP Insight
Remote Support) to ensure that you are proactively alerted to issues. Proactive notification enables you to address issues before they become serious problems.
To create an alert for a recipient:
1. Open the Server Manager by clicking the icon located to the right of the Start button on the Windows taskbar.
2. Expand the tree under System Manager.
3. In the tree, select Alerts and Email.
Figure 13 Configuring Alerts and Email
4. Do one of the following:
   •	Select New to create a profile.
   •	Select Copy or Edit to modify an existing profile.
   The Alert Settings window appears.
Figure 14 Alert and Email settings
5. Complete the following fields:
   •	Name—Enter the name of a recipient (for example, John Doe).
   •	Recipient address—Enter the email address of the recipient (for example, John.Doe@company.com).
   •	From address—Enter an email address that will display to the recipient indicating where the message originated. It can be the same as the recipient address, if desired.
   •	SMTP address—Enter a valid SMTP address (for example, SMTP.company.com).
   •	Alerts Severity—Select the severity for which you want to receive alerts. You will also receive alerts for any severity higher than the one you select. Select All to receive alerts for all severities.
   •	Components Alerts—Select the components for which you want to receive alerts, or select All to receive alerts for all components.
6. To test the ability for the recipient to receive email alerts, click Send Test Email. If the recipient receives the test email, no further action is required. If the test email is not received, check that the information entered for the recipient is correct.
Figure 15 Send test email
7. Click Save. The name of the recipient is displayed on the main Alerts and Email window.
To configure the SNMP settings:
1. In the Server Manager navigation pane, select System and Network Settings.
2. Select SNMP Settings in the lower-right pane.
3. Provide the contact and location information for the System Administrator, and then click OK.
4. To make SNMP visible externally:
   a. Select Start→Administrative Tools→Services.
   b. Select SNMP Service.
   c. Right-click and select Properties to display the SNMP Service properties.
   d. Select the Security tab and specify the following items:
      •	The external hosts that may use the SNMP protocol.
      •	The SNMP Community string. HP recommends that you use something other than the typical ‘Public’ string.
IMPORTANT: Configure HP SIM security to prevent the SIM management server from
gaining access to SNMP.
The SNMP trap function in the storage system is enabled by default. Any SNMP client (on localhost) listening on the default trap port (162) can receive traps. You can configure the destination IP address in the snmp.xml configuration file located at \Program Files\HPWBEM\Tools\snmp.xml.
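To confirm that the SNMP service is running and listening on the standard agent port (161), you can, for example, run the following from a command prompt (output abbreviated):

C:\> sc query SNMP
C:\> netstat -an -p udp | findstr :161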
HP System Management Homepage
The HP System Management Homepage (SMH) is a web-based interface that consolidates and simplifies single system management for HP servers. The SMH is the primary tool for identifying and troubleshooting hardware issues in the storage system. You may choose this option to diagnose a suspected hardware problem. Go to the SMH main page and open the Overall System Health Status and the Component Status Summary sections to review the status of the storage system hardware.
By aggregating the data from HP web-based agents and management utilities, the SMH provides a common, easy-to-use interface for displaying the following information:
•	Hardware fault and status monitoring
•	System thresholds
•	Diagnostics
•	Software and firmware version control for an individual server
•	HP Storage 1210m firmware information
The SMH Help menu provides documentation for using, maintaining, and troubleshooting the application. For more information about the SMH software, go to www.hp.com/support/manuals and enter System Management Homepage in the Search box. Select HP System Management Homepage Software. A list of documents and advisories is displayed. To view SMH user guides, select User Guide.
Starting the System Management Homepage application
To start the application, double-click the HP System Management Homepage desktop shortcut or enter https://hostname:2381/ in Internet Explorer. The hostname can be localhost or the IP address of the server you want to monitor. To log into SMH, enter the same username and password you use to log in to the server. Users who have administrative privileges on the server have the same privileges in the SMH application.
To view the SMH of one server from another server, you must modify the Windows firewall settings as follows:
1. Open the Control Panel and select System Security→Windows Firewall→Allowed Programs.
2. Select Allow another program and click Browse in the Add a Program dialog box.
3. Navigate to C:\hp\hpsmh\bin and select hpsmhd. Click Open and then click Add. HP System Management Homepage displays in the Allowed Programs and Features window.
4. Select Home/work (Private) and Public and click OK.
5. To access the SMH on another server, enter the following URL:
https://<server IP address>:2381
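The same firewall exception can also be created from an elevated command prompt with the built-in netsh tool. This is a sketch only; it assumes the default hpsmhd.exe path shown in step 3:

C:\> netsh advfirewall firewall add rule name="HP System Management Homepage" dir=in action=allow program="C:\hp\hpsmh\bin\hpsmhd.exe" profile=private,public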
System Management Homepage main page
Figure 16 (page 27) shows the SMH main page.
Figure 16 System Management Homepage main page
The page provides system, subsystem, and status views of the server and displays groupings of systems and their status.
NOTE:
NICs will display with a failed status (red icon) if they are unplugged. To remove unused NICs from the system status, you can disable them by selecting Control Panel→Hardware→Device Manager, right-clicking the specific NIC, and then selecting Disable.
When you remove a disk or disconnect a cable, the SMH interface might not display alerts when you click the Refresh button. You can force a hard refresh by clicking the Home button or by navigating to the problem area. The default refresh interval is two minutes. To change the interval, in the Settings menu select Autorefresh, and then Configure Page refresh settings. The minimum interval is five seconds and the maximum is 30 minutes.
Overall System Health Status
A web application sets the value of the Overall System Health Status icon by using a predefined heuristic. If no web application can determine the status, the worst possible status is displayed in the Component Status Summary section.
Component Status Summary
The Component Status Summary section displays links to all subsystems that have a critical, major, minor, or warning status. If there are no critical, major, minor or warning items, the Component Status Summary section displays no items.
Enclosure
This section provides information about the enclosure cooling, IDs, power, Unit Identification LED, PCIe devices, and I/O modules.
NOTE: A large number of disk errors may indicate that an I/O module has failed. Inspect the
I/O module LEDs on the storage system and any disk enclosures, and replace any failed component.
Because both a system fan and a drive fan are required, the minimum and maximum number of fans required is two. If either fan becomes degraded, the system could shut down quickly. Because the fans are not mutually redundant, even if the status of only a single fan changes, the new status is reported immediately in the Component Status Summary section on the SMH main page.
When the Enclosure Manager IP address is set incorrectly, the enclosure status displayed is
Lost communication. Because the Enclosure Manager has lost communication with the external network, none of the other items in the Enclosure Information section can be displayed.
The enclosure I/O ports are numbered from 1 to 8 in the SMH.
Figure 17 I/O module
These numbers correspond to the I/O modules in the enclosure bays.
Figure 18 I/O module bays
1. LOM module
2. LOM module
3. MEZZ module
4. Intraconnect (internal switch)
5. PCIe module
6. PCIe module
7. SAS I/O module
8. SAS I/O module
Network
This section shows the status of the network connections.
Storage
This section displays information about the following components:
Storage System—Links to the page that displays information about storage in the drive drawer
and any external disk enclosures. This storage is managed by the 1210m controller.
Smart array subsystem—Links to the page that displays information about operating system
drives and smart array controllers.
NOTE: The SMH will display a total of four power supplies for each External Storage Enclosure.
If there is more than one External Storage Enclosure connected, the SMH may not show the correct number of power supplies for each of these enclosures.
The Storage System page is organized as a left panel and a main page:
Figure 19 Storage system
The left panel provides links to information about the following items:
Controller
Select a storage controller to view its type, status, firmware version, and serial number.
Logical Drives
A list of logical drives associated with the controller appears in the left panel tree view. Select one of the logical drive entries to display the status of the drive, fault tolerance (RAID level), and capacity (volume size). A link to the logical drive storage pool is also displayed.
Storage Pools
A list of storage pools associated with the controller displays in the left panel tree view. Select one of the pool entries to display its status, capacity, communication status with the controller, primordial state, and cache properties.
NOTE: If read or write cache is enabled, the value displayed is 2; otherwise, the value is 3.
The Storage Pools page also displays a list of disk drives and storage volumes present in the pool.
Under the Physical Drives tree, the list of disk enclosures is displayed. Under each enclosure, the list of disk drives present in that enclosure is displayed. When there is no drive in the enclosure, the display shows Bay Bay number – Empty. Select one of the disk enclosures or disk drives to see information for that enclosure or drive.
Physical Drives
This section provides an overview of all disk drives attached to the controller. Drives are identified and grouped as assigned, unassigned, and spare drives. Each physical drive is listed as a separate entry in the Storage System submenu. Select any of the physical drives to display more information about the drive.
NOTE: Spare drives are only used when a disk drive fails. Until a spare drive is used, it
remains offline and its LEDs will remain off.
System
This section displays status for various system components.
Version Control
This section provides information about the Version Control Agent.
Software
This section provides information about system firmware and software.
Using the System Manager
The System Manager provides the status of each server blade that is configured in the storage system. Be sure to note the server blade that is being assessed when you open the System Manager. Log in to each server blade to evaluate its status.
To use the System Manager, which has been preinstalled and configured, use Remote Desktop or iLO to access the server blade. Click the Server Manager icon located in the taskbar to the right of the Start button, or select Start→Administrative Tools→Server Manager. When Server Manager appears, select System Manager in the left navigation pane.
To troubleshoot using the System Manager:
1. Open the System Manager.
2. Open the System Summary tab to review the overall health of the storage system hardware and firmware.
If the status icon is green, the system is running properly. A yellow icon is a warning that there are conditions that might cause a problem. If the icon is red, a problem exists in the storage system.
3. Open each tab in the System Manager to assess the status of the storage system.
4. Follow the instructions provided on the System Manager tabs for any reported issue.
System Summary
The System Summary tab displays information such as the enclosure name, IP address, firmware revision, and serial number. The lower part of the System Summary also shows the status of hardware and whether your current firmware revision is up to date. If a green check mark does not appear beside the configuration status, go to the related tab for information about the issue.
Figure 20 System summary
Hardware Status
The Hardware Status tab provides the health status for each storage system component. The System section displays information for the server blade that you are logged in to. If a problem is reported in the System section, you should check the Hardware Status tab on each server blade.
NOTE: If the System Manager shows that a LUN has an error, open the System Management
Homepage and determine whether the LUN is degraded due to a disk failure. If so, also use the System Management Homepage to determine which disk needs to be replaced.
Figure 21 Hardware status
Firmware
The Firmware tab indicates whether the firmware of a component is outdated. If the specific firmware requires that you reboot after installing the update, a message instructing you to reboot the storage system appears. Since the tool does not connect to the Internet to identify new firmware, you must periodically check the HP support web page and download new firmware when available. Be sure to check the Firmware tab on each server blade. Some firmware updates must be made on both server blades.
IMPORTANT: If a firmware update requires a reboot, you must reboot your storage system
manually. For more information about firmware updates, see “Upgrading a component's firmware” (page 56).
Figure 22 Firmware
Reports
The Reports tab gathers logs for the hardware, software, Microsoft Windows system configuration, and the Microsoft Exchange diagnostics in one place. These logs are used by HP support engineers to help diagnose your system, if needed; you do not need to view and interpret the logs yourself.
To generate reports:
1. Consult with HP support to determine what type of report is required.
- If complete reports are required, go to step 2.
- If an abbreviated report can be used, select the Run Quick Report Only option. A quick report contains less information but is created in much less time. It may contain all the necessary information HP support needs.
2. Click Generate Support File. A license agreement window is displayed.
3. Select Yes to accept the license.
The report generation process begins. It may take up to 45 minutes to create the reports. When the process is complete, the lower portion of the screen indicates that HP System Reports have been generated.
4. Click Open Reports Folder to access the .cab file containing the report results.
This file is ready to forward to the HP support engineers.
Figure 23 Reports
Component LEDs
LEDs indicate the status of hardware components. This section provides images of the component LED locations and describes the status of LED behaviors. To obtain additional information on some status indicators, you can use the EMU CLI SHOW commands described in “Managing the EMU”
(page 131).
Figure 24 Server blade LEDs
Table 2 Server blade LEDs status

1. UID LED
Blue = Needs service check
Blue flashing = Remote management (remote console in use via iLO)
OFF = No remote management

2. Health LED
Green = Normal
Flashing = Booting
Amber = Degraded condition
Red = Critical condition

3. NIC 1 LED*
Green = Network linked
Green flashing = Network activity
OFF = No link or activity

4. Flex-10 NIC 2 LED*
Green = Network linked
Green flashing = Network activity
OFF = No link or activity

5. Reserved

6. System power LED
Green = On
Amber = Standby (auxiliary power available)
OFF = Off
*Actual NIC numbers depend on several factors, including the operating system installed on the server blade.
Figure 25 Front LED display board
Table 3 Front LED status

1. Hard drive LEDs, normal mode (UID LED is solid)
Green = The drive is online, but is not currently active.
Flashing green irregularly = The drive is online and it is operating normally.
Flashing green (1 Hz) = Do not remove the drive. Removing the drive may terminate the current operation and cause data loss. The drive is rebuilding, or it is part of an array that is undergoing expansion, logical drive extension, a stripe size migration, or RAID migration.
Flashing amber/green = Drive is configured and indicating a predictive failure. The drive may also be undergoing a rebuild, expansion, extension, or migration.
Flashing amber (1 Hz) = A predictive failure alert has been received for this drive. Replace the drive as soon as possible.
Amber = Drive failure, link failure, or mismatched configuration.
OFF = The drive is offline, a spare, or not configured as part of an array.

1. Hard drive LEDs, drive locate mode (UID LED is flashing)
Green = The drive has been selected by a management application and it is operating normally.
Flashing amber (1 Hz) = The drive is not selected and is indicating a predictive failure.
Flashing amber/green = The drive has been selected by a management application and is indicating a predictive failure.
Amber = The drive might or might not be selected and is indicating drive failure, link failure, or mismatched configuration.
OFF = The drive is not selected.

2. Chassis fault LED
Flashing amber = There is a failed component in the system.
OFF = The system is in good health.
NOTE: The amber chassis fault LED flashes if any component fault is detected by the System Management Homepage. A fault can be as minor as a cable unplugged from a NIC port, and therefore may not be cause for concern.

3. Chassis health LED
Solid green = The system is in good health.
OFF = There is a failed component in the system.

4. Chassis UID LED
This LED is either blue or off. When on, it can be steady or blinking. It is used only for unit identification. To set the LED, use the following CLI command: SET ENCLOSURE UID { ON | OFF | SLOW | FAST }
OFF = Enclosure is functioning normally.

NOTE: All of these LEDs are off if the enclosure has power but is turned off (see Table 11 (page 40)). In that case, only the equivalent chassis LEDs (items 2, 3, and 4) on the rear Power Pod show status.
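For example, to locate an enclosure visually, you can blink the chassis UID LED from the EMU CLI and then turn it off again; the prompt shown is from the sample login session later in this chapter:

EM-78E7D1C140F2> SET ENCLOSURE UID SLOW
EM-78E7D1C140F2> SET ENCLOSURE UID OFF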
Figure 26 Hard drive LEDs
1. Fault/UID LED (amber/blue)
2. Online LED (green)
Table 4 SAS hard drive LED combinations

1. Activity/Online LED
OFF (override drive activity output) = Drive is not a member of any RAID volume <or> drive is configured but in a replacement or failed state for at least one volume that is a member of a RAID volume <or> drive is a spare drive that is or has been activated but has not been rebuilt; <and> drive is not rebuilding <and> drive is not a member of a volume undergoing capacity expansion or RAID migration.
Solid green = Drive is a member of a RAID volume <and> drive is not a spare drive <and> drive is not in a replacement or failed state for any volume that is a member of a RAID volume <and> drive is not currently performing I/O activity.
Blinking green (4 Hz, 50% duty cycle) = Drive is currently performing I/O activity <and> drive is a member of a RAID volume <and> drive is not in a replacement or failed state for any volume that is a member of a RAID volume (drive is online) <and> drive is not rebuilding <and> drive is not a member of a volume undergoing capacity expansion or RAID migration.
Blinking green (1 Hz, 50% duty cycle; override drive activity output) = Drive is rebuilding <or> drive is a member of a volume undergoing capacity expansion or RAID migration.

2. Fault/Identification LED (bicolor amber/blue)
OFF = Drive is not failed <and> drive is not selected (unit identification).
Solid blue = Drive is not failed <and> drive is selected (unit identification).
Solid amber = Drive is failed <and> drive is not selected.
Blinking amber (1 Hz, 50% duty cycle) = Drive is in a predictive failure state <and> drive is not failed <and> drive is not selected.
Blinking alternate amber/blue (1 Hz, 50% duty cycle) = Drive is failed <or> drive is in a predictive failure state <and> drive is selected.
NOTE: Spare drives are only used when a disk drive fails. Until a spare drive is used, it remains
offline and its LEDs will remain off.
Figure 27 1210m Cache module controller LEDs
Table 5 1210m Cache module controller LED status

Controller LEDs (green LED upper left; amber LED lower right)
Green off, amber on = A backup is in progress.
Green flashing (1 Hz), amber on = A restore is in progress.
Green flashing (1 Hz), amber off = The capacitor pack is charging.
Green on, amber off = The capacitor pack has completed charging.
Green flashing (2 Hz) alternating with amber; amber flashing (2 Hz) alternating with green = One of the following conditions exists: the charging process has timed out, or the capacitor pack is not connected.
Green on, amber on = The flash code image failed to load.
Green off, amber off = The flash code is corrupt.
Figure 28 Enclosure Manager unit LEDs
Table 6 Enclosure Manager unit LEDs status

1. EM display
The LED blinks during power-up, but then the display changes only in response to commands from the Enclosure Manager Display.

2. EM fault LED
Amber flashing with green LED off = Issue. Use the CLI commands SHOW ENCLOSURE STATUS and SHOW SYSLOG EM to determine possible fault causes.

3. EM health LED
The health LED is only green and is either on (Healthy) or off (Power off or Faulted).

All LEDs are off when the enclosure is powered off.
Figure 29 HP 2-port 1 GB Ethernet I/O modules LEDs
Table 7 HP 2-port 1 GB Ethernet I/O modules LEDs status

1. Module health LED
Solid green = Module health is good
OFF* = Module has failed

2. Module fault LED
Solid amber = Module has failed
OFF* = Module health is good

*LEDs are off when the enclosure is powered off.
Figure 30 HP 2-port 1 GB Ethernet, Mezz A and B I/O modules LEDs
Table 8 HP 2-port 1 GB Ethernet, Mezz A and B I/O modules LEDs status

1. Module health LED
Solid green = Module health is good
OFF* = Module has failed

2. Module fault LED
Solid amber = Module has failed
OFF* = Module health is good

*LEDs are off when the enclosure is powered off.
Figure 31 HP 1 GB intraconnect module LEDs
Table 9 HP 1 GB intraconnect module LEDs status

1. Module health LED
Solid green = Module health is good
OFF* = Module has failed

2. Module fault LED
Solid amber = Module has failed
OFF* = Module health is good

*LEDs are off when the enclosure is powered off.
Figure 32 Power supply LEDs
Table 10 Power supply LED status

1. Power supply LED
Green = Power on and power supply functioning properly.
OFF = One or more of the following conditions exists: system powered off, AC power unavailable, power supply failed, or power supply exceeded current limit. Use the CLI command SHOW ENCLOSURE POWERSUPPLY STATUS ALL for more details.
Figure 33 Chassis switches and indicator LEDs
Table 11 Chassis switches and indicator LEDs status

1. UID
Solid blue = Requires service check.

2. Chassis health
Solid green = System health is good.
OFF = A module or component in the system has failed.

3. Chassis fault
Flashing amber = A module or component in the system has failed.
OFF = System health is good.
NOTE: The amber chassis fault LED flashes if any component fault is detected by the System Management Homepage. A fault can be as minor as a cable unplugged from a NIC port, and therefore may not be cause for concern.

4. Power button/LED
Green = Enclosure power is on.
Amber = Enclosure has AC power but is turned off.
Figure 34 SAS I/O modules LEDs
Table 12 SAS I/O module LEDs status

1, 2. SAS Port 1
Green* = Healthy
Amber = Issue

3, 4. SAS Port 2
Green* = Healthy
Amber = Issue

5, 6. Overall I/O module status
Green = Healthy
Amber = Issue

7, 8. SAS Port 3
Green* = Healthy
Amber = Issue

9, 10. SAS Port 4
Green* = Healthy
Amber = Issue

*If there is anything connected to a connector, the corresponding green LED is on and blinks off with activity. If there is nothing connected to a connector, both LEDs are off.
Figure 35 Fan LEDs
The two fan modules are physically identical, but their control is not. The Fault/health LED on FAN 1 is a single bi-color LED controlled by the EMU via the Health Monitor – it is either off, steady green, or flashing amber. The lens of the fan LED is colorless and looks grayish-white when off.
System Fan — Fan 1
Fan 1 LED is driven by the EMU firmware. The fan microprocessor inside the Fan module cannot sense or control this LED. If the EMU fails, or if the connection between the EMU and the fan fails, the LED cannot be controlled and thus may not reflect actual state. Also, because Fan 1 LED has no power unless enclosure power is on, the EMU cannot indicate Fan status in standby mode.
There is no autonomous hardware circuit controlling the fan fault LED. Assuming the LED is working, the EMU flashes it amber if one or two of the three fan rotors are not functioning, if the microprocessor on the fan module is unresponsive, or if code on the module is unreadable.
Drive Fan — Fan 2
The Fault/health LED on FAN 2 is not controlled by the EMU at all; it is controlled by one of the management processors inside the SAS I/O module. This LED cannot be lit unless enclosure power is on, and its state depends upon signals from one of the SAS I/O modules.
To troubleshoot a degraded fan, you can use the EMU CLI commands SHOW ENCLOSURE STATUS and SHOW ENCLOSURE FAN ALL described in “Managing the EMU” (page 131).
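For example, run the commands from the EMU prompt after logging in (see “EMU CLI SHOW commands” below); the output, which varies by system, is omitted here:

EM-78E7D1C140F2> SHOW ENCLOSURE STATUS
EM-78E7D1C140F2> SHOW ENCLOSURE FAN ALL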
EMU CLI SHOW commands
Use the EMU CLI SHOW commands described in “Managing the EMU” (page 131) to obtain additional information about component status as indicated by the hardware LEDs described in “Component
LEDs” (page 34). To access the CLI, log in to the EMU as Administrator.
The system is shipped with a single enabled user account: Administrator. The password of the Administrator account is unique, programmed at the factory, and printed on the tear-away label on the back of the unit and the label on top of the EMU. Logging in to the system requires the Secure Shell protocol (SSH). Windows systems can use SSH clients such as PuTTY, which can be freely downloaded.
To log in to the EMU:
1. Note the IP address of the EMU.
2. ssh to the EMU.
3. Log in as Administrator.
The following is a sample login session:
login as: Administrator
-----------------------------------------------------------------------------
WARNING: This is a private system. Do not attempt to login unless you are an authorized user. Any authorized or unauthorized access and use may be monitored and can result in criminal or civil prosecution under applicable law.
-----------------------------------------------------------------------------
User: /src/bin/build@msaonyx
Script: ./parbuild
Directory: /src/quire/QUIRE-CSP-1-20/daily/2011102701/bld/QUIRE-CSP-1-20
FileTag: 102720111904
Date: 2011-10-27T19:04:57
Firmware Output: jsbach
Firmware Version: 0x0120
SVN Version: 3414
Administrator@10.0.0.10's password:
HP CSP System Enclosure Manager (C) Copyright 2006-2010 Hewlett-Packard Development Company, L.P.
Type 'HELP' to display a list of valid commands. Type 'HELP <command>' to display detailed information about a specific command. Type 'HELP HELP' to display more detailed information about the help system.
EM-78E7D1C140F2>
After logging in, you can set the Administrator password using the Enclosure Manager Settings window. Go to the C:\Program Files\HP\HP Configuration Wizard directory and double-click HPEMConfig.exe.
Known issues
Table 13 identifies known issues with the storage system and provides workarounds to mitigate
them.
Table 13 Known issues
Issue: HP System Insight Manager is not able to retrieve the OS and product name for the network storage system.
Resolution: This occurs if the domain user or group does not have the proper security access to the WMI namespace. Access to the WMI namespace is not given by default. To permit access:
1. Select Server Manager+Configuration.
2. Right-click WMI Control and select Properties.
3. On the Security tab, select Root and the HPQ namespace.
4. Click Security, and give permission to the user or group.
5. Restart WMI.

Issue: The cluster IP address on a cluster network resource cannot resolve the DNS name or update the DNS name correctly, causing the network resource to appear offline or with a warning message.
Resolution: This occurs when a previous identifier is listed in the DNS entries and the DNS server has not been set up to allow for updates from external clients. To fix this issue:
1. Log in to the Active Directory Domain Services with your Domain Admins, Administrator, or Security Group access.
2. Go to the DNS server, select the computer name, and then click Forward Lookup Tables.
3. Select the domain to add the file server.
4. Locate the DNS entries for the file server name. You can either update the information manually by double-clicking the entry and entering the correct file server information, or delete the existing DNS entry.
NOTE: Entering the correct file server information or deleting the DNS entry requires that you manually enable the network resource on the cluster. You can do this by right-clicking the network resource on the file server and selecting the Bring this resource online option. When deleting the DNS entry, this option creates and updates a new DNS entry on the DNS server.

Issue: After joining a domain, the "Microsoft iSCSI Software Target" service may fail with the error message: Windows could not start the Microsoft iSCSI Software Target service on MACHINE_NAME. Error 1053: The service did not respond to the start or control request in a timely manner.
Resolution: During startup of the Microsoft iSCSI Software Target service, WinTarget makes a synchronous call to the Active Directory. If the Active Directory server does not respond in a timely manner, the service fails to start after 30 seconds. To resolve this issue, type the following on the command line:
reg add HKLM\System\CurrentControlSet\Control /v ServicesPipeTimeout /t REG_DWORD /d 60000 /f
shutdown /r /t 0
The use of this registry key is documented at http://support.microsoft.com/kb/824344.

Issue: The Enclosure UID page that is part of the HP System Management Homepage cannot be used to enable or disable the UID status LED.
Resolution: The UID LED cannot be enabled or disabled in the System Management Homepage until the Enclosure Manager key has been generated. The Enclosure Manager key can be generated using the System Manager snap-in.

Issue: There is difficulty in setting up a storage system with clustering with insufficient administrative privileges.
Resolution: When selecting Provide cluster name and domain in the ICT, you need:
An Active Directory domain where both nodes will be joined as member servers.
A domain user account that has Create Computer Objects and Read All Properties permissions.

Issue: Length of network storage system reboot time.
Resolution: The reboot time for the network storage system is approximately six to seven minutes.

Issue: A cluster network interface reports status (from the cluster net or cluster netint command or Failover Cluster Manager) as “network partitioned” when a cluster node is rebooted and rejoins the cluster. The network interface is still usable for communications to other nodes; only the communications between the cluster nodes are affected. The cluster interface status will change after 12–24 hours. This issue is intermittent and takes approximately 30–50 reboots of a node to reproduce.
Resolution: This issue can be resolved by doing one of the following:
Allow 12–24 hours to pass for the Microsoft Failover Cluster Manager to resolve the issue automatically.
Manually restart the node that was restarted or affected.
Manually disable or enable the NIC on the affected cluster node that is causing the issue, under Network Connections.

Issue: File Server Resource Manager (FSRM) displays the error message: File Server Resource Manager global configuration cannot be accessed since it is not installed yet.
Resolution: FSRM only configures its config store in the cluster database when the FSRM service starts. To resolve this issue, reboot one node in the cluster, or stop and start the FSRM services by issuing the following commands on the command line using elevated privileges:
net stop srmsvc
net stop srmreports
net start srmreports
net start srmsvc
This issue is documented by Microsoft at:
http://technet.microsoft.com/en-us/library/gg214171(WS.10).aspx

Issue: Several utilities do not operate with the HP X5000 G2 Network Storage System.
Resolution: The HP X5000 G2 Network Storage System does not have a VDS Hardware Provider, so the following utilities do not work:
The Microsoft Storage Manager for SANs (Server Manager+Storage+SMFS)
Diskraid (command line utility)
Provision Storage link from Share and Storage Management MMC (Server Manager+Roles+File Services+Share and Storage Management)

Issue: When attempting to create a cluster, the cluster validation wizard fails, indicating a failure in the storage tests from the Microsoft test report. The failure described is the inability to write to one of the disk partitions.
Resolution: HP recommends the following:
If one or more volumes were removed before running the validation wizard, they may be flagged with warnings during the cluster validation even though they do not exist. These warnings can be ignored.
The network used for the cluster heartbeat uses APIPA (169.254.0.0/16) addresses, which will be flagged with warnings during the cluster validation. These warnings can be ignored.
The cluster validation may indicate that one or more disks are corrupted. When this happens, the test terminates. You should run chkdsk.exe on the disk. It is not important to use any options on the chkdsk command line. For example, if the drive letter for the corrupted disk is Q:, run chkdsk q:.

Issue: The NFS user mapping should not use AD LDS.
Resolution: HP strongly recommends using Active Directory or other RFC 2307 compliant LDAP stores for NFS user name mapping. Using Active Directory Lightweight Directory Services (AD LDS) is not recommended. Configuring AD LDS in a clustered environment is beyond the scope of this document.

Issue: The Onboard Administrator GUI cannot be launched from the Integrated Lights-Out 3 page. The Launch button is in the Active Onboard Administration section under the BL c-Class node in the Integrated Lights-Out 3 navigation tree.
Resolution: The Onboard Administrator GUI Launch button only applies to systems with the C3000/C7000 chassis. You are not able to use this button with the X5000 G2 Network Storage System.

Issue: On rare occasions, an update to the SAS I/O module firmware may result in the following message: Flash failed for xxxx half of SAS I/O Module. Check log file yyyy for more information (where xxxx is internal or external and yyyy is the path and name of the log file).
Resolution: Update the SAS I/O module firmware again. If the second update is unsuccessful, review the log file for more information.

Issue: The System Management Homepage lists a fan module as unknown or dormant. This might not be a fan issue; it may mean that a SAS I/O module needs to be reseated.
Resolution: Identify and reseat the SAS I/O module that is causing the issue. An I/O module may need to be replaced if there is less information available for one I/O module than the other.

Issue: If the cache on the 1210m controller in either server blade of the X5000 G2 becomes disabled, it will greatly affect performance. As a protective measure to ensure the safety of data, if either 1210m controller experiences an issue requiring the cache to be disabled, the cache is disabled on both controllers. This results in reduced performance for both controllers until the degraded part is repaired or replaced.
Resolution: An alert is generated for cache and supercapacitor (cache backup) issues, but you can also run the following command from a command prompt or PowerShell to determine the issue:
ccu show controller all details
If the output is similar to the following, replace the cache module or supercapacitor:
controller 0: 500143800442D690
Manufacturer: HP
Model: 1210m
Part Number: 607190-001
SKU: None
Serial Number: PBGJR0XTAZ40FK
Firmware Version: 0156
Firmware Build: 2011061702
Peer Controller: 500143800442E600
Operational Status: Degraded, Cache disabled: redundant controller battery issue
Health Status: Degraded/Warning
Cache Size: 1073741824
Read Cache Default: Enabled
Write Cache Default: Enabled
Battery 0 Status: fully charged
Battery 0 Estimated Charge Remaining: 100%
controller 1: 500143800442E600
Manufacturer: HP
Model: 1210m
Part Number: 607190-001
SKU: None
Serial Number: PBGJR0XTAZ407Z
Firmware Version: 0156
Firmware Build: 2011061702
Peer Controller: 500143800442D690
Operational Status: Degraded, Cache flash backup hardware failure, Cache disabled: low battery charge
Health Status: Degraded/Warning
Cache Size: 1073741824
Read Cache Default: Enabled
Write Cache Default: Enabled
Battery 0 Status: charging
Battery 0 Estimated Charge Remaining: 0%

Issue: When creating a volume for a new pool, if there is no redundancy and the volume is in a Bad state, the Create a Volume wizard will fail at the format and partition step. The following error message will display: Failed to located LUN on host
Resolution: Delete the volume that is in a Bad state and try again to create a volume.

Issue: When configuring iLO settings in the Initial Configuration Tasks window, changing iLO settings on the second node may result in an error that the user has insufficient permissions. This can happen even if the user is a domain administrator and belongs to the administrator group on each node. The User Access Control system prevents the application from running on the second node. No prompt is displayed to the user to allow execution on the second node.
Resolution: Do one of the following:
Log on to the second node using the local administrator account and configure the iLO settings using the iLO Configuration Utility in the HP System Tools program group.
Connect to iLO and change the settings using the iLO user interface.
NOTE: Changing the iLO network settings may cause your iLO sessions to disconnect.

Issue: You have installed the System Recovery image on a storage system that has an external disk enclosure connected, and the disk enclosure either has existing LUNs or a degraded disk. After installing the image, the Hardware Status tab in the System Manager and the System Management Homepage indicate the disk drives are degraded and, if you use the Create a Volume wizard, the disk drives are only displayed for one node.
Resolution: Delete existing LUNs using either the Delete a Volume or Create a Witness Disk wizard after installing the System Recovery image.

Issue: If you connect the storage system to more than one network or subnet and then run the Create a File Server wizard, the wizard may fail with the following error: Static network xx.xx.xx.x/xx was not configured. Please use -StaticAddress to use this network or use -IgnoreNetwork to use it.
Resolution: This error message typically appears when the subnet mask is incorrectly specified for one or more network adapters. Verify that each network adapter is correctly configured to operate on the respective network by checking that the IP address, network mask, and gateway are applicable. You can also execute the ping command to confirm connectivity on the network.

Issue: A drive letter is selected for a new volume in the Create a Volume wizard, but that drive letter is used only on one node. The other node uses a different letter.
Resolution: This is a temporary condition that exists only until the volume has been added to the cluster as a cluster disk. Afterwards, the volume appears on the owner node with the desired drive letter. On the alternate node, the volume will not appear in Windows Explorer; the disk will appear in Disk Management, but it will have a status of Reserved and will not have a drive letter. If the cluster moves the disk resource to the alternate node, the desired drive letter will be used on the alternate node (provided the drive letter is still available). Drive letters can be changed using the Failover Cluster Manager after the disk has been added to the cluster.

Issue: The Test WBEM Events tool displays an error when the tool is launched by a user other than Administrator.
Resolution:
1. Add the specific user name to the following namespaces:
root\HPQ
root\HPQ\default
root\HPQ\TestEvent
root\Interop
root\CIMv2
2. For each namespace, complete the following namespace security steps:
a. Right-click My Computer→Manage→Services and Applications.
b. Right-click WMI Control.
c. Click Properties.
d. Select the Security tab.
e. Select the namespace.
f. Click the Security button and enable the following permissions for the user:
Execute Methods
Full Write
Partial Write
Provider Write
Enable Account
Remote Enable
Read Security
Edit Security
3. Click Apply and then click OK twice.
Using Storage Viewer
You can access the Storage Viewer under Manage storage in the Server Manager. The Storage Viewer enables you to view details about each LUN: name, size, RAID level, pool assignment, spare drive indication, and cluster disk name (if applicable). In the lower part of the tool, select one of the following tabs to view additional information:
Volumes: Displays any Windows volumes on the LUN, the volume label, and mount paths.
Drives: Displays details about the physical drives that comprise the LUN (drive bay, size, RPM,
disk name, and serial number).
Spares: Displays details about any spares that are assigned to the LUN (drive bay, size, RPM,
disk name, and serial number). If more information is available, when you hover over any part of the row, a tool tip opens with details.
Jobs: Displays the status of any jobs running on the LUN (checking volume data integrity and
rebuilding). For example, if a spare drive is in use as a failed drive replacement, a tool tip message will be shown.
Figure 36 Storage Viewer (LUNs view)
You can also view details about each drive – bay location, ID, serial number, size, health, and model number. In the lower part of the tool, you can view volume information related to the drive.
Figure 37 Storage Viewer (Drives view)
HP Support websites
Use the “Support and troubleshooting” task at the HP Support & Drivers website (http://www.hp.com/go/support) to troubleshoot problems with the storage system. After entering the storage system name and designation (for example, X5000 G2 Network Storage System) or component information (for example, SAS I/O module), use the following links for troubleshooting information:
Download drivers and software—Provides drivers and software for your operating system.
Troubleshoot a problem—Provides a listing of customer notices, advisories, and bulletins
applicable for the product or component.
Manuals—Provides the latest user documentation applicable to the product or component.
User guides can be a useful source for troubleshooting information. For most storage system hardware platforms, the following ProLiant server manuals may be useful for troubleshooting assistance:
HP ProLiant Server User Guide or HP ProLiant Server Maintenance and Service Guide
These guides contain specific troubleshooting information for the server.
HP ProLiant Servers Troubleshooting Guide
The guide provides common procedures and solutions for many levels of troubleshooting with a ProLiant server.
IMPORTANT: Some troubleshooting procedures found in ProLiant server guides may not
apply to the storage system. If necessary, check with your HP Support representative for further assistance.
For X5000 G2 guides, go to www.hp.com/support/manuals, select NAS Systems under storage, and select an X5000 G2 product.
For software-related components and issues, online help or user guide documentation may offer troubleshooting assistance. Known issues, workarounds and service releases are addressed in this guide or the release notes.
Customer notices—Address informational topics about the HP X5000 G2 Storage System.
Customer advisories—Address known issues and solutions or workarounds.
NOTE: You must register for Subscriber's Choice to receive customer advisories and notices. See
“Subscription service” (page 128) for more information.
HP Insight Remote Support software
HP strongly recommends that you install HP Insight Remote Support software to complete the installation or upgrade of your product and to enable enhanced delivery of your HP Warranty, HP Care Pack Service, or HP contractual support agreement. HP Insight Remote Support supplements your monitoring 24x7 to ensure maximum system availability by providing intelligent event diagnosis and automatic, secure submission of hardware event notifications to HP, which initiates a fast and accurate resolution based on the service level of your product. Notifications may be sent to your authorized HP Channel Partner for onsite service, if configured and available in your country. The software is available in two variants:
HP Insight Remote Support Standard: This software supports server and storage devices and is optimized for environments with 1 to 50 servers. It is ideal for customers who can benefit from proactive notification, but do not need proactive service delivery and integration with a management platform.
HP Insight Remote Support Advanced: This software provides comprehensive remote monitoring and proactive service support for HP servers, storage, network and SAN environments, plus selected non-HP servers that have a support obligation with HP. It is integrated with HP Systems Insight Manager. A dedicated server is recommended to host both HP Systems Insight Manager and HP Insight Remote Support Advanced.
Details for both versions are available at:
http://www.hp.com/go/insightremotesupport
To implement Insight Remote Support for HP X5000 G2 systems, follow the instructions in release A.05.70 or later of the following guides:
HP Insight Remote Support Standard Hosting Device Configuration Guide (for standard support)
HP Insight Remote Support Advanced CMS Configuration and Usage Guide (for advanced
support)
HP Insight Remote Support Standard Managed Systems Configuration Guide (for standard
support).
To obtain these guides:
1. Go to the Insight Remote Software website (previously cited).
2. From the Product Information menu, select either Insight Remote Standard or Insight Remote Support Advanced software.
3. Select Support Documentation.
Be aware of the following specifics for HP X5000 G2 systems:
The storage system is a "managed system" as described in Insight Remote Support guides.
The X5460sb is equivalent to a ProLiant server and meets all the requirements for a managed system.
Follow guidelines and procedures for Windows ProLiant servers in the Insight Remote Support documentation.
The storage system hardware is preconfigured for Insight Remote Support and uses the WMI
(WBEM) provider.
Register the system using the X5000 G2 product number and serial number, instead of the
blade serial number and part number. Confirm and overwrite any prepopulated values with the serial number of the storage system.
The product number and serial number are located on the pull-out tab below the Enclosure Management module on the back of the enclosure (Figure 40 (page 54)).
You must register WBEM access credentials in HP SIM for Insight Remote Support Advanced.
Microsoft System Center Operations Manager
Microsoft System Center Operations Manager (SCOM) provides comprehensive monitoring, performance management, and analysis tools to maintain Windows OS and application platforms. This solution allows you to monitor Microsoft Windows environments and HP storage products through a common OpsMgr console. To download HP management packs for Microsoft System Center Operations Manager, including installation, configuration, and usage documentation, visit the HP Management Packs for Microsoft System Center site at:
www.hp.com/go/storageworks/scom2007
Windows Recovery Environment
You can use Windows Recovery Environment to help diagnose and recover from operating system errors which may prevent Windows from booting. To use Windows Recovery Environment to perform a system recovery, see “Restoring the system with Windows Recovery Environment”
(page 125).
Startup Repair
1. Do one of the following:
a. For direct access, attach the SUV cable (supplied with the HP X5000 G2 Network Storage System) to the port on the front of the server blade you want to recover. Connect a monitor and USB mouse to the SUV cable. Using the remaining USB connector on the SUV cable, connect either a USB DVD drive (and insert the System Recovery DVD) or a bootable USB flash device (prepared with a System Recovery image).
b. For remote management access, connect to the server using iLO from a client PC. Insert
the System Recovery DVD in the client PC or attach a bootable USB flash device that has been prepared with a System Recovery image.
2. Reboot the server blade to either the USB flash device or USB DVD drive.
The system BIOS attempts to boot to the USB device first by default. Watch the monitor output during the boot, as you may need to press a key to boot to the USB media.
NOTE: If directly connected, you may have to change the BIOS settings to ensure proper
boot sequence. If connected remotely, you may have to change some iLO settings to ensure proper boot sequence.
3. Select Windows Recovery Environment. The recovery environment is loaded.
4. Once the recovery environment is loaded, the System Recovery Options wizard opens. On the first window, select the keyboard input method, which is based on your location (for example, select US for United States) and click Next.
5. Select either of the following options (it does not matter which option is selected) and click Next:
Use recovery tools that can help fix problems starting Windows. Select an operating
system to repair.
Restore your computer using a system image that you created earlier.
6. Click Cancel until the Choose a recovery tool window opens.
Figure 38 System recovery options
7. Click Startup Repair.
The utility automatically attempts to repair the system image startup process and any errors it finds. If the errors cannot be repaired, an alert window is displayed:
Figure 39 Startup repair alerts
8. Select Don’t send.
9. When the utility has finished running, click Restart when prompted to restart the system.
Memory Diagnostic
1. Do one of the following:
a. For direct access, attach the SUV cable (supplied with the HP X5000 G2 Network Storage System) to the port on the front of the server blade you want to recover. Connect a monitor and USB mouse to the SUV cable. Using the remaining USB connector on the SUV cable, connect either a USB DVD drive (and insert the System Recovery DVD) or a bootable USB flash device (prepared with a System Recovery image).
b. For remote management access, connect to the server using iLO from a client PC. Insert
the System Recovery DVD in the client PC or attach a bootable USB flash device that has been prepared with a System Recovery image.
2. Reboot the server blade to either the USB flash device or USB DVD drive.
The system BIOS attempts to boot to the USB device first by default. Watch the monitor output during the boot, as you may need to press a key to boot to the USB media.
NOTE: If directly connected, you may have to change the BIOS settings to ensure proper
boot sequence. If connected remotely, you may have to change some iLO settings to ensure proper boot sequence.
3. Select Windows Recovery Environment. The recovery environment is loaded.
4. Once the recovery environment is loaded, the System Recovery Options wizard opens. On the first window, select the keyboard input method, which is based on your location (for example, select US for United States) and click Next.
5. Select either of the following options (it does not matter which option is selected) and click Next:
Use recovery tools that can help fix problems starting Windows. Select an operating
system to repair.
Restore your computer using a system image that you created earlier.
6. Click Cancel until the Choose a recovery tool window opens.
7. Click Windows Memory Diagnostic.
8. Select one of the following options:
Restart now and check for problems. Select this option to restart the system and scan for
memory issues. Do not remove the attached USB DVD or USB flash drive.
Check for problems the next time I start my computer. Select this option to schedule a
memory diagnostic after you restart the system. Do not remove the attached USB DVD or USB flash drive.
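If Windows itself still boots, the same memory test can also be scheduled without the recovery media by running the built-in Windows Memory Diagnostic scheduler from a command prompt. This is a generic Windows tool, not an HP-specific procedure:

C:\> mdsched.exe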
HP 1210m Volume Online Tool
Use the HP 1210m Volume Online Tool to manually set all volumes online. Manually setting volumes online may be necessary if a disk enclosure is powered down before the server blades are powered down and the enclosure contains disks with LUNs on them.
IMPORTANT: You should only use this tool under the guidance of HP Support to avoid potential
data loss. The tool is included (but not installed) with HP X5000 G2 software version 2.02.0a or later.
To install the tool:
1. Navigate to the C:\hpnas\support directory on the server blade.
2. Double-click Volume_Manager_Install.msi.
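If you are installing the tool on both server blades, you can also run the installer from a command prompt using standard msiexec options; this sketch assumes the package accepts the default silent-install switches:

C:\> msiexec /i C:\hpnas\support\Volume_Manager_Install.msi /qn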
To use the tool:
1. Navigate to C:\Program Files (x86)\Hewlett-Packard\HP 1210m Volume Online Tool.
2. Double-click HPVolumeOnlineTool.exe to start the tool.
NOTE: Before the tool opens, a disclaimer about potential data loss is displayed. Read the
disclaimer and accept the terms to continue. If you decline, the tool closes.
3. When the HP 1210m Volume Online Tool opens, the LUNs that are in an Enabled but Offline state are displayed.
4. Click Force Online.
When the operation is complete, the tool indicates that the LUNs are now in the Enabled state.
Obtaining the Service Agreement ID
Obtain the SAID from your service contract agreement and keep it in a secure location. You must provide it when you contact HP Support.
Locating the storage system warranty entitlement label
You must locate and identify the serial number and product number for the storage system components to obtain service under the warranty. The numbers are listed on the warranty entitlement label located on the pull-out tab below the Enclosure Management module on the back of the enclosure (Figure 40 (page 54)).
Figure 40 Warranty entitlement label location
5 Upgrading the storage system
The HP X5000 G2 Network Storage System comprises a common hardware platform containing two server blades. Each server runs Windows Storage Server 2008 R2 or later.
When HP determines that it is desirable to upgrade one or more of these components, a notification is posted to the HP support website for the HP X5000 G2 Network Storage System with the release notes and the updated code. HP recommends that you upgrade the storage system software as part of normal system maintenance for increased reliability and a better customer experience. Upgrades might also be necessary when replacing a server blade or other component.
Maintaining your storage system
HP recommends the following guidelines for maintaining your system, depending on your environment:
You are not required to install any updates if your storage system is working properly.
If security updates are important for your operating environment, you can:
Use Microsoft Windows Update to download updates.
Use Windows Update Server to update the server blades in the storage system.
Download and install specific security updates as needed.
If your maintenance policy is to only update servers to current and tested versions of drivers, firmware, and software, install the service releases when they are available.
If your maintenance policy allows you to update servers to current versions of drivers, firmware, and software that have not been tested with the storage system, you can go to http://www.hp.com and download and install specific updates from the storage system product page or the platform product page (for example, the specific server model used for the storage system server blade).
When updating the server blades, update the NIC drivers, firmware, and software in the same
update window. HP recommends updating the RAID controller drivers, firmware, and software in the same update window to ensure proper operation of the storage system.
Determining the current storage system software version
You can find the version using the Server Manager or the registry. From the Server Manager:
1. Expand the tree under System Manager.
2. Select System and Network Settings.
3. Read the value for HP Quick Restore in the System and Network Settings right pane.
Figure 41 System and network settings
From the registry:
1. Log in to the server blade.
2. Open a command window.
3. Enter the reg query command as shown in the following example:
C:\> reg query HKLM\Software\Wow6432Node\Hewlett-Packard\StorageWorks /s
The version information that displays depends on the software version that you are currently running. The output format will look similar to the following:
HKEY_LOCAL_MACHINE\Software\Wow6432Node\Hewlett-Packard\StorageWorks\QuickRestore
    BASE         REG_SZ    2.0.1.21
    QRVersion    REG_SZ    2.01.1a.93
The QRVersion field lists the version.
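The same value can also be read in a single command from PowerShell; a minimal sketch using the registry path shown above:

C:\> powershell -command "(Get-ItemProperty 'HKLM:\Software\Wow6432Node\Hewlett-Packard\StorageWorks\QuickRestore').QRVersion"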
Upgrading X5000 G2 software
Software upgrades are typically distributed as service releases. Download the service release file:
1. Go to http://www.hp.com/go/support.
2. Select Drivers and Software.
3. Enter X5000 G2 in the Enter a product name/number box and click Search.
4. Select your X5000 G2 product, select the operating system, and then select the service release.
5. Follow the instructions included with the service release for installation.
Upgrading a component's firmware
To determine if a component requires a firmware upgrade:
IMPORTANT: Remember the following:
You must complete these steps on each server.
HP recommends that you complete any upgrade during a scheduled maintenance period
and/or during periods of low user activity.
1. Install Hewlett-Packard CMP - Firmware Module from the latest X5000 G2 Service Release.
2. Select the System Manager in Server Manager. Click the System Summary tab, and inspect the Firmware update recommended icon status (Figure 42 (page 57)). If the icon is green, no firmware update is needed. If the icon is yellow, a firmware update is required. You can also check the Software component on the System Management Homepage to verify which firmware versions are currently installed. See “HP System Management Homepage” (page 26) for more information.
Figure 42 System summary tab
3. If a firmware update is needed, select the Firmware tab to view a list of the components that can be upgraded (Figure 43 (page 58)).
The list of components can include:
Integrated Lights-Out (iLO)
HP 1210m controller
System ROM (I24) for the server blade
Power Management controller firmware (c-Class blades)
Smart Array HP P410i Blade HDD controller
Enclosure Manager Unit (EMU)
External half of the SAS I/O module
Internal half of the SAS I/O module
I/O module on an external disk enclosure
Hard disk drives (various models)
Figure 43 Firmware tab
4. On the Firmware tab, select the box next to each component to be upgraded.
5. Click Apply Updates. The status reports that an upgrade is in progress.
CAUTION: When upgrading the controller firmware, you must complete the upgrade and
power cycle one controller and then upgrade and power cycle the other controller. Otherwise, the firmware may synchronize with the controller running the previous version of code.
If, after upgrading the firmware on the controllers, the storage system does not see any storage or the controllers do not start, see “Resolving errors after the HP 1210m controller upgrade”
(page 59).
NOTE: Each firmware upgrade takes a few minutes to complete. If you are upgrading
multiple components, such as hard drives, the upgrade takes more time.
6. Reboot the server blade from the Start menu, if needed. A message appears on the Firmware tab to alert you if a reboot is required after each
component firmware upgrade. After the upgrade completes, the Firmware Status changes from “Firmware updates recommended” to “A reboot is required.”
NOTE: If a reboot is not required after the component firmware upgrade completes, a green
checkmark displays next to the component name.
7. Open the Firmware tab in the System Manager and verify that the upgrade was successful. You can also check the upgrade status on the System Summary tab.
If the firmware upgrade failed, the component is listed as an available upgrade in the Firmware tab after the firmware upgrade and reboot. To determine the next steps for a successful firmware upgrade, go to the Reports tab (see “Reports” (page 33)) and run a report.
Page 59
Resolving errors after the HP 1210m controller upgrade
If the firmware upgrade for the HP 1210m controller does not complete successfully, the controllers could stop responding. As a result, the Controller Properties dialog box in Windows Device Manager displays “This device cannot start” and the storage system Configuration Wizard fails to detect storage.
To resolve this issue, first try the Simple method. If the issue persists, try the Advanced method.
Simple method:
1. Upgrade the HP 1210m controller firmware on one server blade.
2. Upgrade the HP 1210m controller firmware on the other server blade.
3. Shut down both server blades.
4. Power on both server blades.
Advanced method:
1. Shut down both server blades and power off the entire HP X5000 G2 enclosure.
2. Power off and disconnect all disk enclosures.
3. Pull one of the server blades a quarter of the way out of the enclosure.
4. Power on the HP X5000 G2 enclosure.
5. If not already powered on, power on the server blade that remained in the enclosure.
6. Open System Manager.
7. Select the Firmware tab.
8. Select the 1210m controller and click Apply Updates.
9. Shut down the server blade that was powered on in Step 5.
10. Power off the HP X5000 G2 enclosure.
11. Push the other server blade back into the enclosure.
12. Reconnect the disk enclosures.
13. Power on the HP X5000 G2 enclosure and both server blades.
14. Verify that the 1210m controller firmware in both server blades is current.
Resolving errors after a disk drive firmware upgrade
If, after upgrading disk drive firmware on the storage system and rebooting the storage system, the System Manager indicates an upgrade is needed, complete the following procedure:
1. Log in to the server in Bay 2.
2. Shut down the server in Bay 2 from the Windows Start menu.
3. Log in to the server in Bay 1 and open Server Manager.
4. Open the System Manager and select the Firmware tab.
5. In the Components Available for Upgrades window, select the drive firmware.
6. Click Apply Updates.
7. Once the update is complete, shut down Server Manager and any other running applications.
8. Shut down the server in Bay 1 from the Windows Start menu.
9. Power off any connected disk enclosures.
10. Power down the storage system chassis by pressing and holding the power button on the back of the enclosure.
11. Disconnect the power cables from the storage system and any connected disk enclosures.
12. Reconnect the power cables.
13. Power on any disk enclosures.
14. Power on the storage system chassis by pressing the power button on the back of the enclosure.
15. If necessary, manually power on the servers.
Page 60
Resolving an EMU upgrade issue
When upgrading the EMU firmware, if the EMU and the server blade initiating the upgrade are not on the same subnet, the upgrade fails. The following message (an example) displays on the System Manager Firmware tab:
Flash failed for Enclosure Management Unit (EMU) using cpXXXXXX.exe. Check log files (C:\ProgramData\Hewlett-Packard\CMP\logs\firmware.log and C:\CPQSYSTEM\log\cpqsetup.log) for further information.
The C:\CPQSYSTEM\log\EmuFlash.log displays the following information (an example only):
Enclosure Manager information:
Product Name   : HP CSP EMU
Part Number    : 620022-001
Serial Number  : PBCYU0G9V0C01X
UUID           : 99PBCYU0G9V0C01X
Manufacturer   : HP
Firmware Ver.  : EM: 1.10 Jul 12 2011; HM: 1.3
EMU Type       : 1
Hw Version     : Rev. B
Aux Info       : SVN: 3221 branches/QUIRE-CSP-1-10 1.10 Jul 12 2011
Starting Flash Routine
Launching Http Server
Host IP Address:
Host IP not found
If this issue occurs, configure the EMU and server blade networking to be on the same subnet and retry the firmware upgrade.
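To check whether the two interfaces share a subnet before retrying, you can compare the server blade's IPv4 address and subnet mask with the EMU's address from a command window. The following sketch uses 10.0.0.50 as a placeholder for your EMU's actual IP address:
C:\> ipconfig
Note the IPv4 Address and Subnet Mask of the blade's management interface, then confirm the EMU responds:
C:\> ping 10.0.0.50
A reply alone does not prove a shared subnet; applying the blade's subnet mask to both addresses must yield the same network ID.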
Upgrading hardware components
To replace a hardware component with an upgrade, follow the component removal and replacement instructions in “Removing and replacing hardware components” (page 61). For example, to replace the HP 1 Gb Ethernet I/O module with a 10 Gb module, follow the instructions in “Removing and
replacing the HP Ethernet I/O module” (page 78). If you need to shut down a server blade or the
storage system to replace a component, follow the instructions in “Powering the storage system off
and on” (page 60).
Powering the storage system off and on
Follow these steps to shut down a single server blade or to perform a storage system shutdown:
1. From the Windows desktop, shut down the blades in the following order:
a. While you are connected to blade 2, shut down blade 2 by clicking Start, and then Shut Down.
b. While you are connected to blade 1, shut down blade 1 by clicking Start, and then Shut Down.
NOTE: Let the Windows shutdown run to completion, which will power the blade off.
2. Power off any disk enclosures by pressing and holding down the power button located on the back of each disk enclosure.
3. Power off the storage system enclosure by pressing and holding down the power button located on the back of the enclosure.
4. Disconnect the power cables (optional).
To power on the server blades and storage system, reverse the shutdown procedure.
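If you are shutting down a blade from a command window instead of the Start menu (for example, over a remote session), the standard Windows shutdown command has the same effect. This is a generic Windows example, not an X5000 G2-specific requirement:
C:\> shutdown /s /t 0
As with the Start menu method, let the shutdown run to completion so that the blade powers off cleanly.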
Page 61
6 Removing and replacing hardware components
This chapter describes procedures for removing and replacing hardware components.
Customer self repair
HP customer self repair (CSR) programs allow you to repair your HP product. If a CSR part needs replacing, HP ships the part directly to you so that you can install it at your convenience. Some parts do not qualify for CSR. Your HP-authorized service provider determines whether a repair can be accomplished by CSR.
For more information about CSR, contact your local service provider, or see the CSR website:
http://www.hp.com/go/selfrepair
Best practices for replacing components
The following sections provide information to help you successfully replace the hardware components on your system.
WARNING! To reduce the risk of personal injury or damage to the equipment:
Be sure that only one component is extended from a rack at a time. A rack may become
unstable if more than one component is extended at the same time.
Do not extend the hard drive drawers beyond the supporting surface when the unit is not
installed in a rack.
CAUTION: Removing a component significantly changes the air flow within the enclosure. All
components must be installed for the enclosure to cool properly. If a component fails, leave it in place in the enclosure until a new component is available for installation.
IMPORTANT: Be sure to unpack the replacement part before you remove the existing component.
During replacement of the failed component
HP recommends waiting until periods of low system activity to replace a component.
For all hot/warm swappable components (SAS I/O module, fan module, Ethernet I/O module,
PCIe module, server airflow baffle, server blades, and hard drives), be sure to unpack the replacement part before removing the existing part.
When replacing components at the rear of the rack, cabling may obstruct access to the
component. Carefully move any cables out of the way to avoid loosening any connections. In particular, avoid cable damage that may be caused by:
Kinking or bending
Disconnecting cables without capping. If uncapped, cable performance may be impaired
by contact with dust, metal, or other surfaces.
Placing removed cables on the floor or other surfaces where they may be walked on or
otherwise compressed.
Accessing component replacement videos
HP produced videos of the procedures to assist you in replacing components. To view the videos, go to the HP Customer Self Repair Services Media Library website and navigate to your product:
http://www.hp.com/go/sml
Page 62
Identifying the spare part
Parts have a nine-character spare part number on their label. For some spare parts, the part number is available in the system. Alternatively, the HP call center can assist in identifying the correct spare part number.
Replaceable parts
This product contains replaceable parts. To identify the replaceable parts, see the individual component guides listed in Table 15 (page 65).
Parts that are available for CSR are indicated as follows:
Mandatory CSR — You order the part directly from HP and repair the product yourself. On-site
or return-to-depot repair is not provided under warranty.
Optional CSR — You can order the part directly from HP and repair the product yourself, or
you can request that HP repair the product. If you request repair from HP, you may be charged for the repair depending on the product warranty.
No CSR — The replaceable part is not available for self repair. For assistance, contact an
HP-authorized service provider.
For more information about CSR, contact your local service provider. For North America, see the CSR website:
http://www.hp.com/go/selfrepair
To determine the warranty service provided for this product, see the warranty information website:
http://www.hp.com/go/storagewarranty
To order a replacement part, contact an HP-authorized service provider or see the HP Parts Store online:
http://www.hp.com/buy/parts
Figure 44 (page 63) shows an exploded view of the system.
Page 63
Figure 44 Exploded view of the system
Table 14 (page 64) lists the CSR replaceable parts for the system.
Page 64
Table 14 Storage system replaceable parts
Item | Replaceable unit (RU) | Part number | CSR availability | Replacement type (Cold, Warm, Hot)
1 | Server interposer | 631117-001 | Optional | Cold
2 | Midplane board | 631115-001 | No | Cold
(Not shown) | 0.5 M mini SAS cable | 408765-001 | Mandatory | Hot
(Not shown) | Mini SAS cable 2 M | 408767-001 | Mandatory | Hot
(Not shown) | Mini SAS cable 4 M | 408768-001 | Mandatory | Hot
3 | LFF SAS I/O module | 631941-001 | Mandatory | Hot
(Not shown) | SFF SAS I/O module | 631940-001 | Mandatory | Hot
4 | Fan modules | 631109-001 | Mandatory | Hot
5 | Power UID button assembly | 399054-001 | Optional | Cold
6 | Power supplies | 631942-001 | Mandatory | Hot
7 | 2-port 10 Gb Ethernet module | 631111-001 | Mandatory | Hot
8 | 2-port 1 Gb Ethernet module | 611378-001 | Mandatory | Hot
(Not shown) | 1 Gb intraconnect module | 631114-001 | Mandatory | Hot
9 | NC365T 4-port Ethernet server adapter | 593721-B21 | Optional | Warm
10 | Enclosure Manager module | 631112-001 | Mandatory | Hot
11 | Server blade backplane | 631116-001 | No | Cold
12 | Server airflow baffle | 631129-001 | No | Cold
13 | Coil power assembly | 631130-001 | No | Cold
14 | Drive drawer bezel LFF | 631118-001 | Optional | Cold
(Not shown) | Drive drawer bezel SFF | 631124-001 | Optional | Cold
15 | LFF LED display board | 631126-001 | Optional | Cold
(Not shown) | SFF LED display board | 631125-001 | Optional | Cold
16 | LFF drive drawer assembly | 631128-001 | Optional | Cold
(Not shown) | SFF drive drawer assembly | 631127-001 | Optional | Cold
(Not shown) | Hard drive drawer blanks | 389015-001 | Mandatory | Hot
(Not shown) | 1 TB hard drive | 508011-001 | Mandatory | Hot
(Not shown) | 2 TB hard drive | 508010-001 | Mandatory | Hot
17 | Drawer rails bottom | 631131-001 | No | Cold
18 | Drawer rails left | 631132-001 | No | Cold
(Not shown) | Mezzanine NIC | 462748-001 | Optional | Warm
(Not shown) | Right ear bezel on chassis (3, one for each model) | 629960-001, 629960-002, 629960-003 | Optional | Hot
19 | 1210m controller | 615360-001 | Optional | Warm
20 | Cache module for 1210m | 598414-001 | Optional | Warm
Page 65
Table 14 Storage system replaceable parts (continued)
Item | Replaceable unit (RU) | Part number | CSR availability | Replacement type (Cold, Warm, Hot)
21 | Supercapacitor for 1210m cache | 587225-001 | Mandatory | Warm
(Not shown) | Rail kit assembly | 631133-001 | Optional | Cold
(Not shown) | HP 82B HBA (Brocade) PCI fibre HBA | AP770A | Optional (1) | Cold
(Not shown) | HP 82E HBA (Emulex) | AJ763A | Optional (1) | Warm
(Not shown) | HP 82Q HBA (Q-Logic) | AJ764A | Optional (1) | Warm
1. Used only for backup. See www.hp.com/go/ebs for information about tested backup applications.
For more information on removing and replacing components, see Table 15 (page 65) for a list of individual component documents.
Table 15 Related component documents
Component | Component name | Guide
Server blade | X5460sb blades | HP ProLiant BL460c G7 Server Blade Maintenance and Service Guide
Disks in disk enclosures | The large form factor (LFF) supports 12 3.5-inch disk drives and the small form factor (SFF) supports 25 2.5-inch disk drives. | HP D2600/D2700 Disk Enclosure User Guide
Hot, warm, and cold swap components
Hot or warm swapping a component means removing and replacing it while the main power is still on. Cold swapping means removing and replacing the component while the main power is off. Port (purple) colored handles on components like the fan module indicate the component is hot-swappable.
IMPORTANT: Remove and replace components quickly without interrupting the process.
Preventing electrostatic discharge
CAUTION: Components can be damaged by electrostatic discharge (ESD). Use proper antistatic
protection.
Always transport and store CSR replaceable parts in an ESD-protective enclosure.
Do not remove CSR replaceable parts from the ESD-protective enclosure until you are ready to install them.
Always use ESD precautions, such as a wrist strap, heel straps on conductive flooring, and
an ESD-protective smock when handling ESD-sensitive equipment.
Avoid touching all connector pins, leads, or circuitry.
Do not place ESD-generating material such as paper or non-antistatic (pink) plastic in an ESD-protective enclosure with ESD-sensitive equipment.
Page 66
Verifying component failure
Use the following methods to verify component failure:
Analyze any failure messages received. Fault monitoring software from HP provides a
recommended action.
From the System Manager, select the System Summary tab to check the enclosure health status
or select the Hardware Status tab to identify a failed component. See “Using the System
Manager” (page 30) for more information.
You can also use the System Management Homepage to identify hardware problems. For
example, to identify the affected enclosure, select Unit Identification Device in the Enclosure pane and then on the Unit Identification Device window, click On. The blue UID indicator on the controller enclosure blinks. See “HP System Management Homepage” (page 26) for more information.
Look for a blinking amber LED on the component. See “Component LEDs” (page 34) for LED
information.
Verifying proper operation
After replacing a system component, check the following to verify that the component is operating properly:
If applicable, verify that the green LED is lit continuously or blinking. If not, try reseating the
component.
From the System Manager, navigate to the Hardware Status and System Summary tabs to
confirm the component failure alert no longer appears. The status should be Good.
Wait times for hard disks
If the hard drive is part of a volume, the following wait times apply:
Removal: Less than three seconds for the LED to turn off
Insert:
Less than one second for first disk activity
Less than 15 seconds for the disk to be ready for REBUILD. The LED blinks at 1 Hz.
NOTE: The transition to solid green depends on how long the REBUILD takes (the LEDs
indicate REBUILD).
If the hard drive is not part of a volume, the following wait times apply:
Removal: No indication appears because the LED is already off
Insert:
Less than one second for the first disk activity to appear
Less than 15 seconds for the disk to be ready to use
Page 67
Removing the system enclosure from the rack
1. Extend the hard drive drawer (Figure 45 (page 67)):
a. Press upward on the release button on the hard drive drawer (1).
b. Pull the drawer handle down 90 degrees (2).
c. Extend the hard drive drawer (3).
Figure 45 Extending the hard drive drawer
2. Label the hard drives (Figure 46 (page 67)).
IMPORTANT: Use the drive labels provided with the replacement part when removing the
drives to ensure you replace the drives in the correct order.
Figure 46 Hard drive labeling
3. Remove all hard drives.
WARNING! Carefully check the drive labels provided with the replacement board, and then
install the hard drives in the same slots from which you removed them. If the drives are not installed in the correct slots, the system might fail.
4. Push the hard drive drawer back into the system enclosure.
5. Label each server blade and then remove both server blades.
6. Label the cables and then unplug all cables from the back of the system enclosure.
Page 68
7. Unscrew the retaining screws from the bezel ears, and then remove the enclosure from the rack.
WARNING! The system enclosure is heavy, even after removing the hard drives. Always
use at least two people to remove the system from the rack.
Inserting the system enclosure into the rack
1. Place the enclosure into the rack, and secure the enclosure by tightening the two retaining screws.
WARNING! The system enclosure is heavy, even after removing the hard drives. Always
use at least two people to replace the system in the rack.
2. Replace both server blades in their original bays.
3. Extend the hard drive drawer (Figure 47 (page 68)):
a. Press upward on the release button on the hard drive drawer (1).
b. Pull the drawer handle down 90 degrees (2).
c. Extend the hard drive drawer (3).
Figure 47 Extending the hard drive drawer
4. Replace all hard drives.
IMPORTANT: Install the hard drives in the same slots from which you removed them or the
system might fail. Use the drive labels to ensure that you replace the drives in the correct order.
5. Push the hard drive drawer back into the system enclosure.
6. Plug in all cables at the back of the system enclosure, and ensure that all cables are returned to their original locations.
7. Power on the system by pressing the power button ON.
8. Confirm that the system has resumed normal operations.
Removing and replacing the server interposer board
Removing the server interposer board
1. Verify the failed component as described in “Verifying component failure” (page 66).
2. Power off the system as described in “Powering the storage system off and on” (page 60).
3. Remove the enclosure from the rack as described in “Removing the system enclosure from the
rack” (page 67).
Page 69
4. Remove the top back panel by pressing the panel release button and lifting the latch to slide the top back panel off.
5. Open the release handle (1, Figure 48 (page 69)), and pull up to remove the server interposer board (2, Figure 48 (page 69)).
NOTE: You may need to use significant force to accomplish this task.
Figure 48 Removing the server interposer board
Replacing the server interposer board
1. With the release handle open, align the server interposer board with the alignment pins (1,
Figure 49 (page 69)), and then close the server interposer release mechanism (2, Figure 49 (page 69)).
NOTE: Remember to move the server backplane power cable out of the way of the alignment
pins.
Figure 49 Replacing the server interposer board
2. Reinstall the top back panel.
3. Replace the enclosure in the rack as described in “Inserting the system enclosure into the rack”
(page 68).
Page 70
Removing and replacing the midplane board
Removing the midplane board
1. Verify the failed component as described in “Verifying component failure” (page 66).
2. Power off the system as described in “Powering the storage system off and on” (page 60).
3. Remove the enclosure from the rack as described in “Removing the system enclosure from the
rack” (page 67).
4. Remove the top back panel by pressing the panel release button and lifting the latch to slide the top back panel off.
5. Remove all modules from the back of the enclosure.
NOTE: Make a note of all module locations so they can be placed back into their original
locations.
6. Open the release handle (1, Figure 50 (page 70)), and pull up to remove the server interposer board (2, Figure 50 (page 70)).
NOTE: This step may require significant force to accomplish.
Figure 50 Removing the server interposer board
7. Remove the plug bracket (2, Figure 51 (page 70)) from the coil power plug by removing the thumbscrew (1).
Figure 51 Removing the plug bracket from the coil power plug
Page 71
8. Unplug the coil power assembly from the midplane board (Figure 52 (page 71)).
Figure 52 Unplugging the coil power assembly
9. Extend the server blades.
10. Remove the server blade airflow baffle from inside the enclosure (Figure 53 (page 71)).
Figure 53 Removing the server blade airflow baffle
11. Unplug the power cable from the server blade midplane (1, Figure 54 (page 71)), and then unplug the rear UID PCA from the midplane board (2).
Figure 54 Unplugging the power cable and the UID PCA
Page 72
12. Complete the following (Figure 55 (page 72)):
a. Loosen the two thumbscrews holding the midplane board in place (1).
b. Pull the captive locking pin out of the midplane board (2).
c. Lift the midplane board out of the enclosure (3).
Figure 55 Removing the midplane board
Replacing the midplane board
1. On the replacement midplane board, pull out the captive locking pin as you lower the board into the enclosure (1, Figure 56 (page 72)).
2. To complete the installation of the replacement midplane board:
a. Push the captive locking pin into the midplane board (2).
b. Tighten the two thumbscrews holding the midplane board in place (3).
Figure 56 Installing the midplane board
3. Plug the rear UID PCA into the midplane board.
4. Plug the power cable into the server blade midplane.
5. Partially insert the drive drawer.
6. Plug the coil power plug into the midplane board.
7. Reattach the coil power plug bracket.
8. Reinsert the server blade airflow baffles.
9. Reinstall the server interposer board, see “Replacing the server interposer board” (page 69).
10. Push the hard drive drawer back into the enclosure.
Page 73
11. Replace the top back panel.
12. Reinsert all rear components in the enclosure.
13. Replace the enclosure in the rack as described in “Inserting the system enclosure into the rack”
(page 68).
Removing and replacing a SAS cable
CAUTION: Remove only one cable at a time to prevent downtime.
IMPORTANT: Check the QuickSpecs for the device before you purchase and connect SAS cables
to ensure that the cables do not exceed the maximum supported length. Only specific cable lengths were tested and approved for use with external disk enclosures.
Ensure that cabling in the back of the rack system does not interfere with system operation or maintenance. Bind cables loosely with cable ties and route the excess out of the way, along the side of the rack. When cables are tied together and routed down the side of the rack, system components and indicators are easily visible and accessible.
Removing a SAS cable
Remove the SAS cable that connects the system SAS I/O module to the disk enclosure.
Replacing a SAS cable
1. Connect the SAS cable between the system SAS I/O module and the disk enclosure.
2. Verify that the replacement SAS cable is working properly by checking the associated LED status on the SAS I/O module.
3. Confirm that the system has resumed normal operations.
Removing and replacing the SAS I/O module
Removing the SAS I/O module
1. Verify the failed component as described in “Verifying component failure” (page 66).
2. Label the cables so they can be returned to their original locations.
3. Unplug all cables from the SAS I/O module.
IMPORTANT: The SAS I/O cables must be installed in the same slots from which they are
removed or the system might fail.
4. Pull up on the SAS I/O module release button (1, Figure 57 (page 74)).
Page 74
5. Push down on the SAS I/O module lever (2, Figure 57 (page 74)), and then remove the failed SAS I/O module (3, Figure 57 (page 74)).
NOTE: You may need to use significant force to accomplish this task.
Figure 57 Removing the SAS I/O module
Replacing the SAS I/O module
1. To install the replacement SAS I/O module (Figure 58 (page 74)):
a. Insert the SAS I/O module into the enclosure (1).
b. Push up on the SAS I/O module lever (2) until it locks into place.
NOTE: You may need to use significant force to accomplish this task.
Figure 58 Replacing the SAS I/O module
2. Plug in all cables to the SAS I/O module.
IMPORTANT: You must install the SAS I/O cables in the same slots from which they were
removed or the system might fail.
3. Verify that the replacement SAS I/O module is working properly by checking the overall module status LED (“SAS I/O module LEDs status” (page 41)).
NOTE: The green overall module status LED should turn on within five seconds after the new
module is inserted in the system, which reflects the necessary time to boot the firmware.
Page 75
4. Confirm the firmware version.
5. Confirm that the system has resumed normal operations.
Removing and replacing the fan module
There are two fan modules: one server fan module, which cools the server half of the enclosure, and one hard drive fan module, which cools the drive half of the enclosure. The two fan modules are not redundant for each other.
CAUTION: You must replace the server fan module within three minutes or a thermal shutdown
of the system may occur. The total time allowance is three minutes for replacing the fan module,
which includes the removal of the original server fan module and installation of the replacement fan.
Removing a fan module significantly changes the air flow within the enclosure. Both fan modules must be installed for the enclosure to cool properly. The fan modules are not redundant to each other, and each module cools a different half of the enclosure. If a single fan module fails, leave it in place in the enclosure until a new fan is available to install. The fan modules have some built-in redundancy to keep operating until a replacement can be made. The remaining fan module speeds up and allows operation for a limited time, based on operating and environmental conditions. If a temperature threshold is exceeded, the enclosure automatically shuts down.
Removing the fan module
1. Verify the failed component as described in “Verifying component failure” (page 66).
2. Press up on the drive fan module release lever (1, Figure 59 (page 75)) and remove the fan module (2).
Figure 59 Removing the fan module
Page 76
Replacing the fan module
1. Insert the replacement fan module (Figure 60 (page 76)).
Figure 60 Replacing the fan module
2. Verify that the replacement component is working properly by checking the associated LED status.
NOTE: It should take approximately 15 seconds for the LED status to appear.
3. Confirm that the system has resumed normal operations.
Removing and replacing the power UID button assembly
Removing the power UID button assembly
1. Power off the system as described in “Powering the storage system off and on” (page 60).
2. Remove the enclosure from the rack as described in “Removing the system enclosure from the
rack” (page 67).
3. Remove the top back panel by pressing the panel release button and lifting the latch to slide the top back panel off.
4. Remove the hard drive fan module (Figure 61 (page 76)).
Figure 61 Removing the fan module
Page 77
5. Complete the following (Figure 62 (page 77)):
a. Unplug the cable from the power UID button assembly (1).
b. Remove the screw from the power UID button assembly (2).
c. Remove the faulty power UID button assembly (3).
Figure 62 Removing the power UID button assembly
Replacing the power UID button assembly
1. Complete the following (Figure 63 (page 77)):
a. Insert the replacement power UID button assembly (1).
b. Replace the screw in the power UID button assembly (2).
c. Plug the cable into the power UID button assembly (3).
Figure 63 Replacing the power UID button assembly
2. Push the hard drive drawer back in the system enclosure.
3. Replace the hard drive fan module.
4. Replace the top back panel.
5. Replace the enclosure as described in “Inserting the system enclosure into the rack” (page 68).
Removing and replacing the power supply
Removing the power supply
1. Verify the failed component as described in “Verifying component failure” (page 66).
2. Remove the power cord from the power supply.
3. Press the power supply release lever to the left.
4. Remove the failed power supply.
Page 78
Replacing the power supply
1. Insert the replacement power supply.
2. Plug the power cord into the power supply.
3. Verify that the replacement component is working properly by checking the associated LED status.
4. Confirm that the system has resumed normal operations.
Removing and replacing the HP Ethernet I/O module
Removing the HP Ethernet I/O module
1. Verify the failed component as described in “Verifying component failure” (page 66).
2. Label the cables, and then unplug all cables from the HP Ethernet I/O module.
3. Press the module release mechanism to the right (1, Figure 64 (page 78)), and then remove the failed module (2).
Figure 64 Removing the HP Ethernet I/O module
Replacing the HP Ethernet I/O module
1. Insert the replacement HP Ethernet I/O module (Figure 65 (page 78)).
Figure 65 Replacing the HP Ethernet I/O module
2. Plug all cables into the replacement module in their original locations.
Page 79
3. Verify that the replacement component is working properly by checking the associated LED status.
NOTE: It should take approximately 15 seconds for the LED status to display.
4. Confirm the firmware version.
5. Confirm that the system has resumed normal operations.
Removing and replacing the PCIe module (with card)
Removing the PCIe module
1. Verify the failed component as described in “Verifying component failure” (page 66).
2. Use the System Manager to identify which server needs to have the PCIe module removed. If it is for both servers, then perform this operation for one server, then the other server, so that both servers are not turned off at the same time.
3. Power off the appropriate server blade associated with the PCIe module that is being removed. Server 1 is the top server, and the PCIe module is on the left when looking from the back.
Server 2 is the bottom server, and the PCIe module is on the right when looking from the back.
CAUTION: Be sure to power off the server before removing the PCIe module.
4. Label the cables so they can be returned to their original locations.
5. Unplug all cables from the PCIe module.
6. Press the PCIe module release mechanism to release the handle (1, Figure 66 (page 79)), and then pull the handle to remove the PCIe module from the system (2).
Figure 66 Removing the PCIe module
Page 80
7. Complete the following (Figure 67 (page 80)):
a. Remove the two screws from the bracket of the failed PCIe module (1).
b. Remove the bracket (2).
c. Remove the PCIe card from the failed module (3).
Figure 67 Removing the PCIe card
Replacing the PCIe module
1. Install the PCIe card in the replacement module (1, Figure 68 (page 80)), replace the bracket (2), and then reinsert the two screws into the bracket of the replacement module (3).
Figure 68 Installing the PCIe card
2. Insert the replacement PCIe module into the system (1, Figure 69 (page 81)), and lock the release lever (2).
NOTE: The PCIe module should be inserted with the lever in the open position.
Page 81
Figure 69 Installing the PCIe module
3. Plug in all cables to the PCIe module in their original locations.
4. Power on the server blade by pressing the power button ON.
5. Verify that the replacement component is working properly by checking the associated LED status.
6. Confirm that the system has resumed normal operations.
Removing and replacing the EMU module
Removing the EMU module
1. Verify the failed component as described in “Verifying component failure” (page 66).
2. Unplug any cables from the EMU module.
3. Press the EMU module release lever to the right (1, Figure 70 (page 81)), and then remove the EMU module (2).
Figure 70 Removing the EMU
Page 82
Replacing the EMU module
1. Insert the replacement EMU module and ensure the release lever locks in place (Figure 71
(page 82)).
Figure 71 Installing the EMU
2. Plug the cables back into the EMU module.
3. Verify that the new component is working properly by checking the associated LED status.
4. Confirm the firmware version.
5. Obtain an IP address.
IMPORTANT: Some of the configuration information is automatically repopulated, but you
must reconfigure the network settings and password.
6. Confirm that the system has resumed normal operations.
NOTE: This may take approximately one minute, or the time it takes for the Enclosure Manager
to boot.
Removing and replacing the server blade backplane
Removing the server blade backplane
1. Verify the failed component as described in “Verifying component failure” (page 66).
2. Power off the system as described in “Powering the storage system off and on” (page 60).
3. Remove the enclosure from the rack as described in “Removing the system enclosure from the
rack” (page 67).
4. Remove the top back panel by pressing the panel release button and lifting the latch to slide the top back panel off.
5. Remove the midplane board as described in “Removing the midplane board” (page 70).
6. Remove the small baffle from beside the server blade backplane by pinching the tabs and lifting the small baffle out of the enclosure.
7. Remove the large baffle from the bottom of the enclosure.
Page 83
8. Complete the following (Figure 72 (page 83)):
a. Unplug the power cable from the server blade backplane by pinching the plug release mechanism (1).
b. Remove the screw (2).
c. Remove the server blade backplane from the enclosure (3).
Figure 72 Removing the server blade backplane
Replacing the server blade backplane
1. Complete the following (Figure 73 (page 83)):
a. Install the replacement server blade backplane (1).
b. Replace the screw (2).
c. Plug in the power cable (3).
Figure 73 Installing the server blade backplane
2. Replace the large baffle on the bottom of the enclosure.
3. Replace the small baffle beside the server blade backplane.
Page 84
4. Replace the midplane board (Figure 74 (page 84)):
a. Pull out the captive locking pin as you lower the board into the enclosure (1).
b. Push the captive locking pin into the midplane board (2).
c. Tighten the two thumbscrews holding the midplane board in place (3).
Figure 74 Installing the midplane board
5. Plug the rear UID PCA into the midplane board.
6. Replace the midplane board as described in “Replacing the midplane board” (page 72).
Removing and replacing the server airflow baffle
Removing the server airflow baffle
1. Power off the system as described in “Powering the storage system off and on” (page 60).
2. Remove the enclosure from the rack as described in “Removing the system enclosure from the
rack” (page 67).
3. Remove the top back panel by pressing the release button and lifting the latch to slide the top back panel off.
4. Remove the server blade airflow baffle from inside the enclosure (Figure 75 (page 84)).
Figure 75 Removing the server blade airflow baffle
Page 85
Replacing the server airflow baffle
1. Install the replacement server blade airflow baffle (Figure 76 (page 85)).
Figure 76 Installing the server blade airflow baffle
2. Reinstall the top back panel.
3. Replace the enclosure as described in “Inserting the system enclosure into the rack” (page 68).
Removing and replacing the front bezel (standard)
NOTE: Use “Removing and replacing the front bezel (full)” (page 87) if you are not able to reach
all of the screws due to the position of the system in the rack.
Removing the front bezel
1. Power off the system as described in “Powering the storage system off and on” (page 60).
2. Extend the hard drive drawer (Figure 77 (page 85)):
a. Press upward on the release button on the hard drive drawer (1).
b. Pull the drawer handle down 90 degrees (2).
c. Extend the hard drive drawer (3).
Figure 77 Extending the hard drive drawer
Page 86
3. Remove all eight screws from the front bezel (1, Figure 78 (page 86)), and then lift the front bezel up and out to remove the front bezel (2).
NOTE: There are two screws on the bottom, four screws on the sides (two on each side),
and two screws hidden behind the handle.
Figure 78 Removing the front bezel
Replacing the front bezel
1. Install the replacement front bezel with the handle at a 90 degree angle, making sure the bottom pins are aligned with the bottom holes (1, Figure 79 (page 86)), and replace the screws in the front bezel (2).
NOTE: There are two screws on the bottom, four screws on the sides (two on each side),
and two screws hidden behind the handle.
Figure 79 Replacing the front bezel
2. Push the drive drawer back into the system enclosure.
3. Power on the system by pressing the power button ON.
4. Verify that the replacement component is working properly by checking the associated LED status.
5. Confirm that the system has resumed normal operations.
Page 87
Removing and replacing the front bezel (full)
NOTE: This full procedure is only required if all screws are not accessible due to the position of
the system in the rack.
Removing the front bezel (full)
1. Power off the system as described in “Powering the storage system off and on” (page 60).
2. Remove the enclosure from the rack as described in “Removing the system enclosure from the
rack” (page 67).
3. Pull the hard drive handle down 90 degrees, and slide out the hard drive drawer.
4. Remove all eight screws from the front bezel and pull the handle down 90 degrees (1, Figure 80
(page 87)). Then lift the front bezel up and out to remove the front bezel (2).
NOTE: There are two screws on the bottom, four screws on the sides (two on each side),
and two screws hidden behind the handle.
Figure 80 Removing the front bezel
Page 88
Replacing the front bezel (full)
1. Install the replacement front bezel with the handle at a 90 degree angle, making sure the bottom pins are aligned with the bottom holes (1, Figure 81 (page 88)), and replace the screws in the front bezel (2).
NOTE: There are two screws on the bottom, four screws on the sides (two on each side),
and two screws hidden behind the handle.
Figure 81 Replacing the front bezel
2. Close the drive handle.
3. Push the drive drawer back into the system enclosure.
4. Replace the enclosure as described in “Inserting the system enclosure into the rack” (page 68).
Removing and replacing the front LED display board in the rack (standard)
NOTE: If you are not able to access all of the screws due to the enclosure position in the rack,
use the full procedure instructions.
Removing the front LED display board in the rack
1. Verify the failed component as described in “Verifying component failure” (page 66).
2. Power off the system as described in “Powering the storage system off and on” (page 60).
3. Remove the front bezel as described in “Removing the front bezel” (page 85).
Page 89
4. Complete the following (Figure 82 (page 89)):
a. Disconnect the LED display board from the drive backplane by pinching the ends of the LED display board cable together (1).
b. Remove the four screws from the LED display board (2).
c. Remove the LED display board from the drive drawer (3).
Figure 82 Removing the front LED display board
Replacing the front LED display board in the rack
1. Complete the following (Figure 83 (page 89)):
a. Install the replacement LED display board (1).
b. Replace the four LED display board screws (2).
c. Reconnect the LED display board to the drive drawer (3).
Figure 83 Installing the front LED display board
2. Replace the front bezel as described in “Replacing the front bezel” (page 86).
Removing and replacing the front LED display board (full)
Removing the front LED display board (full)
1. Verify the failed component as described in “Verifying component failure” (page 66).
2. Power off the system as described in “Powering the storage system off and on” (page 60).
3. Remove the enclosure as described in “Removing the system enclosure from the rack” (page 67).
Page 90
4. Pull the hard drive drawer handle down 90 degrees, and slide out the hard drive drawer.
5. Remove all eight screws from the front bezel (1, Figure 84 (page 90)). Then, lift the front bezel up and out to remove the front bezel (2).
NOTE: There are two screws on the bottom, four screws on the sides (two on each side),
and two screws hidden behind the handle.
Figure 84 Removing the front bezel
6. Complete the following (Figure 85 (page 90)):
a. Disconnect the LED display board from the drive backplane by pinching the ends of the LED display board cable together (1).
b. Remove the four screws from the LED display board (2).
c. Remove the LED display board from the drive drawer (3).
Figure 85 Removing the front LED display board
Page 91
Replacing the front LED display board (full)
1. Complete the following (Figure 86 (page 91)):
a. Install the replacement LED display board (1).
b. Replace the four LED display board screws (2).
c. Reconnect the LED display board to the drive drawer (3).
Figure 86 Installing the front LED display board
2. Replace the front bezel as described in “Replacing the front bezel (full)” (page 88).
Removing and replacing a drive drawer
Removing the drive drawer
1. Verify the failed component as described in “Verifying component failure” (page 66).
2. Power off the system as described in “Powering the storage system off and on” (page 60).
3. Remove the enclosure as described in “Removing the system enclosure from the rack” (page 67).
4. Remove the top back panel by pressing the panel release button and lifting the latch to slide the top back panel off.
5. Remove the hard drive fan module (Figure 87 (page 91)).
Figure 87 Removing the fan module
6. Push up on the SAS I/O module release button (1, Figure 88 (page 92)).
7. Push down on the SAS I/O module lever (2, Figure 88 (page 92)), and then remove the SAS I/O module (3).
NOTE: This step may require significant force to accomplish.
Page 92
Figure 88 Removing the SAS I/O module
8. Extend the drive drawer (Figure 89 (page 92)):
a. Press upward on the release button on the hard drive drawer (1).
b. Pull the drawer handle down 90 degrees (2).
c. Extend the hard drive drawer (3).
Figure 89 Extending the hard drive drawer
NOTE: You must repeat Steps 6 and 7 for the remaining SAS I/O module.
9. Remove the plug bracket (2, Figure 90 (page 93)) from the coil power plug by removing the thumbscrew (1).
Page 93
Figure 90 Removing the plug bracket from the coil power plug
10. Unplug the coil power assembly from the midplane board (Figure 91 (page 93)).
Figure 91 Unplugging the coil power assembly
11. Press the release mechanism on the side rail (1, Figure 92 (page 93)), and then pull the hard drive drawer fully out of the enclosure (2).
WARNING! The hard drive drawer is heavy, even after removing the hard drives. Make
sure the drawer is fully supported as you remove it from the enclosure.
Figure 92 Removing the drive drawer
Page 94
Replacing the drive drawer
1. Unlock the side enclosure rail and push it into the back of the enclosure (Figure 93 (page 94)).
2. Align the bottom replacement drive drawer rails with the bottom enclosure rails.
Figure 93 Unlocking the enclosure rails
3. Align the side rails and then push the replacement drive drawer partially back into the system enclosure until approximately two inches of the drawer is still out of the enclosure (Figure 94
(page 94)).
CAUTION: Do not push the drive drawer completely into the enclosure. You must first connect
the power coil assembly to prevent damaging the power coil assembly.
Figure 94 Partially installing the drive drawer
4. Pull the cable slightly out of the coil power plug and connect it to the midplane board (Figure 95
(page 95)).
Page 95
Figure 95 Connecting the coil power assembly to the midplane board
5. Reattach the plug bracket (1, Figure 96 (page 95)) to the coil power plug and tighten the thumbscrew (2).
Figure 96 Reattaching the plug bracket to the coil power plug
6. Push the drive drawer fully back into the system enclosure (1, Figure 97 (page 96)) and the handle back into place (2).
Page 96
Figure 97 Pushing the drive drawer into the system enclosure
7. Replace the top back panel.
8. Replace the drive fan module.
9. Replace both SAS I/O modules.
10. Replace the enclosure as described in “Inserting the system enclosure into the rack” (page 68).
Removing and replacing the drive drawer hard drive
CAUTION:
Do not replace the hard drive with a SATA drive. Be sure to replace the hard drive only with
an approved SAS drive.
Do not replace the drive drawer hard drive during peak data transfer times. Make sure the
hard drive LED is off before you remove the hard drive.
Ensure that the capacity of the replacement drive is at least equal to the capacity of the original
drive. The capacity of the replacement drive should not be smaller.
NOTE: After replacing the hard drives, the approximate wait times for viewable disk LED activity
vary.
Removing the drive drawer hard drive
1. Verify the failed component as described in “Verifying component failure” (page 66).
Page 97
2. Extend the hard drive drawer (Figure 98 (page 97)):
a. Press upward on the release button on the hard drive drawer (1).
b. Pull the drawer handle down 90 degrees (2).
c. Extend the hard drive drawer (3).
Figure 98 Extending the hard drive drawer
3. Locate the failed hard drive.
NOTE: Use the hard drive bay labels and the drive LED status (an amber LED or no LEDs)
to help identify the failed drive.
4. To remove the failed hard drive (Figure 99 (page 97)):
a. Press the release button (1).
b. Pull the release lever (2).
c. Remove the hard drive (3).
Figure 99 Remove the failed hard drive
Page 98
Replacing the drive drawer hard drive
1. Install the hard drive (Figure 100 (page 98)):
a. Insert the replacement hard drive with the lever in the open position (1).
b. Push the release lever into place (2).
Figure 100 Installing the hard drive
2. Push the drive drawer back into the system enclosure.
3. Verify that the replacement component is working properly by checking the associated LED status.
NOTE: It may take up to 15 seconds for the LED status to appear.
4. Confirm that the system has resumed normal operations.
5. Confirm the hard drive firmware version.
IMPORTANT: You must reboot the storage solution after updating the drive drawer hard
drive firmware.
Removing and replacing the drive drawer rails (side or bottom)
NOTE: Spare rail kits consist of rail pairs, one side rail, and two bottom drive drawer rails. See
“Removing and replacing the enclosure rails” (page 103) for enclosure rail instructions.
Removing the drive drawer rails
1. Power off the system as described in “Powering the storage system off and on” (page 60).
2. Remove the enclosure as described in “Removing the system enclosure from the rack” (page 67).
3. Remove the top back panel by pressing the panel release button and lifting the latch to slide the top back panel off.
Page 99
4. Extend the hard drive drawer (Figure 101 (page 99)):
a. Press upward on the release button on the hard drive drawer (1).
b. Pull the drawer handle down 90 degrees (2).
c. Extend the hard drive drawer (3).
Figure 101 Extending the hard drive drawer
5. Remove the plug bracket (2, Figure 102 (page 99)) from the coil power plug by removing the thumbscrew (1).
Figure 102 Removing the plug bracket from the coil power plug
6. Unplug the coil power assembly from the midplane board (Figure 103 (page 100)).
Page 100
Figure 103 Unplugging the coil power assembly
7. Press the release mechanism on the side rail (1, Figure 104 (page 100)), and then pull the hard drive drawer fully out of the enclosure (2).
WARNING! The hard drive drawer is heavy, even after removing the hard drives. Make
sure the drawer is fully supported as you remove it from the enclosure.
Figure 104 Removing the drive drawer