
HP XC Systems with HP Server Blades and Enclosures HowTo

Version 3.1 or Version 3.2
Published: April 2007 Edition: 9
© Copyright 2006, 2007 Hewlett-Packard Development Company, L.P.
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
AMD and AMD Opteron are trademarks or registered trademarks of Advanced Micro Devices, Inc.
Firefox is a registered trademark of Mozilla Foundation.
InfiniBand is a registered trademark and service mark of the InfiniBand Trade Association.
Intel, Itanium, and Xeon are trademarks or registered trademarks of Intel Corporation in the United States and other countries.
Linux is a U.S. registered trademark of Linus Torvalds.
Quadrics and QsNetII are registered trademarks of Quadrics, Ltd.
Red Hat and RPM are registered trademarks of Red Hat, Inc.
UNIX is a registered trademark of The Open Group.
Windows and Internet Explorer are registered trademarks of Microsoft Corporation.

Table of Contents

1 Overview.........................................................................................................................9
1.1 Minimum Requirements...................................................................................................................9
1.2 Read the Documentation Before You Begin......................................................................................9
1.3 Supported Server Blade Combinations...........................................................................................10
1.4 c-Class Server Blade Hardware Components.................................................................................10
1.4.1 Supported HP ProLiant C-Class Server Blade Models...........................................................10
1.4.2 Enclosures and Onboard Administrators...............................................................................11
1.4.3 iLO2 Console Management Device.........................................................................................11
1.4.4 Management Processor Console Management Device...........................................................12
1.4.5 Mezzanine Cards.....................................................................................................................12
1.4.6 Interconnect Modules..............................................................................................................12
2 Task Summary and Checklist......................................................................................13
2.1 Best Practice for System Configuration...........................................................................................13
2.2 Installation and Configuration Checklist........................................................................................13
3 Cabling.........................................................................................................................15
3.1 Network Overview .........................................................................................................................15
3.2 Cabling for the Administration Network........................................................................................15
3.3 Cabling for the Console Network...................................................................................................16
3.4 Cabling for the Interconnect Network............................................................................................17
3.4.1 Configuring a Gigabit Ethernet Interconnect..........................................................................17
3.4.2 Configuring an InfiniBand Interconnect.................................................................................18
3.4.3 Configuring the Interconnect Network Over the Administration Network..........................19
3.5 Cabling for the External Network...................................................................................................19
3.5.1 Configuring the External Network: Option 1.........................................................................19
3.5.2 Configuring the External Network: Option 2.........................................................................20
3.5.3 Configuring the External Network: Option 3 - Non Gigabit Ethernet Interconnect Clusters.......................................................................................21
3.5.4 Creating VLANs......................................................................................................................22
4 Installing HP XC System Software On the Head Node...........................................23
4.1 Task 1: Refer to the Installation Guide............................................................................................23
4.2 Task 2: Install HP XC System Software on the Head Node............................................................23
4.2.1 Connect to the Onboard Administrator..................................................................................23
4.2.2 Start the Installation................................................................................................................24
5 Discovering the Hardware Components....................................................................27
5.1 Task 1: Prepare for the System Configuration.................................................................................27
5.1.1 Node Naming Differences.......................................................................................................27
5.1.2 Head Node Naming................................................................................................................27
5.2 Task 2: Change the Default IP Address Base (Optional).................................................................28
5.3 Task 3: Use the cluster_prep Command to Prepare the System.....................................................28
5.4 Task 4: Discover Switches................................................................................................................29
5.5 Task 5: Set the Onboard Administrator Password..........................................................................30
5.6 Task 6: Discover Enclosures and Nodes..........................................................................................30
6 Making Node-Specific Settings..................................................................................33
6.1 Making Settings on Non-Blade Servers...........................................................................................33
6.2 Making Settings on HP ProLiant Server Blades..............................................................................33
6.3 Making Settings on HP Integrity Server Blades..............................................................................36
7 Configuring the HP XC System...................................................................................39
7.1 Task 1: Install Patches or RPM Updates..........................................................................................39
7.2 Task 2: Refer To the Installation Guide For System Configuration Tasks.......................................39
7.2.1 Using Specific IP Addresses to Configure InfiniBand Interconnect Switch Controller Cards................................................................................39
7.2.2 Running the startsys Command With Specific Options To Start the System and Propagate the Golden Image..................................................39
7.3 Task 3: Verify Success......................................................................................................................40
7.4 You Are Done..................................................................................................................................40
8 Troubleshooting............................................................................................................41
8.1 One or More Ports Do not Communicate Properly on a Gigabit Ethernet Switch.........................41
8.2 lsadmin limrestart Command Fails.................................................................................................41
A Configuration Examples..............................................................................................43
A.1 Gigabit Ethernet Interconnect With Half-Height Server Blades....................................................43
A.2 InfiniBand Interconnect With Full-Height Server Blades..............................................................43
A.3 InfiniBand Interconnect With Mixed Height Server Blades...........................................................44
Index.................................................................................................................................47
List of Figures
3-1 Administration Network Connections..........................................................................................16
3-2 Console Network Connections......................................................................................................17
3-3 Gigabit Ethernet Interconnect Connections..................................................................................18
3-4 InfiniBand Interconnect Connections............................................................................................18
3-5 External Network Connections: Full-Height Server Blades and NIC1 and NIC2 in Use.............20
3-6 External Network Connections: Half-Height Server Blades and NIC1 and NIC2 in Use............21
3-7 External Network Connections: Half and Full-Height Server Blades and NIC1 in Use...............22
A-1 Gigabit Ethernet Interconnect With Half-Height Server Blades...................................................43
A-2 InfiniBand Interconnect With Full-Height Server Blades.............................................................44
A-3 InfiniBand Interconnect With Mixed Height Server Blades..........................................................45
List of Tables
1-1 Minimum Requirements.................................................................................................................9
2-1 Installation and Configuration Checklist......................................................................................13
4-1 Head Node Installation Instructions.............................................................................................23
4-2 Boot Command Line Options Based on Hardware Model...........................................................25
6-1 Boot Order for HP ProLiant Server Blades...................................................................................34
6-2 Additional BIOS Setting for HP ProLiant BL685c Nodes.............................................................35
6-3 Adding a Boot Entry and Setting the Boot Order on HP Integrity Server Blades........................37
7-1 InfiniBand Switch Controller Card Naming Conventions and IP Addresses..............................39

1 Overview

HP c-Class server blades (hereafter called server blades) are well suited to form HP XC systems. Their physical characteristics make it possible to deploy many tightly interconnected nodes while reducing cabling requirements. Typically, server blades are used as compute nodes, but they can also function as the head node and service nodes. The hardware and network configuration on an HP XC system with HP server blades differs from that of a traditional HP XC system, and those differences are described in this document.
This HowTo contains essential information about network cabling, hardware preparation tasks, and software installation instructions that are specific to configuring HP server blades for HP XC. HP recommends that you read this entire document before beginning.

1.1 Minimum Requirements

Table 1-1 lists the minimum requirements to accomplish the tasks described in this HowTo.
Table 1-1 Minimum Requirements
Software Version
• Distribution media for HP XC System Software Version 3.1 or Version 3.2 that is appropriate for your cluster platform architecture

Hardware Configuration
• A hardware configuration consisting of HP server blades to act as compute nodes and possibly as the head node and service nodes
• At least one ProCurve 2800 series switch, which is required at the root
• Optional ProCurve 2600 series switches
• Gigabit Ethernet or InfiniBand® interconnect switches
• A local PC or laptop computer that is running a recent version of Mozilla Firefox® or Internet Explorer®

Knowledge and Experience Level
• Previous experience with a Linux® operating system
• Familiarity with HP server blades, enclosures, and related components, gained by reading the documentation that came with your model of HP server blade

Documentation
• The most recent version of this HowTo
• The installation, administration, and user guides for the following components:
— HP (ProLiant or Integrity) C-Class Server Blades
— HP BladeSystem c-Class Onboard Administrator
— HP Server Blade c7000 Enclosure
• HP XC System Software Release Notes
• HP XC Hardware Preparation Guide
• HP XC System Software Installation Guide

1.2 Read the Documentation Before You Begin

Before you begin, HP recommends that you read the related documentation listed in Table 1-1 to become familiar with the hardware components and overall system configuration process.
If you do not have the required documentation in your possession, see the following sources:
The most current documentation for HP Server Blades, enclosures, and other server blade components is available at the following Web site:
http://www.hp.com/go/bladesystem/documentation
The most current edition of the Version 3.1 or Version 3.2 HP XC System Software Documentation Set is available at the following Web site:
http://www.docs.hp.com/en/highperfcomp.html
This HowTo is evolving. Go to http://www.docs.hp.com/en/highperfcomp.html and make sure you have the latest version of this HowTo, because it might have been updated since you downloaded the version you are reading now.

1.3 Supported Server Blade Combinations

The HP XC System Software supports the following server blade hardware configurations:
• A hardware configuration composed entirely of HP server blades; that is, the head node, the service nodes, and all compute nodes are server blades.
• A hardware configuration that contains a mixture of Opteron and Xeon server blades.
• A mixed hardware configuration of HP server blades and non-blade servers, where:
— The head node can be either a server blade or a non-blade server
— Service nodes can be either server blades or non-blade servers
— All compute nodes are server blades

1.4 c-Class Server Blade Hardware Components

This section describes the various server blade components in an HP XC hardware configuration.

1.4.1 Supported HP ProLiant C-Class Server Blade Models

HP ProLiant C-Class server blades offer an entirely modular computing system with separate computing and physical I/O modules that are connected and shared through a common chassis, called an enclosure. Full-height Opteron server blades can take up to four dual core CPUs, and Xeon server blades can take up to two quad core CPUs.
The following HP ProLiant hardware models are supported for use in an HP XC hardware configuration:
• HP ProLiant BL460c (half-height)
— Up to two quad core or dual core Intel® Xeon® processors
— Two built-in network interface cards (NICs)
— Two hot plug drives
— Two mezzanine slots
• HP ProLiant BL465c (half-height)
— Up to two single or dual core AMD® Opteron® processors
— Two built-in network interface cards (NICs)
— Two hot plug drives
— Two mezzanine slots
• HP ProLiant BL480c (full-height)
— Up to two quad core or dual core Xeon processors
— Four built-in NICs
— Four hot plug drives
— Three mezzanine slots
• HP ProLiant BL685c (full-height)
— Up to four dual core Opteron processors
— Four built-in NICs
— Two hot plug drives
— Three mezzanine slots
• HP Integrity BL860c (full-height)

1.4.2 Enclosures and Onboard Administrators

HP Server Blade c7000 Enclosure
The HP Server Blade c7000 Enclosure is the enclosure model supported for use in an HP XC hardware configuration. An enclosure is a chassis that houses and connects blade hardware components. It can house a maximum of 16 half-height or 8 full-height server blades and contains a maximum of 6 power supplies and 10 fans.
The following are general guidelines for configuring enclosures:
Up to four enclosures can be mounted in a 42U rack.
If an enclosure is not fully populated with fans and power supplies, see the positioning guidelines in the HP Server Blade c7000 Enclosure documentation.
Enclosures are cabled together using their uplink and downlink ports.
The top uplink port in each rack is used as a service port to attach a laptop or other device for initial configuration or subsequent debugging.
The following enclosure setup guidelines are specific to HP XC:
On every enclosure, an Ethernet interconnect module (either a switch or pass-thru module) is installed in bay 1 for the administration network.
Hardware configurations that use Gigabit Ethernet as the interconnect require an additional Ethernet interconnect module (either a switch or pass-thru module) to be installed in bay 2 for the interconnect network.
Systems that use InfiniBand as the interconnect require a double-wide InfiniBand interconnect switch module installed in double-wide bays 5 and 6.
Some systems might need an additional Ethernet interconnect module to support server blades that require external connections. For more information about external connections, see “Cabling for the External Network” (page 19).
HP BladeSystem c-Class Onboard Administrator
The Onboard Administrator is the management device for an enclosure, and at least one Onboard Administrator is installed in every enclosure. You can access the Onboard Administrator through a graphical Web-based user interface, a command-line interface, or the simple object access protocol (SOAP) to configure and monitor the enclosure. You can add a second Onboard Administrator to provide redundancy.
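For example, you can reach the command-line interface over SSH. The following is a minimal, illustrative session; the IP address is a placeholder for your Onboard Administrator's address, Administrator is the factory default account, and the show commands are representative examples only (see the HP BladeSystem Onboard Administrator documentation for the full command set):

    ssh Administrator@192.168.1.100
    OA> show enclosure info
    OA> show server list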
Insight Display
The Insight Display is a small LCD panel on the front of an enclosure that provides instant access to important information about the enclosure such as the IP address and color-coded status. You can use the Insight Display panel to make some basic enclosure settings.
For more information about enclosures and their related components, see the HP Server Blade c7000 Enclosure Setup and Installation Guide.

1.4.3 iLO2 Console Management Device

Each HP ProLiant server blade has a built-in Integrated Lights Out (iLO2) device that provides full remote power control and serial console access. You can access the iLO2 device through the Onboard Administrator. On server blades, iLO2 advanced features are enabled by default and include the following:
• Full remote graphics console access, including full keyboard, video, and mouse (KVM) access through a Web browser
• Support for remote virtual media, which enables you to mount a local CD or diskette and serve it to the server blade over the network

1.4.4 Management Processor Console Management Device

Each HP Integrity server blade has a built-in management processor (MP) device that provides full remote power control and serial console access. You can access the MP device by connecting a serial terminal or a laptop serial port to the local I/O cable that is connected to the server blade.

1.4.5 Mezzanine Cards

The mezzanine slots on each server blade provide additional I/O capability. Mezzanine cards are PCI-Express cards that attach inside the server blade through a special connector and have no physical I/O ports on them. Card types include Ethernet, Fibre Channel, and 10 Gigabit Ethernet.

1.4.6 Interconnect Modules

An interconnect module provides the physical I/O for the built-in NICs or the supplemental mezzanine cards on the server blades. An interconnect module can be either a switch or a pass-thru module.
A switch provides local switching and minimizes cabling. Switch models that are supported as interconnect modules include, but are not limited to:
Nortel GbE2c Gigabit Ethernet switch
Cisco Catalyst Gigabit Ethernet switch
HP 4x DDR InfiniBand switch
Brocade SAN switch
A pass-thru module provides direct connections to the individual ports on each node and does not provide any local switching.
Bays in the back of each enclosure correspond to specific interfaces on the server blades. Thus, all I/O devices that correspond to a specific interconnect bay must be the same type.
Interconnect Bay Port Mapping
Connections between the server blades and the interconnect bays are hard wired. Each of the 8 interconnect bays in the back of the enclosure has a connection to each of the 16 server bays in the front of the enclosure. The built-in NIC or mezzanine card to which an interconnect module connects depends on the interconnect bay into which the module is plugged. Because full-height blades occupy two server bays, they have twice as many connections to each of the interconnect bays.
See the HP BladeSystem Onboard Administrator User Guide for illustrations of interconnect bay port mapping connections on half- and full-height server blades.

2 Task Summary and Checklist

This chapter contains a summary of the steps required to configure HP server blades in an HP XC cluster.

2.1 Best Practice for System Configuration

In order to function properly as an HP XC system, each component must be configured according to HP XC guidelines. To make configuration settings on certain components, an active network is required. However, on an HP XC system, the internal administration network is not operational until the head node is installed and running. Therefore, HP recommends that you install and configure the head node first, and then use the live administration network to make the configuration settings for the rest of the hardware components in the system.
Thus, the high-level sequence of events is as follows (a brief command sketch follows the list):
1. Physically set up the enclosures, populate the enclosures with nodes, and cable all hardware components together.
2. Prepare the head node and any non-blade server nodes.
3. Install the HP XC System Software on the head node.
4. Run the cluster_prep command on the head node.
5. Run the discover command to discover the network components.
6. Connect to each Onboard Administrator and make required settings.
7. Run the discover command to discover the enclosures.
8. Run the discover command to discover the nodes.
9. Access all Onboard Administrators and console management devices (iLO2 or MP), and make the required BIOS settings.
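The preparation and discovery steps (steps 4, 5, 7, and 8) map to the commands below. This is a minimal sketch only, run as root on the head node and assuming an enclosure-based configuration; the exact sequence and prompts are described in Chapter 5:

    # Prepare the head node for an enclosure-based configuration
    cluster_prep --enclosurebased

    # Discover the network components (ProCurve switches)
    discover --enclosurebased --network

    # After the Onboard Administrator settings are made, discover the
    # enclosures and then the nodes
    discover --enclosurebased --enclosures
    discover --enclosurebased --nodes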

2.2 Installation and Configuration Checklist

Table 2-1 provides a checklist of tasks.
IMPORTANT: Hardware preparation is the key element in the successful installation and configuration of the system. If you do not prepare the hardware as described in this document, the cluster configuration process is unlikely to succeed.
Table 2-1 Installation and Configuration Checklist

Cabling
• Cable the switches and enclosures to configure the HP XC networks (Chapter 3)

Software Installation
• Gather the information you need for the installation (Section 4.1)
• Install the HP XC System Software on the head node (Section 4.2)

Discovery
• Gather information for the cluster preparation and discover process (Section 5.1)
• Optionally change the default IP address base of the HP XC networks (Section 5.2)
• Run the cluster_prep --enclosurebased command on the head node (Section 5.3)
• Run the discover --enclosurebased --network command on the head node to discover the switches (Section 5.4)
• Set the Onboard Administrator password, which must match the passwords on the ProCurve switches and console management devices (Section 5.5)
• Run the discover --enclosurebased --enclosures and discover --enclosurebased --nodes commands to discover the remainder of the hardware components (Section 5.6)

BIOS Settings on Nodes
• Make BIOS settings on non-blade servers according to regular procedures (Section 6.1 and the HP XC Hardware Preparation Guide)
• Make BIOS settings on server blades (Section 6.2 and Section 6.3)

System Configuration
• Install software patches that might be available for the release of HP XC System Software you are installing (Section 7.1)
• Perform various configuration tasks to set up the system environment (Section 7.2)
• Run the cluster_config utility on the head node to configure the system and create the golden image (Section 7.2)
• Run the startsys command to start all nodes in the system and propagate the golden image to all nodes (Section 7.2.2)
• Run system verification tasks, including the operation verification program (OVP), to verify that the system is operating correctly (Section 7.3)
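The System Configuration rows also reduce to a short command sequence on the head node. The sketch below is illustrative only and assumes default behavior; startsys in particular takes hardware-specific options (described in Section 7.2.2) to propagate the golden image, and those options are not shown here:

    # Configure the system and create the golden image
    cluster_config

    # Start all nodes; add the options from Section 7.2.2 to propagate
    # the golden image to all nodes
    startsys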

3 Cabling

The following topics are addressed in this chapter:
“Network Overview ” (page 15)
“Cabling for the Administration Network” (page 15)
“Cabling for the Console Network” (page 16)
“Cabling for the Interconnect Network” (page 17)
“Cabling for the External Network” (page 19)

3.1 Network Overview

An HP XC system consists of several networks: administration, console, interconnect, and external (public). In order for these networks to function, you must connect the enclosures, server blades, and switches according to the guidelines provided in this chapter.
The HP XC Hardware Preparation Guide provides specific instructions about which ports on each ProCurve switch are used for node connections on non-blade server nodes.
A hardware configuration with server blades does not have these specific cabling requirements; no particular switch port assignments are required. However, HP recommends a logical ordering of the cables on the switches to facilitate serviceability. Enclosures are discovered in port order, so cable them in the order in which you want them to be numbered. HP also recommends that you connect the enclosures to the lower ports and the external nodes to the ports above them.
Appendix A (page 43) provides several network cabling illustrations based on the interconnect type and server blade height to use as a reference.

3.2 Cabling for the Administration Network

The HP XC administration network is a private network within an HP XC system that is used primarily for administrative operations. The administration network is created and connected through ProCurve model 2800 series switches. One switch is designated as the root administration switch and that switch can be connected to multiple branch administration switches, if required.
NIC1 on each server blade is dedicated as the connection to the administration network. NIC1 of all server blades connects to interconnect bay 1 on the enclosure.
The entire administration network is formed by connecting the device (either a switch or a pass-thru module) in interconnect bay 1 of each enclosure to one of the ProCurve administration network switches.
Non-blade server nodes must also be connected to the administration network. See the HP XC Hardware Preparation Guide to determine which port on the node is used for the administration network; the port you use depends on your particular hardware model.
Figure 3-1 illustrates the connections that form the administration network.