
HP Integrity rx7640 and HP 9000 rp7440 Servers
User Service Guide
HP Part Number: AB312-9010A
Published: November 2007
Edition: Fourth Edition
Legal Notices
© Copyright 2007 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice.
The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services.
Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions
contained herein.
Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. Linux is a U.S. registered trademark of Linus Torvalds. Intel
is a trademark or registered trademark of Intel Corporation or its subsidiaries in the United States and other countries.
Table of Contents
About this Document.......................................................................................................15
Book Layout..........................................................................................................................................15
Intended Audience................................................................................................................................15
Publishing History................................................................................................................................15
Related Information..............................................................................................................................16
Typographic Conventions.....................................................................................................................17
HP Encourages Your Comments..........................................................................................................18
1 HP Integrity rx7640 Server and HP 9000 rp7440 Server Overview.....................19
Detailed Server Description..................................................................................................................19
Dimensions and Components.........................................................................................................20
Front Panel.......................................................................................................................................23
Front Panel Indicators and Controls..........................................................................................23
Enclosure Status LEDs...............................................................................................................23
Cell Board........................................................................................................................................24
PDH Riser Board........................................................................................................................25
Central Processor Units..............................................................................................................25
Memory Subsystem....................................................................................................................26
DIMMs........................................................................................................................................27
Cells and nPartitions........................................................................................................................27
Internal Disk Devices for the Server................................................................................................28
System Backplane............................................................................................................................29
System Backplane to PCI-X Backplane Connectivity...................................................29
Clocks and Reset........................................................................................................................29
I/O Subsystem..................................................................................................................................29
PCI-X/PCIe Backplane................................................................................................................32
PCI-X/PCIe Slot Boot Paths...................................................................................................33
MP/SCSI Board...........................................................................................................................34
LAN/SCSI Board........................................................................................................................34
Mass Storage (Disk) Backplane..................................................................................................34
2 Server Site Preparation................................................................................................35
Dimensions and Weights......................................................................................................................35
Electrical Specifications.........................................................................................................................36
Grounding.......................................................................................................................................36
Circuit Breaker.................................................................................................................................36
System AC Power Specifications.....................................................................................................36
Power Cords...............................................................................................................................36
System Power Specifications......................................................................................................37
Environmental Specifications...............................................................................................................38
Temperature and Humidity............................................................................................................38
Operating Environment.............................................................................................................38
Environmental Temperature Sensor..........................................................................................39
Non-Operating Environment.....................................................................................................39
Cooling.............................................................................................................................................39
Internal Chassis Cooling............................................................................................................39
Bulk Power Supply Cooling.......................................................................................................39
PCI/Mass Storage Section Cooling.............................................................................................39
Standby Cooling.........................................................................................................................39
Typical Power Dissipation and Cooling..........................................................................................39
Acoustic Noise Specification...........................................................................................................40
Airflow.............................................................................................................................................40
System Requirements Summary...........................................................................................................41
Power Consumption and Air Conditioning....................................................................................41
3 Installing the Server......................................................................................................43
Receiving and Inspecting the Server Cabinet.......................................................................................43
Unpacking the Server Cabinet.........................................................................................................43
Securing the Cabinet........................................................................................................................46
Standalone and To-Be-Racked Systems................................................................................................47
Rack-Mount System Installation.....................................................................................................47
Lifting the Server Cabinet Manually....................................................................................................47
Using the RonI Model 17000 SP 400 Lifting Device.............................................................................49
Wheel Kit Installation...........................................................................................................................52
Installing the Power Distribution Unit.................................................................................................57
Installing Additional Cards and Storage..............................................................................................58
Installing Additional Hard Disk Drives..........................................................................................58
Removable Media Drive Installation...............................................................................................59
PCI-X Card Cage Assembly I/O Cards............................................................................................60
Installing an Additional PCI-X Card..........................................................................................63
Installing an A6869B VGA/USB PCI Card in a Server....................................................................65
Troubleshooting the A6869B VGA/USB PCI Card..........................................................................66
No Console Display...................................................................................................................67
Reference URL............................................................................................................................67
Cabling and Power Up..........................................................................................................................67
Checking the Voltage.......................................................................................................................67
Preface........................................................................................................................................67
Voltage Range Verification of Receptacle...................................................................................67
Verifying the Safety Ground (Single Power Source)..................................................................68
Verifying the Safety Ground (Dual Power Source)....................................................................69
Voltage Check (Additional Procedure)...........................................................................................71
Connecting AC Input Power...........................................................................................................72
Installing The Line Cord Anchor (for rack mounted servers).........................................................73
Two Cell Server Installation (rp7410, rp7420, rp7440, rx7620, rx7640)......................................73
Core I/O Connections......................................................................................................................74
MP/SCSI I/O Connections .........................................................................................................74
LAN/SCSI Connections..............................................................................................................75
Management Processor Access..................................................................................................75
Setting Up the Customer Engineer Tool (PC) .................................................................................75
Setting CE Tool Parameters........................................................................................................75
Connecting the CE Tool to the Local RS232 Port on the MP .....................................................76
Turning on Housekeeping Power and Logging in to the MP.........................................................76
Configuring LAN Information for the MP......................................................................................77
Accessing the Management Processor via a Web Browser.............................................................79
Verifying the Presence of the Cell Boards.......................................................................................80
System Console Selection................................................................................................................81
VGA Consoles............................................................................................................................82
Interface Differences Between Itanium-based Systems.............................................................82
Other Console Types..................................................................................................................82
Additional Notes on Console Selection.....................................................................................82
Configuring the Server for HP-UX Installation...............................................................................83
Booting the Server ...........................................................................................................................83
Selecting a Boot Partition Using the MP ...................................................................................84
Verifying the System Configuration Using the EFI Shell...........................................................84
Booting HP-UX Using the EFI Shell...........................................................................................84
Adding Processors with Instant Capacity.......................................................................................84
Installation Checklist.......................................................................................................................85
4 Booting and Shutting Down the Operating System..................................................89
Operating Systems Supported on Cell-based HP Servers....................................................................89
System Boot Configuration Options.....................................................................................................90
HP 9000 Boot Configuration Options..............................................................................................90
HP Integrity Boot Configuration Options.......................................................................................90
Booting and Shutting Down HP-UX.....................................................................................................94
HP-UX Support for Cell Local Memory..........................................................................................94
Adding HP-UX to the Boot Options List.........................................................................................95
Booting HP-UX................................................................................................................................96
Standard HP-UX Booting...........................................................................................................96
Single-User Mode HP-UX Booting...........................................................................................100
LVM-Maintenance Mode HP-UX Booting...............................................................................102
Shutting Down HP-UX..................................................................................................................103
Booting and Shutting Down HP OpenVMS I64.................................................................................105
HP OpenVMS I64 Support for Cell Local Memory.......................................................................105
Adding HP OpenVMS to the Boot Options List............................................................................105
Booting HP OpenVMS...................................................................................................................107
Shutting Down HP OpenVMS.......................................................................................................108
Booting and Shutting Down Microsoft Windows..............................................................................109
Microsoft Windows Support for Cell Local Memory....................................................................109
Adding Microsoft Windows to the Boot Options List...................................................................110
Booting Microsoft Windows..........................................................................................................111
Shutting Down Microsoft Windows..............................................................................................113
Booting and Shutting Down Linux.....................................................................................................114
Linux Support for Cell Local Memory..........................................................................................114
Adding Linux to the Boot Options List.........................................................................................115
Booting Red Hat Enterprise Linux................................................................................................116
Booting SuSE Linux Enterprise Server .........................................................................................117
Shutting Down Linux....................................................................................................................119
5 Server Troubleshooting..............................................................................................121
Common Installation Problems..........................................................................................................121
The Server Does Not Power On.....................................................................................................121
The Server Powers On But Fails Power-On Self Test.....................................................................122
Server LED Indicators.........................................................................................................................122
Front Panel LEDs...........................................................................................................................122
Bulk Power Supply LEDs..............................................................................................................123
PCI-X Power Supply LEDs............................................................................................................124
System and PCI I/O Fan LEDs.......................................................................................................125
OL* LEDs.......................................................................................................................................126
PCI-X OL* Card Divider LEDs......................................................................................................127
Core I/O LEDs................................................................................................................................128
Core I/O Buttons............................................................................................................................129
PCI-X Hot-Plug LED OL* LEDs....................................................................................................131
Disk Drive LEDs............................................................................................................................131
Interlock Switches..........................................................................................................................132
Server Management Subsystem Hardware Overview.......................................................................132
Server Management Overview...........................................................................................................133
Server Management Behavior.............................................................................................................133
Thermal Monitoring......................................................................................................................134
Fan Control....................................................................................................................................134
Power Control................................................................................................................................135
Updating Firmware.............................................................................................................................135
Firmware Manager .......................................................................................................................135
Using FTP to Update Firmware.....................................................................................................135
Possible Error Messages.................................................................................................................136
PDC Code CRU Reporting..................................................................................................................136
Verifying Cell Board Insertion............................................................................................................138
Cell Board Extraction Levers.........................................................................................................138
6 Removing and Replacing Components...................................................................141
Customer Replaceable Units (CRUs)..................................................................................................141
Hot-plug CRUs..............................................................................................................................141
Hot-Swap CRUs.............................................................................................................................141
Other CRUs....................................................................................................................................141
Safety and Environmental Considerations ........................................................................................142
Communications Interference ......................................................................................................142
Electrostatic Discharge ..................................................................................................................142
Powering Off Hardware Components and Powering On the Server.................................................142
Powering Off Hardware Components...........................................................................................142
Powering On the System...............................................................................................................143
Removing and Replacing the Top Cover............................................................................................144
Removing the Top Cover...............................................................................................................144
Replacing the Top Cover................................................................................................................145
Removing and Replacing a Side Cover...............................................................................................145
Removing a Side Cover.................................................................................................................146
Replacing a Side Cover..................................................................................................................146
Removing and Replacing the Front Bezel...........................................................................................147
Removing the Front Bezel..............................................................................................................147
Replacing the Front Bezel..............................................................................................................147
Removing and Replacing PCA Front Panel Board.............................................................................147
Removing the PCA Front Panel Board..........................................................................................148
Replacing the Front Panel Board...................................................................................................149
Removing and Replacing a Front Smart Fan Assembly.....................................................................150
Removing a Front Smart Fan Assembly........................................................................................152
Replacing a Front Smart Fan Assembly........................................................................................152
Removing and Replacing a Rear Smart Fan Assembly......................................................................152
Removing a Rear Smart Fan Assembly.........................................................................................154
Replacing a Rear Smart Fan Assembly..........................................................................................154
Removing and Replacing a Disk Drive...............................................................................................154
Removing a Disk Drive..................................................................................................................155
Replacing a Disk Drive..................................................................................................................156
Removing and Replacing a Half-Height DVD/DAT Drive.................................................................156
Removing a DVD/DAT Drive........................................................................................................157
Installing a Half-Height DVD or DAT Drive......................................................................................158
Internal DVD and DAT Devices That Are Not Supported In HP Integrity rx7640.......................158
Removable Media Cable Configuration for a Half-height DVD or DAT Drive............................158
Installing the Half-Height DVD or DAT drive..............................................................................160
Removing and Replacing a Slimline DVD Drive................................................................................161
Removing a Slimline DVD Drive...................................................................................................162
Replacing a Slimline DVD Drive...................................................................................................162
Removing and Replacing a Dual Slimline DVD Carrier....................................................................162
Removing a Slimline DVD Carrier................................................................................................162
Installation of Two Slimline DVD+RW Drives..............................................................................163
Removable Media Cable Configuration for the Slimline DVD+RW Drives............................163
Installing the Slimline DVD+RW Drives..................................................................................165
Removing and Replacing a PCI/PCI-X Card......................................................................................165
Installing the New LAN/SCSI Core I/O PCI-X Card(s).................................................................166
PCI/PCI-X Card Replacement Preliminary Procedures................................................................167
Removing a PCI/PCI-X Card.........................................................................................................167
Replacing the PCI/PCI-X Card.......................................................................................................167
Option ROM..................................................................................................................................168
Removing and Replacing a PCI Smart Fan Assembly........................................................................168
Removing a PCI Smart Fan Assembly...........................................................................................169
Replacing a PCI Smart Fan Assembly...........................................................................................170
Removing and Replacing a PCI-X Power Supply...............................................................................170
Preliminary Procedures ................................................................................................................170
Removing a PCI-X Power Supply .................................................................................................171
Replacing the PCI Power Supply...................................................................................................171
Removing and Replacing a Bulk Power Supply.................................................................................171
Removing a BPS.............................................................................................................................172
Replacing a BPS.............................................................................................................................174
Configuring Management Processor (MP) Network Settings............................................................174
7 HP 9000 rp7440 Server .....................................................................................177
Electrical and Cooling Specifications .................................................................................................177
Boot Console Handler (BCH) for the HP Integrity rx7640 and HP 9000 rp7440 Servers...................178
Booting an HP 9000 sx2000 Server to BCH....................................................................................178
HP-UX for the HP Integrity rx7640 and HP 9000 rp7440 Servers......................................................178
HP 9000 Boot Configuration Options............................................................................................179
Booting and Shutting Down HP-UX.............................................................................................179
Standard HP-UX Booting..............................................................................................................179
Single-User Mode HP-UX Booting................................................................................................180
LVM-Maintenance Mode HP-UX Booting.....................................................................................181
Shutting Down HP-UX..................................................................................................................182
System Verification.............................................................................................................................183
A Replaceable Parts......................................................................................................185
Replaceable Parts................................................................................................................................185
B MP Commands...........................................................................................................187
Server Management Commands.........................................................................................................187
C Templates...................................................................................................................189
Equipment Footprint Templates.........................................................................................................189
Computer Room Layout Plan.............................................................................................................189
Index...............................................................................................................................193
List of Figures
1-1 8-Socket Server Block Diagram.....................................................................................................20
1-2 Server (Front View With Bezel) ....................................................................................................21
1-3 Server (Front View Without Bezel)................................................................................................21
1-4 Right-Front View...........................................................................................................................22
1-5 Left-Rear View ..............................................................................................................................23
1-6 Front Panel LEDs and Power Switch.............................................................................................24
1-7 Cell Board......................................................................................................................................24
1-8 CPU Locations on Cell Board........................................................................................................26
1-9 Memory Subsystem.......................................................................................................................27
1-10 Disk Drive and DVD Drive Location............................................................................................28
1-11 System Backplane Block Diagram.................................................................................................29
1-12 PCI-X Board to Cell Board Block Diagram....................................................................................30
2-1 Airflow Diagram ..........................................................................................................................41
3-1 Removing the Polystraps and Cardboard.....................................................................................44
3-2 Removing the Shipping Bolts and Plastic Cover...........................................................................45
3-3 Preparing to Roll Off the Pallet.....................................................................................................46
3-4 Securing the Cabinet......................................................................................................................47
3-5 Inserting Rear Handle Tabs into Chassis......................................................................................48
3-6 Attaching the Front of Handle to Chassis.....................................................................................49
3-7 RonI Lifter......................................................................................................................................50
3-8 Positioning the Lifter to the Pallet.................................................................................................51
3-9 Raising the Server Off the Pallet Cushions....................................................................................52
3-10 Component Locations ...................................................................................................................53
3-11 Left Foam Block Position...............................................................................................................54
3-12 Right Foam Block Position............................................................................................................54
3-13 Foam Block Removal.....................................................................................................................55
3-14 Attaching a Caster to the Server....................................................................................................56
3-15 Securing Each Caster Cover to the Server.....................................................................................57
3-16 Completed Server..........................................................................................................................57
3-17 Disk Drive and DVD Drive Location............................................................................................59
3-18 Removable Media Location...........................................................................................................60
3-19 PCI I/O Slot Details........................................................................................................................65
3-20 PCI/PCI-X Card Location..............................................................................................................66
3-21 Voltage Reference Points for IEC 320 C19 Plug.............................................................................68
3-22 Safety Ground Reference Check....................................................................................................69
3-23 Safety Ground Reference Check....................................................................................................70
3-24 Wall Receptacle Pinouts................................................................................................................71
3-25 AC Power Input Labeling..............................................................................................................72
3-26 Distribution of Input Power for Each Bulk Power Supply............................................................73
3-27 Two Cell Line Cord Anchor (rp7410, rp7420, rp7440, rx7620, rx7640).........................................74
3-28 Line Cord Anchor Attach Straps...................................................................................................74
3-29 Front Panel Display ......................................................................................................................76
3-30 MP Main Menu..............................................................................................................................77
3-31 The lc Command Screen................................................................................................................78
3-32 The ls Command Screen................................................................................................................79
3-33 Example sa Command...................................................................................................................80
3-34 Browser Window...........................................................................................................................80
3-35 The du Command Screen..............................................................................................................81
3-36 Console Output Device menu.......................................................................................................82
5-1 Front Panel with LED Indicators.................................................................................................122
5-2 BPS LED Locations......................................................................................................................124
5-3 PCI-X Power Supply LED Locations...........................................................................................125
5-4 Front, Rear and PCI I/O Fan LEDs..............................................................................................126
5-5 Cell Board LED Locations...........................................................................................................127
5-6 PCI-X OL* LED Locations...........................................................................................................128
5-7 Core I/O Card Bulkhead LEDs....................................................................................................129
5-8 Core I/O Button Locations...........................................................................................................130
5-9 Disk Drive LED Location.............................................................................................................132
5-10 Temperature States......................................................................................................................134
5-11 Firmware Update Command Sample..........................................................................................136
5-12 Server Cabinet CRUs (Front View)..............................................................................................137
5-13 Server Cabinet CRUs (Rear View)...............................................................................................138
5-14 de Command Output..................................................................................................................139
6-1 Top Cover....................................................................................................................................144
6-2 Top Cover Retaining Screws........................................................................................................144
6-3 Side Cover Locations ..................................................................................................................145
6-4 Side Cover Retaining Screws.......................................................................................................146
6-5 Side Cover Removal Detail..........................................................................................................146
6-6 Bezel Hand Slots...........................................................................................147
6-7 Front Panel Assembly Location...................................................................................................148
6-8 Front Panel Board Detail.............................................................................................................149
6-9 Front Panel Board Cable Location on Backplane........................................................................150
6-10 Front Smart Fan Assembly Locations .........................................................................................151
6-11 Front Fan Detail...........................................................................................................................152
6-12 Rear Smart Fan Assembly Locations ..........................................................................................153
6-13 Rear Fan Detail............................................................................................................................154
6-14 Disk Drive Location ....................................................................................................................155
6-15 Disk Drive Detail ........................................................................................................................155
6-16 DVD/DAT Location ....................................................................................................................157
6-17 DVD/DAT Detail..........................................................................................................................158
6-18 Single SCSI and Power Cable in Drive Bay.................................................................................159
6-19 SCSI and Power Cable Lengths...................................................................................................159
6-20 SCSI and Power Cable Lengths...................................................................................................160
6-21 SCSI and Power Cable Lengths...................................................................................................160
6-22 Power Cable Connection and Routing........................................................................................161
6-23 DVD Drive Location ...................................................................................................................161
6-24 Slimline DVD Carrier Location ..................................................................................................162
6-25 Data and Power Cable Configuration for Slimline DVD Installation.........................................163
6-26 Top DVD/DAT and Bottom DVD Cables Nested Together.........................................................164
6-27 SCSI and Power Cables for Slimline DVD+RW Installation........................................................164
6-28 SCSI and Power Cables for Slimline DVD Installation...............................................................165
6-29 PCI/PCI-X Card Location............................................................................................................166
6-30 PCI Smart Fan Assembly Location .............................................................................................169
6-31 PCI Smart Fan Assembly Detail..................................................................................................169
6-32 PCI-X Power Supply Location ....................................................................................................170
6-33 PCI Power Supply Detail.............................................................................................................171
6-34 BPS Location ...............................................................................................................................172
6-35 Extraction Levers.........................................................................................................................173
6-36 BPS Detail ...................................................................................................................................173
C-1 Server Space Requirements.........................................................................................................189
C-2 Server Cabinet Template..............................................................................................................190
C-3 Planning Grid..............................................................................................................................191
C-4 Planning Grid..............................................................................................................................192
List of Tables
1-1 Cell Board CPU Module Load Order............................................................................................25
1-2 Server DIMMs...............................................................................................................................27
1-3 PCI-X Paths for Cell 0....................................................................................30
1-4 PCI-X Paths for Cell 1.....................................................................................31
1-5 PCI-X Slot Types............................................................................................................................32
1-6 PCI-X/PCIe Slot Types...................................................................................................................33
2-1 Server Dimensions and Weights...................................................................................................35
2-2 Server Component Weights...........................................................................................................35
2-3 Example Weight Summary............................................................................................................35
2-4 Weight Summary...........................................................................................................................36
2-5 Power Cords..................................................................................................................................37
2-6 AC Power Requirements...............................................................................................................37
2-7 System Power Requirements for the HP 9000 rp7440 Server........................................................37
2-8 Example ASHRAE Thermal Report..............................................................................................38
2-9 Typical Server Configurations for the HP Integrity rx7640 Server...............................................40
3-1 Wheel Kit Packing List..................................................................................................................52
3-2 Caster Part Numbers.....................................................................................................................55
3-3 HP Integrity rx7640 PCI-X and PCIe I/O Cards............................................................................60
3-4 Single Phase Voltage Examples.....................................................................................................68
3-5 Factory-Integrated Installation Checklist......................................................................................85
5-1 Front Panel LEDs.........................................................................................................................122
5-2 BPS LEDs.....................................................................................................................................124
5-3 PCI Power Supply LEDs..............................................................................................................125
5-4 System and PCI I/O Fan LEDs.....................................................................................................126
5-5 Cell Board OL* LED Indicators...................................................................................................127
5-6 Core I/O LEDs..............................................................................................................................129
5-7 Core I/O Buttons..........................................................................................................................131
5-8 OL* LED States............................................................................................................................131
5-9 Disk Drive LEDs..........................................................................................................................132
5-10 Ready Bit States...........................................................................................................................139
6-1 Front Smart Fan Assembly LED Indications...............................................................................151
6-2 Rear Smart Fan Assembly LED Indications................................................................................153
6-3 Unsupported Removable Media Devices....................................................................................158
6-4 Smart Fan Assembly LED Indications.........................................................................................169
6-5 PCI-X Power Supply LEDs..........................................................................................................171
6-6 Default Configuration for Management Processor LAN...........................................................174
7-1 System Power Requirements for the HP Integrity rx7640 and HP 9000 rp7440 Servers............177
7-2 Typical Server Configurations for the HP 9000 rp7440 Server....................................................177
A-1 Server CRU Descriptions and Part Numbers..............................................................................185
B-1 Service Commands......................................................................................................................187
B-2 Status Commands........................................................................................................................187
B-3 System and Access Config Commands.......................................................................................187
List of Examples
4-1 Single-User HP-UX Boot..............................................................................................................101
7-1 Single-User HP-UX Boot..............................................................................................................181
About this Document
This document covers the HP Integrity rx7640 and HP 9000 rp7440 Servers.
This document does not describe system software or partition configuration in any detail. For detailed information concerning those topics, refer to the HP System Partitions Guide: Administration for nPartitions.
Book Layout
This document contains the following chapters and appendices:
Chapter 1 - Overview
Chapter 2 - Site Preparation
Chapter 3 - Installing the Server
Chapter 4 - Operating System Boot and Shutdown
Chapter 5 - Server Troubleshooting
Chapter 6 - Removal and Replacement
Chapter 7 - HP 9000 rp7440 Server
Appendix A - Replaceable Parts
Appendix B - MP Commands
Appendix C - Templates
Index
Intended Audience
This document is intended to be used by customer engineers assigned to support the HP Integrity rx7640 and HP 9000 rp7440 Servers.
Publishing History
The publishing history below identifies the edition dates of this document. Updates are made to this publication on an unscheduled, as-needed basis. Each update consists of a complete replacement document and pertinent online or CD-ROM documentation.
March 2006: First Edition
September 2006: Second Edition
January 2007: Third Edition. Minor edits throughout. Added Chapter 7 for PA release.
November 2007: Fourth Edition. Minor edits.
Related Information
You can access other information on HP server hardware management, Microsoft® Windows® administration, and diagnostic support tools at the following Web sites:
http://docs.hp.com The main Web site for HP technical documentation is http://docs.hp.com. Server Hardware Information: http://docs.hp.com/hpux/hw/ The
http://docs.hp.com/hpux/hw/ Web site is the systems hardware portion of docs.hp.com.
16 About this Document
Page 17
It provides HP nPartition server hardware management information, including site preparation, installation, and more.
Windows Operating System Information You can find information about administration of the Microsoft® Windows® operating system at the following Web sites, among others:
http://docs.hp.com/windows_nt/
http://www.microsoft.com/technet/ Diagnostics and Event Monitoring: Hardware Support Tools Complete information about HP
hardware support tools, including online and offline diagnostics and event monitoring tools, is at the http://docs.hp.com/hpux/diag/ Web site. This site hasdocuments, tutorials, FAQs, and other reference material.
Web Site for HP Technical Support: http://us-support2.external.hp.com HP IT resource center Web site at http://us-support2.external.hp.com/ provides comprehensive support information for IT professionals on a wide variety of topics, including software, hardware, and networking.
Books about HP-UX Published by Prentice Hall The http://www.hp.com/hpbooks/ Web site lists the HP books that Prentice Hall currently publishes, such as HP-UX books including:
HP-UX 11i System Administration Handbook and Toolkit: http://www.hp.com/hpbooks/prentice/ptr_0130600814.html
HP-UX Virtual Partitions: http://www.hp.com/hpbooks/prentice/ptr_0130352128.html
HP books are available worldwide through bookstores, online booksellers, and office and computer stores.
Typographic Conventions
The following notational conventions are used in this publication.
WARNING! A warning lists requirements that you must meet to avoid personal injury.
CAUTION: A caution provides information required to avoid losing data or avoid losing system functionality.
NOTE: A note highlights useful information such as restrictions, recommendations, or important details about HP product features.
Commands and options are represented using this font.
Text that you type exactly as shown is represented using this font.
Text to be replaced with text that you supply is represented using this font.
Example: “Enter the ls -l filename command” means you must replace filename with your own text.
Keyboard keys and graphical interface items (such as buttons, tabs, and menu items) are represented using this font.
Examples: The Control key, the OK button, the General tab, the Options menu.
Menu > Submenu represents a menu selection you can perform.
Example: “Select the Partition > Create Partition action” means you must select the Create Partition menu item from the Partition menu.
Example screen output is represented using this font.
Page 18
HP Encourages Your Comments
Hewlett-Packard welcomes your feedback on this publication. Please address your comments to edit@presskit.rsn.hp.com and note that you will not receive an immediate reply. All comments are appreciated.
Page 19
1 HP Integrity rx7640 Server and HP 9000 rp7440 Server Overview
The HP Integrity rx7640 and HP 9000 rp7440 Servers are members of HP’s business-critical computing platform family in the mid-range product line.
The information in chapters one through six of this guide applies to the HP Integrity rx7640 and HP 9000 rp7440 Servers, except for a few items specifically denoted as applying only to the HP Integrity rx7640 Server. Chapter seven covers any information specific to the HP 9000 rp7440 Server only.
IMPORTANT: Ensure a valid UUID is either in place or available prior to maintenance of these servers. This step is vital when performing upgrades and is recommended for existing hardware service restoration. Specific information for upgrades is found in the Upgrade Guide, Mid-Range Two-Cell HP Servers to HP Integrity rx7640 Server, located at the following URL: http://docs.fc.hp.com.
The server is a 10U-high (see note 1 below the list), 8-socket symmetric multiprocessor (SMP) rack-mount or standalone server. Features of the server include:
Up to 256 GB of physical memory provided by dual inline memory modules (DIMMs).
Dual-core processors.
Up to 16 processors with a maximum of 4 processor modules per cell board and a maximum of 2 cell boards.
One cell controller (CC) per cell board.
Turbo fans to cool CPUs and CCs on the cell boards.
Up to four embedded hard disk drives.
One half-height DVD drive, two slimline DVD drives, or one DAT drive.
Two front chassis mounted N+1 fans.
Two rear chassis mounted N+1 fans.
Six N+1 PCI-X card cage fans.
Two N+1 bulk power supplies.
N+1 hot-swappable system oscillators.
Sixteen PCI slots divided between two I/O chassis. Each I/O chassis accommodates eight slots supporting PCI/PCI-X/PCI-X 2.0 device adapters, or four PCI/PCI-X/PCI-X 2.0 and four PCIe device adapters.
Up to two core I/O card sets.
One manageability processor per core I/O card with failover capability when two or more core I/O cards are installed and properly configured.
Four 220 V AC power plugs. Two are required and the other two provide power source redundancy.
Detailed Server Description
The following section provides detailed information about the server components.
1. The U is a unit of measurement specifying product height. One U is equal to 1.75 inches.
Page 20
Figure 1-1 8-Socket Server Block Diagram
[Figure 1-1 shows the two cell boards, each with four CPUs, memory, a cell controller (CC), and PDH, joined by the CC link; the system backplane with the system clocks, power and reset logic, and two bulk power supplies; the SBA links from each cell controller to the SBAs and LBAs in the two PCI-X chassis; the MP/SCSI and LAN/SCSI boards with their LAN and SCSI connections; and the disk backplane with four disks and the DVD/tape device. The legend distinguishes hot-pluggable links or buses from cables.]
Dimensions and Components
The following section describes server dimensions and components.
Page 21
Figure 1-2 Server (Front View With Bezel)
Figure 1-3 Server (Front View Without Bezel)
Power Switch
Removable Media Drive
PCI Power Supplies
Front OLR Fans
Bulk Power Supplies
Hard Disk Drives
Page 22
The server has the following dimensions:
Depth: Defined by cable management constraints to fit into standard 36-inch deep rack:
25.5 inches from front rack column to PCI connector surface
26.7 inches from front rack column to MP Core I/O connector surface
30 inches overall package dimension, including 2.7 inches protruding in front of the front rack columns.
Width: 44.45 cm (17.5 inches), constrained by EIA standard 19 inch racks.
Height: 10U – 0.54 cm = 43.91 cm (17.287 inches). This is the appropriate height for a product that consumes 10U of rack height while allowing adequate clearance between products directly above and below this product. Fitting four server units per 2 m rack and upgrade of current 10U height products in the future are the main height constraints.
The mass storage section located in the front enables access to the 3.5-inch hard drives without removal of the bezel. This is especially helpful when the system is mounted in the lowest position in a rack. The mass storage bay also accommodates one 5.25-inch removable media device. The front panel display board, containing LEDs and the system power switch, is located directly above the 5.25-inch removable media bay.
Below the mass storage section and behind the removable front bezel are two, N+1 PCI-X power supplies.
The bulk power supply section is partitioned by a sealed metallic enclosure located in the bottom of the package. This enclosure houses the N+1 fully redundant BPSs. Install these power supplies from the front of the server after removing the front bezel.
Figure 1-4 Right-Front View
PCI Power Supplies
PCI-X cards
Cell Boards
Bulk Power Supplies
Front Panel Display Board
Access the PCI-X card section, located toward the rear, by removing the top cover.
The PCI card bulkhead connectors are located at the rear top.
Page 23
The PCI OLR fan modules are located in front of the PCI-X cards. These six 9.2-cm fans are housed in plastic carriers. They are configured in two rows of three fans.
Four OLR system fan modules, externally attached to the chassis, are 15-cm (6.5-inch) fans. Two fans are mounted on the front surface of the chassis and two are mounted on the rear surface.
The cell boards are accessed from the right side of the chassis behind a removable side cover.
The two MP/SCSI boards are positioned vertically at the rear of the chassis.
The two hot-pluggable N+1 redundant bulk power supplies provide a wide input voltage range. They are installed in the front of the chassis, directly under the front fans.
A cable harness that connects from the rear of the BPSs to the system backplane provides DC power distribution.
Access the system backplane by removing the left side cover. The system backplane hinges from the lower edge and is anchored at the top with two jack screws.
The SCSI ribbon-cable assembly routes from the mass storage area to the backside of the system backplane for connection to the MP/SCSI card, and to the AB290A LAN/SCSI PCI-X cards.
Figure 1-5 Left-Rear View
System backplane
MP/SCSI Core I/O
AC Power Receptacles
Jack Screws
Front Panel
Front Panel Indicators and Controls
The front panel, located on the front of the server, includes the power switch. See Figure 1-6
Enclosure Status LEDs
The following status LEDs are on the front panel:
Locate LED (blue)
Power LED (tri-color)
Management processor (MP) status LED (tri-color)
Cell 0, 1 status (tri-color) LEDs
Page 24
Figure 1-6 Front Panel LEDs and Power Switch
Cell Board
The cell board, illustrated in Figure 1-7, contains the processors, main memory, and the CC application specific integrated circuit (ASIC), which interfaces the processors and memory with the I/O and with the other cell board in the server. The CC is the heart of the cell board, enabling communication with the other cell board in the system. It connects to the processor dependent hardware (PDH) and microcontroller hardware. Each cell board holds up to four processor modules and 16 memory DIMMs. One or two cell boards can be installed in the server. A cell board can be selectively powered off for adding processors, memory, or for maintenance of the cell board, without affecting the other cell board in a configured partition.
Figure 1-7 Cell Board
The server has a 48 V distributed power system and receives the 48 V power from the system backplane board. The cell board contains DC-to-DC converters to generate the required voltage rails. The DC-to-DC converters on the cell board do not provide N+1 redundancy.
The cell board contains the following major buses:
Two front side buses (FSB), each with up to two processors
Four memory buses (one going to each memory quad)
Incoming and outgoing I/O bus that goes off board to an SBA chip
Incoming and outgoing crossbar bus that goes off board to the other cell board
PDH bus that goes to the PDH and microcontroller circuitry
All of these buses come together at the CC chip.
Page 25
Because of space limitations on the cell board, the PDH and microcontroller circuitry resides on a riser board that plugs into the cell board at a right angle. The cell board also includes clock circuits, test circuits, and de-coupling capacitors.
PDH Riser Board
The PDH riser board is a small card that plugs into the cell board at a right angle. The PDH riser interface contains the following components:
Microprocessor memory interface microcircuit
Hardware including the processor dependent hardware (PDH) flash memory
Manageability microcontroller with associated circuitry
The PDH obtains cell board configuration information from cell board signals and from the cell board local power module (LPM).
Central Processor Units
The cell board can hold up to four CPU modules. Each CPU module can contain up to two CPU cores on a single socket. Modules are populated in increments of one. On a cell board, the processor modules must be the same family, type, and clock frequencies. Mixing of different processors on a cell board or partition is not supported. Refer to Table 1-1 for the load order that must be maintained when adding processor modules to the cell board. Refer to Figure 1-8 for the locations on the cell board for installing processor modules.
NOTE: Unlike previous HP cell based systems, the HP Integrity rx7640 server cell board does not require that a termination module be installed at the end of an unused FSB. System firmware is allowed to disable an unused FSB in the CC. This enables both sockets of the unused bus to remain unpopulated.
Table 1-1 Cell Board CPU Module Load Order
Number of CPU Modules Installed | Socket 0 | Socket 1 | Socket 3 | Socket 2
1 | CPU installed | Empty slot | Empty slot | Empty slot
2 | CPU installed | Empty slot | Empty slot | CPU installed
3 | CPU installed | CPU installed | Empty slot | CPU installed
4 | CPU installed | CPU installed | CPU installed | CPU installed
Page 26
Figure 1-8 CPU Locations on Cell Board
Socket 2
Socket 3
Socket 1 Socket 0
Cell Controller
Memory Subsystem
Figure 1-9 shows a simplified view of the memory subsystem. It consists of two independent access paths, each path having its own address bus, control bus, data bus, and DIMMs. Address and control signals are fanned out through register ports to the synchronous dynamic random access memory (SDRAM) on the DIMMs.
The memory subsystem comprises four independent quadrants. Each quadrant has its own memory data bus connected from the cell controller to the two buffers for the memory quadrant. Each quadrant also has two memory control buses; one for each buffer.
Page 27
Figure 1-9 Memory Subsystem
DIMMs
The memory DIMMs used by the server are custom designed by HP. Each DIMM contains DDR-II SDRAM memory that operates at 533 MT/s. Industry standard DIMM modules do not support the high availability and shared memory features of the server. Therefore, industry standard DIMM modules are not supported.
The server supports DIMM sizes of 1 GB, 2 GB, and 4 GB. Table 1-2 (page 27) lists each supported DIMM size, the resulting total system capacity, and the memory component density. Each DIMM is connected to two buffer chips on the cell board.
See Appendix C for more information on DIMM slot mapping and valid memory configurations.
Table 1-2 Server DIMMs
DIMM Size | Total Capacity | Memory Component Density
1 GB | 32 GB | 128 Mb
2 GB | 64 GB | 256 Mb
4 GB | 128 GB | 512 Mb
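As a rough cross-check of the Total Capacity column, total memory is simply the number of populated DIMM slots multiplied by the DIMM size. The minimal Python sketch below assumes 16 DIMM slots per cell board and two cell boards, as described earlier in this chapter; the function name is illustrative only.

    # Python sketch: cross-checking the totals in Table 1-2
    DIMM_SLOTS_PER_CELL = 16   # DIMM slots on each cell board
    CELL_BOARDS = 2            # maximum cell boards in the server

    def total_memory_gb(dimm_size_gb):
        """Capacity with every DIMM slot on both cell boards populated."""
        return DIMM_SLOTS_PER_CELL * CELL_BOARDS * dimm_size_gb

    for size_gb in (1, 2, 4):
        print(f"{size_gb} GB DIMMs -> {total_memory_gb(size_gb)} GB total")
    # Prints 32, 64, and 128 GB, matching the Total Capacity column above.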
Cells and nPartitions
An nPartition comprises one or more cells working as a single system. Any I/O chassis that is attached to a cell belonging to an nPartition is also assigned to the nPartition. Each I/O chassis has PCI card slots, I/O cards, attached devices, and a core I/O card assigned to the I/O chassis.
Page 28
On the server, each nPartition has its own dedicated portion of the server hardware which can run a single instance of the operating system. Each nPartition can boot, reboot, and operate independently of any other nPartitions and hardware within the same server complex.
The server complex includes all hardware within an nPartition server: all cabinets, cells, I/O chassis, I/O devices and racks, management and interconnecting hardware, power supplies, and fans.
A server complex can contain one or two nPartitions, enabling the hardware to function as a single system or as multiple systems.
NOTE: Partition configuration information is available on the Web at:
http://docs.hp.com
Refer to HP System Partitions Guide: Administration for nPartitions for details.
Internal Disk Devices for the Server
As Figure 1-10 shows, in a server cabinet, the top internal disk drives connect to cell 1 through the core I/O for cell 1. Both of the bottom disk drives connect to cell 0 through the core I/O for cell 0.
The DVD/DAT drive connects to cell 1 through the core I/O card for cell 1.
Figure 1-10 Disk Drive and DVD Drive Location
Drive 1-1
Path: 1/0/0/3/0.6.0
Drive 1-2 Path: 1/0/1/1/0/4/1.6.0
Drive 0-2
Path: 0/0/1/1/0/4/1.5.0
Drive 0-1 Path: 0/0/0/3/0.6.0
DVD/DAT/ Slimline DVD Drive Path: 1/0/0/3/1.2.0
Slimline DVD Drive Path: 0/0/0/3/1.2.0
Page 29
System Backplane
The system backplane contains the following components:
The system clock generation logic
The system reset generation logic
DC-to-DC converters
Power monitor logic
Two local bus adapter (LBA) chips that create internal PCI buses for communicating with the core I/O card
The backplane also contains connectors for attaching the cell boards, the PCI-X backplane, the core I/O board set, SCSI cables, bulk power, chassis fans, the front panel display, intrusion switches, and the system scan card. Unlike Superdome or the HP Integrity rx8640, there are no Crossbar Chips (XBC) on the system backplane. The “crossbar-less” back-to-back CC connection increases performance.
Only half of the core I/O board set connects to the system backplane. The MP/SCSI boards plug into the backplane, while the LAN/SCSI boards plug into the PCI-X backplane.
Figure 1-11 System Backplane Block Diagram
PCI-X backplane
Cell board 1
Cell board 0
System backplane
Bulk power supply
MP Core I/O MP/SCSI
MP Core I/O MP/SCSI
Cell boards are perpendicular to the system backplane.
System Backplane to PCI-X Backplane Connectivity
The PCI-X backplane uses two connectors for the SBA link bus and two connectors for the high speed data signals and the manageability signals.
SBA link bus signals are routed through the system backplane to the cell controller on each corresponding cell board.
The high speed data signals are routed from the SBA chips on the PCI-X backplane to the two LBA PCI bus controllers on the system backplane.
Clocks and Reset
The system backplane contains reset and clock circuitry that propagates through the whole system. The system backplane central clocks drive all major chip set clocks. The system central clock circuitry features redundant, hot-swappable oscillators.
I/O Subsystem
The cell board to the PCI-X board path runs from the CC to the SBA, from the SBA to the ropes, from the ropes to the LBA, and from the LBA to the PCI slots seen in Figure 1-12. The CC on cell
Page 30
board 0 and cell board 1 communicates through an SBA over the SBA link. The SBA link consists of both an inbound and an outbound link with an effective bandwidth of approximately 11.5 GB/sec. The SBA converts the SBA link protocol into “ropes.” A rope is defined as a high-speed, point-to-point data bus. The SBA can support up to 16 of these high-speed bi-directional rope links for a total aggregate bandwidth of approximately 11.5 GB/sec. Each LBA acts as a bus bridge, supporting either one or two ropes and capable of driving 33 MHz or 66 MHz for PCI cards. The LBAs can also drive at 66 MHz or 133 MHz for PCI-X cards, and at 266 MHz for PCI-X mode 2 cards installed in mode 2 capable slots.
Figure 1-12 PCI-X Board to Cell Board Block Diagram
[Figure 1-12 shows, for each cell board, the cell controller (CC) connected through the SBA link to a System Bus Adapter (SBA) on the PCI-X backplane. Each SBA drives LBAs 1, 2, 4, 6, 8, 10, 12, and 14, which connect to PCI slots 8, 7, 6, 5, 1, 2, 3, and 4, respectively.]
Table 1-3 and Table 1-4 list the mapping of PCI-X slots to boot paths. The cell column refers to
the cell board installed in the server in cell slot 0 and in cell slot 1.
Table 1-3 PCI-X paths for Cell 0
Cell | PCI-X Slot | I/O Chassis | Path
0 | 1 | 0 | 0/0/8/1
0 | 2 | 0 | 0/0/10/1
0 | 3 | 0 | 0/0/12/1
0 | 4 | 0 | 0/0/14/1
0 | 5 | 0 | 0/0/6/1
0 | 6 | 0 | 0/0/4/1
0 | 7 | 0 | 0/0/2/1
0 | 8 | 0 | 0/0/1/1
Page 31
Table 1-4 PCI-X Paths Cell 1
Cell | PCI-X Slot | I/O Chassis | Path
1 | 1 | 1 | 1/0/8/1
1 | 2 | 1 | 1/0/10/1
1 | 3 | 1 | 1/0/12/1
1 | 4 | 1 | 1/0/14/1
1 | 5 | 1 | 1/0/6/1
1 | 6 | 1 | 1/0/4/1
1 | 7 | 1 | 1/0/2/1
1 | 8 | 1 | 1/0/1/1
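The boot paths in Tables 1-3 and 1-4 follow a simple pattern: the cell number, then the LBA (rope) number serving the slot, then a trailing /1. The short Python sketch below only restates that pattern; the slot-to-LBA numbers are taken directly from the two tables, and the helper function name is illustrative.

    # Python sketch: reproducing the slot-to-path pattern of Tables 1-3 and 1-4
    SLOT_TO_LBA = {1: 8, 2: 10, 3: 12, 4: 14, 5: 6, 6: 4, 7: 2, 8: 1}

    def pcix_boot_path(cell, slot):
        """Boot path for a PCI-X slot: <cell>/0/<LBA rope number>/1."""
        return f"{cell}/0/{SLOT_TO_LBA[slot]}/1"

    print(pcix_boot_path(0, 3))   # 0/0/12/1, as listed in Table 1-3
    print(pcix_boot_path(1, 8))   # 1/0/1/1, as listed in Table 1-4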
The server supports two internal SBAs. Each SBA provides the control and interfaces for eight PCI-X slots. The interface is through the rope bus (16 ropes per SBA). For each SBA, the ropes are divided in the following manner:
A single rope is routed to support the core I/O boards through LBAs located on the system backplane.
A single rope is routed to an LBA on the PCI backplane to support a slot for PCI and PCI-X cards (slot 8).
Six ropes are bundled into double ropes to three LBAs. They support slots 1, 2, and 7 for PCI and PCI-X mode 1 cards.
Eight fat ropes are bundled into quad ropes to four LBAs. They support slots 3, 4, 5, and 6 for PCI and PCI-X mode 2 cards.
NOTE: PCI-X slots 1-7 are dual rope slots while slot 8 is a single rope slot. A rope is defined as a high speed point to point data bus.
The PCI-X backplane is the primary I/O interface for the server. It provides 16, 64-bit, hot-plug PCI/PCI-X slots. Fourteen of the slots have dual ropes connected to the LBA chips. The remaining two slots have a single rope connected to each LBA chip. Each of the sixteen slots is capable of 66 MHz/33 MHz PCI or 133 MHz/66 MHz PCI-X. Four slots in PCI-X support 266 MHz. All sixteen PCI slots are keyed for 3.3 volt connectors (accepting both Universal and 3.3 V cards). See Table 1-5 for more details.
The PCI-X backplane is physically one board, but it behaves like two independent partitions. SBA 0, its associated LBAs, and eight PCI-X slots form one I/O partition. SBA 1, its associated LBAs, and eight PCI-X slots form the other I/O partition. One I/O partition can be reset separately from the other I/O partition, but cannot be powered down independently.
IMPORTANT: Always refer to the PCI card’s manufacturer for the specific PCI card performance specifications. PCI, PCI-X mode 1, and PCI-X mode 2 cards are supported at different clock speeds. Select the appropriate PCI-X I/O slot for best performance.
Table 1-5 lists the PCI-X slot types supported on the server.
Page 32
Table 1-5 PCI-X Slot Types
I/O Partition | Slot (1) | Maximum MHz | Maximum Peak Bandwidth | Ropes | Supported Cards | PCI Mode Supported
0 | 8 | 133 | 533 MB/s | 001 | 3.3 V | PCI or PCI-X Mode 1
0 | 7 | 133 | 1.06 GB/s | 002/003 | 3.3 V | PCI or PCI-X Mode 1
0 | 6 | 266 | 2.13 GB/s | 004/005 | 3.3 V or 1.5 V | PCI-X Mode 2
0 | 5 | 266 | 2.13 GB/s | 006/007 | 3.3 V or 1.5 V | PCI-X Mode 2
0 | 4 | 266 | 2.13 GB/s | 014/015 | 3.3 V or 1.5 V | PCI-X Mode 2
0 | 3 | 266 | 2.13 GB/s | 012/013 | 3.3 V or 1.5 V | PCI-X Mode 2
0 | 2 | 133 | 1.06 GB/s | 010/011 | 3.3 V | PCI or PCI-X Mode 1
0 | 1 | 133 | 1.06 GB/s | 008/009 | 3.3 V | PCI or PCI-X Mode 1
1 | 8 | 133 | 533 MB/s | 001 | 3.3 V | PCI or PCI-X Mode 1
1 | 7 | 133 | 1.06 GB/s | 002/003 | 3.3 V | PCI or PCI-X Mode 1
1 | 6 | 266 | 2.13 GB/s | 004/005 | 3.3 V or 1.5 V | PCI-X Mode 2
1 | 5 | 266 | 2.13 GB/s | 006/007 | 3.3 V or 1.5 V | PCI-X Mode 2
1 | 4 | 266 | 2.13 GB/s | 014/015 | 3.3 V or 1.5 V | PCI-X Mode 2
1 | 3 | 266 | 2.13 GB/s | 012/013 | 3.3 V or 1.5 V | PCI-X Mode 2
1 | 2 | 133 | 1.06 GB/s | 010/011 | 3.3 V | PCI or PCI-X Mode 1
1 | 1 | 133 | 1.06 GB/s | 008/009 | 3.3 V | PCI or PCI-X Mode 1
1 Each slot will auto-select the proper speed for the card installed, up to the maximum speed for the slot. Placing high-speed cards into slow-speed slots will cause the card to be driven at the slow speed.
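The peak bandwidth figures in Table 1-5 follow from the 64-bit (8-byte) width of the PCI/PCI-X bus multiplied by its effective clock rate; slot 8 runs at up to 133 MHz but is limited to 533 MB/s by its single rope, as noted earlier. A minimal Python sketch of the arithmetic, assuming the nominal clock rates of 66.67, 133.33, and 266.67 MHz:

    # Python sketch: peak bandwidth of a 64-bit (8-byte wide) PCI/PCI-X bus
    def peak_bandwidth_mb_per_s(clock_mhz, bus_width_bytes=8):
        """Peak transfer rate in MB/s = bus width in bytes x clock in MHz."""
        return clock_mhz * bus_width_bytes

    for mhz in (66.67, 133.33, 266.67):
        print(f"{mhz:.0f} MHz -> {peak_bandwidth_mb_per_s(mhz):.0f} MB/s")
    # Roughly 533 MB/s, 1067 MB/s (1.06 GB/s), and 2133 MB/s (2.13 GB/s),
    # matching the Maximum Peak Bandwidth column above.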
PCI-X/PCIe Backplane
The 16-slot (8 PCI/PCI-X; 8 PCI-Express) mixed PCI-X/PCI-Express ("PCI-X/PCIe") I/O backplane was introduced for the Dual-Core Intel® Itanium® processor 9100 series release and is heavily leveraged from the PCI-X backplane design. Only the differences are described here. See "I/O Subsystem" (page 29) for content common to the two boards.
The PCI-Express I/O backplane comprises two logically independent I/O circuits (partitions) on one physical board.
The I/O chip in cell location zero (0) and its associated four PCI-X ASICs, four PCIe ASICs, and their respective PCI/PCI-X/PCIe slots form PCI-Express I/O partition 0 plus core I/O.
The I/O chip in cell location one (1) and its associated four PCI-X ASICs, four PCIe ASICs, and their respective PCI/PCI-X/PCIe slots form PCI-Express I/O partition 1 plus core I/O.
Each PCI/PCI-X slot has a host-to-PCI bridge associated with it, and each PCIe slot has a host-to-PCIe bridge associated with it. A dual slot hot swap controller chip and related logic is also associated with each pair of PCI or PCIe slots. The I/O chip on either cell location 0 or 1 is a primary I/O system interface. Upstream, the I/O chips communicate directly with the cell controller ASIC on the host cell board via a high bandwidth logical connection known as the HSS link. When installed in the SEU chassis within a fully configured system, the ASIC on cell location 0 connects
Page 33
to the cell controller chip on cell board 2, and the ASIC on cell location 1 connects to the cell controller chip on cell board 3 through external link cables.
Downstream, the ASIC spawns 16 logical 'ropes' that communicate with the core I/O bridge on the system backplane, PCI interface chips, and PCIe interface chips. Each PCI chip produces a single 64–bit PCI-X bus supporting a single PCI or PCI-X add-in card. Each PCIe chip produces a single x8 PCI-Express bus supporting a single PCIe add-in card.
The ropes in each I/O partition are distributed as follows:
One PCI-X ASIC is connected to each I/O chip with a single rope capable of peak data rates of 533 MB/s (PCIX-66).
Three PCI-X ASICs are connected to each I/O chip with dual ropes capable of peak data rates of 1.06 GB/s (PCIX-133).
Four PCIe ASICs are connected to each I/O chip with dual fat ropes capable of peak data rates of 2.12 GB/s (PCIe x8).
In addition, each I/O chip provides an external single rope connection for the core I/O.
Each PCI-Express slot on the PCI-X/PCIe I/O board is controlled by its own ASIC and is also independently supported by its own half of the dual hot swap controller. All PCIe slots are designed to be compliant with PCIe Rev.1.0. The PCI-Express I/O backplane will provide slot support for VAUX3.3, SMB*, and JTAG.
PCI-X/PCIe Slot Boot Paths
PCI-X/PCIe slot boot paths are directly leveraged from the PCI-X backplane. See Table 1-3
(page 30) and Table 1-4 (page 31) for more details.
NOTE: The differences between the PCI X backplane and the PCI-X/PCIe backplane are as follows:
Twelve ropes are bundled in two-rope pairs to 6 LBAs to support 6 slots for PCI and PCI-X cards instead of 14. These ropes are capable of 133 MHz.
Sixteen ropes are bundled into dual fat ropes to 8 LBAs to support 8 additional slots for PCIe cards. These ropes are capable of 266 MHz.
Table 1-6 PCI-X/PCIe Slot Types
I/O Partition | Slot (1) | Maximum MHz | Maximum Peak Bandwidth | Ropes | Supported Cards | PCI Mode Supported
0 | 8 (2) | 66 | 533 MB/s | 001 | 3.3 V | PCI or PCI-X Mode 1
0 | 7 | 133 | 1.06 GB/s | 002/003 | 3.3 V | PCI or PCI-X Mode 1
0 | 6 | 266 | 2.13 GB/s | 004/005 | 3.3 V | PCIe
0 | 5 | 266 | 2.13 GB/s | 006/007 | 3.3 V | PCIe
0 | 4 | 266 | 2.13 GB/s | 014/015 | 3.3 V | PCIe
0 | 3 | 266 | 2.13 GB/s | 012/013 | 3.3 V | PCIe
0 | 2 | 133 | 1.06 GB/s | 010/011 | 3.3 V | PCI or PCI-X Mode 1
0 | 1 | 133 | 1.06 GB/s | 008/009 | 3.3 V | PCI or PCI-X Mode 1
1 | 8 (2) | 66 | 533 MB/s | 001 | 3.3 V | PCI or PCI-X Mode 1
1 | 7 | 133 | 1.06 GB/s | 002/003 | 3.3 V | PCI or PCI-X Mode 1
1 | 6 | 266 | 2.13 GB/s | 004/005 | 3.3 V | PCIe
1 | 5 | 266 | 2.13 GB/s | 006/007 | 3.3 V | PCIe
1 | 4 | 266 | 2.13 GB/s | 014/015 | 3.3 V | PCIe
1 | 3 | 266 | 2.13 GB/s | 012/013 | 3.3 V | PCIe
1 | 2 | 133 | 1.06 GB/s | 010/011 | 3.3 V | PCI or PCI-X Mode 1
1 | 1 | 133 | 1.06 GB/s | 008/009 | 3.3 V | PCI or PCI-X Mode 1
1. Each slot will auto-select the proper speed for the card installed, up to the maximum speed for the slot. Placing high-speed cards into slow-speed slots will cause the card to be driven at the slow speed.
2. Slot is driven by a single rope and has a maximum speed of 66 MHz.
MP/SCSI Board
Up to two MP/SCSI cards can be plugged into the server. At least one MP/SCSI board is required (independent of partitions). An additional MP/SCSI board is required in a dual partition system. Both MP/SCSI boards are oriented vertically and plug into the system backplane. The MP/SCSI board incorporates a dual channel Ultra320 SCSI controller and is hot-pluggable.
LAN/SCSI Board
At least one LAN/SCSI board is required for the minimum system configuration. Two are required in a dual partition system. The LAN/SCSI board is a standard PCI form factor card with PCI card edge connectors. The PCI-X backplane has one slot location reserved for the required board and another that can accommodate either a second LAN/SCSI board or any other supported add-in PCI-X card. The LAN/SCSI board is hot-pluggable.
Mass Storage (Disk) Backplane
Internal mass storage connections to disks are routed on the mass storage backplane, which has connectors and termination logic. All hard disks are hot-plug, but removable media disks are not. The servers accommodate one internal, half-height, removable media device or two internal, slimline DVD+RW removable media devices. The mass storage backplane incorporates a circuit that enables power to the internal removable media device to be programmatically cycled.
Page 35
2 Server Site Preparation
This chapter describes the basic server configuration and its physical specifications and requirements.
Dimensions and Weights
This section provides dimensions and weights of the system components. Table 2-1 gives the dimensions and weights for a fully configured server.
Table 2-1 Server Dimensions and Weights
Dimension | Standalone | Packaged
Height - inches (centimeters) | 17.3 (43.9) | 35.75 (90.8)
Width - inches (centimeters) | 17.5 (44.4) | 28.0 (71.1)
Depth - inches (centimeters) | 30.0 (76.2) | 28.38 (72.0)
Weight - pounds (kilograms) | 220.0 (100.0), see note 1 | 665.0 (302.0), see note 2
1 This weight represents a fully configured server before it is installed in a rack.
2 The packaged weight represents a server installed in a 2-m rack. The packaged weight includes a fully configured server in a 2-m rack with a rear door, rail slide kit, line cord anchor kit, interlock assembly, cable management arm, 120-lb ballast kit, and a 60-A PDU. The shipping box, pallet, and container, not included in the packaged weight in Table 2-1, add approximately 150.0 lb to the total system weight when shipped. The size and number of miscellaneous pallets will be determined by the equipment ordered by the customer.
Table 2-2 provides component weights for calculating the weight of a server that is not fully configured. Table 2-3 provides an example of how to calculate the weight, and Table 2-4 is a blank worksheet for calculating the weight of the server. To determine the overall weight, follow the example in Table 2-3 and complete the worksheet in Table 2-4 for your system (a short script after Table 2-4 illustrates the same calculation).
Table 2-2 Server Component Weights
Quantity | Description | Weight, lb (kg)
1 | Chassis | 90.0 (41.0)
1 - 2 | Cell board | 27.80 (12.61) each
1 | System backplane | 12 (5.44) (estimate)
1 | PCI-X card cage assembly | 20.4 (9.25)
2 | Bulk power supply | 18.0 (8.2) each
1 | Mass storage backplane | 1.0 (0.45)
2 | PCI-X power supplies | 5.0 (2.27) each
1 - 4 | Hard disk drive | 1.60 (0.73) each
1 | Removable media disk drive | 2.20 (1.00) each
Table 2-3 Example Weight Summary
Component | Quantity | Multiply, lb (kg) | Weight, lb (kg)
Cell board | 2 | 27.8 (12.16) | 107.20 (48.64)
PCI card (varies; sample value used) | 4 | 0.34 (0.153) | 1.36 (0.61)
Power supply (BPS) | 2 | 18 (8.2) | 36.0 (16.4)
DVD drive | 1 | 2.2 (1.0) | 4.4 (2.0)
Hard disk drive | 4 | 1.6 (0.73) | 6.40 (2.90)
Chassis with skins and front bezel cover | 1 | 90.0 (41.0) | 131.0 (59.42)
Total weight | | | 286.36 (129.89)
Table 2-4 Weight Summary
Component | Quantity | Multiply By, lb (kg) | Weight, lb (kg)
Cell board | | 27.8 (12.16) |
PCI card | | 0.34 (0.153) |
Power supply (BPS) | | 18 (8.2) |
DVD drive | | 2.2 (1.0) |
Hard disk drive | | 1.6 (0.73) |
Chassis with skins and front bezel cover | | 90.0 (41.0) |
Total weight | | |
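The worksheet arithmetic is simply quantity multiplied by per-unit weight, summed over the components. The minimal Python sketch below uses the Multiply By values from the worksheet; the example quantities are illustrative only, so the total it prints is for that sample configuration rather than any specific table entry.

    # Python sketch: filling in the Table 2-4 worksheet
    UNIT_WEIGHT_LB = {                      # per-unit weights from the Multiply By column
        "Cell board": 27.8,
        "PCI card": 0.34,
        "Power supply (BPS)": 18.0,
        "DVD drive": 2.2,
        "Hard disk drive": 1.6,
        "Chassis with skins and front bezel cover": 90.0,
    }

    def total_weight_lb(quantities):
        """Sum quantity x per-unit weight over every component in the worksheet."""
        return sum(qty * UNIT_WEIGHT_LB[name] for name, qty in quantities.items())

    example = {"Cell board": 2, "PCI card": 4, "Power supply (BPS)": 2,
               "DVD drive": 1, "Hard disk drive": 4,
               "Chassis with skins and front bezel cover": 1}
    print(f"{total_weight_lb(example):.2f} lb")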
Electrical Specifications
This section provides electrical specifications for the server.
Grounding
The site building shall provide a safety ground and protective earth for each AC service entrance to all cabinets.
Install a protective earthing (PE) conductor that is identical in size, insulation material, and thickness to the branch-circuit supply conductors. The PE conductor must be green with yellow stripes. The earthing conductor must be connected from the unit to the building installation earth or if supplied by a separately derived system, at the supply transformer or motor-generator set grounding point.
Circuit Breaker
The Marked Electrical for the server is 15 amps per line cord. The recommended circuit breaker size is 20 amps for North America. For countries outside North America, consult your local electrical authority having jurisdiction for the recommended circuit breaker size.
The server contains four C20 power receptacles located at the bottom rear bulkhead. A minimum of two power cords must be used to maintain normal operation of the server. A second set of two cords can be added to improve system availability by protecting, for example, against power source failures or accidentally tripped circuit breakers. The server can receive AC input from two different AC power sources.
System AC Power Specifications
Power Cords
Table 2-5 lists the power cables available for use with the server. Each power cord is 15 feet (4.5 meters) in length with an IEC 60320-1 C19 female connector attached to one end.
Page 37
Table 2-5 Power Cords
Part Number | Description | Where Used
8120-6895 | Stripped end, 240 volt | International - Other
8120-6897 | Male IEC309, 240 volt | International - Europe
8121-0070 | Male GB-1002, 240 volt | China
8120-6903 | Male NEMA L6-20, 240 volt | North America/Japan
System Power Specifications
Table 2-6 lists the AC power requirements for the HP Integrity rx7640 and HP 9000 rp7440 servers. Table 2-7 lists the system power requirements for the HP 9000 rp7440 Server. For the system power requirements for the HP Integrity rx7640 Server, see Chapter 7. These tables provide information to help determine the amount of AC power needed for your computer room.
Table 2-6 AC Power Requirements
Requirements | Value | Comments
Nominal input voltage | 200/208/220/230/240 VAC rms |
Frequency range (minimum - maximum) | 50 - 60 Hz |
Number of phases | 1 |
Maximum input current | 12 amps | Per line cord
Maximum inrush current | 30 A peak for 15 ms | Per line cord
Power factor correction | >0.98 | At all loads of 50% - 100% of supply rating
Power factor correction | >0.95 | At all loads of 25% - 50% of supply rating
Ground leakage current | <3.0 mA | Per line cord
Table 2-7 System Power Requirements for the HP 9000 rp7440 Server
Power Required (50 - 60 Hz) | Watts | VA | Comments
Maximum Theoretical Power | 3166 | 3231 | See Note 1
Marked Electrical Power | - | 2640 | 12 A @ 220 VAC; see Note 2
User-Expected Maximum Power | 2128 | 2171 | See Note 3
Note 1: Maximum Theoretical Power, or "Maximum Configuration" (input power at the AC input, expressed in watts and volt-amps to take power factor correction into account). This is the calculated sum of the maximum worst-case power consumption of every subsystem in the server. This number will never be exceeded by a functioning server for any combination of hardware and software under any conditions.
Note 2: Marked Electrical Power (input power at the AC input, expressed in volt-amps). The Marked Electrical Power is the rating given on the chassis label and represents the input power required for facility AC power planning and wiring requirements. This number represents the expected maximum power consumption for the server based on the power rating of the bulk power supplies. This number can safely be used to size AC circuits and breakers for the system under all conditions.
Note 3: User-Expected Maximum Power (input power at the AC input, expressed in watts and volt-amps). The measured maximum worst-case power consumption. This number represents the largest power consumption that HP engineers were
Page 38
able to produce for the server with any combination of hardware under laboratory conditions using aggressive software applications designed specifically to work the system at maximum load. This number can safely be used to compute thermal loads and power consumption for the system under all conditions.
Environmental Specifications
This section provides the environmental, power dissipation, noise emission, and airflow specifications for the server.
Temperature and Humidity
The cabinet is actively cooled using forced convection in a Class C1-modified environment. The recommended humidity level for Class C1 is 40 to 55% relative humidity (RH).
Operating Environment
The system is designed to run continuously and meet reliability goals in an ambient temperature of 5° to 35° C at sea level. The maximum allowable temperature is derated 1° C per 1,000 feet of elevation above 3,000 feet above sea level up to 25° C at 10,000 feet. For optimum reliability and performance, the recommended operating range is 20° to 25° C. This meets or exceeds the requirements for Class 2 in the corporate and ASHRAE standard. See Table 2-8 (page 38) for an example of the ASHRAE thermal report.
Table 2-8 Example ASHRAE Thermal Report
Condition: voltage 208 V
Overall system dimensions (W x D x H): 17.50 x 30.00 x 17.29 inches (444.50 x 762.00 x 439.17 mm) for each configuration listed below.
Description | Typical Heat Release (watts) | Airflow, nominal (cfm) | Airflow, maximum at 35° C (m3/hr) | Weight, lb (kg)
Minimum configuration | 670 | 960 | 1631 | 192.2 (87.4)
Full configuration | 2128 | 960 | 1631 | 220 (100)
Typical configuration | 1090 | 960 | 1637 | N/A
ASHRAE class configurations:
Minimum configuration: 1 cell board, 2 CPUs, 2 GB, 1 core I/O card
Full configuration: 2 cell boards, 8 CPUs, 64 GB, 2 core I/O cards
Typical configuration: 1 cell board, 4 CPUs, 32 GB, 1 core I/O card, 8 I/O cards, 2 hard drives
Page 39
Environmental Temperature Sensor
To ensure that the system is operating within the published limits, the ambient operating temperature is measured using a sensor placed near the chassis inlet, between the cell boards. Data from the sensor is used to control the fan speed and to initiate system overtemp shutdown.
Non-Operating Environment
The system is designed to withstand ambient temperatures between -40° C and 70° C under non-operating conditions.
Cooling
Internal Chassis Cooling
The cabinet incorporates front-to-back airflow across the cell boards and system backplane. Two 150 mm fans, mounted externally on the front chassis wall behind the cosmetic front bezel, push air into the cell section. Two 150 mm fans housed in cosmetic plastic fan carriers, mounted externally to the rear chassis wall, pull air through the cell section.
Each fan is controlled by a smart fan control board, embedded in the fan module plastic housing. The smart fan control board receives fan control input from the system fan controller on the system backplane and returns fan status information to the system fan controller. The smart fan control board also controls the power and the pulse width modulated control signal to the fan and monitors the speed indicator back from the fan. The fan status LED is driven by the smart fan control board.
Bulk Power Supply Cooling
Cooling for the bulk power supplies (BPS) is provided by two 60 mm fans contained within each BPS. Air flows into the front of the BPS and is exhausted out of the top of the power supply through upward facing vents near the rear of the supply. The air is then ducted out of the rear of the chassis with minimal leakage into the cell airflow plenum.
PCI/Mass Storage Section Cooling
Six 92 mm fans located between the mass storage devices and the PCI card cage provide airflow through these devices. The PCI fans are powered with housekeeping power and run at full speed at all times. The air is pulled through the mass storage devices and pushed through the PCI Card Cage. Perforation is provided between the PCI bulkheads to allow adequate exhaust ventilation.
Standby Cooling
Several components within the chassis consume significant amounts of power while the system is in standby mode. The system fans run at a portion of full speed during standby to remove the resulting heat from the cabinet. The fans within the power supply will operate at full speed during standby.
Typical Power Dissipation and Cooling
Table 2-9 provides calculations for configurations for the HP 9000 rp7440 Server. For calculations
for the HP Integrity rx7640 Server, see Chapter 7.
Page 40
Table 2-9 Typical Server Configurations for the HP 9000 rp7440 Server
Cell Boards (qty) | Memory per Cell Board (GB) | PCI Cards (qty, assumes 10 watts each) | DVDs (qty) | Hard Disk Drives (qty) | Core I/O (qty) | Bulk Power Supplies (qty) | Typical Power (watts) | Typical Cooling (BTU/hr)
2 | 32 | 16 | 2 | 4 | 2 | 2 | 2128 | 7265
2 | 16 | 8 | 0 | 2 | 2 | 2 | 1958 | 6685
2 | 8 | 8 | 0 | 2 | 2 | 2 | 1921 | 6558
1 | 8 | 8 | 0 | 1 | 1 | 2 | 1262 | 4308
The air conditioning data is derived using the following equations.
Watts x (0.860) = kcal/hour
Watts x (3.414) = Btu/hour
Btu/hour divided by 12,000 = tons of refrigeration required
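As a worked example of these equations, the minimal Python sketch below converts an input power figure into the corresponding cooling loads; the 2,128 W value is the full configuration from Table 2-9, and the function name is illustrative only.

    # Python sketch: applying the air-conditioning equations above
    def cooling_load(watts):
        """Return (kcal/hour, Btu/hour, tons of refrigeration) for a given power draw."""
        kcal_per_hour = watts * 0.860
        btu_per_hour = watts * 3.414
        tons = btu_per_hour / 12000
        return kcal_per_hour, btu_per_hour, tons

    kcal, btu, tons = cooling_load(2128)    # full configuration from Table 2-9
    print(f"{kcal:.0f} kcal/hr, {btu:.0f} Btu/hr, {tons:.2f} tons")
    # 2128 W -> about 7265 Btu/hr (the Table 2-9 value) and roughly 0.6 tons.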
NOTE: When determining power requirements you must consider any peripheral equipment that will be installed during initial installation or as a later update. Refer to the applicable documentation for such devices to determine the power and air-conditioning that is required to support these devices.
Acoustic Noise Specification
The acoustic noise specification for the server is 57.3 dB (sound pressure level at the bystander position). It is appropriate for dedicated computer room environments but not office environments. The LwA is 7.5 bels. Care should be taken to understand the acoustic noise specifications relative to operator positions within the computer room or when adding servers to computer rooms with existing noise sources.
Airflow
The recommended server cabinet air intake temperature is between 20° and 25° C (68° and 77° F) at 960 CFM.
Figure 2-1 illustrates the location of the inlet and outlet air ducts on a single cabinet. Air is drawn into the front of the server and forced out the rear.
Page 41
Figure 2-1 Airflow Diagram
System Requirements Summary
This section summarizes the requirements that must be considered in preparing the site for the server.
Power Consumption and Air Conditioning
To determine the power consumed and the air conditioning required, follow the guidelines in
Table 2-9.
NOTE: When determining power requirements, consider any peripheral equipment that will be installed during initial installation or as a later update. Refer to the applicable documentation for such devices to determine the power and air conditioning required to support them.
Maximum power is the sum of the worst-case power consumption of every subsystem in the box and should be used to size worst-case power consumption. Typical power consumption numbers are what HP engineers measured when running power-intensive applications. These are generally lower than maximum power numbers, because it is uncommon for all of the subsystems in the box to draw maximum power simultaneously for long durations.
Page 42
Page 43
3 Installing the Server
Inspect shipping containers when the equipment arrives at the site. Check equipment after the packing has been removed. This chapter discusses how to inspect and install the server.
Receiving and Inspecting the Server Cabinet
This section contains information about receiving, unpacking and inspecting the server cabinet.
NOTE: The server will ship in one of three different configurations. The configurations are:
On a pallet installed in a server cabinet
On a pallet for rack mount into an existing cabinet on the customer site
On a pallet with a wheel kit for installation as a standalone server
HP shipping containers are designed to protect their contents under normal shipping conditions. A tilt indicator is installed on each carton shipped. The tilt indicator has two windows, and each window under normal conditions will show four beads present. If a carton has been mishandled, accidentally dropped, or knocked against something, the tilt indicator will indicate missing beads. If the container has been tilted to an angle that could cause equipment damage, the beads in the indicator will roll to the upper position.
After the equipment arrives at the customer site, carefully inspect each carton for signs of shipping damage. If the container is damaged, document the damage with photographs and contact the transport carrier immediately.
NOTE: The factory provides an installation warranty that is effective from the time the customer receives the shipment until Field Services turns the system over to the customer.
Upon inspection of a received system and during installation of the system, if any parts or accessories are missing or defective, they will be replaced directly from the factory by a priority process. To request replacement parts, the HP Installation Specialist must contact the local Order Fulfillment group which will coordinate the replacement with the factory.
Unpacking the Server Cabinet
This section contains information about unpacking the server cabinet.
WARNING! Wear protective glasses while cutting the plastic bands around the shipping container. These bands are under tension. When cut, they can spring back and cause serious eye injury.
NOTE: Position the pallet to allow enough space to roll the cabinet off the pallet before starting.
Remove the server cabinet using the following steps:
1. Cut the polystrap bands around the shipping container.
2. Lift the cardboard top cap from the shipping box. Refer to Figure 3-1,
Page 44
Figure 3-1 Removing the Polystraps and Cardboard
3. Remove the corrugated wrap from the pallet.
4. Remove the packing materials.
CAUTION: Cut the plastic wrapping material off rather than pull it off. Pulling the plastic covering off represents an electrostatic discharge (ESD) hazard to the hardware.
5. Remove the four bolts holding down the ramps, and remove the ramps.
Page 45
NOTE: Figure 3-2 shows one ramp attached to the pallet on either side of the cabinet with
each ramp secured to the pallet using two bolts. In an alternate configuration, the ramps are secured together on one side of the cabinet with one bolt.
Figure 3-2 Removing the Shipping Bolts and Plastic Cover
Page 46
6. Remove the six bolts from the base that attaches the rack to the pallet.
Figure 3-3 Preparing to Roll Off the Pallet
WARNING! Be sure that the leveling feet on the rack are raised before you roll the rack
down the ramp, and any time you roll the rack on the casters. Use caution when rolling the cabinet off the ramp. A single server in the cabinet weighs approximately 508 lb. It is strongly recommended that two people roll the cabinet off the pallet.
After unpacking the cabinet, examine it for damage that might have been obscured by the shipping container. If you discover damage, document the damage with photographs and contact the transport carrier immediately.
If the equipment has any damage, the customer must obtain a damage claim form from the shipping representative. The customer must complete the form and return it to the shipping representative.
Securing the Cabinet
When in position, secure and stabilize the cabinet using the leveling feet at the corners of the base (Figure 3-4). Install the anti-tip mechanisms on the bottom front and rear of the rack.
Page 47
Figure 3-4 Securing the Cabinet
Standalone and To-Be-Racked Systems
Servers shipped in a stand-alone or to-be-racked configuration must have the core I/O handles and the PCI towel bars attached at system installation. Obtain and install the core I/O handles and PCI towel bars from the accessory kit A6093-04046. The towel bars and handles are the same part. Refer to service note A6093A-11.
Rack-Mount System Installation
Information is available to help with rack-mounting the server. This list is intended to guide the HP Installation Specialist to the documentation that has been written by the Rack and Power team. The server can be installed in both the 10000 Series Rack and the Rack System/E.
The external Web site is:
http://h18004.www1.hp.com/products/servers/platforms/rackandpower.html
The internal Web site for 10K Racks is:
http://isspit.corp.hp.com/application/view/ProdCenter.asp?OID=254479
The internal Web site for the System/E Racks is:
http://isspit.corp.hp.com/application/view/ProdCenter.asp?OID=1130382
Lifting the Server Cabinet Manually
Use this procedure only if no HP approved lift is available.
CAUTION: This procedure must only be performed by four qualified HP Service Personnel utilizing proper lifting techniques and procedures.
CAUTION: Observe all electrostatic discharge (ESD) safety precautions before attempting this procedure. Failure to follow ESD safety precautions can result in damage to the server.
1. Follow the instructions on the outside of the service packaging to remove the banding and cardboard top from the server pallet.
Page 48
2. Reduce the weight by removing the bulk power supplies and cell boards. Place each on an ESD approved surface.
CAUTION: System damage can occur through improper removal and reinstallation of bulk power supplies and cell boards. Refer to Chapter 6: Removing and Replacing Components, for the correct procedures to remove and reinstall these components.
3. Remove the system's left and right side covers.
NOTE: The latest lift handles available for the 2-cell servers are symmetrical and can be installed on either side of the server.
4. Locate one handle and ensure the two thumbscrews are removed from its front flange.
5. Insert the 2 protruding tabs on rear flange of handle into the slotted keyways in the server’s chassis. See Figure 3-5.
Figure 3-5 Inserting Rear Handle Tabs into Chassis
6. Align the screw holes in the handle’s front flange with the rack mounting holes in the server’s rack mount flange. Secure with the two thumbscrews. See Figure 3-6 (page 49).
Page 49
Figure 3-6 Attaching the Front of Handle to Chassis
Thumbscrews
7. Repeat steps 4 through 6 to install the other handle on the other side of the server.
8. After both handles are secured, the server is ready to lift.
9. Remove the handles by reversing steps 4 through 6.
10. After moving the server, remove the lift handles from the chassis.
11. After the server is secured, replace the previously removed cell boards and bulk power supplies.
12. Reinstall the side covers and front bezel.
Using the RonI Model 17000 SP 400 Lifting Device
Use the lifter designed by the RonI company to rack-mount the server. The lifter can raise 400 lb/182 kg to a height of 5 feet. The lifter can be broken down into several components. When completely broken down, no single component weighs more than 25 lb/12 kg. The ability to break the lifter down makes it easy to transport from the office to the car and then to the customer site.
Documentation for the RonI lifter has been written by RonI and is available on the HP Cybrary: http://cybrary.inet.cpqcorp.net/ARCHIVE/PUBS/USERS/LIFTOFLEX-17000.pdf. Complete details on how to assemble the lifter, troubleshoot the lifter, and maintain the lifter are provided by RonI.
Use the following procedure to unload the server from the pallet after the lifter is assembled.
Page 50
WARNING! Use caution when using the lifter. To avoid injury, because of the weight of the server, center the server on the lifter forks before raising it off the pallet.
Always rack the server in the bottom of a cabinet for safety reasons. Never extend more than one server from the same cabinet while installing or servicing another server product. Failure to follow these instructions could result in the cabinet tipping over.
Figure 3-7 RonI Lifter
1. Obtain the HP J1530C Rack Integration Kit Installation Guide before proceeding with the rack mount procedure. This guide covers these important steps:
Installing the anti-tip stabilizer kit (A5540A)
Installing the ballast kit (J1479A)
Installing the barrel nuts on the front and rear columns
Installing the slides
2. Follow the instructions on the outside of the server packaging to remove the banding and carton top from the server pallet.
3. Carefully roll the lift forward until it is fully positioned against the side of the pallet.
Page 51
Figure 3-8 Positioning the Lifter to the Pallet
4. Carefully slide server onto lifter forks.
5. Slowly raise the server off the pallet until it clears the pallet cushions.
Page 52
Figure 3-9 Raising the Server Off the Pallet Cushions
6. Carefully roll the lifter and server away from the pallet. Do not raise the server any higher than necessary when moving it over to the rack.
7. Follow the HP J1530C Rack Integration Kit Installation Guide to complete these steps:
Mounting the server to the slides
Installing the cable management arm (CMA)
Installing the interlock device assembly (if two servers are in the same cabinet)
Wheel Kit Installation
Compare the packing list (Table 3-1) with the contents of the wheel kit before beginning the installation. For a more updated list of part numbers, go to the HP Part Surfer web site at: http://www.partsurfer.hp.com.
Table 3-1 Wheel Kit Packing List
Part Number | Description | Quantity
A6753-04013 | Wheel kit, consisting of the following components: | 1
A6753-04002 | Side cover | 1
A6753-04003 | Side cover | 1
A6753-04004 | Top cover | 1
A6753-00007 | Caster cover | 2
A6753-04001 | Right front caster assembly | 1
A6753-04005 | Right rear caster assembly | 1
A6753-04006 | Left front caster assembly | 1
A6753-04007 | Left rear caster assembly | 1
0515-2478 | M4 x 0.7 8 mm T15 steel zinc machine screw (used to attach each caster to the chassis) | 4
A6093-44013 | Plywood unloading ramp | 1
Not applicable | Phillips head wood screw (used to attach the ramp to the pallet) | 2
Tools Required for Installation
The following list provides the installer with the recommended tools to perform the wheel kit installation.
Diagonal side cutters
Safety glasses
Torx screwdriver with T-15 bit
Phillips head screwdriver
WARNING! Wear protective glasses while cutting the plastic bands around the shipping container. These bands are under tension. When cut, they can spring back and cause serious eye injury.
Use the following procedure to install the wheel kit.
1. Cut and remove the polystrap bands securing the HP server to the pallet.
2. Lift the carton top from the cardboard tray resting on the pallet.
3. Remove the bezel kit carton and the top cushions from the pallet.
Figure 3-10 Component Locations
Top Cushions
Cardboard Tray
Bezel Kit
4. Unfold bottom cardboard tray.
5. Carefully tilt the server and place one of the foam blocks (A6093-44002) under the left side of the server. Do not remove any other cushions until instructed to do so.
Page 54
Figure 3-11 Left Foam Block Position
6. Carefully tilt the server and place the other foam block provided in the kit under the right side of the server.
Figure 3-12 Right Foam Block Position
7. Remove the cushions from the lower front and rear of the server. Do not disturb the side cushions.
Page 55
Figure 3-13 Foam Block Removal
8. Locate and identify the caster assemblies. Use the following table to identify the casters.
NOTE: The caster part number is stamped on the caster mounting plate.
Table 3-2 Caster Part Numbers
Part NumberCaster
A6753-04001Right front
A6753-04005Right rear
A6753-04006Left front
A6753-04007Left rear
9. Locate and remove one of the four screws from the plastic pouch. Attach a caster to the server.
Page 56
Figure 3-14 Attaching a Caster to the Server
10. Attach the remaining casters to the server using the screws supplied in the plastic pouch.
11. Remove the foam blocks from the left and right side of the server.
12. Locate the plywood ramp.
13. Attach the ramp to the edge of the pallet.
NOTE: There are two pre-drilled holes in the ramp. Use the two screws taped to the ramp to attach the ramp to the pallet.
14. Carefully roll the server off the pallet and down the ramp.
15. Locate the caster covers.
NOTE: The caster covers are designed to fit on either side of the server.
16. Insert the slot on the caster cover into the front caster. Secure the cover to the server by tightening the captive screw on the cover at the rear of the server.
Page 57
Figure 3-15 Securing Each Caster Cover to the Server
Caster Cover
Caster Cover
Rear Casters
Front Casters
17. Wheel kit installation is complete when both caster covers are attached to the server, and the front bezel and all covers are installed.
Figure 3-16 Completed Server
Installing the Power Distribution Unit
The server may ship with a power distribution unit (PDU). Two 60 A PDUs are available for the server. Each PDU is 3U high and is mounted horizontally between the rear columns of the server cabinet. The 60 A PDUs are delivered with an IEC-309 60 A plug.
The 60 A NEMA(2) PDU has four 20 A circuit breakers and is constructed for North American use. Each of the four circuit breakers has two IEC(3)-320 C19 outlets, providing a total of eight IEC-320 C19 outlets.
2. The acronym NEMA stands for National Electrical Manufacturers Association.
3. The acronym IEC stands for International Electrotechnical Commission.
Page 58
The 60A IEC PDU has four 16A circuit breakers and is constructed for International use. Each of the four circuit breakers has two IEC-320 C19 outlets providing a total of eight IEC-320 C19 outlets.
Each PDU is 3U high and is rack-mounted in the server cabinet.
Documentation for installation will accompany the PDU. The documentation can also be found at the external Rack Solutions Web site at:
http://www.hp.com/racksolutions
This PDU might be referred to as a Relocatable Power Tap outside HP.
The PDU installation kit contains the following:
PDU with cord and plug
Mounting hardware
Installation instructions
Installing Additional Cards and Storage
This section provides information on additional products ordered after installation and any dependencies for these add-on products.
The following options may be installed in the server.
Additional hard disk drive storage
Removable media device storage
PCI and PCI-X I/O cards
Installing Additional Hard Disk Drives
The disk drives are located in the front of the chassis (Figure 3-17). The hard disk drives are hot-plug drives.
A list of replacement disk drives for the server is in Appendix A of the HP Service Guide. The list contains both removable media disk drives and hard disk drives.
Figure 3-17 Disk Drive and DVD Drive Location
Drive 0-1: Path 0/0/0/3/0.6.0
Drive 0-2: Path 0/0/1/1/0/4/1.5.0
Drive 1-1: Path 1/0/0/3/0.6.0
Drive 1-2: Path 1/0/1/1/0/4/1.6.0
DVD/DAT/Slimline DVD Drive: Path 1/0/0/3/1.2.0
Slimline DVD Drive: Path 0/0/0/3/1.2.0
Use the following procedure to install the disk drives:
1. Be sure the front locking latch is open, then position the disk drive in the chassis.
2. Slide the disk drive into the chassis; slow, firm pressure is needed to properly seat the connector.
3. Press the front locking latch to secure the disk drive in the chassis.
4. If the server OS is running, spin up the disk by entering one of the following commands:
#diskinfo -v /dev/rdsk/cxtxdx
#ioscan -f
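For example, the following is a minimal sketch of verifying a newly added drive from HP-UX. The device file c0t6d0 is a hypothetical placeholder; substitute the cxtxdx value that ioscan reports for the new drive.
# ioscan -fnC disk              # rescan and list disk-class devices with hardware paths and device files
# diskinfo -v /dev/rdsk/c0t6d0  # hypothetical device file; use the new drive's cxtxdx value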
Removable Media Drive Installation
The DVD drive or DAT tape drive is located in the left front of the chassis. The server power must be turned off before installation. See Chapter 4: Booting and Shutting Down the Operating System, "Powering Off Hardware Components and Powering On the Server" (page 142), and "Removing and Replacing a Half-Height DVD/DAT Drive" (page 156).
Figure 3-18 Removable Media Location
1. Remove the front bezel.
2. Remove the filler panel from the server.
3. Install the left and right media rails and clips to the drive.
4. Connect the cables to the rear of the drive.
5. Fold the cables out of the way and slide the drive into the chassis.
The drive easily slides into the chassis; however, a slow firm pressure is needed for proper seating.
The front locking tab will latch to secure the drive in the chassis.
6. Replace the front bezel.
7. Power on the server, and power up nPartitions.
8. Verify operation of the drive.
PCI-X Card Cage Assembly I/O Cards
The server supports a number of PCI and PCI-X I/O cards. Table 3-3 lists the cards currently supported on the server.
Several cards can lose boot functionality in the HP Integrity rx7640 server. The customer must use another I/O card to retain boot functionality if the customer’s card is not supported in the rx7640 server.
Table 3-3 HP Integrity rx7640 PCI-X and PCIe I/O Cards
VMS
Linux®Windows®
HP-UXCard DescriptionPart Number
Gigabit Ethernet (1000b-SX)A4926A
Gigabit Ethernet (1000b-T)A4929A
FCMS - TachliteA5158A
10/100b-TX (RJ45)A5230A
Table 3-3 HP Integrity rx7640 PCI-X and PCIe I/O Cards (continued)
VMS
Linux®Windows®
HP-UXCard DescriptionPart Number
4-port 10/100b-TXA5506B
2-port Ultra2 SCSI/2-Port 100b-T ComboA5838A
Hyperfabric IIA6386A
64-port Terminal MUXA6749A
B2G FC TachliteA6795A
bbNext Gen 1000b-TA6825A
BB2-port 2Gb FCA6826A
1
BB1-port U160 SCSIA6828A
BB2-port U160 SCSIA6829A
bbNext Gen 1000b-SXA6847A
BBObsidian 2 VGA/USBA6869B
2
bbb1000b-SX Dual PortA7011A
bbb1000b-T Dual PortA7012A
BBBB2-port U320 SCSIA7173A
BBb1000b-T GigE/2G FC ComboA9782A
1
BBbPCI-X 1000b-T GigE/2G FC ComboA9784A
1
BBBB2-port Smart Array 6402 (U320)A9890A
BBB4-port Smart Array 6402 (U320)A9891A
BEmulex 9802 Fibre ChannelAB232A
1
PCI-X 2-port 4X InfiniBand HCA (HPC)AB286A
PCI-X 2-Port 4X InfiniBand HCA (HPC)-RoHS
AB286C
bbb10 GbE - Fiber (PCI-X 133)AB287A
BBbBbBbU320 SCSI/GigE Combo CardAB290A
PCI-X 2-port 4X InfiniBand HCAAB345A
PCI-X 2-Port 4X InfiniBand HCA - RoHSAB345C
BBQLogic 1-port 4Gb FC (PCI-X 266)AB378A
1
BBQLogic 1-port 4Gb FC card (PCI-X 266)AB378B
1
BBBBQLogic 2-port 4Gb FC (PCI-X 266)AB379A
1
BBBBQLogic 2-port 4Gb FC card (PCI-X 266)AB379B
1
BB1-Port 4Gb FC QLogic – AB378A
equivalent
AB429A
1
BBb2-port 1000b-T 2Gb FC ComboAB465A
1
BEmulex 1050DC Fibre ChannelAB466A
1
BEmulex 1050D Fibre ChannelAB467A
1
b4-Port 1000b-T EthernetAB545A
Table 3-3 HP Integrity rx7640 PCI-X and PCIe I/O Cards (continued)
VMS
Linux®Windows®
HP-UXCard DescriptionPart Number
BBEmulex 4Gb/sAD167A
1
BBEmulex 4Gb/s DCAD168A
1
BBb1 port 4Gb FC & 1 port GbE HBA PCI-XAD193A
BBb2 port 4Gb FC & 2 port GbE HBA PCI-XAD194A
8-Port Terminal MUXAD278A
64-Port Terminal MUXAD279A
BBLOA (USB/VGA/RMP)AD307A
2-port SerialJ3525A
BBSA P600 (Redstone)337972-B21
PCI-e Cards
BBEmulex 1–port 4Gb FC PCIeA8002A
BBEmulex 2–port 4Gb FC PCIeA8003A
BB1 port 4Gb FC HBA PCIe (Emulex)AD299A
BBBB2 Port 4Gb FC HBA PCIe (QLogic)AD300A
2 Ch 4X Infiniband HCA PCIeAD313A
bbb2 Port 1000bT NIC PCIeAD337A
bbb2 Port 1000bT-SX NIC PCIeAD338A
BB1 Port 4Gb FC HBA PCIe (QLogic)AE311A
BBE500 SAS HBA (Bumper)AH226A
BB2 port 4Gb FC HBA PCIe (Emulex)AD355A
B- Supports Mass Storage Boot
b- Supports LAN Boot
Bb- Supports Mass Storage and LAN Boot
1. Factory integration (software load) of the OpenVMS, Windows, and Linux operating systems
via Fibre Channel is NOT supported.
2. Boot support is limited to OS installation, updating, and repairing media.
IMPORTANT: The above list of part numbers is current and correct as of September 2007. Part numbers change often. Check the following website to ensure you have the latest part numbers associated with this server:
http://partsurfer.hp.com/cgi-bin/spi/main
Installing an Additional PCI-X Card
IMPORTANT: While the installation process for PCI/PCI-X cards and PCI-e cards is the same, PCI-e cards are physically smaller than PCI-X cards and are not interchangeable. See Table 3-3
(page 60) to verify the slot types and order.
NOTE: The PCI I/O card installation process varies depending on what version of the HP-UX operating system you are running on your system. PCI I/O card installation procedures should be downloaded from the http://docs.hp.com/ Web site. Background information and procedures for adding a new PCI I/O card using online addition are found in:
HP System Partitions Guide for HP-UX 11.11
Interface Card OL* Support Guide for HP-UX 11.23
NOTE: The Lights Out Advanced/KVM Card (LOA) is a PCI-X accessory card that can be installed into any sx2000-based Integrity server to enable the advanced virtual graphical console (vKVM) and virtual CD/DVD/ISO file (vMedia) features of the Integrity Lights Out Management Processor (iLO/MP). The LOA card is also a graphics/USB card that offers physical video functionality for servers running Windows, and USB functionality for servers running HP-UX, Windows, and OpenVMS. All Lights Out Advanced features are fully enabled on the LOA card; there is no additional "advanced pack" license to purchase. At present, vKVM is only available for servers running Windows, and vMedia is available for servers running HP-UX, Windows, and OpenVMS. There are no current plans to support the LOA card under Linux.
The LOA card has specific slotting requirements that must be followed for full functionality. They are as follows:
Must be placed in a mode 1 PCI/PCI-X slot
Must be placed in an I/O chassis with a core I/O card
Only one LOA card can be installed in each partition
HP recommends that you place the LOA card in the lowest numbered slot possible.
The server implements manual release latch (MRL) hardware for use in online add or replacement (OLAR) operations. If an MRL is left open while the server is booting, HP-UX can incorrectly cache the PCI slot power status, causing OLAR operations to fail. To prevent this situation, ensure all MRLs are closed before booting the server.
If OLAR reports that a slot is present and powered off, but no OLAR operation to turn power on to that slot succeeds even after the MRL is closed, the MRL may have been left open during boot. To clear this condition, close the MRL for the PCI slot, then power off the PCI slot using the rad -o command. This allows future OLAR operations to succeed on this PCI slot.
IMPORTANT: The installation process varies depending on what method for installing the PCI card is selected. PCI I/O card installation procedures should be downloaded from the http://docs.hp.com/ Web site. Background information and procedures for adding a new PCI I/O card using online addition are found in the Interface Card OL* Support Guide.
PCI I/O OL* Card Methods
There are three methods for performing OL* operations on PCI I/O cards.
pdweb
The Peripheral Device Tool (pdweb) Web-based method of performing OL*.
olrad
The command line method of performing OL*.
Attention Button
The hardware system slot-based method of performing OL*.
Adding a PCI I/O Card Using the Attention Button
The following are prerequisites for this procedure:
Drivers for the card have already been installed.
No drivers are associated with the slot.
The green power LED is steady OFF. If the empty slot is in the ON state, use the olrad command or the pdweb tool to power the slot OFF.
The yellow attention LED is steady OFF, or is blinking if a user has requested the slot location.
Refer to the host bus adapter (HBA) documentation for details on card installation.
Run the olrad -q command to determine the status of all the PCI I/O slots.
Obtain a copy of the interface card guide for instructions on preparing the operating system for the online addition of the PCI I/O card before attempting to insert a PCI I/O card into the PCI-X card cage assembly backplane slot.
This procedure describes how to perform an online addition of a PCI card using the attention button for cards whose drivers support online add or replacement (OLAR). The attention button is also referred to as the doorbell.
1. Remove the top cover.
2. Remove the PCI bulkhead filler panel.
3. Flip the PCI manual retention latch (MRL) for the card slot to the open position. Refer to
Figure 3-19.
4. Install the new PCI card in the slot.
NOTE: Apply a slow, firm pressure to properly seat the card into the backplane.
5. Flip the PCI MRL for the card slot to the closed position.
CAUTION: Working out of sequence or not completing the actions within each step could cause the system to crash.
Do not press the attention button until the latch is locked.
6. Press the attention button.
The green power LED will start to blink.
Figure 3-19 PCI I/O Slot Details
(Figure callouts: Manual Release Latch closed and open positions, Power LED (green), Attention LED (yellow), OL* Attention Button)
7. Wait for the green power LED to stop blinking.
8. Check for errors in the hotplugd daemon log file (default: /var/adm/hotplugd.log).
The critical resource analysis (CRA) performed during an attention-button-initiated add action is very restrictive; rather than complete, the action will fail in order to protect critical resources from being impacted.
For finer control over CRA actions use pdweb or the olrad command. Refer to the Interface Card OL* Support Guide located on the Web at http://docs.hp.com for details.
9. Replace the top cover.
10. Connect all cables to the installed PCI card.
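As a quick check from HP-UX, the slot state can be confirmed before and after the addition with the command named earlier in this section:
# olrad -q          # display the status of all the PCI I/O slots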
Installing an A6869B VGA/USB PCI Card in a Server
The A6869B VGA/USB PCI card is a dual function combo card, hosting VGA and universal serial bus (USB) controllers. Both of these devices sit behind a PCI-PCI bridge. The A6869B VGA/USB PCI card operates at the conventional 66MHz/64 bit PCI rate and is universally keyed. All signalling and form factors conform to the PCI Local Bus Specification 2.3. The VGA controller has 128Mbits of DDR-1 RAM for use as a frame buffer.
The A6869B VGA/USB PCI card can be installed into any slot in a PCI/PCI-X backplane.
IMPORTANT: If you are installing the A6869B in HP servers based on the sx1000 chipset, such as HP Superdome, rx7620, or rx8620, the system firmware must be updated to a minimum revision of 3.88.
IMPORTANT: Search for an available PCI slot that supports the conventional clock speed, to conserve the higher speed PCI-X card slots for PCI-X cards that utilize the higher bandwidth. This applies to mid-range as well as high-end HP server I/O PCI-X backplanes.
Figure 3-20 PCI/PCI-X Card Location
IMPORTANT: Some PCI I/O cards, such as the A6869B VGA/USB PCI card, cannot be added or replaced online (while Windows® remains running). For these cards, you must shut down Windows® on the nPartition before performing the card replacement or addition. See the section
on Shutting Down nPartitions and Powering off Hardware Components in the appropriate service guide.
1. If the A6869B VGA/USB PCI card is currently not installed, follow proper procedures to shut down the nPartition and power-off the appropriate PCI power domain.
2. Locate a vacant conventional clock speed PCI slot where the A6869B VGA/USB PCI card will reside.
3. Position the PCI card over the empty slot, observing that edge connector keyways match on the PCI backplane connector.
4. Using a slow firm pressure, seat the card down into the slot.
5. Connect the monitor, mouse, and keyboard cables to the card.
6. Connect power, and turn on the monitor.
7. Follow proper procedures to power-on the PCI power domain and boot the Windows®
nPartition.
Once Windows® has completely booted, the video, keyboard and mouse are ready for use.
Troubleshooting the A6869B VGA/USB PCI Card
The following provides some troubleshooting solutions and a URL to a useful reference site.
No console display (hardware problem):
* Must have supported power enabled.
* Must have a functional VGA/USB PCI card.
* Must have a functional PCI slot. Select another slot on the same partition/backplane.
* Must have the VGA/USB PCI card firmly seated in the PCI backplane slot.
* Must have a supported monitor.
* Must have verified cable connections to the VGA/USB PCI card.
Black screen, no text displayed, or display unreadable:
* Ensure system FW supports the VGA/USB PCI card.
* Ensure graphics resolution is compatible and set correctly.
Reference URL
There are many features available for HP Servers at this website including links to download Windows® Drivers.
HP Servers Technical Support
http://www.hp.com/support/itaniumservers
Cabling and Power Up
After the system has been unpacked and moved into position, it must be connected to a source of AC power. The AC power must be checked for the proper voltage before the system is powered up. This chapter describes these activities.
Checking the Voltage
This section provides voltage check information for use at the customer site. The emphasis is on measuring the voltages at the power cord plug, which is specified as an IEC 320 C19 type plug. This end plugs directly into the back of the server chassis.
NOTE: Perform these procedures for each power cord that will be plugged directly into the back of the server. If you do not obtain the expected results from this procedure during the voltage check, refer to the section titled "Voltage Check (Additional Procedure)" (page 71).
Preface
The server requires a minimum of two power cords. To enable full power redundancy, four power cords may be used. When using four power cords, dual power sources may be used to provide additional power source protection.
Power cords are designated and labeled A0, A1, B0, and B1. Cords A0 and B0 should be energized from the same power source, and cords A1 and B1 should be energized from a second, independently qualified power source. The cord labeling corresponds to the labeling at the server's power receptacles.
Voltage Range Verification of Receptacle
Use this procedure to measure the voltage between L1 and L2, L1 to ground, and L2 to ground. Refer to Figure 3-21 for voltage reference points when performing the following measurements.
Figure 3-21 Voltage Reference Points for IEC 320 C19 Plug
IMPORTANT: Perform these measurements for every power cord that plugs into the server.
1. Measure the voltage between L1 and L2. This is considered to be a phase-to-phase measurement in North America. In Europe and certain parts of Asia-Pacific, this measurement is referred to as a phase-to-neutral measurement. The expected voltage is between 200 and 240 V AC regardless of the geographic region.
2. Measure the voltage between L1 and ground. In North America, verify that this voltage is between 100–120 V AC. In Europe and certain parts of Asia-Pacific, verify that this voltage is between 200–240 V AC.
3. Measure the voltage between L2 and ground. In North America, verify that this voltage is between 100–120 V AC. In Europe and certain parts of Asia-Pacific, verify that this voltage is 0 (zero) V AC.
Table 3-4 provides single phase voltage measurement examples dependent on the geographic
region where these measurements are taken.
Table 3-4 Single Phase Voltage Examples
Measurement     Japan     North America     Europe1
L1-L2           210V      208V or 240V      230V
L1-GND          105V      120V              230V
L2-GND          105V      120V              0V
1. In some European countries there may not be a polarization.
Verifying the Safety Ground (Single Power Source)
Use this procedure to measure the voltage level between A0 and A1. It also verifies the voltage level between B0 and B1. Take measurements between ground pins. Refer to Figure 3-22 for ground reference points when performing these measurements.
Figure 3-22 Safety Ground Reference Check
WARNING! SHOCK HAZARD
Risk of shock hazard while testing primary power.
Use properly insulated probes.
Be sure to replace access cover when finished testing primary power.
1. Measure the voltage between A0 and A1 as follows:
1. Take the AC voltage down to the lowest scale on the volt meter.
2. Insert the probe into the ground pin for A0.
3. Insert the other probe into the ground pin for A1.
4. Verify that the measurement is between 0-5 V AC.
If the measurement is 5 V or greater, escalate the situation. Do not attempt to plug the power cord into the server cabinet.
2. Measure the voltage between B0 and B1 as follows:
1. Take the AC voltage down to the lowest scale on the volt meter.
2. Insert the probe into the ground pin for B0.
3. Insert the other probe into the ground pin for B1.
4. Verify that the measurement is between 0-5 V AC.
If the measurement is 5 V or greater, escalate the situation. Do not attempt to plug the power cord into the server cabinet.
Verifying the Safety Ground (Dual Power Source)
Use this procedure to measure the voltage level between A0 and A1, between B0 and B1, between A0 and B0, and between A1 and B1. Take all measurements between ground pins. Refer to
Figure 3-23 for ground reference points when performing these measurements.
Figure 3-23 Safety Ground Reference Check
WARNING! SHOCK HAZARD
Risk of shock hazard while testing primary power.
Use properly insulated probes.
Be sure to replace access cover when finished testing primary power.
1. Measure the voltage between A0 and A1 as follows:
1. Take the AC voltage down to the lowest scale on the volt meter.
2. Insert the probe into the ground pin for A0.
3. Insert the other probe into the ground pin for A1.
4. Verify that the measurement is between 0-5 V AC.
If the measurement is 5 V or greater, escalate the situation. Do not attempt to plug the power cord into the server cabinet.
2. Measure the voltage between B0 and B1 as follows:
1. Take the AC voltage down to the lowest scale on the volt meter.
2. Insert the probe into the ground pin for B0.
3. Insert the other probe into the ground pin for B1.
4. Verify that the measurement is between 0-5 V AC.
If the measurement is 5 V or greater, escalate the situation. Do not attempt to plug the power cord into the server cabinet.
3. Measure the voltage between A0 and B0 as follows:
1. Take the AC voltage down to the lowest scale on the volt meter.
2. Insert the probe into the ground pin for A0.
3. Insert the other probe into the ground pin for B0.
4. Verify that the measurement is between 0-5 V AC.
If the measurement is 5 V or greater, escalate the situation. Do not attempt to plug the power cord into the server cabinet.
4. Measure the voltage between A1 and B1 as follows:
1. Take the AC voltage down to the lowest scale on the volt meter.
2. Insert the probe into the ground pin for A1.
3. Insert the other probe into the ground pin for B1.
4. Verify that the measurement is between 0-5 V AC.
If the measurement is 5 V or greater, escalate the situation. Do not attempt to plug the power cord into the server cabinet.
Voltage Check (Additional Procedure)
The voltage check ensures that all phases (and neutral, for international systems) are connected correctly to the cabinet and that the AC input voltage is within limits.
Perform this procedure if the previous voltage check procedure did not yield the expected results.
NOTE: If you use an uninterruptible power supply (UPS), refer to the applicable UPS documentation for information on connecting the server and checking the UPS output voltage. UPS user documentation is shipped with the UPS. Documentation is also available at:
http://www.hp.com/racksolutions
1. Verify that site power is OFF.
2. Open the site circuit breakers.
3. Verify that the receptacle ground connector is connected to ground. Refer to Figure 3-24 for connector details.
4. Set the site power circuit breaker to ON.
Figure 3-24 Wall Receptacle Pinouts
WARNING! SHOCK HAZARD
Risk of shock hazard while testing primary power.
Use properly insulated probes.
Be sure to replace access cover when finished testing primary power.
5. Verify that the voltage between receptacle pins X and Y is between 200 and 240V AC.
6. Set the site power circuit breaker to OFF.
7. Route and connect the server power connector to the site power receptacle.
For locking type receptacles, line up the key on the plug with the groove in the receptacle.
Push the plug into the receptacle and rotate to lock the connector in place.
WARNING! Do not set site AC circuit breakers serving the processor cabinets to ON before verifying that the cabinet has been wired into the site AC power supply correctly. Failure to do so may result in injury to personnel or damage to equipment when AC power is applied to the cabinet.
8. Set the site power circuit breaker to ON.
9. Set the server power to ON.
10. Check that the indicator light on each power supply is lit.
Connecting AC Input Power
The server can receive AC input power from two different AC power sources. If two separate power sources are available, the server can be plugged into the separate power sources, increasing system reliability if one power source fails. The main power source is defined to be A0 and B0. The redundant power source is defined to be A1 and B1. See Figure 3-25 for the AC power input label scheme.
NOTE: When running the server with a single power source, either A0 and B0 or A1 and B1 should be used. All other combinations are not supported. Either combination requires two power cords.
Figure 3-25 AC Power Input Labeling
(Figure callouts: MP/SCSI Core I/O card Slot 0, MP/SCSI Core I/O card Slot 1)
The server has two power cord configurations:
All four line cords (preferred configuration)
Cords A0 and B0 only
A single-line-cord configuration is not allowed.
The power cord configuration is passed to the operating system using the pwrgrd (Power Grid) command. Each of the five selections in the pwrgrd command matches one of the configurations. Select the option appropriate for the actual line cord configuration. With the correct configuration selected, the LEDs should be green. When the pwrgrd command is invoked, the following menu is displayed.
MP:CM> pwrgrd
The current power grid configuration is: Single grid
Power grid configuration preference.
    1. Single grid
    2. Dual grid
Select Option:
Figure 3-26 Distribution of Input Power for Each Bulk Power Supply
(Figure callouts: BPS 0, BPS 1, power inputs A0, A1, B0, B1, Power Source A, Power Source B)
WARNING! Voltage is present at various locations within the server whenever a power source is connected. This voltage is present even when the main power switch is in the off position. To completely remove power, all power cords must be removed from the server. Failure to observe this warning could result in personal injury or damage to equipment.
CAUTION: Do not route data and power cables together in the same cable management arm.
Do not route data and power cables in parallel paths in close proximity to each other. The suggested minimum distance between the data and power cables is 3 inches (7.62 cm).
The power cord has current flowing through it, which creates a magnetic field. The potential to induce electromagnetic interference in the data cables exists, which can cause data corruption.
NOTE: Label the AC power cords during the installation. One suggestion is to use tie wraps that have the flag molded into the tie wrap. The flag can be labeled using the appropriate two characters to represent the particular AC power input (for example, A0). Another suggestion would be to use color coded plastic bands. Use one color to represent the first pair A0/A1 and another color to represent the second pair B0/B1 (provided a second power source is available at the customer site).
NOTE: System firmware will prevent boot when a single power cord configuration is detected.
Installing The Line Cord Anchor (for rack mounted servers)
The line cord anchor is attached to the rear of the server when rack mounted. It provides a method to secure the line cords to the server preventing accidental removal of the cords from the server.
Two Cell Server Installation (rp7410, rp7420, rp7440, rx7620, rx7640)
There are 3 studs with thumb nuts located at the rear of the server chassis. The line cord anchor installs on these studs.
To install the line cord anchor:
1. Remove and retain the thumb nuts from the studs.
2. Install the line cord anchor over the studs. Refer to Figure 3-27: "Two Cell Line Cord Anchor (rp7410, rp7420, rp7440, rx7620, rx7640)".
3. Tighten the thumb nuts onto the studs.
4. Weave the power cables through the line cord anchor. Leave enough slack to allow the plugs
to be disconnected from the receptacles without removing the cords from the line cord anchor.
5. Use the supplied straps to attach the cords to the anchor. Refer to Figure 3-28: "Line Cord Anchor Attach Straps".
Figure 3-27 Two Cell Line Cord Anchor (rp7410, rp7420, rp7440, rx7620, rx7640)
Figure 3-28 Line Cord Anchor Attach Straps
Core I/O Connections
Each server can have up to two core I/O board sets installed which allows for two partitions to operate, or MP core I/O redundancy in a single or dual partition configuration. Each core I/O board set consists of two boards: the MP/SCSI board and the LAN/SCSI board. The MP/SCSI board is oriented vertically and accessed from the back of the server. The LAN/SCSI is accessed from the PCI expansion card bay. Only the primary core I/O board set (MP/SCSI slot 1 and LAN/SCSI slot 8, chassis 1) is required for a single partition implementation. The secondary MP/SCSI board is not necessary for full operation; however, without the secondary MP/SCSI and LAN/SCSI boards, only the top two internal disks can be accessed.
MP/SCSI I/O Connections
The MP/SCSI board is required to update firmware, access the console, turn partition power on or off, access one of the HDDs and one of the removable media devices, and utilize other features
of the system. For systems running a single partition, one MP/SCSI board is required. A second MP/SCSI board is required for a dual-partition configuration, or if you want to enable primary or secondary MP failover for the server.
Connections to the MP/SCSI board include the following:
DB9 connector for Local Console
10/100 Base-T LAN RJ45 connector (for LAN and Web Console access)
This LAN uses standby power and is active when AC is present and the front panel power switch is off.
Internal LVD Ultra 320 SCSI channel for connections to internal mass storage
Internal SE Ultra SCSI channel for connection to an internal removable media device.
LAN/SCSI Connections
The LAN/SCSI board is a PCI form factor card that provides the basic external I/O connectivity for the system.
Connections to the LAN/SCSI board include the following:
PCI-X to PCI-X bridge for multi-device compatibility
Two LVD Ultra 320 SCSI channel controllers: one for internal connection to one of the HDD devices, and the other is available for connection to an external device
Two 10/100/1000 Base-T LAN RJ45 connectors
The primary LAN interface is located on the LAN/SCSI board installed in the right-most slot when viewing the system from the back.
Management Processor Access
NOTE: The primary MP/SCSI board is located in the lower MP/SCSI board slot.
Setting Up the Customer Engineer Tool (PC)
The CE Tool is usually a laptop. It allows communication with the Management Processor (MP) in the server. The MP monitors the activity of either a one-partition or a multiple-partition configuration.
During installation, communicating with the MP enables such tasks as:
Verifying that the components are present and installed correctly
Setting the MP LAN configurations
Shutting down cell board power
Establish communication with the MP by connecting the CE Tool to the local RS-232 port on the MP core I/O card.
Setting CE Tool Parameters
After powering on the CE Tool, ensure the communications settings are as follows:
8 data bits/ no parity
9600 baud
na (Receive)
na (Transmit)
If the CE Tool is a laptop using Reflection 1, ensure communications settings are in place, using the following procedure:
1. From the Reflection 1 Main screen, pull down the Connection menu and select Connection Setup.
2. Select Serial Port.
3. Select Com1.
4. Check the settings and change, if required.
Go to More Settings to set Xon/Xoff. Click OK to close the More Settings window.
5. Click OK to close the Connection Setup window.
6. Pull down the Setup menu and select Terminal (under the Emulation tab).
7. Select the VT100 HP terminal type.
8. Click Apply.
This option is not highlighted if the terminal type you want is already selected.
9. Click OK.
Connecting the CE Tool to the Local RS232 Port on the MP
This connection enables direct communications with the MP. Only one window can be created on the CE Tool to monitor the MP. When enabled, it provides direct access to the MP and any partition.
Use the following procedure to connect the CE Tool to the Local RS-232 Port on the MP:
1. Connect one end of a null modem cable (9-pin to 9-pin) (Part Number 5182-4794) to the cable
connector labeled CONSOLE.
2. Connect the other end of the RS-232 cable to the CE Tool.
Turning on Housekeeping Power and Logging in to the MP
After connecting the serial device, it is possible to log in to the Management Processor (MP). +3.3 V DC Housekeeping power (HKP), also known as standby power, is active as soon as AC power is applied to the server. Because the MP uses housekeeping power, it is possible to log in to the MP even when the power switch is in the OFF position. The power switch is a DC power switch that controls +48 V DC.
Before powering up the server for the first time:
1. Verify that the AC voltage at the input source is within specifications for each server being
installed.
2. If you have not already done so, power on the serial display device.
The preferred tool is the CE tool running Reflection 1.
To set up a communications link and log in to the MP:
1. Apply power to the server cabinet.
On the front of the server, a solid green Power LED and a solid green MP Status LED will illuminate after about 30 seconds. Refer to Figure 3-29.
Figure 3-29 Front Panel Display
2. Check the bulk power supply LED for each BPS.
When the breakers are on, they distribute power to the BPSs. AC power is present at the BPSs:
When power is first applied, the BPS LEDs flash amber.
After 30 seconds have elapsed, the flashing amber BPS LED for each BPS becomes a flashing green LED.
Refer to power cord policies to interpret LED indicators.
3. Log in to the MP:
a. Enter Admin at the login prompt. The login is case sensitive.
It takes a few moments for the MP prompt to display. If it does not, be sure the laptop serial device settings are correct: 8 bits, no parity, 9600 baud, and na for both Receive and Transmit. Then, try again.
b. Enter Admin at the password prompt. The password is case sensitive.
The MP Main Menu is displayed:
Figure 3-30 MP Main Menu
Configuring LAN Information for the MP
This section describes how to set and verify the server management processor (MP) LAN port information. LAN information includes the MP network name, the MP IP address, the subnet mask, and the gateway address. This information is provided by the customer.
To set the MP LAN IP address:
1. At the MP Main Menu prompt (MP>), enter cm to enter the MP Command Menu.
NOTE: If the Command Menu is not shown, enter q to return to the MP Main Menu, then enter cm.
2. From the MP Command Menu prompt (MP:CM>) enter lc (for LAN configuration).
The screen displays the default values and asks if you want to modify them. Write down the information or log it in a file, as it may be required for future troubleshooting. See Figure 3-31.
Figure 3-31 The lc Command Screen
MP:CM> lc
This command modifies the LAN parameters.
Current configuration of MP customer LAN interface
MAC address : 00:12:79:b4:03:1c
IP address  : 15.11.134.222   0x0f0b86de
Hostname    : metro-s
Subnet mask : 255.255.248.0   0xfffff800
Gateway     : 15.11.128.1     0x0f0b8001
Status      : UP and RUNNING
Link        : Connected 100Mb Half Duplex
Do you want to modify the configuration for the MP LAN (Y/[N]) q
NOTE: The value in the IP address field has been set at the factory. Obtain the LAN IP address from the customer.
3. At the prompt, Do you want to modify the configuration for the MP LAN?, enter Y.
The current IP address is shown, and the following prompt displays: Do you want to modify it? (Y/[N])
4. Enter Y.
5. Enter the new IP address.
The customer must provide this address for network interface 0.
6. Confirm the new address.
7. Enter the MP Hostname.
This is the host name for the customer LAN. The name can be as many as 64 characters in length and can include alphanumeric characters, - (dash), _ (underscore), . (period), or a space. HP recommends that the name be a derivative of the complex name. For example, Acme.com_MP.
8. Enter the LAN parameters for the Subnet mask and Gateway address fields.
This information must come from the customer.
When this step is completed, the system will indicate that the parameters have been updated and return to the MP Command Menu prompt (MP:CM>).
9. To check the LAN parameters and status, enter the ls command at the MP Command Menu prompt (MP:CM>).
10. A screen similar to the following is displayed, allowing verification of the settings:
Figure 3-32 The ls Command Screen
11. To return to the MP main menu, enter ma.
12. To exit the MP, enter x at the MP main menu.
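As a quick reference, the complete sequence is sketched below; the parenthetical notes are explanatory and are not typed at the prompts.
MP> cm         (enter the MP Command Menu)
MP:CM> lc      (set the IP address, host name, subnet mask, and gateway)
MP:CM> ls      (verify the LAN parameters and status)
MP:CM> ma      (return to the MP Main Menu)
MP> x          (exit the MP)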
Accessing the Management Processor via a Web Browser
Web browser access is an embedded feature of the MP/SCSI card. The Web browser enables access to the server through the LAN port on the core I/O card. MP configuration must be done from an ASCII console connected to the Local RS-232 port.
NOTE: The MP/SCSI card has a separate LAN port from the system LAN port. It requires a separate LAN drop, IP address, and networking information from that of the port used by HP-UX.
Before starting this procedure, the following information is required:
IP address for the MP LAN
Subnet mask
Gateway address
Host name (this is used when messages are logged or printed)
To configure the LAN port for a Web browser, perform the following steps:
1. Connect to the MP using a serial connection.
2. Configure the MP LAN. Refer to “Configuring LAN Information for the MP”.
3. Type CM to enter the Command Menu.
4. Type SA at the MP:CM> prompt to display and set MP remote access.
Figure 3-33 Example sa Command
5. Enter W to modify web access mode.
6. Enter option 2 to enable web access.
7. Launch a Web browser on the same subnet using the IP address for the MP LAN port.
Figure 3-34 Browser Window
8. Select the emulation type you want to use.
9. Click anywhere on the Zoom In/Out title bar to generate a full screen MP window.
10. Log in to the MP when the login window appears.
Access to the MP via a Web browser is now possible.
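A condensed sketch of the steps above, assuming the MP LAN is already configured; the parenthetical notes are explanatory and are not typed at the prompts.
MP> cm         (enter the Command Menu)
MP:CM> sa      (display and set MP remote access)
W              (select web access mode)
2              (enable web access)
A Web browser on the same subnet can then be pointed at the MP LAN IP address.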
Verifying the Presence of the Cell Boards
To perform this activity, either connect to the MP using a console, or connect the CE Tool (laptop) to the RS-232 Local port on the MP/SCSI card.
After logging in to the MP, verify that the MP detects the presence of all the cells installed in the cabinet. It is important for the MP to detect the cell boards. If it does not, the partitions will not boot.
To determine if the MP detects the cell boards:
1. At the MP prompt, enter cm.
This displays the Command Menu. The Command Menu enables viewing or modifying the configuration and viewing the utilities controlled by the MP.
To view a list of the commands available, enter he. Press Enter to see more than one screen of commands. Use the Page Up and Page Down keys to view the previous or next screen of commands. To exit the Help Menu, enter q.
2. From the command prompt (MP:CM>), enter du.
The du command displays the MP bus topology. A screen similar to the following is displayed:
Figure 3-35 The du Command Screen
There will be an asterisk (*) in the column marked MP.
3. Verify that there is an asterisk (*) for each of the cells installed in the cabinet, by comparing what is in the Cells column with the cells physically located inside the cabinet.
Figure 3-35 shows that cells are installed in slots 0 and 1. In the cabinet, cells should be
physically located in slots 0 and 1.
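In summary, the check is the following two-command sketch; the parenthetical notes are explanatory and are not typed at the prompts.
MP> cm         (enter the Command Menu)
MP:CM> du      (display the MP bus topology; each detected cell is marked with an asterisk)
If a physically installed cell is not marked, resolve the issue before proceeding; partitions will not boot without it.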
System Console Selection
Each operating system requires that the correct console type be selected from the firmware selection menu. The following section describes how to determine the correct console device.
If an operating system is being installed or the system configuration is being changed, the system console setting must be checked to ensure it matches the hardware and OS. Not checking the console selection can result in the system using an unexpected device as a console, which can appear as a system hang when booting.
1. Determine the console you want to use.
Depending on your operating system and your hardware you can select one of two possible devices as your system console. The possibilities are:
Management Processor (MP) Serial Port
VGA device
2. Select the appropriate console device (deselect unused devices):
a. Choose the "Boot option maintenance menu" choice from the main Boot Manager Menu.
b. Select the Console Output, Input or Error devices menu item for the device type you are modifying:
“Select Active Console Output Devices”
“Select Active Console Input Devices”
“Select Active Console Error Devices”
c. Available devices will be displayed for each menu selection. Figure 3-36 shows a typical
output of selecting the Console Output Devices menu.
Figure 3-36 Console Output Device menu
d. Choose the correct device for your system and deselect others. See “Interface Differences
Between Itanium-based Systems” for details about choosing the appropriate device.
e. Select "Save Settings to NVRAM" and then "Exit" to complete the change.
f. A system reset is required for the changes to take effect.
VGA Consoles
Any device that has a Pci section in its path and does not have a Uart section will be a VGA device. If you require a VGA console, choose the device and unmark all others. Figure 3-36 shows that a VGA device is selected as the console.
Interface Differences Between Itanium-based Systems
Each Itanium-based system has a similar interface with minor differences. Some devices may not be available on all systems depending on system design or installed options.
Other Console Types
Any device that has a Uart section but no Pci section is a system serial port. To use the system serial port (if available) as your console device, select the system serial device entry that matches your console type (PcAnsi, Vt100, Vt100+, VtUtf8) and deselect everything else.
If you choose either a system or MP serial port, HP recommends that you use a vt100+ capable terminal device.
Additional Notes on Console Selection
Each Operating System makes decisions based on the EFI Boot Maintenance Manager menu’s Select Active Console selections to determine where to send its output. If incorrect console devices
are chosen, the OS may fail to boot or will boot with output directed to the wrong location. Therefore, any time new potential console devices are added to the system or NVRAM on the system is cleared, console selections should be reviewed to ensure that they are correct.
Configuring the Server for HP-UX Installation
Installation of the HP-UX operating system requires the server hardware to have a specific configuration. If the server's rootcell value is incorrectly set, an installation of HP-UX will fail.
To verify and set the proper rootcell value:
1. At the EFI Shell interface prompt, enter the rootcell command with no arguments. The current value for rootcell will be displayed. If the value is '1', continue with installing HP-UX.
2. To set the rootcell value to ‘1’, at the EFI Shell interface prompt, enter ‘rootcell 1’.
3. At the EFI Shell interface prompt, enter reset to save the new rootcell value.
4. Continue with installation of HP-UX.
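A minimal sketch of this sequence at the EFI Shell prompt follows; the parenthetical notes are explanatory and are not typed.
Shell> rootcell        (display the current rootcell value)
Shell> rootcell 1      (set the value to 1 if it is not already)
Shell> reset           (reset so the new value takes effect)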
Booting the Server
Power on the server either by pressing the power switch on the front panel or by using the PE command to power on the cabinet or complex at the management processor Command Menu.
If you are using a LAN crossover cable with the laptop, review server activity for each configured partition while the server powers on and boots. You can open windows for the complex and for each partition. HP recommends that at least two windows be opened:
1. A window showing all activity in the complex. Following the installation procedure in this document causes a window to be open at startup.
To display activity for the complex:
1. Open a separate Reflection window and connect to the MP.
2. From the MP Main Menu, select the VFP command with the s option.
2. A window showing activity for a single partition.
To display activity for each partition as it powers on:
1. Open a separate Reflection window and connect to the MP.
2. Select the VFP command and select the desired partition to view.
There should be no activity on the screen at this point in the installation process.
NOTE: You cannot open more than one window using a serial display device.
To power on the server:
1. At the MP:CM> prompt, use the PE X command to power on the complex, or the PE T command for each cabinet. The following events occur:
Power is applied to the server.
Processor-dependent code (PDC) starts to run on each cell.
The cell self-test executes.
Hardware initializes for the server.
Console communication is established.
2. After the cell has joined the partition or after Boot Is Blocked (BIB) is displayed at the Virtual Front Panel (VFP), return to the MP Main Menu by pressing Ctrl+B.
3. Enter co to enter console mode.
4. Enter the partition number of the partition to boot.
5. Press Enter.
Selecting a Boot Partition Using the MP
At this point in the installation process, the hardware is set up, the MP is connected to the LAN, the AC and DC power have been turned on, and the self-test is completed. Now the configuration can be verified.
After the DC power on and the self-test is complete, use the MP to select a boot partition.
1. From the MP Main Menu, enter cm.
2. From the MP Command Menu, enter bo.
3. Select the partition to boot. Partitions can be booted in any order.
4. Return to the MP Main Menu by entering ma from the MP Command Menu.
5. Enter the console by typing co at the MP Main Menu.
Exit the MP to return automatically to the Extensible Firmware Interface (EFI) Shell menu.
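A condensed sketch of this procedure follows; the parenthetical notes are explanatory and are not typed at the prompts.
MP> cm         (enter the MP Command Menu)
MP:CM> bo      (select the partition to boot)
MP:CM> ma      (return to the MP Main Menu)
MP> co         (enter the console for the selected partition)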
Verifying the System Configuration Using the EFI Shell
From the EFI main menu, enter the POSSE shell by entering co. Typing help will list all the command categories available in the shell:
configuration -- goes to the configuration menu, where system configuration can be reset, configured or viewed.
memory -- memory related commands.
Once the parameters have been verified, enter x to return to the EFI Main Menu.
Booting HP-UX Using the EFI Shell
If the Instant Ignition was ordered, HP-UX will have been installed in the factory at the Primary Path address. If HP-UX is at a path other than the Primary Path, do the following:
1. Type cm to enter the Command Menu from the Main Menu.
2. At the MP:CM> prompt, enter bo. This command boots the selected partition. Select a partition to boot.
3. Return to the Main Menu: MP:CM> ma
4. From the Main Menu, go to the Consoles Menu: MP> co
Select a partition number.
5. Return to the Main Menu by pressing Ctrl+B.
6. At the EFI Shell prompt, select the file system to boot. Generally this is fs0.
Shell> fs0:
7. At the fs0 prompt, type HPUX to boot the HP-UX operating system:
fs0:\> hpux
NOTE: If the partition fails to boot or if the server was shipped without Instant Ignition, booting from a DVD that contains the operating system and other necessary software might be required.
Adding Processors with Instant Capacity
The Instant Capacity program provides access to additional CPU resources beyond the amount that was purchased for the server. This provides the ability to activate additional CPU power for unexpected growth and unexpected spikes in workloads.
Internally, Instant Capacity systems physically have more CPUs, called Instant Capacity CPUs, than the number of CPUs actually purchased. These Instant Capacity CPUs reside in the purchased system, but they belong to HP and therefore are HP assets. A nominal "Right-To-Access Fee" is paid to HP for each Instant Capacity CPU in the system. At any time, any number of Instant
Capacity CPUs can be “activated.” Activating an Instant Capacity CPU automatically and instantaneously transforms the Instant Capacity CPU into an instantly ordered and fulfilled CPU upgrade that requires payment. After the Instant Capacity CPU is activated and paid for, it is no longer an Instant Capacity CPU, but is now an ordered and delivered CPU upgrade for the system.
The following list offers information needed to update to iCAP version 8.x:
HP-UX HWEnable11i - Hardware Enablement Patches for HP-UX11i v2, June 2006
B9073BA - B.11.23.08.00.00.95 - HP-UX iCOD Instant Capacity (iCAP)
Kernel entry - diag2 - module diag2 best [413F2ED6]
B8465BA - A.02.00.04 - HP WBEM Services for HP-UX
NPar Provider - B.11.23.01.03.00.06 - nPartition Provider
Current information on installing, configuring, and troubleshooting iCAP version 8.x is available at: http://docs.hp.com/en/B9073-90129/index.html.
Information on the latest release notes for iCAP version 8.x can be found at:
http://docs.hp.com/en/B9073-90134/index.html.
NOTE: Ensure that the customer is aware of the Instant Capacity email requirements. Refer to http://docs.hp.com for further details.
Installation Checklist
The checklist in Table 3-5 is an installation aid. Use it only after you have installed several systems by following the detailed procedures described in the body of this document. This checklist is a compilation of the tasks described in this manual, and is organized as follows:
Procedures     The procedures outlined in this document, in order
In-process     The portion of the checklist that allows you to comment on the current status of a procedure
Completed      The final check to ensure that a step has been completed, and comments
Major tasks are in bold type; subtasks are indented.
Table 3-5 Factory-Integrated Installation Checklist
Procedure     In-process (Initials / Comments)     Completed (Initials / Comments)
Obtain LAN information
Verify site preparation
Site grounding verified
Power requirements verified
Check inventory
Inspect shipping containers for damage
Unpack SPU cabinet
Table 3-5 Factory-Integrated Installation Checklist (continued)
Procedure     In-process     Completed
Allow proper clearance
Cut polystrap bands
Remove cardboard top cap
Remove corrugated wrap from the pallet
Remove four bolts holding down the ramps and remove the ramps
Remove antistatic bag
Check for damage (exterior and interior)
Position ramps
Roll cabinet off ramp
Unpack the peripheral cabinet (if ordered)
Unpack other equipment
Remove and dispose of packaging material
Move cabinet(s) and equipment to computer room
Move cabinets into final position
Position cabinets next to each other (approximately 1/2 inch)
Adjust leveling feet
Install anti-tip plates
Inspect cables for proper installation
Set up CE tool and connect to Remote RS-232 port on MP
Apply power to cabinet (Housekeeping)
Check power to BPSs
Log in to MP
Set LAN IP address on MP
Connect customer console
Set up network on customer console
Verify LAN connection
Verify presence of cells
Power on cabinet (48 V)
Table 3-5 Factory-Integrated Installation Checklist (continued)
Procedure     In-process     Completed
Verify system configuration and set boot parameters
Set automatic system restart
Boot partitions
Configure remote login (if required). See Appendix B.
Verify remote link (if required)
Install non-factory, integrated I/O cards (if required)
Select PCI card slot
Install PCI card
Verify installation
Route cables using the cable management arm
Install other peripherals (if required)
Perform visual inspection and complete installation
Set up network services (if required)
Enable iCOD (if available)
Final inspection of circuit boards
Final inspection of cabling
Area cleaned and debris and packing materials disposed of
Account for tools
Dispose of parts and other items
Make entry in Gold Book (recommended)
Customer acceptance and signoff (if required)
4 Booting and Shutting Down the Operating System
This chapter presents procedures for booting an operating system (OS) on an nPartition (hardware partition) and procedures for shutting down the OS.
Operating Systems Supported on Cell-based HP Servers
HP supports nPartitions on cell-based HP 9000 servers and cell-based HP Integrity servers. The following list describes the OSes supported on cell-based servers based on the HP sx2000 chipset.
HP 9000 servers have PA-RISC processors and include the following cell-based models based on the HP sx2000 chipset:
HP 9000 Superdome (SD16B, SD32B, and SD64B models)
HP rp8440
HP rp7440
These HP 9000 servers run HP-UX 11i Version 1 (B.11.11). Refer to “Booting and Shutting
Down HP-UX” (page 94) for details on booting an OS on these servers.
HP Integrity servers have Intel® Itanium® processors and include the following cell-based models based on the HP sx2000 chipset:
HP Integrity Superdome (SD16B, SD32B, and SD64B models)
HP rx8640
HP rx7640
All HP Integrity servers based on the HP sx2000 chipset run the following OSes:
HP-UX 11i Version 2 (B.11.23) — Refer to “Booting and Shutting Down HP-UX”
(page 94) for details.
Microsoft® Windows® Server 2003 — Refer to “Booting and Shutting Down Microsoft
Windows” (page 109) for details.
HP Integrity servers based on the HP sx2000 chipset run the following OSes only in nPartitions that have dual-core Intel® Itanium® processors:
HP OpenVMS I64 8.3 — Supported only in nPartitions that have dual-core Intel® Itanium® processors. Prior releases of OpenVMS I64 are not supported on servers based on the HP sx2000 chipset.
Refer to “Booting and Shutting Down HP OpenVMS I64” (page 105) for details.
Red Hat Enterprise Linux 4 Update 4 — On servers based on the HP sx2000 chipset, supported only in nPartitions that have dual-core Intel® Itanium® processors. Prior releases of Red Hat Enterprise Linux are not supported on servers based on the HP sx2000 chipset.
NOTE: Red Hat Enterprise Linux 4 will be supported soon after the release of cell-based HP Integrity servers with the Intel® Itanium® dual-core processor. It is not supported
on these servers when they first release.
Refer to “Booting and Shutting Down Linux” (page 114) for details.
SuSE Linux Enterprise Server 10 — On servers based on the HP sx2000 chipset, is
supported only in nPartitions that have dual-core Intel® Itanium® processors. Prior releases of SuSE Linux Enterprise Server are not supported on servers based on the HP sx2000 chipset.
NOTE: SuSE Linux Enterprise Server 10 is supported on HP rx8640 servers, and will be supported on other cell-based HP Integrity servers with the Intel® Itanium® dual-core
processor (rx7640 and Superdome) soon after the release of those servers.
Refer to “Booting and Shutting Down Linux” (page 114) for details.
NOTE: On servers based on the HP sx2000 chipset, each cell has a cell local memory (CLM) parameter, which determines how firmware may interleave memory residing on the cell. The supported and recommended CLM setting for the cells in an nPartition depends on the OS running in the nPartition. Some OSes support using CLM, and some do not. For details on CLM support for the OS you will boot in an nPartition, refer to the booting section for that OS.
System Boot Configuration Options
This section briefly discusses the system boot options you can configure on cell-based servers. You can configure boot options that are specific to each nPartition in the server complex.
HP 9000 Boot Configuration Options
On cell-based HP 9000 servers the configurable system boot options include boot device paths (PRI, HAA, and ALT) and the autoboot setting for the nPartition. To set these options from HP-UX, use the setboot command. From the BCH system boot environment, use the PATH command at the BCH Main Menu to set boot device paths, and use the PATHFLAGS command at the BCH
Configuration menu to set autoboot options. For details, issue HELP command at the appropriate BCH menu, where command is the command for which you want help.
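For illustration, a hedged sketch of using setboot from HP-UX follows; the device path shown is only an example (the Drive 0-1 path from Figure 3-17), so substitute the actual boot device path for the nPartition.
# setboot                      # display the current PRI, HAA, and ALT paths and the autoboot setting
# setboot -p 0/0/0/3/0.6.0     # set the primary (PRI) boot path (example path)
# setboot -b on                # turn autoboot on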
HP Integrity Boot Configuration Options
On cell-based HP Integrity servers, you must properly specify the ACPI configuration value, which affects the OS startup process and on some servers can affect the shutdown behavior. You also can configure boot device paths and the autoboot setting for the nPartition. The following list describes each configuration option:
Boot Options List
The boot options list is a list of loadable items available for you to select
from the EFI Boot Manager menu. Ordinarily, the boot options list includes the EFI Shell and one or more OS loaders.
The following example includes boot options for HP OpenVMS, Microsoft Windows, HP-UX, and the EFI Shell. The final item in the EFI Boot Manager menu, the Boot Configuration menu, is not a boot option. The Boot Configuration menu enables system configuration through a maintenance menu.
EFI Boot Manager ver 1.10 [14.61]
Please select a boot option

    HP OpenVMS 8.3
    EFI Shell [Built-in]
    Windows Server 2003, Enterprise
    HP-UX Primary Boot: 4/0/1/1/0.2.0
    Boot Option Maintenance Menu

    Use ^ and v to change option(s). Use Enter to select an option
NOTE: In some versions of EFI, the Boot Configuration menu is listed as the Boot Option Maintenance Menu.
To manage the boot options list for each system, use the EFI Shell, the EFI Boot Configuration menu, or OS utilities.
At the EFI Shell, the bcfg command supports listing and managing the boot options list for all OSs except Microsoft Windows. On HP Integrity systems with Windows installed the \MSUtil\nvrboot.efi utility is provided for managing Windows boot options from the EFI Shell. On HP Integrity systems with OpenVMS installed, the \efi\vms\vms_bcfg.efi and \efi\vms\vms_show utilities are provided for managing OpenVMS boot options.
The EFI Boot Configuration menu provides the Add a Boot Option, Delete Boot Option(s), and Change Boot Order menu items. (If you must add an EFI Shell entry to the boot options list, use this method.)
To save and restore boot options, use the EFI Shell variable command. The variable -save file command saves the contents of the boot options list to the specified file on an EFI disk partition. The variable -restore file command restores the boot options list from the specified file that was previously saved. Details also are available by entering help variable at the EFI Shell; a brief example appears below.
OS utilities for managing the boot options list include the HP-UX setboot command and the HP OpenVMS @SYS$MANAGER:BOOT_OPTIONS.COM command.
The OpenVMS I64 installation and upgrade procedures assist you in setting up and validating a boot option for your system disk. HP recommends that you allow the procedure to do this. Alternatively, you can use the @SYS$MANAGER:BOOT_OPTIONS.COM command (also referred to as the OpenVMS I64 Boot Manager utility) to manage boot options for your system disk. The OpenVMS I64 Boot Manager (BOOT_OPTIONS.COM) utility is a menu-based utility and is easier to use than EFI. To configure OpenVMS I64 booting on Fibre Channel devices, you must use the OpenVMS I64 Boot Manager utility (BOOT_OPTIONS.COM). For more information on this utility and other restrictions, refer to the HP OpenVMS for Integrity Servers Upgrade and Installation Manual.
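For example, the following EFI Shell sketch saves the boot options list to a file and later restores it; the file name bootopts.sav is arbitrary, and the file must reside on an EFI disk partition.

Shell> variable -save bootopts.sav
Shell> variable -restore bootopts.sav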
For details, refer to the following sections.
To set HP-UX boot options, refer to "Adding HP-UX to the Boot Options List" (page 95).
To set OpenVMS boot options, refer to "Adding HP OpenVMS to the Boot Options List" (page 105).
To set Windows boot options, refer to "Adding Microsoft Windows to the Boot Options List" (page 110).
To set Linux boot options, refer to "Adding Linux to the Boot Options List" (page 115).
Hyper-Threading nPartitions that have dual-core Intel® Itanium® processors can support Hyper-Threading. Hyper-Threading provides the ability for processors to create a second virtual core that allows additional efficiencies of processing. For example, a dual-core processor with Hyper-Threading active can simultaneously run four threads.
The EFI Shell cpuconfig command can enable and disable Hyper-Threading for an nPartition whose processors support it. Recent releases of the nPartition Commands and Partition Manager also support Hyper-Threading.
Details of the cpuconfig command are given below and are available by entering help cpuconfig at the EFI Shell.
cpuconfig threads — Reports Hyper-Threading status for the nPartition.
cpuconfig threads on — Enables Hyper-Threading for the nPartition. After enabling Hyper-Threading, the nPartition must be reset for Hyper-Threading to be active.
cpuconfig threads off — Disables Hyper-Threading for the nPartition. After disabling Hyper-Threading, the nPartition must be reset for Hyper-Threading to be inactive.
After enabling or disabling Hyper-Threading, the nPartition must be reset for the Hyper-Threading change to take effect. Use the EFI Shell reset command.
Enabled means that Hyper-Threading will be active on the next reboot of the nPartition. Active means that each processor core in the nPartition has a second virtual core that enables
simultaneously running multiple threads.
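For example, the following EFI Shell sequence (a sketch; command output is omitted and varies by system) reports Hyper-Threading status, enables Hyper-Threading, and resets the nPartition so the change becomes active.

Shell> cpuconfig threads
Shell> cpuconfig threads on
Shell> reset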
Autoboot Setting You can configure the autoboot setting for each nPartition either by
using the autoboot command at the EFI Shell, or by using the Set Auto Boot TimeOut menu item at the EFI Boot Option Maintenance menu.
To set autoboot from HP-UX, use the setboot command.
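For example, the following sketch displays the current autoboot setting from the EFI Shell and enables autoboot from a running HP-UX instance; the exact options vary by firmware revision, so check help autoboot at the EFI Shell and the setboot(1M) manpage.

Shell> autoboot
# setboot -b on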
ACPI Configuration Value—HP Integrity Server OS Boot On cell-based HP Integrity servers
you must set the proper ACPI configuration for the OS that will be booted on the nPartition.
To check the ACPI configuration value, issue the acpiconfig command with no arguments at the EFI Shell.
To set the ACPI configuration value, issue the acpiconfig value command at the EFI Shell, where value is either default or windows. Then reset the nPartition by issuing the reset
EFI Shell command for the setting to take effect.
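For example, the following EFI Shell sequence (a sketch with output omitted) checks the current ACPI configuration, sets it to windows, and resets the nPartition so the setting takes effect.

Shell> acpiconfig
Shell> acpiconfig windows
Shell> reset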
The ACPI configuration settings for the supported OSes are in the following list.

HP-UX ACPI Configuration: default On cell-based HP Integrity servers, to boot or install the HP-UX OS, you must set the ACPI configuration value for the nPartition to default. For details, refer to "ACPI Configuration for HP-UX Must Be default" (page 96).

HP OpenVMS I64 ACPI Configuration: default On cell-based HP Integrity servers, to boot or install the HP OpenVMS I64 OS, you must set the ACPI configuration value for the nPartition to default. For details, refer to "ACPI Configuration for HP OpenVMS I64 Must Be default" (page 107).

Windows ACPI Configuration: windows On cell-based HP Integrity servers, to boot or install the Windows OS, you must set the ACPI configuration value for the nPartition to windows. For details, refer to "ACPI Configuration for Windows Must Be windows" (page 112).

Red Hat Enterprise Linux ACPI Configuration: default On cell-based HP Integrity servers, to boot or install the Red Hat Enterprise Linux OS, you must set the ACPI configuration value for the nPartition to default. For details, refer to "ACPI Configuration for Red Hat Enterprise Linux Must Be default" (page 116).

SuSE Linux Enterprise Server ACPI Configuration: default On cell-based HP Integrity servers, to boot or install the SuSE Linux Enterprise Server OS, you must set the ACPI configuration value for the nPartition to default. For details, refer to "ACPI Configuration for SuSE Linux Enterprise Server Must Be default" (page 118).
ACPI Softpowerdown Configuration—OS Shutdown Behavior On HP rx7620, rx7640, rx8620,
and rx8640 servers, you can configure the nPartition behavior when an OS is shut down and halted. The two options are to have hardware power off when the OS is halted, or to have the nPartition be made inactive (all cells are in a boot-is-blocked state). The normal OS shutdown behavior on these servers depends on the ACPI configuration for the nPartition.
You can run the acpiconfig command with no arguments to check the current ACPI configuration setting; however, softpowerdown information is displayed only when it differs from normal behavior.
To change the nPartition behavior when an OS is shut down and halted, use either the acpiconfig enable softpowerdown EFI Shell command or the acpiconfig disable softpowerdown command, and then reset the nPartition to make the ACPI configuration change take effect.

acpiconfig enable softpowerdown When set on HP rx7620, rx7640, rx8620, and rx8640 servers, acpiconfig enable softpowerdown causes nPartition hardware to be powered off when the OS issues a shutdown for reconfiguration command (for example, shutdown -h or shutdown /s).
This is the normal behavior on HP rx7620, rx7640, rx8620, and rx8640 servers with a windows ACPI configuration setting.
When softpowerdown is enabled on HP rx7620, rx7640, rx8620, and rx8640 servers, if one nPartition is defined in the server, then halting the OS powers off the server cabinet, including all cells and I/O chassis. On HP rx7620, rx7640, rx8620, and rx8640 servers with multiple nPartitions, halting the OS from an nPartition with softpowerdown enabled causes only the resources on the local nPartition to be powered off.
To power on hardware that has been powered off, use the PE command at the management processor Command Menu.
acpiconfig disable softpowerdown When set on HP rx7620, rx7640, rx8620, and rx8640
servers, acpiconfig disable softpowerdown causes nPartition cells to remain at a boot-is-blocked state when the OS issues a shutdown for reconfiguration command (for example, shutdown -h or shutdown /s). In this case, an OS shutdown for reconfiguration makes the nPartition inactive.
This is the normal behavior on HP rx7620, rx7640, rx8620, and rx8640 servers with an ACPI configuration setting of default.
To make an inactive nPartition active, use the management processor BO command to boot the nPartition past the boot-is-blocked state.
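For example, the following management processor sketch powers on hardware and boots an inactive nPartition. The MP:CM> prompt and the follow-up questions each command asks (which cabinet, cell, or partition to act on) are illustrative and vary by firmware.

MP> CM
MP:CM> PE
MP:CM> BO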
Boot Modes on HP Integrity nPartitions: nPars and vPars Modes On cell-based HP Integrity servers, each nPartition can be configured in either of two boot modes:
nPars Boot Mode
In nPars boot mode, an nPartition is configured to boot any single operating system in the standard environment. When an nPartition is in nPars boot mode, it cannot boot the vPars monitor and therefore does not support HP-UX virtual partitions.
vPars Boot Mode
In vPars boot mode, an nPartition is configured to boot into the vPars environment. When an nPartition is in vPars boot mode, it can only boot the vPars monitor and therefore it only supports HP-UX virtual partitions and it does not support booting HP OpenVMS I64, Microsoft Windows, or other operating systems. On an nPartition in vPars boot mode, HP-UX can boot only within a virtual partition (from the vPars monitor) and cannot boot as a standalone, single operating system in the nPartition.
CAUTION: An nPartition on an HP Integrity server cannot boot HP-UX virtual partitions when in nPars boot mode. Likewise, an nPartition on an HP Integrity server cannot boot an operating system outside of a virtual partition when in vPars boot mode.
To display or set the boot mode for an nPartition on a cell-based HP Integrity server, use any of the following tools as appropriate. Refer to Installing and Managing HP-UX Virtual Partitions (vPars), Sixth Edition, for details, examples, and restrictions.
parconfig EFI shell command
The parconfig command is a built-in EFI shell command. Refer to the help parconfig command for details.
\EFI\HPUX\vparconfig EFI shell command
The vparconfig command is delivered in the \EFI\HPUX directory on the EFI system partition of the disk where the HP-UX virtual partitions software has been installed on a cell-based HP Integrity server. For usage details, enter the vparconfig command with no options.
vparenv HP-UX command
On cell-based HP Integrity servers only, the vparenv HP-UX command is installed on HP-UX 11iv2 (B.11.23) systems that have the HP-UX virtual partitions software. Refer to vparenv(1m) for details.
NOTE: On HP Integrity servers, nPartitions that do not have the parconfig EFI shell command do not support virtual partitions and are effectively in nPars boot mode.
HP recommends that you do not use the parconfig EFI shell command and instead use the \EFI\HPUX\vparconfig EFI shell command to manage the boot mode for nPartitions on cell-based HP Integrity servers.
Refer to Installing and Managing HP-UX Virtual Partitions (vPars), Sixth Edition, for details.
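For example, the following EFI Shell sketch reviews the built-in help for parconfig and invokes vparconfig from the EFI System Partition; fs0: is illustrative and should be replaced with the partition where the vPars software resides.

Shell> help parconfig
Shell> fs0:
fs0:\> \EFI\HPUX\vparconfig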
Booting and Shutting Down HP-UX
This section presents procedures for booting and shutting down HP-UX on cell-based HP servers and a procedure for adding HP-UX to the boot options list on HP Integrity servers.
To determine whether the cell local memory (CLM) configuration is appropriate for HP-UX, refer to “HP-UX Support for Cell Local Memory” (page 94).
To add an HP-UX entry to the nPartition boot options list on an HP Integrity server, refer to “Adding HP-UX to the Boot Options List” (page 95).
To boot HP-UX, refer to “Booting HP-UX” (page 96).
To shut down HP-UX, refer to “Shutting Down HP-UX” (page 103).
HP-UX Support for Cell Local Memory
On servers based on the HP sx2000 chipset, each cell has a cell local memory (CLM) parameter, which determines how firmware interleaves memory residing on the cell.
IMPORTANT: HP-UX 11i Version 2 (B.11.23) supports using CLM. The optimal CLM settings for HP-UX B.11.23 depend on the applications and workload the OS is running.
To check CLM configuration details from an OS, use Partition Manager or the parstatus command. For example, the parstatus -V -c# command and parstatus -V -p# command
report the CLM amount requested and CLM amount allocated for the specified cell (-c#, where # is the cell number) or the specified nPartition (-p#, where # is the nPartition number). For
details, refer to the HP System Partitions Guide or the Partition Manager Web site (http://docs.hp.com/en/PARMGR2/).
To display CLM configuration details from the EFI Shell on a cell-based HP Integrity server, use the info mem command. If the amount of noninterleaved memory reported is less than 512 MB, then no CLM is configured for any cells in the nPartition (and the indicated amount of noninterleaved memory is used by system firmware). If the info mem command reports more than 512MB of noninterleaved memory, then use Partition Manager or the parstatus command to confirm the CLM configuration details.
To set the CLM configuration, use Partition Manager or the parmodify command. For details, refer to the HP System Partitions Guide or the Partition Manager Web site (http://docs.hp.com/en/PARMGR2/).
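For example, the following commands (cell 0 and nPartition 0 are illustrative) report CLM details from HP-UX and list memory details from the EFI Shell.

# parstatus -V -c0
# parstatus -V -p0
Shell> info mem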
Adding HP-UX to the Boot Options List
This section describes how to add an HP-UX entry to the system boot options list.
You can add the \EFI\HPUX\HPUX.EFI loader to the boot options list from the EFI Shell or EFI Boot Configuration menu (or in some versions of EFI, the Boot Option Maintenance Menu).
See “Boot Options List” (page 90) for additional information about saving, restoring, and creating boot options.
NOTE: On HP Integrity servers, the OS installer automatically adds an entry to the boot options list.
Procedure 4-1 Adding an HP-UX Boot Option
This procedure adds an HP-UX item to the boot options list from the EFI Shell.
To add an HP-UX boot option when logged in to HP-UX, use the setboot command. For details, refer to the setboot(1M) manpage.
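For example, the following HP-UX sketch displays the current boot device settings and sets the primary boot path; the device path shown is taken from the earlier sample menu and is illustrative only.

# setboot
# setboot -p 4/0/1/1/0.2.0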
1. Access the EFI Shell environment.
Log in to the management processor, and enter CO to access the system console.
When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If you are at another EFI menu, select the Exit option from the submenus until you return to the screen with the EFI Boot Manager heading.
From the EFI Boot Manager menu, select the EFI Shell menu option to access the EFI Shell environment.
2. Access the EFI System Partition for the device from which you want to boot HP-UX (fsX: where X is the file system number).
For example, enter fs2: to access the EFI System Partition for the bootable file system number 2. The EFI Shell prompt changes to reflect the file system currently accessed.
The full path for the HP-UX loader is \EFI\HPUX\HPUX.EFI, and it should be on the device you are accessing.
3. At the EFI Shell environment, use the bcfg command to manage the boot options list.
The bcfg command includes the following options for managing the boot options list:
bcfg boot dump — Display all items in the boot options list for the system.
bcfg boot rm # — Remove the item number specified by # from the boot options list.
bcfg boot mv #a #b — Move the item number specified by #a to the position specified by #b in the boot options list.
bcfg boot add # file.efi "Description" — Add a new boot option to the position in the boot options list specified by #. The new boot option references file.efi and is listed with the title specified by Description.
For example, bcfg boot add 1 \EFI\HPUX\HPUX.EFI "HP-UX 11i" adds an HP-UX 11i item as the first entry in the boot options list.
Refer to the help bcfg command for details.
4. Exit the console and management processor interfaces if you are finished using them.
To exit the EFI environment press ^B (Control+B); this exits the system console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Booting HP-UX
This section describes the following methods of booting HP-UX:
“Standard HP-UX Booting” (page 96) — The standard ways to boot HP-UX. Typically, this results in booting HP-UX in multiuser mode.
“Single-User Mode HP-UX Booting” (page 100) — How to boot HP-UX in single-user mode.
“LVM-Maintenance Mode HP-UX Booting” (page 102) — How to boot HP-UX in LVM-maintenance mode.
Refer to “Shutting Down HP-UX” (page 103) for details on shutting down the HP-UX OS.
CAUTION: ACPI Configuration for HP-UX Must Be default On cell-based HP Integrity servers, to boot the
HP-UX OS, an nPartition ACPI configuration value must be set to default.
At the EFI Shell interface, enter the acpiconfig command with no arguments to list the current ACPI configuration. If the acpiconfig value is not set to default, then HP-UX cannot boot. In this situation you must reconfigure acpiconfig; otherwise, booting will be interrupted with a panic when the HP-UX kernel is launched.
To set the ACPI configuration for HP-UX:
1. At the EFI Shell interface, enter the acpiconfig default command.
2. Enter the reset command for the nPartition to reboot with the proper (default)
configuration for HP-UX.
Standard HP-UX Booting
This section describes how to boot HP-UX on cell-based HP 9000 servers and cell-based HP Integrity servers.
On HP 9000 servers, to boot HP-UX, refer to "HP-UX Booting (BCH Menu)" (page 96).
On HP Integrity servers, to boot HP-UX, use either of the following procedures:
"HP-UX Booting (EFI Boot Manager)" (page 98)
"HP-UX Booting (EFI Shell)" (page 98)
Procedure 4-2 HP-UX Booting (BCH Menu)
From the BCH Menu, use the BOOT command to boot the HP-UX OS. The BCH Menu is available only on HP 9000 servers.
1. Access the BCH Main Menu for the nPartition on which you want to boot HP-UX.
Log in to the management processor, and enter CO to access the Console list. Select the nPartition console. When accessing the console, confirm that you are at the BCH Main Menu (the Main Menu: Enter command or menu> prompt). If you are at a BCH menu other than the Main Menu, then enter MA to return to the BCH Main Menu.
2. Choose which device to boot.
From the BCH Main Menu, use the PATH command to list any boot path variable settings. The primary (PRI) boot path normally is set to the main boot device for the nPartition. You also can use the SEARCH command to find and list potentially bootable devices for the nPartition.
Main Menu: Enter command or menu > PATH
Primary Boot Path: 0/0/2/0/0.13 0/0/2/0/0.d (hex)
HA Alternate Boot Path: 0/0/2/0/0.14 0/0/2/0/0.e (hex)
Alternate Boot Path: 0/0/2/0/0.0 0/0/2/0/0.0 (hex)
Main Menu: Enter command or menu >
3. Boot the device by using the BOOT command from the BCH interface.
You can issue the BOOT command in any of the following ways:
BOOT
Issuing the BOOT command with no arguments boots the device at the primary (PRI) boot path.
BOOT bootvariable
This command boots the device indicated by the specified boot path, where bootvariable is the PRI, HAA, or ALT boot path.
For example, BOOT PRI boots the primary boot path.
BOOT LAN INSTALL or BOOT LAN.ip-address INSTALL
The BOOT... INSTALL commands boot HP-UX from the default HP-UX install server or from the server specified by ip-address.
BOOT path
This command boots the device at the specified path. You can specify the path in HP-UX hardware path notation (for example, 0/0/2/0/0.13) or in path label format (for example, P0 or P1) .
If you specify the path in path label format, then path refers to a device path reported by the last SEARCH command.
After you issue the BOOT command, the BCH interface prompts you to specify whether you want to stop at the ISL prompt.
To boot the /stand/vmunix HP-UX kernel from the device without stopping at the ISL prompt, enter n to automatically proceed past ISL and execute the contents of the AUTO file on the chosen device. (By default the AUTO file is configured to load /stand/vmunix.)
Main Menu: Enter command or menu > BOOT PRI
Primary Boot Path: 0/0/1/0/0.15
Do you wish to stop at the ISL prompt prior to booting? (y/n) >> n
ISL booting hpux
Boot : disk(0/0/1/0/0.15.0.0.0.0.0;0)/stand/vmunix
To boot an HP-UX kernel other than /stand/vmunix, or to boot HP-UX in single-user or LVM-maintenance mode, stop at the ISL prompt and specify the appropriate arguments to the hpux loader.
4. Exit the console and management processor interfaces if you are finished using them.
To exit the BCH environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Procedure 4-3 HP-UX Booting (EFI Boot Manager)
From the EFI Boot Manager menu, select an item from the boot options list to boot HP-UX using that boot option. The EFI Boot Manager is available only on HP Integrity servers.
Refer to “ACPI Configuration for HP-UX Must Be default” (page 96) for required configuration details.
1. Access the EFI Boot Manager menu for the nPartition on which you want to boot HP-UX.
Log in to the management processor, and enter CO to access the Console list. Select the nPartition console.
When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If you are at another EFI menu, select the Exit option from the submenus until you return to the screen with the EFI Boot Manager heading.
2. At the EFI Boot Manager menu, select an item from the boot options list.
Each item in the boot options list references a specific boot device and provides a specific set of boot options or arguments to be used when booting the device.
3. Press Enter to initiate booting using the chosen boot option.
4. Exit the console and management processor interfaces if you are finished using them.
To exit the EFI environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Procedure 4-4 HP-UX Booting (EFI Shell)
From the EFI Shell environment, to boot HP-UX on a device first access the EFI System Partition for the root device (for example fs0:) and then enter HPUX to initiate the loader. The EFI Shell is available only on HP Integrity servers.
Refer to “ACPI Configuration for HP-UX Must Be default” (page 96) for required configuration details.
1. Access the EFI Shell environment for the nPartition on which you want to boot HP-UX.
Log in to the management processor, and enter CO to access the Console list. Select the nPartition console.
When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If you are at another EFI menu, select the Exit option from the submenus until you return to the screen with the EFI Boot Manager heading.
From the EFI Boot Manager menu, select the EFI Shell menu option to access the EFI Shell environment.
2. At the EFI Shell environment, issue the acpiconfig command to list the current ACPI
configuration for the local nPartition.
On cell-based HP Integrity servers, to boot the HP-UX OS, an nPartition ACPI configuration value must be set to default. If the acpiconfig value is not set to default, then HP-UX cannot boot; in this situation you must reconfigure acpiconfig or booting will be interrupted with a panic when launching the HP-UX kernel.
To set the ACPI configuration for HP-UX:
a. At the EFI Shell interface enter the acpiconfig default command. b. Enter the reset command for the nPartition to reboot with the proper (default)
configuration for HP-UX.
3. At the EFI Shell environment, issue the map command to list all currently mapped bootable
devices.
The bootable file systems of interest typically are listed as fs0:, fs1:, and so on.
4. Access the EFI System Partition for the device from which you want to boot HP-UX (fsX:
where X is the file system number).
For example, enter fs2: to access the EFI System Partition for the bootable file system number 2. The EFI Shell prompt changes to reflect the file system currently accessed.
The file system number can change each time it is mapped (for example, when the nPartition boots, or when the map -r command is issued).
5. When accessing the EFI System Partition for the desired boot device, issue the HPUX command to initiate the HPUX.EFI loader on the device you are accessing.
The full path for the loader is \EFI\HPUX\HPUX.EFI. When initiated, HPUX.EFI references the \EFI\HPUX\AUTO file and boots HP-UX using the default boot behavior specified in the AUTO file.
You are given 10 seconds to interrupt the automatic booting of the default boot behavior. Pressing any key during this 10-second period stops the HP-UX boot process and enables you to interact with the HPUX.EFI loader. To exit the loader (the HPUX> prompt), enter exit (this returns you to the EFI Shell).
To boot the HP-UX OS, do not type anything during the 10-second period given for stopping at the HPUX.EFI loader.
Shell> map
Device mapping table
  fs0 : Acpi(000222F0,269)/Pci(0|0)/Scsi(Pun8,Lun0)/HD(Part1,Sig72550000)
  blk0 : Acpi(000222F0,269)/Pci(0|0)/Scsi(Pun8,Lun0)
  blk1 : Acpi(000222F0,269)/Pci(0|0)/Scsi(Pun8,Lun0)/HD(Part1,Sig72550000)
  blk2 : Acpi(000222F0,269)/Pci(0|0)/Scsi(Pun8,Lun0)/HD(Part2,Sig72550000)
  blk3 : Acpi(000222F0,2A8)/Pci(0|0)/Scsi(Pun8,Lun0)
  blk4 : Acpi(000222F0,2A8)/Pci(0|1)/Scsi(Pun2,Lun0)
Shell> fs0:
fs0:\> hpux
(c) Copyright 1990-2002, Hewlett Packard Company. All rights reserved
HP-UX Boot Loader for IA64 Revision 1.723
Press Any Key to interrupt Autoboot
\efi\hpux\AUTO ==> boot vmunix
Seconds left till autoboot - 9
6. Exit the console and management processor interfaces if you are finished using them.
To exit the EFI environment, press ^B (Control+B); this exits the nPartition console and returns to the management processor Main Menu. To exit the management processor, enter X at the Main Menu.
Single-User Mode HP-UX Booting
This section describes how to boot HP-UX in single-user mode on cell-based HP 9000 servers and cell-based HP Integrity servers.
On HP 9000 servers, to boot HP-UX in single-user mode, refer to "Single-User Mode HP-UX Booting (BCH Menu)" (page 100).
On HP Integrity servers, to boot HP-UX in single-user mode, refer to "Single-User Mode HP-UX Booting (EFI Shell)" (page 101).
Procedure 4-5 Single-User Mode HP-UX Booting (BCH Menu)
From the BCH Menu, you can boot HP-UX in single-user mode by issuing the BOOT command, stopping at the ISL interface, and issuing hpux loader options. The BCH Menu is available only on HP 9000 servers.
1. Access the BCH Main Menu for the nPartition on which you want to boot HP-UX in single-user mode.
Log in to the management processor, and enter CO to access the Console list. Select the nPartition console. When accessing the console, confirm that you are at the BCH Main Menu (the Main Menu: Enter command or menu> prompt). If you are at a BCH menu other than the Main Menu, then enter MA to return to the BCH Main Menu.
2. Boot the desired device by using the BOOT command at the BCH interface, and specify that the nPartition stop at the ISL prompt prior to booting (reply y to the “stop at the ISL prompt” question).
Main Menu: Enter command or menu > BOOT 0/0/2/0/0.13
BCH Directed Boot Path: 0/0/2/0/0.13
Do you wish to stop at the ISL prompt prior to booting? (y/n) >> y
Initializing boot Device.
....
ISL Revision A.00.42 JUN 19, 1999
ISL>
3. From the ISL prompt, issue the appropriate Secondary System Loader (hpux) command to boot the HP-UX kernel in the desired mode.
Use the hpux loader to specify the boot mode options and to specify which kernel to boot on the nPartition (for example, /stand/vmunix).
To boot HP-UX in single-user mode:
ISL> hpux -is boot /stand/vmunix
Example 4-1 (page 101) shows output from this command.
To boot HP-UX at the default run level:
ISL> hpux boot /stand/vmunix
To exit the ISL prompt and return to the BCH interface, issue the EXIT command instead of specifying one of the hpux loader commands.
Loading...