nPartition Administrator's Guide
HP Part Number: 5991-1247B_ed2
Published: February 2009
Edition: Second Edition
© Copyright 2007–2009 Hewlett-Packard Development Company, L.P.
Legal Notices
Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial
Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under
vendor's standard commercial license.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP
shall not be liable for technical or editorial errors or omissions contained herein.
Intel and Itanium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
Linux is a U.S. registered trademark of Linus Torvalds. Microsoft, Windows, and Windows NT are U.S. registered trademarks of Microsoft
Corporation.
Restricted Rights Legend Use, duplication or disclosure by the U.S. Government is subject to restrictions as set forth in subparagraph (c) (1) (ii)
of the Rights in Technical Data and Computer Software clause at DFARS 252.227-7013 for DOD agencies, and subparagraphs (c) (1) and (c) (2)
of the Commercial Computer Software Restricted Rights clause at FAR 52.227-19 for other agencies.
HEWLETT-PACKARD COMPANY
3000 Hanover Street
Palo Alto, California 94304 U.S.A.
Table of Contents
About This Document.......................................................................................................11
New and Changed Information in This Edition...................................................................................11
Document Organization.......................................................................................................................11
Typographic Conventions.....................................................................................................................12
Related Information..............................................................................................................................13
Publishing History................................................................................................................................13
HP Encourages Your Comments..........................................................................................................13
1 Getting Started with nPartitions..................................................................................15
Introduction to nPartitions....................................................................................................................15
Operating Systems Supported on nPartitions.................................................................................15
HP Server Support for nPartitions...................................................................................................16
HP Superdome Hybrid Servers: Intel® Itanium® 2 and PA-RISC nPartition Mixing..............17
Hardware Components of nPartition-Capable Servers...................................................................18
Administration Tools for nPartitions....................................................................................................18
Commands for Configuring nPartitions.........................................................................................19
Availability of nPartition Commands........................................................................................21
Enhanced nPartition Commands for Windows........................................21
Enhanced nPartition Commands for Linux...............................................................................22
Partition Manager............................................................................................................................22
Partition Manager Version 2.0 for Windows..............................................................................23
nPartition Properties.............................................................................................................................23
Partition Numbers...........................................................................................................................24
Assigned and Unassigned Cells......................................................................................................24
Base Cells.........................................................................................................................................24
Core Cells.........................................................................................................................................24
Active and Inactive Cells.................................................................................................................25
Cell Local Memory..........................................................................................................................25
Cell Property Details.......................................................................................................................25
Active and Inactive nPartition Boot States......................................................................................27
Overview of Managing nPartitions......................................................................................................27
Basics of Listing nPartition and Complex Status.............................................................................27
Basics of nPartition Creation...........................................................................................................29
Genesis Partition.........................................................................................................................30
Basics of nPartition Modification....................................................................................................30
nPartition Modification Tasks....................................................................................................31
Basics of nPartition Booting and Resetting......................................................................................32
Boot Process for Cells and nPartitions.......................................................................................32
Common nPartition Boot Commands and Tasks.......................................................................33
Complex Profile....................................................................................................................................36
Changing the Server Complex Profile.............................................................................................37
How the Complex Profile is Updated........................................................................................37
Complex Profile Entry Locking and Unlocking..............................................................................38
Complex Profile Group Details.......................................................................................................39
Remote and Local Management of nPartitions....................................................................................41
Intelligent Platform Management Interface (IPMI).........................................................................41
IPMI Block Transfer (IPMI BT)...................................................................................................42
nPartition Configuration Privilege.......................................................................................42
IPMI over LAN...........................................................................................................................42
Web-Based Enterprise Management (WBEM)................................................................................43
Local Management..........................................................................................................................43
Remote Management Using WBEM................................................................................................44
WBEM Remote Management Files............................................................................................44
nPartition Commands Support for Remote Management Using WBEM..................................45
Partition Manager Support for Remote Management Using WBEM........................................45
Remote Management Using IPMI over LAN..................................................................................46
nPartition Commands Support for Remote Management Using IPMI over LAN....................46
Partition Manager Support for Remote Management Using IPMI over LAN...........................47
Licensing Information: Getting Server Product Details.......................................................................47
nPartition and Virtual Partition Unique Identifiers........................................................................47
2 nPartition Server Hardware Overview.......................................................................49
sx1000 Chipset for HP Servers..............................................................................................................49
sx2000 Chipset for HP Servers..............................................................................................................49
Model Identifiers for Machine Hardware.............................................................................................49
Server Hardware Details: Cell-Based HP Servers.................................................................................51
Two-Cell nPartition Server Model...................................................................................................55
Four-Cell nPartition Server Model..................................................................................................56
Superdome Server Models..............................................................................................................57
HP Superdome 16-/32-Way Servers: SD16000, SD16A, and SD16B...........................................58
HP Superdome 32-/64-Way Servers: SD32000, SD32A, and SD32B...........................................58
HP Superdome 64-/128-Way Servers: SD64000, SD64A, and SD64B.........................................59
HP Superdome I/O Expansion Cabinet.....................................................................................59
3 Planning nPartitions......................................................................................................61
nPartition Hardware Requirements for Operating Systems................................................................61
Configuration Requirements for nPartitions........................................................................................62
Recommended nPartition Configurations............................................................................................63
Recommended HP Superdome nPartition Configurations.............................................................64
4 Using Management Interfaces and Tools..................................................................67
SMS (Support Management Station) for HP Superdome Servers........................................................67
Overview of nPartition Service Processor (MP or GSP) Interfaces......................................................67
Service Processor (MP or GSP) Features...............................................................................................68
Service Processor Accounts and Access Levels...............................................................................69
nPartition Console Features..................................................................................................................70
nPartition Console Access versus Direct OS Login.........................................................................71
Boot Console Handler System Boot Environment................................................................................71
Extensible Firmware Interface System Boot Environment...................................................................72
Windows Special Administration Console (SAC)................................................................................72
Accessing and Using the Service Processor..........................................................................................74
Using Service Processor Menus.......................................................................................................76
Navigating through Service Processor Menus...........................................................................76
Network Configuration for a Service Processor..............................................................................77
Viewing Console Logs..........................................................................................................................77
Viewing Chassis Codes or Event Logs.................................................................................................78
Virtual Front Panel (VFP) nPartition Views..........................................................................................79
Command Reference for Service Processor Commands......................................................................80
Command Reference for EFI Shell Commands....................................................................................81
Command Reference for BCH Menu Commands................................................................................84
5 Booting and Resetting nPartitions...............................................................................87
Overview of nPartition System Booting...............................................................................................87
Boot Process Differences for nPartitions on HP 9000 servers and HP Integrity servers.................88
Types of Booting and Resetting for nPartitions...............................................................................89
System Boot Configuration Options................................................................................................91
HP 9000 Boot Configuration Options........................................................................................91
HP Integrity Boot Configuration Options..................................................................................91
Tools for Booting nPartitions................................................................................................................95
Task Summaries for nPartition Boot and Reset....................................................................................96
Troubleshooting Boot Problems..........................................................................................................100
Accessing nPartition Console and System Boot Interfaces.................................................................101
Monitoring nPartition Boot Activity...................................................................................................104
Finding Bootable Devices....................................................................................................................106
Performing a Transfer of Control Reset..............................................................................................107
Booting and Shutting Down HP-UX...................................................................................................108
HP-UX Support for Cell Local Memory........................................................................................108
Adding HP-UX to the Boot Options List.......................................................................................109
Booting HP-UX..............................................................................................................................110
HP-UX Booting.........................................................................................................................110
Single-User Mode HP-UX Booting...........................................................................................114
LVM-Maintenance Mode HP-UX Booting...............................................................................116
Shutting Down HP-UX..................................................................................................................117
Booting and Shutting Down HP OpenVMS I64.................................................................................119
HP OpenVMS I64 Support for Cell Local Memory.......................................................................120
Adding HP OpenVMS to the Boot Options List............................................................................120
Booting HP OpenVMS...................................................................................................................122
Shutting Down HP OpenVMS.......................................................................................................123
Booting and Shutting Down Microsoft Windows..............................................................................124
Microsoft Windows Support for Cell Local Memory....................................................................124
Adding Microsoft Windows to the Boot Options List...................................................................125
Booting Microsoft Windows..........................................................................................................126
Shutting Down Microsoft Windows..............................................................................................128
Booting and Shutting Down Linux.....................................................................................................129
Linux Support for Cell Local Memory..........................................................................................129
Adding Linux to the Boot Options List.........................................................................................130
Booting Red Hat Enterprise Linux................................................................................................131
Booting SuSE Linux Enterprise Server..........................................................................................132
Shutting Down Linux....................................................................................................................134
Rebooting and Resetting nPartitions..................................................................................................135
Performing a Reboot for Reconfig.......................................................................................................139
Shutting Down to a Shutdown for Reconfig (Inactive) State..............................................................141
Booting an Inactive nPartition.............................................................................................................146
Booting over a Network......................................................................................................................147
Booting to the HP-UX Initial System Loader (ISL).............................................................................149
Booting to the HP-UX Loader (HPUX.EFI).........................................................................................150
Using HP-UX Loader Commands......................................................................................................151
HPUX.EFI Boot Loader Commands..............................................................................................151
HPUX Boot Loader Commands Issued from ISL..........................................................................151
Booting to the Linux Loader (ELILO.EFI)...........................................................................................152
Linux Boot Option Management...................................................................................................153
Linux Loader Configuration File (elilo.conf)...........................................................................153
Using Linux Loader (ELILO) Commands..........................................................................................154
Configuring Boot Paths and Options..................................................................................................155
Configuring Autoboot Options...........................................................................................................158
Configuring Boot-Time System Tests..................................................................................................161
6 Creating and Configuring nPartitions......................................................................165
Tools for Configuring nPartitions.......................................................................................................165
Task Summaries for Creating and Configuring nPartitions...............................................................165
Creating a Genesis Partition................................................................................................................170
Creating a New nPartition..................................................................................................................172
Removing (Deleting) an nPartition.....................................................................................................176
Assigning (Adding) Cells to an nPartition..........................................................................................179
Unassigning (Removing) Cells from an nPartition.............................................................................182
Renaming an nPartition......................................................................................................................185
Renaming a Server Complex...............................................................................................................187
Setting Cell Attributes.........................................................................................................................189
Setting nPartition Core Cell Choices...................................................................................................194
Unlocking Complex Profile Entries....................................................................................................198
Canceling Pending Changes to the Complex Profile..........................................................................199
7 Managing Hardware Resources..............................................................................201
Tools for Managing Hardware............................................................................................................201
Task Summaries for Hardware Resource Management.....................................................................201
Powering Server Cabinets On and Off................................................................................................205
Powering Cells and I/O Chassis On and Off.......................................................................................206
Turning Attention Indicators (LEDs) On and Off...............................................................................209
Configuring and Deconfiguring Cells................................................................................................213
Configuring and Deconfiguring Processors.......................................................................................217
Enabling and Disabling Hyper-Threading on Dual-Core Intel® Itanium® 2 Processors..................218
Configuring and Deconfiguring Memory (DIMMs)...........................................................................220
Complex Health Analysis of a Server.................................................................................................222
8 Listing nPartition and Hardware Status....................................................................223
Tools for Listing Status........................................................................................................................223
Task Summaries for nPartition and Hardware Status........................................................................223
Listing Cell Configurations.................................................................................................................226
Listing Processor Configurations........................................................................................................227
Listing Memory Configurations.........................................................................................................229
Listing Input/Output (I/O) Configurations.........................................................................................231
Listing Cabinets in a Server Complex.................................................................................................234
Listing Product and Serial Numbers..................................................................................................235
Listing nPartition Configurations.......................................................................................................236
Listing the Local nPartition Number..................................................................................................237
Listing Power Status and Power Supplies..........................................................................................238
Listing Fan and Blower Status............................................................................................................240
A nPartition Commands................................................................................................243
Specifying Cells and I/O Chassis to Commands.................................................................................243
Cell Specification Formats.............................................................................................................243
I/O Specification Format................................................................................................................244
Specifying Remote Management Options to Commands..................................................................247
parcreate Command............................................................................................................................249
parmodify Command.........................................................................................................................252
parremove Command.........................................................................................................................256
parstatus Command............................................................................................................................258
parunlock Command..........................................................................................................................260
fruled Command.................................................................................................................................262
frupower Command...........................................................................................................................264
cplxmodify Command........................................................................................................................266
List of Figures
1-1 Partition Manager Version 2.0 Switch Complexes Dialog............................................................46
2-1 Two-Cell HP Server Cabinet..........................................................................................................55
2-2 Four-Cell HP Server Cabinet.........................................................................................................56
2-3 HP Superdome Server Cabinet......................................................................................................57
List of Tables
1-1 nPartition Operating System Support...........................................................................................15
1-2 HP Servers Supporting nPartitions...............................................................................................17
1-3 nPartition Commands Releases.....................................................................................................20
1-4 nPartition Commands Descriptions..............................................................................................20
1-5 Complex Profile Group Details.....................................................................................................40
2-1 Models of Cell-Based HP Servers..................................................................................................51
3-1 Operating System Hardware Requirements.................................................................................61
4-1 Windows SAC Commands............................................................................................................73
4-2 Service Processor (MP or GSP) Command Reference...................................................................80
4-3 EFI Shell Command Reference......................................................................................................81
4-4 Boot Console Handler (BCH) Command Reference.....................................................................84
5-1 nPartition Boot and Reset Task Summaries...................................................................................96
6-1 nPartition Configuration Task Summaries..................................................................................166
7-1 Hardware Management Task Summaries...................................................................................202
7-2 Attention Indicator (LED) States and Meanings.........................................................................209
8-1 Hardware and nPartition Status Task Summaries......................................................................224
A-1 Cell IDs in Global Cell Number Format......................................................................................243
A-2 Cell IDs in Hardware Location Format.......................................................................................244
List of Examples
1-1 Unique IDs for an nPartition and Complex..................................................................................48
1-2 Unique IDs for Virtual Partitions (vPars)......................................................................................48
4-1 Overview of a Service Processor Login Session............................................................................75
5-1 Single-User HP-UX Boot..............................................................................................................115
7-1 Turning Attention Indicators On and Off...................................................................................212
7-2 Checking the Hyper-Threading Status for an nPartition............................................................219
7-3 Enabling Hyper-Threading for an nPartition..............................................................................220
A-1 I/O Specification Formats for Cabinets, Bays, and Chassis.........................................................247
About This Document
This book describes nPartition system administration procedures, concepts, and principles for the HP servers that support nPartitions.
New and Changed Information in This Edition
This edition includes changes and additions related to the PA-RISC (HP 9000) Superdome servers based on the HP sx1000 and sx2000 chipsets.
Document Organization
This book contains the following chapters and appendix.
Chapter 1. “Getting Started with nPartitions” (page 15)
This chapter introduces HP nPartition system features, server models, supported operating systems, and administration tools, and outlines the basic information needed for managing nPartitions.
Chapter 2. “nPartition Server Hardware Overview” (page 49)
This chapter describes HP nPartition server models and features.
Chapter 3. “Planning nPartitions” (page 61)
This chapter describes how you can plan nPartition configurations. Details include the nPartition configuration requirements and recommendations.
Chapter 4. “Using Management Interfaces and Tools” (page 67)
This chapter presents the system management interfaces and tools available on HP nPartition servers. Also described here are the nPartition boot environments, management access procedures, and detailed command references.
Chapter 5. “Booting and Resetting nPartitions” (page 87)
This chapter introduces nPartition system boot and reset concepts, configuration options, and procedures for booting and resetting nPartitions.
Chapter 6. “Creating and Configuring nPartitions” (page 165)
This chapter presents the procedures for creating, configuring, and managing nPartitions on HP servers that support them.
Chapter 7. “Managing Hardware Resources” (page 201)
This chapter explains the procedures for managing the hardware resources in nPartitions and their server complexes. It describes power and LED (attention indicator) management, hardware configuration and deconfiguration, and analysis of the current status of the server complex.
Chapter 8. “Listing nPartition and Hardware Status” (page 223)
This chapter describes procedures for listing the current status of nPartitions and server hardware components.
Appendix A. “nPartition Commands” (page 243)
This appendix contains details and command-line syntax for the HP nPartition configuration commands.
Typographic Conventions
This document uses the following typographical conventions:
audit(5)
    A manpage. The manpage name is audit, and it is located in Section 5.
Command
    A command name or qualified command phrase.
Computer output
    Text displayed by the computer.
Ctrl+x
    A key sequence. A sequence such as Ctrl+x indicates that you must hold down the key labeled Ctrl while you press another key or mouse button.
ENVIRONMENT VARIABLE
    The name of an environment variable, for example, PATH.
[ERROR NAME]
    The name of an error, usually returned in the errno variable.
Key
    The name of a keyboard key. Return and Enter both refer to the same key.
User input
    Commands and other text that you type.
Variable
    The name of a placeholder in a command, function, or other syntax display that you replace with an actual value.
[]
    The contents are optional in syntax. If the contents are a list separated by |, you must choose one of the items.
{}
    The contents are required in syntax. If the contents are a list separated by |, you must choose one of the items.
...
    The preceding element can be repeated an arbitrary number of times.
|
    Separates items in a list of choices.
WARNING
    A warning calls attention to important information that if not understood or followed will result in personal injury or nonrecoverable system problems.
CAUTION
    A caution calls attention to important information that if not understood or followed will result in data loss, data corruption, or damage to hardware or software.
IMPORTANT
    This alert provides essential information to explain a concept or to complete a task.
NOTE
    A note contains additional information to emphasize or supplement important points of the main text.
Related Information
You can find information on nPartition server hardware management, operating system administration, and diagnostic support tools in the following publications and Web sites.
Web Site for HP Technical Documentation: http://docs.hp.com
The HP Technical Documentation Web site at http://docs.hp.com provides complete HP documentation at no charge.

Server Hardware Information: http://docs.hp.com/hpux/hw/
The systems hardware portion of the docs.hp.com Web site, at http://docs.hp.com/hpux/hw/, provides server hardware management information, including site preparation and installation.

Diagnostics and Event Monitoring: Hardware Support Tools
Complete information about HP hardware support tools, including online and offline diagnostics and event monitoring tools, is at the http://docs.hp.com/hpux/diag/ Web site. This site has manuals, tutorials, FAQs, and other reference material.

Web Site for HP Technical Support: http://us-support2.external.hp.com
The HP IT Resource Center Web site at http://us-support2.external.hp.com/ provides comprehensive support information for IT professionals on a wide variety of topics, including software, hardware, and networking.
Publishing History
This is the second edition of the nPartition Administrator's Guide.
This book replaces the nPartition Administrator's Guide (5991-1247B) and the HP System Partitions Guide (5991-1247).
HP Encourages Your Comments
HP welcomes your feedback on this publication. Address your comments to edit@presskit.rsn.hp.com and note that you will not receive an immediate reply. All comments are appreciated.
1 Getting Started with nPartitions
This chapter introduces cell-based HP server features, server models, supported operating systems, and administration tools, and outlines the basic information needed for managing nPartitions.
Introduction to nPartitions
Cell-based HP servers enable you to configure a single server complex as one large system or as multiple smaller systems by configuring nPartitions.
Each nPartition defines a subset of server hardware resources to be used as an independent system environment. An nPartition includes one or more cells assigned to it (with processors and memory) and all I/O chassis connected to those cells.
All processors, memory, and I/O in an nPartition are used exclusively by software running in the nPartition. Thus, each nPartition has its own system boot interface, and each nPartition boots and reboots independently.
Each nPartition provides both hardware and software isolation, so that hardware or software faults in one nPartition do not affect other nPartitions within the same server complex.
You can reconfigure nPartition definitions for a server without physically modifying the server hardware configuration by using the HP software-based nPartition management tools.
For procedures for creating and reconfiguring nPartitions, see Chapter 6 (page 165).
Operating Systems Supported on nPartitions
Table 1-1 lists the operating systems that can run on nPartitions.
For an overview of the server models that support nPartitions, see “HP Server Support for
nPartitions” (page 16).
For details on operating system boot and reset procedures, see Chapter 5 (page 87).
Table 1-1 nPartition Operating System Support

HP-UX 11i v1 (B.11.11)
    HP-UX 11i v1 (B.11.11) is supported on HP 9000 servers, including the cell-based HP 9000 servers.
    The HP-UX 11i v1 (B.11.11) December 2003 release and later supports the rp7420, rp8420, and HP 9000 Superdome (SD16A, SD32A, and SD64A models), based on the HP sx1000 chipset.
    The HP-UX 11i v1 (B.11.11) December 2006 release and later supports the rp7440, rp8440, and HP 9000 Superdome (SD16B, SD32B, and SD64B models), based on the HP sx2000 chipset.
    HP-UX 11i v1 does not support cell local memory.
    Also see “nPartition Hardware Requirements for Operating Systems” (page 61).

HP-UX 11i v2 (B.11.23)
    HP-UX 11i v2 (B.11.23) is supported on HP Integrity servers, including the cell-based HP Integrity servers.
    The HP-UX 11i v2 (B.11.23) September 2004 release and later also supports cell-based HP 9000 servers based on the HP sx1000 chipset.
    HP-UX 11i v2 supports cell local memory.
    Also see “nPartition Hardware Requirements for Operating Systems” (page 61).

HP-UX 11i v3 (B.11.31)
    HP-UX 11i v3 (B.11.31) is supported on HP Integrity servers and HP 9000 servers.
    HP-UX 11i v3 is supported on all servers based on the HP sx1000 chipset, and on HP Integrity servers based on the HP sx2000 chipset.
    HP-UX 11i v3 supports cell local memory.
    Also see “nPartition Hardware Requirements for Operating Systems” (page 61).

HP OpenVMS I64 8.2-1 and 8.3
    OpenVMS I64 8.2-1 is supported on cell-based HP Integrity servers based on the HP sx1000 chipset.
    OpenVMS I64 8.3 is supported on HP Integrity servers based on the HP sx1000 and sx2000 chipsets.
    OpenVMS I64 does not support cell local memory.
    Also see “nPartition Hardware Requirements for Operating Systems” (page 61).

Microsoft® Windows® Server 2003
    Windows Server 2003 is supported on HP Integrity servers, including the cell-based HP Integrity servers.
    Windows Server 2003 supports cell local memory.
    Also see “nPartition Hardware Requirements for Operating Systems” (page 61).

Red Hat Enterprise Linux 3 Update 2, Red Hat Enterprise Linux 3 Update 3, and Red Hat Enterprise Linux 4
    Red Hat Enterprise Linux 3 and Red Hat Enterprise Linux 4 are supported on HP Integrity servers, including the cell-based HP Integrity servers.
    Red Hat Enterprise Linux does not support cell local memory.
    Also see “nPartition Hardware Requirements for Operating Systems” (page 61).

SuSE Linux Enterprise Server 9 and SuSE Linux Enterprise Server 10
    SuSE Linux Enterprise Server 9 and SuSE Linux Enterprise Server 10 are supported on HP Integrity servers, including the cell-based HP Integrity servers.
    SuSE Linux Enterprise Server 9 and SuSE Linux Enterprise Server 10 support cell local memory.
    Also see “nPartition Hardware Requirements for Operating Systems” (page 61).
HP Server Support for nPartitions
HP supports nPartition capabilities on cell-based servers, listed in Table 1-2.
On HP Superdome servers based on the HP sx1000 chipset, you can mix both PA-RISC nPartitions and Intel® Itanium® 2 nPartitions in the same server complex under specific system configurations. For details, refer to “HP Superdome Hybrid Servers: Intel® Itanium® 2 and
PA-RISC nPartition Mixing” (page 17).
The same basic nPartition features are supported for cell-based HP 9000 servers and cell-based HP Integrity servers, though some differences exist in the sets of supported tools and management capabilities. Where such differences exist, this document notes them.
Table 1-2 HP Servers Supporting nPartitions

HP 9000 Servers (HP 9000 servers have PA-RISC processors.)
    The first-generation cell-based HP 9000 servers include the following models:
    • HP 9000 Superdome servers, including the SD16000, SD32000, and SD64000 models. These models support up to 16 cells in a server complex.
    • HP 9000 rp8400 model, which supports up to four cells in a server complex.
    • HP 9000 rp7405/rp7410, which supports up to two cells in a server complex.
    The following second-generation cell-based HP 9000 servers use the HP sx1000 chipset. For details see “sx1000 Chipset for HP Servers” (page 49).
    • HP 9000 Superdome servers, including the SD16A, SD32A, and SD64A models. These models support up to 16 cells in a server complex.
    • HP 9000 rp8420 model, which supports up to four cells in a server complex.
    • HP 9000 rp7420 model, which supports up to two cells in a server complex.
    The following third-generation cell-based HP 9000 servers use the HP sx2000 chipset. For details see “sx2000 Chipset for HP Servers” (page 49).
    • HP 9000 Superdome servers, including the SD16B, SD32B, and SD64B models. These models support up to 16 cells in a server complex.
    • HP 9000 rp8440 model, which supports up to four cells in a server complex.
    • HP 9000 rp7440 model, which supports up to two cells in a server complex.
    For details see “Server Hardware Details: Cell-Based HP Servers” (page 51).

HP Integrity Servers (HP Integrity servers have Intel® Itanium® 2 processors. The Intel® Itanium® processor family architecture was co-developed by Hewlett-Packard and Intel.)
    Cell-based HP Integrity servers use either the HP sx1000 chipset or the HP sx2000 chipset.
    The following cell-based HP Integrity servers use the HP sx1000 chipset; for details see “sx1000 Chipset for HP Servers” (page 49).
    • HP Integrity Superdome servers include the SD16A, SD32A, and SD64A models. These models support up to 16 cells in a server complex.
    • The HP Integrity rx8620 model supports up to four cells in a server complex.
    • The HP Integrity rx7620 model supports up to two cells in a server complex.
    The following cell-based HP Integrity servers use the HP sx2000 chipset; for details see “sx2000 Chipset for HP Servers” (page 49).
    • HP Integrity Superdome servers include the SD16B, SD32B, and SD64B models. These models support up to 16 cells in a server complex.
    • The HP Integrity rx8640 model supports up to four cells in a server complex.
    • The HP Integrity rx7640 model supports up to two cells in a server complex.
    For details see “Server Hardware Details: Cell-Based HP Servers” (page 51).
HP Superdome Hybrid Servers: Intel® Itanium® 2 and PA-RISC nPartition Mixing
HP Superdome servers based on the HP sx1000 chipset can support hybrid configurations with both PA-RISC nPartitions and Intel® Itanium® 2 nPartitions in the same server complex.
NOTE: For details and restrictions on mixing PA-RISC nPartitions and Intel® Itanium® 2 nPartitions on HP Superdome servers based on the HP sx2000 chipset, see the HP Integrity Superdome/sx2000 Service Guide.
On Superdome hybrid servers based on the HP sx1000 chipset, each nPartition must contain only PA-RISC processors or only Intel® Itanium® 2 processors. However, both types of nPartitions can reside in the same server complex. Within each PA-RISC nPartition, all cells must have the same processor revision level. Within each Intel® Itanium® 2 nPartition, all cells must have the same cell compatibility value.
NOTE: Specific firmware, operating systems, and management tools are required to support mixing PA-RISC nPartitions and Intel® Itanium® 2 nPartitions on Superdome hybrid servers.
For details, refer to HP Superdome Hybrid Servers, which is available from the http://docs.hp.com/en/hw.html Web site under the HP 9000 Superdome Server and HP Integrity Superdome Server links.
Hardware Components of nPartition-Capable Servers
All hardware within a cell-based server—including all cells, I/O chassis, cables, cabinet hardware, fans, and power and utility components—is considered to be a server complex.
Within each cell-based server cabinet are one or more cells, each of which contains processors and memory.
Each cell-based server cabinet can have multiple I/O chassis that provide PCI slots for I/O cards. I/O resources also include any I/O devices attached to I/O cards within the I/O chassis.
Core I/O is required for each nPartition to provide console services and other boot and management abilities. On first-generation cell-based servers and HP sx1000 chipset-based servers, core I/O is provided by a PCI card residing in an I/O chassis. On HP sx2000 chipset-based servers, core I/O is provided on each cell. On all cell-based servers, each nPartition has only one core I/O active at a time.
Each I/O chassis connects to only one cell in the server. Some cell-based servers also support optional I/O expansion cabinets to provide additional I/O chassis. An HP Superdome complex can consist of one or two server cabinets, and can also include one or two I/O expansion cabinets (to provide additional I/O chassis). The two-cell HP servers consist of a single server cabinet only. The four-cell servers consist of a single server cabinet and can optionally include one I/O expansion cabinet to provide two additional I/O chassis.
For details on listing and managing nPartition hardware components, see Chapter 7 (page 201).
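For example, from a running nPartition you can inventory these components with the parstatus command (described later in this chapter). This is a brief sketch; the output details vary by server model.

    parstatus -B     # list the cabinets in the server complex
    parstatus -C     # list all cells and their nPartition assignments
    parstatus -I     # list all I/O chassis and the cells they connect to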
Administration Tools for nPartitions
The main administration tools for nPartitions are Partition Manager, which provides a graphical interface, and the nPartition Commands, which provide a command-line interface.
Some nPartition configuration and management also can be accomplished using the service processor interface to a cell-based server or by using the boot interface available through an nPartition console.
Slightly different toolsets and capabilities are available on the different server models. However, the same basic administration tasks are supported on all cell-based servers.
The following tools can perform nPartition administration tasks:
Service Processor (MP or GSP) Menus
Service processor menus provide a service interface for the entire complex, allowing access to all hardware and nPartitions defined within the complex. The service processor is always available, regardless of whether any nPartitions are configured or booted in the complex.
The service processor includes the Command menu, nPartition consoles, nPartition Virtual Front Panels, nPartition console logs, and the Chassis Log Viewer or Event Log Viewer (HP 9000 servers with HP PA-8700 processors have chassis logs, and servers based on the HP sx1000 or sx2000 chipset have event logs).
See Chapter 4 (page 67).
For service processor commands, see “Command Reference for Service Processor Commands”
(page 80).
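As a brief illustration, a service processor session typically follows the pattern sketched below. This is a hedged sketch: the hostname and account are placeholders, and the prompts and menu mnemonics can vary slightly between GSP and MP firmware revisions.

    > telnet sp-hostname        (connect to the service processor LAN port)
    MP login: Admin
    MP password: ********

    MP> CM                      (enter the Command menu)
    MP:CM> PS                   (for example, display power and status details)
    MP:CM> MA                   (return to the Main menu)
    MP> CO                      (access the nPartition consoles)
    MP> VFP                     (view an nPartition Virtual Front Panel)
    MP> X                       (exit the service processor session)

From a console or Virtual Front Panel, typing Ctrl+b typically returns you to the service processor Main menu.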
EFI Boot Manager and EFI Shell Commands
On cell-based HP Integrity servers, the Extensible Firmware Interface (EFI) supports nPartition management. The EFI is accessible from an nPartition console when the nPartition is in an active state but has not booted an operating system.
See “Command Reference for EFI Shell Commands” (page 81) for details.
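For example, from the EFI Shell you can examine an nPartition before it boots an operating system. This is a hedged sketch; the available info subcommands and their output depend on the firmware revision.

    Shell> info sys             (display system and nPartition information)
    Shell> info cpu             (list the processors assigned to the nPartition)
    Shell> info mem             (summarize the memory configuration)
    Shell> map                  (list device mappings, including bootable file systems)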
BCH Menu Commands
On cell-based PA-RISC servers, the Boot Console Handler (BCH) interface supports management from an nPartition console when the nPartition is in an active state but has not booted an operating system. See “Command Reference for BCH Menu Commands” (page 84) for details.
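For example, the BCH Main Menu accepts commands such as the following. This is a hedged sketch; see the command reference for the complete menu hierarchy.

    Main Menu: Enter command or menu > IN        (enter the Information menu for hardware details)
    Main Menu: Enter command or menu > CO        (enter the Configuration menu for boot settings)
    Main Menu: Enter command or menu > BO PRI    (boot the nPartition from the primary boot path)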
nPartition Commands
You can configure, manage, and monitor nPartitions and hardware using the nPartition commands such as parstatus, parcreate, parmodify, and others.
Two versions of the nPartition commands are available: the Original nPartition Commands and Enhanced nPartition Commands. The Original nPartition Commands are used only on HP-UX 11i v1 (B.11.11) releases prior to the December 2004 release. The Enhanced nPartition Commands are supported for HP-UX, Windows, and Linux.
The same base set of features is available in both nPartition commands versions. However, the Enhanced nPartition Commands include new options, such as remote administration abilities, and include the cplxmodify command.
See “Commands for Configuring nPartitions” (page 19) for details.
Partition Manager (/opt/parmgr/bin/parmgr)
Partition Manager provides a graphical interface for configuring, modifying, and managing nPartitions and hardware within a server complex.
Two versions of Partition Manager are available: Version 1.0 and Version 2.0. Partition Manager Version 1.0 is used only on HP-UX 11i v1 (B.11.11) releases prior to the December 2004 release and relies in part on the Original nPartition Commands. Partition Manager Version 2.0 is supported for HP-UX and Windows and relies in part on the Enhanced nPartition Commands.
Although both Partition Manager versions support a similar set of tasks, the Partition Manager Version 2.0 release provides a significantly improved graphical interface, a new Web-based management interface, and remote administration abilities.
See “Partition Manager” (page 22) for details.
Commands for Configuring nPartitions
You can use the nPartition commands to create, modify, monitor, and remove nPartitions; get detailed server hardware information; manipulate attention indicators (LEDs) and power; and modify server complex attributes such as the complex name.
Table 1-3 describes the two nPartition commands releases, the Original nPartition Commands
and the Enhanced nPartition Commands.
The nPartition commands include: parcreate, parmodify, parremove, parstatus, parunlock, fruled, frupower, and cplxmodify. Table 1-4 “nPartition Commands
Descriptions” briefly describes each of the commands.
When using these commands, specify cells and I/O chassis using the notations in “Specifying
Cells and I/O Chassis to Commands” (page 243).
Remote management using the commands is supported as described in “Specifying Remote
Management Options to Commands” (page 247).
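For example, the following sketch combines the two cell notations with the remote management options. The cell IDs, hostnames, account name, and passwords are illustrative placeholders, and the option forms are summarized informally here; verify them against Appendix A before use.

    # Cell specified in global cell number format
    parstatus -c 2

    # The same cell in hardware location format (cabinet 0, cell slot 2)
    parstatus -c 0/2

    # Remote management using WBEM (-u username:password -h hostname)
    parstatus -P -u root:secret -h complex1.example.com

    # Remote management using IPMI over LAN (-g password -h service-processor-hostname)
    parstatus -P -g secret -h sp-hostname.example.com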
Table 1-3 nPartition Commands Releases

Original nPartition Commands
• Support only local management of nPartitions and complexes.
• Were distributed with HP-UX 11i v1 (B.11.11) releases prior to the December 2004 release.
• Supported by HP-UX kernels built with nPartition support enabled (the hd_fabric driver) and use the libfab.1 library.
• Installed as part of the HP-UX 11i Version 1 operating system installation prior to the December 2004 release.

Enhanced nPartition Commands
• Support both local management and remote management of nPartitions and complexes.
• Distributed with the HP-UX 11i v3 (B.11.31) release. Installed and supported for use on all systems that run HP-UX 11i Version 3.
• Distributed with the HP-UX 11i v2 (B.11.23) release. Installed and supported for use on all systems that run HP-UX 11i Version 2.
• Distributed with the HP-UX 11i v1 (B.11.11) December 2004 release and later.
• Available for Windows (32-bit) and Windows (64-bit). Distributed with the Smart Setup CD.
• Available for Red Hat Enterprise Linux and SuSE Linux Enterprise Server. Distributed with the HP Integrity Essentials Foundation Pack for Linux.
Table 1-4 describes the nPartition configuration commands and lists sections where you can find
each command's syntax and details.
Table 1-4 nPartition Commands Descriptions

parcreate
    Create a new nPartition; root or IPMI LAN access is required. See “parcreate Command” (page 249).
parmodify
    Modify an existing nPartition; root or IPMI LAN access is required. See “parmodify Command” (page 252).
parremove
    Remove an existing nPartition; root or IPMI LAN access is required. See “parremove Command” (page 256).
parstatus
    Display nPartition information and hardware details for a server complex. See “parstatus Command” (page 258).
parunlock
    Unlock Complex Profile data (use this command with caution); root or IPMI LAN access is required. See “parunlock Command” (page 260).
fruled
    Blink the attention indicators (LEDs) or turn them off. This command can control these indicators for cells, I/O chassis, and cabinet numbers. See “fruled Command” (page 262).
frupower
    Display status or turn power on or off for cells and I/O chassis; root or IPMI LAN access is required. See “frupower Command” (page 264).
cplxmodify
    Modify server complex attributes. Supports changing the name of a complex; root or IPMI LAN access is required. Distributed only with the Enhanced nPartition Commands. See “cplxmodify Command” (page 266).
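As a brief illustration, the following hedged sketch strings several of these commands together. The partition name, partition number, and cell IDs are placeholders; see Appendix A for the exact option syntax.

    # Create a new nPartition from cells 2 and 4 (base cells, used on the
    # next boot, failure usage "ri"), then list all nPartitions
    parcreate -P "hrdb" -c 2:base:y:ri -c 4:base:y:ri
    parstatus -P

    # Add cell 6 to nPartition 1, then rename the nPartition
    parmodify -p 1 -a 6:base:y:ri
    parmodify -p 1 -P "hrdb2"

    # Remove nPartition 1 (the nPartition typically must be inactive first)
    parremove -p 1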
Availability of nPartition Commands
The Original nPartition Commands were distributed as part of HP-UX 11i v1 (B.11.11) releases prior to the December 2004 release.
The Enhanced nPartition Commands are distributed with current HP-UX releases, including the HP-UX 11i v3 (B.11.31) release, all releases of HP-UX 11i v2 (B.11.23), and releases of HP-UX 11i v1 (B.11.11) beginning with the December 2004 release.
The Enhanced nPartition Commands also are distributed as bundles on the HP Smart Setup CD for Windows, and as bundles on the HP Integrity Essentials Foundation Pack for Linux.
You can download the nPartition Commands for Windows and Linux from the http://www.hp.com/ Web site. See “Downloading Enhanced nPartition Commands for Windows” and “Downloading the HP Integrity Essentials Foundation Pack for Linux”.
Enhanced nPartition Commands for Windows
The Enhanced nPartition Commands for Windows are available in both 32-bit and 64-bit versions.
The Windows (32-bit) Enhanced nPartition Commands are designed for any 32-bit system running Windows XP, Windows 2000 with Service Pack 3, or Windows Server 2003 (32-bit).
The 32-bit nPartition Commands enable you to use a 32-bit system as a remote management station for nPartition administration.
The Windows (64-bit) Enhanced nPartition Commands are designed for HP Integrity servers running Windows Server 2003, 64-bit, Enterprise Edition or Datacenter Edition.
The 64-bit nPartition Commands enable you to perform local and remote management of nPartitions when running on a cell-based server with Windows Server 2003, and enable you to perform remote management when running on other HP Integrity servers with Windows Server 2003.
The Smart Setup CD has both the 32-bit and 64-bit versions of the nPartition Commands. You also can download the nPartition Commands bundles for Windows from the http://www.hp.com/ Web site.
Procedure 1-1 Downloading Enhanced nPartition Commands for Windows
You can download the Enhanced nPartition Commands for Windows from the http://www.hp.com/ Web site.
1. Go to the http://www.hp.com/ Web site and choose Software & Driver Downloads.
2. At the Software & Driver Downloads page, in the for product box, enter the name of a cell-based HP Integrity server, such as:
Integrity Superdome
Integrity rx8620
Integrity rx7620
3. At the Downloads for HP Business Support Center Web page, choose Microsoft Windows Server 2003 64-Bit from the select operating system list.
4. At the next Downloads for HP Business Support Center Web page, choose HP nPartition
Commands Bundle (Windows Server 2003 64-Bit) or choose HP nPartition Commands Bundle (Windows Server 2003 32-Bit) from the Utility Partition Management heading.
After you choose the nPartition commands bundle, the HP nPartition Commands Bundle Web page displays information about the software bundle and provides options for downloading the software and for viewing the release notes.
5. To view the release notes, choose the Release Notes tab.
Read the Installation instructions section of the release notes and the features summary before downloading and installing the software.
6. To download the Enhanced nPartition Commands for Windows, choose download from the Web page.
Enhanced nPartition Commands for Linux
The HP Integrity Essentials Foundation Pack for Linux is a CD that includes Enhanced nPartition Commands for use with Red Hat Enterprise Linux or SuSE Linux Enterprise Server.
Procedure 1-2 Downloading the HP Integrity Essentials Foundation Pack for Linux
You can download the HP Integrity Essentials Foundation Pack for Linux from the http://www.hp.com/go/softwaredepot Web site.
The downloadable CD image is an .iso file that you can use to record a usable CD.
1. Go to the http://www.hp.com/go/softwaredepot Web site.
2. At the Software Depot home page, enter Foundation Pack for Linux in the Search field to search the Software Depot.
3. At the product catalog page that displays the search results, choose the HP Integrity Essentials Foundation Pack for Linux on Itanium (R) 2-based Servers entry from the list of products.
4. At the HP Integrity Essentials Foundation Pack for Linux product details page, in the to order section of the page (where it states "Click here to download"), choose the word here.
5. To download the HP Integrity Essentials Foundation Pack for Linux CD image file, choose download from the Web page.
You will use the .iso file that you downloaded to create a usable CD.
6. Go to the http://docs.hp.com/linux/ Web site to view documentation for the HP Integrity Essentials Foundation Pack for Linux.
Under the Linux for Itanium 2-based Servers and Workstations heading of the Linux documentation Web site, view the HP Integrity Essentials Foundation Pack for Linux documentation.
7. Record a CD using the HP Integrity Essentials Foundation Pack for Linux CD image file.
The CD image (.iso file) is a complete CD image in one file. Copying the file to a CD does not create a usable CD. Instead, use a software application that supports recording a CD from a CD image.
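For example, on a Linux system you could record the downloaded image with the cdrecord utility (a sketch only; the device name and image filename shown here are hypothetical, and your system may provide a different recording tool):
# cdrecord dev=/dev/cdrw -v foundation-pack.iso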
Partition Manager
Partition Manager provides a graphical interface for managing cell-based servers.
Partition Manager Version 1.0 was distributed with HP-UX 11i v1 (B.11.11) releases prior to the December 2004 release. Starting with the HP-UX 11i v1 December 2004 release, Partition Manager Version 2.0 replaces Partition Manager Version 1.0 for HP-UX 11i v1 systems.
Partition Manager Version 2.0 is distributed and installed with the HP-UX 11i v3 (B.11.31) release and all HP-UX 11i v2 (B.11.23) releases. It also is available for Windows (32-bit) and Windows (64-bit) as part of the Smart Setup CD, and is available for Linux on the HP Integrity Essentials Foundation Pack for Linux.
Partition Manager Version 2.0 for HP-UX provides the /opt/parmgr/bin/parmgr command to run, stop, and restart Partition Manager. Refer to the parmgr -h command or the parmgr(1M) manpage for command-line options.
Also see the Partition Manager Version 2.0 online help available at the following Web site:
http://docs.hp.com/en/PARMGR2/
To view the online help without running Partition Manager, you can open the help files using a Web browser either on a system where Partition Manager is installed, or on a system that has a downloaded copy of the help files.
Partition Manager Version 2.0 for Windows
The Partition Manager Version 2.0 for Windows can be installed and run on either 32-bit or 64-bit Windows systems. (A single Partition Manager bundle is provided for both 32-bit and 64-bit systems.)
Using Partition Manager for Windows on any 32-bit system running Windows XP, Windows 2000 with Service Pack 3, or Windows Server 2003 (32-bit) enables you to use a 32-bit system as a remote management station for nPartition administration.
Using Partition Manager for Windows on HP Integrity servers running Windows Server 2003, 64-bit, Enterprise Edition or Datacenter Edition enables you to perform local and remote management of nPartitions when running on a cell-based server with Windows Server 2003, and enables you to perform remote management when running on other HP Integrity servers with Windows Server 2003.
NOTE: Before installing the Partition Manager bundle for Windows you must download and install the nPartition Commands bundle (either the 32-bit or 64-bit version, depending on the platform where the installation occurs).
You also must download and install the Java 2 SE SDK v1.4.2 from http://java.sun.com/downloads. For details refer to the release notes.
The Smart Setup CD includes Partition Manager Version 2.0 for Windows. You also can download the Partition Manager bundles for Windows from the http://www.hp.com/ Web site.
Procedure 1-3 Downloading Partition Manager Version 2.0 for Windows
You can download Partition Manager Version 2.0 for Windows from the http://www.hp.com/ Web site.
1. Go to the http://www.hp.com/ Web site and choose Software & Driver Downloads.
2. At the Software & Driver Downloads page, in the for product box, enter the name of a cell-based HP Integrity server, such as:
Integrity Superdome
Integrity rx8620
Integrity rx7620
3. At the Downloads for HP Business Support Center Web page, choose Microsoft Windows Server 2003 64-Bit from the select operating system list.
4. At the next Downloads for HP Business Support Center Web page, choose HP Partition Manager Bundle from the Utility Partition Management heading.
After you choose the Partition Manager bundle, the HP Partition Manager Bundle Web page displays information about the software bundle and provides options for downloading the software and for viewing the release notes.
5. To view the release notes, choose the Release Notes tab.
Read the Installation instructions section of the release notes and the features summary before downloading and installing the software.
6. To download Partition Manager for Windows, choose download from the Web page.
nPartition Properties
This section describes the nPartition properties you work with when performing nPartition administration tasks.
The following nPartitions details are covered here:
“Partition Numbers”
“Assigned and Unassigned Cells”
“Base Cells”
“Core Cells”
“Active and Inactive Cells”
“Cell Local Memory”
“Cell Property Details”
“Active and Inactive nPartition Boot States”
Partition Numbers
Each nPartition has its own unique partition number that the nPartition administration tools use for identifying the nPartition.
When you create an nPartition, the tool you use assigns the nPartition the lowest available partition number. For example, the first nPartition always is partition number 0, and the second nPartition to be created is partition number 1.
After you remove an nPartition, no cells are assigned to the nPartition. As a result, the nPartition tools can reuse the partition number when creating a new nPartition.
For example, after you remove partition number 2, the next time you create a new nPartition the parcreate command or Partition Manager will assign cells to partition number 2 when creating a new nPartition, if all lower-numbered nPartitions (partition numbers 0 and 1) already are defined.
Assigned and Unassigned Cells
Each cell in a server complex either is assigned to one of the nPartitions in the complex, or it is unassigned and thus is not used by any of the nPartitions. If an I/O chassis is attached to an unassigned cell, then the chassis likewise is not assigned to an nPartition.
Cells that are unassigned are considered to be available resources; they are free to be assigned to any of the existing nPartitions, or can be used to create new nPartitions.
Base Cells
On both HP 9000 servers and HP Integrity servers, all cells within an nPartition are base cells.
The nPartitions administration tools automatically set the cell type to base cell, if you do not specify the cell type.
Core Cells
One cell in each nPartition must serve as the active core cell. The core cell controls the nPartition until an operating system has booted, and it provides console services and other boot and management abilities for the nPartition. The monarch processor on the core cell runs the Boot Console Handler (BCH) or Extensible Firmware Interface (EFI) code while all other processors are idle until an operating system is booted.
On first-generation cell-based servers and HP sx1000 chipset-based servers, core I/O is provided by a PCI card residing in an I/O chassis. On these servers, to be eligible as a core cell, a cell must be assigned to the nPartition, it must be active, and it must be attached to an I/O chassis containing functional core I/O.
On HP sx2000 chipset-based servers, core I/O is provided on each cell, so any cell assigned to an nPartition can be a core cell.
Although an nPartition can have multiple core-capable cells, only one core I/O is actively used in an nPartition: the core I/O belonging to the active core cell.
For details about setting and using the core cell choices (or "alternates") for an nPartition see
“Setting nPartition Core Cell Choices” (page 194). When none of the core cell choices can serve
as the active core cell, or if no core cell choices are specified, the nPartition attempts to select an eligible cell using a default process.
Active and Inactive Cells
Cells that are assigned to an nPartition and have booted to form an nPartition are active cells whose resources (processors, memory, and any attached I/O) can be actively used by software running in the nPartition.
Cells that are inactive either are not assigned to an nPartition, or they have not participated in partition rendezvous to form an nPartition with any other cells assigned to the nPartition. (Partition rendezvous is the point during the nPartition boot process when all available cells in an nPartition join together to establish which cells are active for the current boot of the nPartition.)
For example, a cell is inactive when it is powered off, has booted with a "n" use-on-next-boot value, or is assigned to an nPartition that has been reset to the shutdown for reconfig state.
The resources belonging to inactive cells are not actively used by an nPartition. For a cell and its resources to be actively used the cell must boot and participate in partition rendezvous.
Cell Local Memory
On cell-based servers that are based on the HP sx1000 or sx2000 chipset, a portion of the memory in each cell can be configured as cell local memory (CLM), which is non-interleaved memory that can be quickly accessed by processors residing on the same cell as the memory.
CAUTION: Memory configured as cell local memory can be used only by operating systems that support it.
Any memory configured as cell local memory is unusable when an nPartition runs an operating system that does not support it.
The nPartition management tools enable you to configure CLM for each cell either as a percentage of the total memory in the cell, or as an absolute number of gigabytes of memory.
For details about configuring CLM see Chapter 3 (page 61).
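For illustration, the following parmodify sketch configures CLM for cell 0 of partition 0, first as a percentage and then as an absolute size. The partition and cell numbers are hypothetical, and the -m field layout shown (cell:[type]:[use-on-next-boot]:[failure-usage]:[clm]) is an assumption to be verified against the parmodify details in Chapter 6 for your release:
# parmodify -p0 -m0:base:y:ri:50%
# parmodify -p0 -m0:base:y:ri:4GB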
Cell Property Details
Each cell has various properties that determine how the cell can be used and managed.
To list the properties of cells in a server complex, you can use the parstatus -C command, parstatus -V -c# command, or Partition Manager.
The parstatus -C command output includes cell property summaries such as the current assignments, usage, and I/O details for all cells in the complex.
# parstatus -C
[Cell]
                        CPU      Memory                                Use
                        OK/      (GB)                          Core    On
Hardware   Actual       Deconf/  OK/                           Cell    Next Par
Location   Usage        Max      Deconf   Connected To         Capable Boot Num
========== ============ ======== ======== ==================== ======= ==== ===
cab0,cell0 active core  4/0/4    8.0/ 0.0 cab0,bay0,chassis1   yes     yes  0
cab0,cell1 active base  4/0/4    8.0/ 0.0 -                    no      yes  0
cab0,cell2 active base  4/0/4    8.0/ 0.0 cab0,bay1,chassis3   yes     yes  0
cab0,cell3 absent       -        -        -                    -       -    -
cab0,cell4 active core  2/0/4    4.0/ 0.0 cab0,bay0,chassis3   yes     yes  1
cab0,cell5 active base  2/0/4    4.0/ 0.0 -                    no      yes  1
cab0,cell6 active base  2/0/4    4.0/ 0.0 cab0,bay1,chassis1   yes     yes  1
cab0,cell7 absent       -        -        -                    -       -    -
#
The parstatus -V -c# command gives detailed information about the properties and status for the cell (-c#) that you specify.
# parstatus -V -c0
[Cell]
Hardware Location    : cab0,cell0
Global Cell Number   : 0
Actual Usage         : active core
Normal Usage         : base
Connected To         : cab0,bay0,chassis0
Core Cell Capable    : yes
Firmware Revision    : 20.1
Failure Usage        : activate
Use On Next Boot     : yes
Partition Number     : 0
Partition Name       : Partition 0

[CPU Details]
Type  : 8820
Speed : 900 MHz
CPU   Status
===   ======
0     ok
1     ok
2     ok
3     ok
4     ok
5     ok
6     ok
7     ok
CPUs
===========
OK     : 8
Deconf : 0
Max    : 8

[Memory Details]
DIMM  Size (MB)  Status
====  =========  ======
0A    2048       ok
4A    2048       ok
0B    2048       ok
4B    2048       ok
1A    2048       ok
5A    2048       ok
1B    2048       ok
5B    2048       ok
2A    2048       ok
6A    2048       ok
2B    2048       ok
6B    2048       ok
3A    2048       ok
7A    2048       ok
3B    2048       ok
7B    2048       ok
Memory
=========================
DIMM OK       : 16
DIMM Deconf   : 0
Max DIMMs     : 16
Memory OK     : 32.00 GB
Memory Deconf : 0.00 GB
#
Active and Inactive nPartition Boot States
Each nPartition has a boot state of either active or inactive.
The boot state indicates whether the nPartition has booted so that it may be interactively accessed through its console (active nPartitions) or whether it cannot be used interactively (inactive nPartitions).
You can use the parstatus -P command or Partition Manager to list all nPartitions and their boot states (active or inactive status).
# parstatus -P
[Partition]
Par              # of  # of I/O
Num Status       Cells Chassis  Core cell   Partition Name (first 30 chars)
=== ============ ===== ======== =========== ===============================
 0  inactive     2     1        ?           feshd5a
 1  active       2     1        cab1,cell2  feshd5b
#
Likewise, you can view nPartition boot states using the Virtual Front Panel, which is available from the service processor Main menu for the server complex.
Active nPartition An nPartition that is active has at least one core-capable cell that is active (not in a boot-is-blocked state). When an nPartition is active, one or more of the cells assigned to the nPartition have completed partition rendezvous and the system boot interface (the BCH or EFI environment) has loaded and been displayed through the nPartition console. An operating system can be loaded and run from the system boot interface on an active nPartition.
Inactive nPartition An inactive nPartition is considered to be in the shutdown for reconfig state, because all cells assigned to the nPartition either remain at a boot-is-blocked state or are powered off.
To make an inactive nPartition active, use the BO command at the service processor (MP or GSP) Command menu. The BO command clears the boot-is-blocked flag for all cells assigned to the nPartition, thus allowing the cells to rendezvous and enabling the nPartition to run the system boot interface. (If all cells assigned to an nPartition are powered off, you must power on its cells to enable the nPartition to become active.)
To make an nPartition inactive perform a shutdown for reconfig. You can issue commands from the operating system, the system boot interface (BCH or EFI), or the service processor (MP or GSP) Command menu. All three methods reboot an nPartition and hold all of its cells at boot-is-blocked; as a result the nPartition is shutdown for reconfig (placed in an inactive state). For details see Chapter 5 (page 87).
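For instance, activating an inactive nPartition from the service processor Command menu might look like the following abbreviated session (a sketch; the exact prompts vary by service processor firmware revision):
GSP:CM> BO
Select a partition number: 0
Do you want to boot partition number 0? (Y/[N]) Y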
Overview of Managing nPartitions
This section provides overviews of common nPartition management tasks.
The following task overviews are given here:
“Basics of Listing nPartition and Complex Status”
“Basics of nPartition Creation”
“Basics of nPartition Modification”
“Basics of nPartition Booting and Resetting”
Basics of Listing nPartition and Complex Status
You can list server complex hardware details and nPartition configuration details using the following tools and commands.
For details see Chapter 8 (page 223).
Service processor (MP or GSP) methods for listing hardware and nPartition status include the following commands, which are available from the service processor Command menu.
— CP — List nPartition configurations, including all assigned cells.
— PS — List cabinet, power, cell, processor, memory, I/O, and other details.
— IO — List connections from cells to I/O chassis on HP Superdome servers.
— ID — List product and serial numbers.
EFI Shell methods (available only on HP Integrity servers) for listing hardware and nPartition status include the following commands. Hardware and nPartition information displayed by the EFI Shell is limited to the local nPartition.
— info sys — List the local nPartition number and active cell details.
— info io — List the I/O configuration.
— info mem — List memory details.
— info cpu — List processor details.
BCH menu methods (available only on HP 9000 servers) for listing hardware and nPartition status include the following commands. Hardware and nPartition information displayed by the BCH menu is limited to the local nPartition in most cases.
— Information menu, PR command — List processor configuration details.
— Information menu, ME command — List memory configuration details.
— Information menu, IO command — List I/O configuration details.
— Information menu, CID command — List complex product and serial numbers.
— Configuration menu, PD command — List the local nPartition number and name.
nPartition administration tools for listing hardware and nPartition status include the following features.
— Partition Manager Version 1.0 — The Complex → Show Complex Details action provides
complex status information; use the Cells tab, CPUs/Memory tab, I/O Chassis tab, and Cabinet Info tab to display selected details.
— Partition Manager Version 2.0 — The following user interface features provide nPartition
and complex status:
General tab, Hardware tab, nPartitions tab, Cells tab, I/O tab, CPUs/Memory tab, Power and Cooling tab. Also, the Complex → Show Complex Details action.
— parstatus -C command — List cell configurations.
— parstatus -V -c# command — List detailed cell information.
— parstatus -I command, rad -q command on HP-UX 11i v1 (B.11.11) systems, and olrad -q command on HP-UX 11i v2 (B.11.23) and HP-UX 11i v3 (B.11.31) systems — List I/O chassis and card slot details.
— parstatus -B command — List server cabinet summaries for the complex.
— parstatus -V -b# command — List detailed server cabinet status.
— parstatus -X command — List product and serial numbers.
— parstatus -P command — List a configuration summary for all nPartitions.
— parstatus -V -p# command — List detailed nPartition configuration information.
— parstatus -w command — List the local nPartition number.
— frupower -d -C command or frupower -d -I command — List power status for all cells (-C) or all I/O chassis (-I).
For further details and summaries see Table 8-1 (page 224).
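As a quick sketch, several of the commands listed above can be run in sequence from an HP-UX shell on any nPartition in the complex, here listing the complex product and serial numbers, the local partition number, and cell power status:
# parstatus -X
# parstatus -w
# frupower -d -C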
Basics of nPartition Creation
Creating an nPartition involves using an nPartition administration tool to assign one or more cells in a complex to the new nPartition. At the time an nPartition is created you also can optionally specify various configuration options for the nPartition, such as its name, cell use-on-next-boot values, and other details. After an nPartition is created you can modify the nPartition, as described in “Basics of nPartition Modification” (page 30). For detailed procedures see Chapter 6 (page 165).
NOTE: When creating an nPartition, follow the HP nPartition requirements and guidelines. HP recommends only specific sets of nPartition configurations. For nPartition configuration requirements and recommendations, see Chapter 3 (page 61).
The method you choose for creating an nPartition can depend on whether you are creating the first nPartition in a complex, creating a "Genesis Partition" for a complex, or creating an additional nPartition in a complex that already has one or more nPartitions defined.
Creating the First nPartition in a Server Complex You can create the first nPartition in a complex either by creating a Genesis Partition or by using an nPartition administration tool to remotely manage the complex using IPMI over LAN.
— All cell-based servers support creating a Genesis Partition. See “Creating a Genesis
Partition for a Server Complex” (page 29).
— Only cell-based servers based on the HP sx1000 or sx2000 chipset support remote
administration using IPMI over LAN.
From a system with the Enhanced nPartition Commands, use the parcreate command with the -g... -h... set of options. Or, from Partition Manager Version 2.0, use the Switch Complexes dialog to connect to the complex and then use the nPartition → Create nPartition action.
For remote administration details see “Remote and Local Management of nPartitions”
(page 41).
Creating a Genesis Partition for a Server Complex Creating a Genesis Partition involves using the service processor (MP or GSP) CC command to create an initial, one-cell nPartition within the server complex. To create a Genesis Partition, the complex either must have no nPartitions defined, or all nPartitions must be shutdown for reconfig (inactive). For details see “Genesis Partition” (page 30).
Creating Additional nPartitions in a Server Complex You can use either of two methods to create additional nPartitions in a complex where one or more nPartitions already are defined: use parcreate or Partition Manager from an nPartition running in the complex, or use the remote administration feature of those tools running on a system outside the complex. For a detailed procedure see “Creating a New nPartition” (page 172).
— Creating a New nPartition Locally — To create a new nPartition in the same complex where parcreate or Partition Manager is running, at least one nPartition must be booted with an operating system that has the nPartition tools installed.
Log in to HP-UX on the nPartition and issue the parcreate command, or access Partition Manager running on the nPartition and use its Create nPartition action.
— Creating a New nPartition Remotely — To create a new nPartition in a complex remotely, use either the Enhanced nPartition Commands version of parcreate or Partition Manager Version 2.0.
Only cell-based servers based on the HP sx1000 or sx2000 chipset support remote administration.
Both parcreate and Partition Manager support two methods of remote administration: WBEM and IPMI over LAN. For remote administration using WBEM the tool remotely accesses a booted operating system running on an nPartition in the target complex (for
example, by the -u... -h... set of options). For remote administration using IPMI over LAN the tool remotely accesses the service processor of the target complex (for example, by the -g... -h... set of options).
For remote administration details see “Remote and Local Management of nPartitions”
(page 41).
For detailed procedures for creating and managing nPartitions see Chapter 6 (page 165).
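For illustration, the following sketch creates a two-cell nPartition first locally and then remotely using IPMI over LAN. The partition name, cell numbers, and service processor hostname are hypothetical, and the exact option syntax is documented in “parcreate Command” (page 249):
# parcreate -P "feshd5c" -c4:base:y:ri -c5:base:y:ri
# parcreate -g -h sp-hostname -P "feshd5c" -c4:base:y:ri -c5:base:y:ri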
Genesis Partition
The Genesis Partition is the initial, one-cell nPartition created within a server complex by the service processor (MP or GSP) CC command. The Genesis Partition is just like any other nPartition except for how it is created and the fact that its creation wipes out any previous nPartition configuration data.
For a detailed procedure see “Creating a Genesis Partition” (page 170).
If your server complex has its nPartitions pre-configured by HP, you do not need to create a Genesis Partition.
NOTE: For servers based on the HP sx1000 or sx2000 chipset, you can instead use nPartition tools running on a remote system to remotely create and configure new nPartitions (including the first nPartition in the complex).
See “Remote and Local Management of nPartitions” (page 41) for details.
You can use nPartition management tools running on the Genesis Partition as the method for configuring all nPartitions in the complex. The Genesis Partition always is partition number 0.
When it is first created, the Genesis Partition consists of one cell that is connected to an I/O chassis that has core I/O installed. The Genesis Partition also should have a bootable disk (or a disk onto which you can install an operating system).
If an operating system is not installed on any disks in the Genesis Partition, you can boot the Genesis partition to the system boot interface (either BCH or EFI) and from that point install an operating system. This installation requires either having access to an installation server, or to a CD drive (or DVD drive) attached to an I/O chassis belonging to the nPartition.
After you boot an operating system on the Genesis Partition, you can modify the nPartition to include additional cells. You also can create other, new nPartitions and can modify them from the Genesis Partition or from any other nPartition that has an operating system with the nPartition tools installed.
Basics of nPartition Modification
Modifying an nPartition involves using an nPartition administration tool to revise one or more parts of the server Complex Profile data, which determines how hardware is assigned to and used by nPartitions. The Complex Profile is discussed in “Complex Profile” (page 36).
For detailed procedures see Chapter 6 (page 165).
You can modify an nPartition either locally or remotely.
For local administration, use nPartition Commands or Partition Manager from an nPartition in the same complex as the nPartition to be modified. Some nPartition details also can be modified locally from an nPartition console by using EFI Shell commands or BCH menu commands.
For remote administration, use remote administration features of the Enhanced nPartition Commands or Partition Manager Version 2.0.
You can use either of two methods for remote administration: WBEM and IPMI over LAN.
— For remote administration using WBEM, the tool remotely accesses an operating system running on an nPartition in the target complex.
Use the -u... -h... set of parmodify options, or the Partition Manager Switch Complexes action with the "remote nPartition" option.
— For remote administration using IPMI over LAN, the tool remotely accesses the service processor of the target complex.
Use the -g... -h... set of parmodify options, or the Partition Manager Switch Complexes action with the "remote partitionable complex" option.
See “Remote and Local Management of nPartitions” (page 41) for details.
nPartition Modification Tasks
The following tasks are among the basic procedures for modifying nPartitions.
Assigning and Unassigning Cells
To assign (add) or unassign (remove) cells from an nPartition use the parmodify -p# -a#... command to add a cell, or the parmodify -p# -d#... command to remove a cell from the specified nPartition (-p#, where # is the partition number). From Partition Manager select the nPartition, use the nPartition → Modify nPartition action, and select the Add/Remove Cells tab.
Also see “Assigning (Adding) Cells to an nPartition” (page 179) and see “Unassigning
(Removing) Cells from an nPartition” (page 182).
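A minimal sketch with hypothetical partition and cell numbers follows; note that a change affecting an active cell remains pending until the nPartition is rebooted for reconfig, as described in “Complex Profile” (page 36):
# parmodify -p1 -a2:base:y:ri
# parmodify -p1 -d3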
Removing an nPartition
To remove (delete) an nPartition use the parremove -p# command to remove a specified nPartition (-p#, where # is the partition number). From Partition Manager select the nPartition and use the nPartition → Delete nPartition action.
Also see “Removing (Deleting) an nPartition” (page 176).
Renaming an nPartition
To rename an nPartition use the parmodify -p# -P name command to set the name for a specified nPartition (-p#, where # is the partition number). From Partition Manager select the nPartition, use the nPartition → Modify nPartition action, and select the General tab.
On an HP 9000 server you also can use the BCH Configuration menu PD NewName command.
Also see “Renaming an nPartition” (page 185).
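For example, this one-line sketch renames partition 1 (the name is hypothetical):
# parmodify -p1 -P "feshd5b"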
Setting Cell Attributes
To set attributes for a cell use the parmodify -p# -m#... command to modify cell attributes for a specified nPartition (-p#, where # is the partition number).
From Partition Manager Version 1.0 select the nPartition, use the nPartition → Modify nPartition action and the Change Cell Attributes tab, select the cell(s), and click Modify Cell(s).
From Partition Manager Version 2.0 select the nPartition, use the nPartition → Modify nPartition action, and use the Set Cell Options tab (to set the use-on-next-boot value) and the Configure Memory tab (to set the cell local memory value).
On an HP 9000 server you also can use the BCH Configuration menu CELLCONFIG command to set use-on-next-boot values. On an HP Integrity server you also can use the EFI Shell cellconfig command to set use-on-next-boot values.
Also see “Setting Cell Attributes” (page 189).
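For example, the following sketch sets cell 2 of partition 1 to not be used on the next boot. The numbers are hypothetical, and the -m field layout shown (cell:[type]:[use-on-next-boot]:[failure-usage]) should be verified against the parmodify details for your release:
# parmodify -p1 -m2:base:n:ri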
Setting Core Cell Choices
To set core cell choices for an nPartition use the parmodify -p# -r# -r#... command to specify up to four core cell choices in priority order for a specified nPartition (-p#, where
# is the partition number).
From Partition Manager Version 1.0 select the nPartition, use the nPartition → Modify nPartition action, and select the Core Cell Choices tab.
From Partition Manager Version 2.0 select the nPartition, use the nPartition → Modify nPartition action and the Set Cell Options tab, and use the Core Cell Choice column to set priorities.
On an HP 9000 server you can use the BCH Configuration menu COC command to set core cell choices. On an HP Integrity server you can use the EFI Shell rootcell command to set core cell choices.
Also see “Setting nPartition Core Cell Choices” (page 194).
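For example, this sketch lists cell 0 as the first core cell choice and cell 2 as the second for partition 0 (hypothetical values):
# parmodify -p0 -r0 -r2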
Setting nPartition Boot Paths
On HP Integrity servers boot paths can be listed and configured only from the local nPartition.
From HP-UX use the setboot command to configure the local nPartition boot paths, or use the parmodify -p# -b... -s... -t... command to set boot paths for a specified
nPartition (-p#, where # is the partition number).
On an HP 9000 server you can use the BCH Main menu PATH command to configure boot paths. On an HP Integrity server you can use the EFI Shell bcfg command to configure boot paths.
Also see “Configuring Boot Paths and Options” (page 155).
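For example, from HP-UX on the local nPartition, either of the following sketches sets the primary boot path (the device path is hypothetical):
# setboot -p 0/0/2/0/0.6
# parmodify -p0 -b 0/0/2/0/0.6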
For more details and summaries see Table 6-1 (page 166).
Basics of nPartition Booting and Resetting
This section gives a brief overview of the boot process for cells and nPartitions and lists the main nPartition boot commands and tasks.
For more details see Chapter 5 (page 87).
Boot Process for Cells and nPartitions
The nPartition boot process, on both HP 9000 servers and HP Integrity servers, includes two phases: the cell boot phase and the nPartition boot phase.
1. Cell Boot Phase of the nPartition Boot Process The cell boot phase occurs when cells are powered on or reset. The main activities that occur during the cell boot phase are power-on-self-test activities. During this phase each cell operates independently of all other cells in the complex. Cells do not necessarily proceed through this phase at the same pace, because each cell may have a different amount of hardware to test and discover, or cells might be reset or powered on at different times. The main steps that occur during the cell boot phase are:
a. A cell is powered on or reset, and the cell boot-is-blocked (BIB) flag is set.
BIB is a hardware flag on the cell board. When BIB is set, the cell is considered to be inactive.
b. Firmware on the cell performs self-tests and discovery operations on the cell hardware
components. Operations at this point include processor self-tests, memory tests, I/O
discovery, and discovery of interconnecting fabric (connections between the cell and other cells, I/O, and system crossbars).
c. After the firmware completes cell self-tests and discovery, it reports the cell hardware
configuration to the service processor (GSP or MP), informs the service processor it is "waiting at BIB", and then waits for the cell BIB flag to be cleared.
2. nPartition Boot Phase of the nPartition Boot Process The nPartition boot phase occurs when
an nPartition is booted, after its cells have completed self tests. During this phase "nPartition rendezvous" occurs; however, not all cells assigned to an nPartition are required to participate in rendezvous. A minimum of one core-capable cell that has completed its cell boot phase is required before the nPartition boot phase can begin. By default, all cells assigned to the nPartition that have a "y" use-on-next-boot value are expected to participate in rendezvous, and the service processor will wait for up to ten minutes for all such cells to reach the "waiting at BIB" state. Cells that have a "n" use-on-next-boot value do not participate in rendezvous and remain waiting at BIB. The main steps that occur during the nPartition boot phase are:
a. The service processor provides a copy of the relevant Complex Profile data to the cells
assigned to the nPartition.
This data includes a copy of the Stable Complex Configuration Data and a copy of the Partition Configuration Data for the nPartition. For details see “Complex Profile”
(page 36).
b. The service processor releases BIB for all cells assigned to the nPartition that have a "y"
use-on-next-boot value and complete the cell boot phase in time.
The service processor does not release BIB for any cell with a "n" use-on-next-boot value, or for any cell that did not complete the cell boot phase within ten minutes of the first cell to do so.
Once BIB is released for a cell, the cell is considered to be active.
c. nPartition rendezvous begins, with the system firmware on each active cell using its
copy of complex profile data to contact other active cells in the nPartition.
d. The active cells in the nPartition negotiate to select a core cell.
e. The chosen core cell manages the rest of the nPartition boot process. A processor on the core cell runs the nPartition system boot environment (BCH on HP 9000 servers, EFI on HP Integrity servers). The core cell hands off control to an operating system loader when the OS boot process is initiated.
You can view progress during the cell and nPartition boot phases by observing the Virtual Front Panel for an nPartition, which is available from the service processor (MP or GSP) Main menu.
Common nPartition Boot Commands and Tasks
The following summary briefly describes the main nPartition boot commands and tasks. For more summaries and details see Table 5-1 (page 96).
Service processor (MP or GSP) support for managing nPartition booting includes the following commands, which are available from the service processor Command menu.
— RS — Reset an nPartition.
On HP Integrity servers you should reset an nPartition only after all self tests and partition rendezvous have completed.
— RR — Reset and perform a shutdown for reconfig of an nPartition.
On HP Integrity servers you should reset an nPartition only after all self tests and partition rendezvous have completed.
— BO — Boot the cells assigned to an nPartition past the "waiting at BIB" state and thus begin the nPartition boot phase.
— TC — Perform a transfer of control reset of an nPartition.
— PE — Power on or power off a cabinet, cell, or I/O chassis.
On HP Integrity rx8620 servers, rx8640 servers, rx7620 servers, and rx7640 servers, nPartition power on and power off also is supported to manage power of all cells and I/O chassis assigned to the nPartition using a single command.
EFI Shell support for managing nPartition booting includes the following commands. (EFI is available only on HP Integrity servers.)
— bcfg — List and configure the boot options list for the local nPartition.
— autoboot — List, enable, or disable the nPartition autoboot configuration value.
— acpiconfig — List and configure the nPartition ACPI configuration setting, which determines whether HP-UX, OpenVMS, Windows, or Linux can boot on the nPartition.
To boot HP-UX 11i v2 (B.11.23), HP-UX 11i v3 (B.11.31), or HP OpenVMS I64, the ACPI configuration setting must be set to default.
To boot Windows Server 2003, the ACPI configuration setting for the nPartition must be set to windows.
To boot Red Hat Enterprise Linux or SuSE Linux Enterprise Server:
◦ On HP rx7620 servers, rx8620 servers, or Integrity Superdome (SD16A, SD32A, SD64A), the ACPI configuration must be set to single-pci-domain.
◦ On HP rx7640 servers, rx8640 servers, or Integrity Superdome (SD16B, SD32B, SD64B), the ACPI configuration must be set to default.
— acpiconfig enable softpowerdown — When set, causes nPartition hardware to be powered off when the operating system issues a shutdown for reconfig command. On HP rx7620, rx7640, rx8620, and rx8640 servers with a windows ACPI configuration setting, this is the default behavior. Available only on HP rx7620, rx7640, rx8620, and rx8640 servers.
— acpiconfig disable softpowerdown — When set, causes nPartition cells to remain at BIB when the operating system issues a shutdown for reconfig command. In this case an OS shutdown for reconfig makes the nPartition inactive. On HP rx7620, rx7640, rx8620, and rx8640 servers this is the normal behavior for nPartitions with an ACPI configuration setting of default or single-pci-domain. Available only on HP Integrity rx7620, rx7640, rx8620, and rx8640 servers.
— reset — Resets the local nPartition, resetting all cells and then proceeding with the nPartition boot phase.
— reconfigreset — Performs a shutdown for reconfig of the local nPartition, resetting all cells and then holding them at the "wait at BIB" state, making the nPartition inactive.
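For example, the acpiconfig setting described above can be listed and changed from the EFI Shell of the local nPartition (a sketch; a reset is required before a changed ACPI configuration setting takes effect):
Shell> acpiconfig
Shell> acpiconfig windows
Shell> reset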
BCH menu support for managing nPartition booting includes the following commands. (BCH is available only on HP 9000 servers.)
— BOOT — Initiate an operating system boot from a specified boot device path or path variable.
— REBOOT — Resets the local nPartition, resetting all cells and then proceeding with the nPartition boot phase.
— RECONFIGRESET — Performs a shutdown for reconfig of the local nPartition, resetting all cells and then holding them at the "wait at BIB" state, making the nPartition inactive.
— PATH — List and set boot device path variables (PRI, HAA, ALT).
— Configuration menu, PATHFLAGS command — List and set the boot control flag for each boot path, effectively determining the nPartition autoboot behavior.
HP-UX includes the following commands for shutting down and rebooting the nPartition.
— shutdown -r — Shuts down HP-UX and resets the local nPartition, resetting cells and then proceeding with the nPartition boot phase.
On HP 9000 servers shutdown -r resets only the active cells.
On HP Integrity servers shutdown -r has the same effect as shutdown -R. All cells are reset and nPartition reconfiguration occurs as needed.
— shutdown -h — On HP 9000 servers, shuts down HP-UX, halts all processing on the nPartition, and does not reset cells.
On HP Integrity servers, shutdown -h has the same effect as shutdown -R -H and results in a shutdown for reconfig.
— shutdown -R — Shuts down HP-UX and performs a reboot for reconfig of the nPartition. All cells are reset and nPartition reconfiguration occurs as needed. The nPartition then proceeds with the nPartition boot phase.
— shutdown -R -H — Shuts down HP-UX and performs a shutdown for reconfig of the nPartition. All cells are reset and nPartition reconfiguration occurs as needed. All cells then remain at a "wait at BIB" state and the nPartition is inactive.
NOTE: On Superdome SX1000 PA and SX2000 PA, shutdown -R -H does not stop at BIB if the MP has been hot swapped since the last reboot.
On HP rx7620, rx7640, rx8620, and rx8640 servers with a default (to support HP-UX) ACPI configuration setting a "wait at BIB" state is the default behavior, but the acpiconfig enable softpowerdown EFI Shell command can be used to instead cause all nPartition hardware to power off.
HP OpenVMS I64 includes the following commands for shutting down and rebooting the nPartition.
— @SYS$SYSTEM:SHUTDOWN.COM — Shuts down the OpenVMS I64 operating system.
The @SYS$SYSTEM:SHUTDOWN.COM command provides a series of prompts that you use to establish the shutdown behavior, including the shutdown time and whether the system is rebooted after it is shut down.
To perform a reboot for reconfig from OpenVMS I64 running on an nPartition, issue @SYS$SYSTEM:SHUTDOWN.COM from OpenVMS, and then enter Yes at the "Should an automatic system reboot be performed" prompt.
To perform a shutdown for reconfig of an nPartition running OpenVMS I64, first issue @SYS$SYSTEM:SHUTDOWN.COM from OpenVMS and enter No at the "Should an automatic system reboot be performed" prompt; then access the MP and, from the MP Command Menu, issue the RR command and specify the nPartition that is to be shut down for reconfig.
— RUN SYS$SYSTEM:OPCRASH — Causes OpenVMS to dump system memory and then halt at the P00>> prompt. To reset the nPartition following OPCRASH, access the nPartition console and press any key to reboot.
Microsoft® Windows® includes the following commands for shutting down and rebooting the nPartition.
— shutdown /r — Shuts down Windows and performs a reboot for reconfig of the nPartition. All cells are reset and nPartition reconfiguration occurs as needed. The nPartition then proceeds with the nPartition boot phase.
— shutdown /s — Shuts down Windows and performs a shutdown for reconfig of the nPartition. The default behavior differs on HP Integrity Superdome servers and on HP Integrity rx7620, rx7640, rx8620, and rx8640 servers.
On HP Integrity Superdome servers, shutdown /s causes all cells to be reset and nPartition reconfiguration to occur as needed. All cells then remain at a "wait at BIB" state and the nPartition is inactive.
On HP Integrity rx7620, rx7640, rx8620, and rx8640 servers, the default behavior is for shutdown /s to cause nPartition hardware to be powered off. On these servers with a windows ACPI configuration setting, the acpiconfig disable softpowerdown EFI Shell command can be used to instead cause all cells to remain at a "wait at BIB" state.
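For example (a sketch, assuming the standard /t timeout option of the Windows shutdown command, which delays the operation by the given number of seconds):
shutdown /r /t 0
shutdown /s /t 60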
Red Hat Enterprise Linux and SuSE Linux Enterprise Server include the following commands for shutting down and rebooting the nPartition.
— shutdown -r time — Shuts down Linux and performs a reboot for reconfig of the nPartition. All cells are reset and nPartition reconfiguration occurs as needed. The nPartition then proceeds with the nPartition boot phase.
The required time argument specifies when the Linux shutdown is to occur. You can specify time in the format hh:mm, in which hh is the hour (one or two digits) and mm is the minute of the hour (two digits); or in the format +m, in which m is the number of minutes delay until shutdown; or specify now to immediately shut down.
— shutdown -h time — Shuts down Linux and performs a shutdown for reconfig of the nPartition. All cells are reset and nPartition reconfiguration occurs as needed. All cells then remain at a "wait at BIB" state and the nPartition is inactive.
The required time argument specifies when the Linux shutdown is to occur.
On HP rx7620, rx7640, rx8620, and rx8640 servers with an ACPI configuration setting of single-pci-domain, a "wait at BIB" state is the default OS shutdown for reconfig behavior, but the acpiconfig enable softpowerdown EFI Shell command can be used to instead cause all nPartition hardware to power off.
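For example, sketches of the time formats described above:
shutdown -r +5
shutdown -h 22:30
shutdown -r now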
For details see Chapter 5 (page 87).
Complex Profile
The configurable aspects of a server complex are represented in a set of data called the "Complex Profile", which determines how hardware is assigned to and used by nPartitions within a server.
The Complex Profile consists of three parts, or groups of data, which are described in detail in
Table 1-5 (page 40):
“Stable Complex Configuration Data” (page 40) — This group contains complex-wide settings, including the complex name, serial number, the nPartition assignment for each cell, and other details that apply to the entire server complex.
The Complex Profile contains one Stable Complex Configuration Data entry.
“Dynamic Complex Configuration Data” (page 40) — Architecturally reserved data.
“Partition Configuration Data” (page 40) — This group contains individual nPartition settings, including the nPartition name, core cell choices, and other details that are specific to an nPartition.
The Complex Profile contains a Partition Configuration Data entry for each possible nPartition. (A server complex may have a maximum of sixteen nPartitions, globally numbered from 0-15.)
The master copy of all parts of the Complex Profile resides on the service processor (MP or GSP) for the complex. Each cell in the complex also has a copy of the Stable Complex Configuration Data and a copy of the Partition Configuration Data for the nPartition to which it is assigned.
The service processor (MP or GSP) in the server manages all Complex Profile data and keeps all copies of the data coherent using a locking mechanism, as described in the next sections.
Changing the Server Complex Profile
To modify the Complex Profile and thus change the server complex configuration, you use an administration tool such as Partition Manager or one of the nPartition commands. See
“Administration Tools for nPartitions” (page 18) for details. You cannot directly edit the Complex
Profile data for a server.
The service processor maintains a set of locks that are used to ensure that only one set of changes to the Complex Profile occurs at a time.
When you configure nPartitions, the administration tools you use revise the Complex Profile for the server in coordination with the service processor. The tools acquire and release locks as needed when modifying Complex Profile entries. You do not directly manage Complex Profile locks under normal circumstances, but you can force an entry to be unlocked if required.
How the Complex Profile is Updated
A server Complex Profile is updated when you use one of the nPartition administration tools (such as Partition Manager or commands) to create, modify, or delete an nPartition or modify complex-wide data.
The general process by which changes to the Complex Profile occur is as follows:
1. An administrator uses an nPartition administration tool to request that a specific configuration change occurs.
This is a request to create, modify, or delete an nPartition or modify complex-wide data such as the complex name.
2. The tool acquires a lock from the service processor (MP or GSP) for the Complex Profile entry that will be revised.
The lock ensures that no other changes to the Complex Profile entry will occur while the tool revises it.
If the entry already is locked, that Complex Profile entry cannot be updated: the request fails and the tool exits with an error message.
3. The tool reads the Complex Profile entry whose lock it has acquired.
4. The tool revises the Complex Profile entry according to the administrator request.
5. The tool sends the revised Complex Profile entry back to the service processor along with
the corresponding lock key.
6. The service processor then "pushes out" the new, revised Complex Profile entry by updating
its copy and updating all cells that have a copy of the entry.
However, the service processor will not push out a revised Complex Profile entry that affects the nPartition assignment of an active cell. In this case the revised entry will remain pending until the cell becomes inactive, for example during a reboot for reconfig or shutdown for reconfig of the nPartition to which the cell is assigned.
7. After the service processor has pushed out the revised Complex Profile entry it clears the
lock for the entry.
After the entry is unlocked then, as needed, another nPartition configuration task can lock and revise that portion of the Complex Profile.
A single administration task can revise multiple Complex Profile entries. For example, you can create a new nPartition and assign its name in a single action. In this case the tool you use must lock both the Stable Complex Configuration Data and the Partition Configuration Data entry for the new nPartition before revising the data according to the administration request.
Multiple nPartition configuration tasks can occur essentially concurrently if all tasks revise different Complex Profile entries (thus allowing each task to acquire a lock for the entry it revises).
Complex Profile Entry Locking and Unlocking
Each Complex Profile entry has its own lock which is used to restrict access to the entry. If necessary you can manually unlock Complex Profile entries, however in nearly all situations you instead should allow the administration tools to automatically acquire and release locks.
CAUTION: You should generally avoid manually unlocking Complex Profile entries because doing so can result in the loss of configuration changes.
The locks for Complex Profile entries are managed as described here.
For the Stable Complex Configuration Data entry, there are slight differences in the locking mechanisms on HP 9000 and HP Integrity servers.
— On cell-based HP 9000 servers, the Stable Complex Configuration Data has a single
lock.
— On cell-based HP Integrity servers, the Stable Complex Configuration Data has two
separate locks: a "read lock" for restricting read access to the current Stable Complex Configuration Data entry, and a "write lock" for restricting access to a modifiable copy of the Stable Complex Configuration Data.
On both HP 9000 and HP Integrity cell-based servers there is one lock for each Partition Configuration Data entry (each nPartition has its own Partition Configuration Data entry).
The parunlock command and the service processor RL command enable you to manually unlock Complex Profile entries.
It can be necessary to manually unlock a Complex Profile entry in the situation where an nPartition configuration tool such as Partition Manager has prematurely exited. If such a tool exits before it sends revised Complex Profile entries and corresponding lock keys back to the service processor, the entries that the tool locked will remain locked indefinitely (until they are manually unlocked).
Manually Unlocking a Complex Profile Entry You can manually unlock Complex Profile entries after an nPartition configuration tool has exited before unlocking the entries it had locked. In this situation an attempt to modify the nPartition or complex-wide setting will fail because the Complex Profile entries still are locked. If you are certain no authorized users are changing configurations, use the parunlock command or the service processor RL command to unlock
the entries. After they are unlocked you can perform the modifications you had previously attempted. For details see “Unlocking Complex Profile Entries” (page 198).
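For example, the following sketch unlocks the Partition Configuration Data entry for partition 1 and then the Stable Complex Configuration Data. The -p and -s options shown are assumptions to be verified against “parunlock Command” (page 260) for your release:
# parunlock -p1
# parunlock -s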
Aborting a Complex Profile Change A pending update of the Complex Profile can be canceled or prevented by clearing the lock for a Complex Profile entry before the service processor has pushed out the revised data for the entry. This occurs, for example, when you have issued a request to change the nPartition assignment of an active cell and then manually unlock the affected Complex Profile entries before performing a reboot for reconfig of the nPartition to which the cell is assigned. For details see “Canceling Pending Changes to the Complex Profile”
(page 199).
Complex Profile Group Details
Table 1-5 lists details of the three groups of data that comprise the Complex Profile.
NOTE: The Complex Profiles on cell-based HP 9000 servers and cell-based HP Integrity servers contain slightly different sets of information.
Table 1-5 covers both types of Complex Profiles.
The Complex Profile on cell-based HP Integrity servers includes all HP 9000 server Complex Profile data and additional components that are specific to HP Integrity servers. Some HP 9000 server Complex Profile data is unused on HP Integrity servers but is included for compatibility.
Table 1-5 Complex Profile Group Details
Stable Complex Configuration Data — Complex-wide information.
The Stable Complex Configuration Data contains complex-wide configuration details, some of which may be set by administrators.
Although the Stable Complex Configuration Data applies to the whole complex, the cell assignments and cell local memory (CLM) per cell components are comprised of data that affect the individual cells.
A copy of the Stable Complex Configuration Data resides on the service processor (MP or GSP) and on every cell in the complex.
The system boot interfaces (the BCH and EFI environments) do not have methods for changing Stable Complex Configuration Data. Instead, use the service processor Command menu or the nPartition management tools.
The Stable Complex Configuration Data includes these components:
• Model String — Only applies to HP 9000 servers. PA-RISC model.
• Complex System Name — User-chosen name for the complex.
• Original Product Number — Set by HP manufacturing.
• Current Product Number — Originally set by HP manufacturing.
• Creator Serial Number — Set by HP manufacturing.
• Cell Assignments — User-configurable nPartition assignments for all cells in the complex; also specifies each cell type (e.g. base).
• Cell Local Memory (CLM) Per Cell — Only on servers based on the HP sx1000 or sx2000 chipset. User-configurable setting for each cell that determines the amount of cell local memory. The operating system on an nPartition with CLM configured must also support CLM for the cell local memory to be accessible to the operating system.
• nPartition Configuration Privilege — Only on servers based on the HP sx1000 or sx2000 chipset. Either unrestricted or restricted. A restricted privilege means complex changes are possible only through the service processor LAN interface, which prompts for the IPMI password.

Dynamic Complex Configuration Data — Architecturally reserved information.
The Dynamic Complex Configuration Data is architecturally reserved information that applies to the entire server complex.
A copy of the Dynamic Complex Configuration Data resides on the service processor (MP or GSP) and on every cell in the complex. A reboot is not required for Dynamic Complex Configuration Data changes to take effect.
The system boot interfaces (the BCH and EFI environments) do not have methods for changing Dynamic Complex Configuration Data. Users and administrators do not directly configure this data.

Partition Configuration Data — nPartition-specific information (each nPartition has its own data).
The Partition Configuration Data contains configuration details specific to each nPartition in the complex. Each nPartition has its own Partition Configuration Data entry, which may be modified by administrators.
The service processor (MP or GSP) has a copy of the Partition Configuration Data for every nPartition. Each cell has a copy of the Partition Configuration Data entry for the nPartition to which it is assigned.
Partition Configuration Data includes this data for each nPartition:
• HP 9000 server components (unused on HP Integrity servers) — These components apply only on HP 9000 servers, but are present on HP Integrity servers for compatibility: Primary Boot Path, HA Alternate Boot Path, Alternate Boot Path, Console Path, Keyboard Path, Boot Timer, Known Good Memory Requirement, Autostart and Restart Flags, and CPU Flags (e.g. Data Prefetch setting).
• Cell use-on-next-boot values — Specifies whether the cell is to be an active or inactive member of the nPartition to which it is assigned.
• Core Cell Choices — Up to four cells preferred to be the core cell.
• Partition Number — The partition number; not user-configurable.
• Profile Architecture — Specifies whether the current Partition Configuration Data applies to the HP 9000 server architecture or HP Integrity server architecture; not user-configurable.
• nPartition Name — The nPartition name, used in various displays.
• Cell Failure Usage — Specifies how each cell in the nPartition is handled when a processor or memory component fails self-tests. Only activating the cell to integrate it into the nPartition is supported (the ri failure usage option, as specified by the parcreate and parmodify commands).
• IP Address — If set, should be consistent with the IP address assigned to the nPartition when HP-UX is booted. Not actually used for network configuration, but for information only.
Remote and Local Management of nPartitions
You can remotely manage cell-based servers using either the Enhanced nPartition Commands or Partition Manager Version 2.0.
The Enhanced nPartition Commands and Partition Manager Version 2.0 also can run on an nPartition and manage that nPartition and the complex to which it belongs.
The ability to remotely manage a server based on the HP sx1000 chipset or HP sx2000 chipset is enabled by two technologies: the Web-Based Enterprise Management (WBEM) infrastructure and the Intelligent Platform Management Interface (IPMI). A brief overview of these technologies is provided first, followed by explanations of how to use the tools to manage cell-based servers locally and remotely.
Intelligent Platform Management Interface (IPMI)
The nPartition management tools perform their functions by sending requests to the service processor. These requests are either to get information about the server or to effect changes to the server.
On the first generation of cell-based servers (the HP 9000 Superdome SD16000, SD32000, and SD64000 models; rp7405/rp7410; and rp8400 servers) a proprietary interface to the service processor was implemented. This interface relied on system firmware to convey information between HP-UX and the service processor. This in turn required that the nPartition management tools run on an nPartition in the complex being managed.
The service processor in all sx1000-based or sx2000-based servers supports the Intelligent Platform Management Interface (IPMI) as a replacement for the proprietary interface mentioned above. IPMI is an industry-standard interface for managing hardware. IPMI also supports value-added capabilities, such as HP's nPartition and complex management features.
The service processor in all sx1000-based or sx2000-based servers supports two of the communication paths defined by the IPMI standard: the Block Transfer path and IPMI over LAN. Some background details about each of these communication paths are provided in the next sections. How and when these paths are used is covered in the explanations of the local and remote management scenarios that follow.
IPMI Block Transfer (IPMI BT)
The IPMI Block Transfer (IPMI BT) path uses a driver [/dev/ipmi for HP-UX 11i v2 (B.11.23) and HP-UX 11i v3 (B.11.31)] and a hardware buffer on each cell to provide communication between the operating system and the service processor. Thus, each nPartition running HP-UX 11i v2 or HP-UX 11i v3 in an sx1000-based or sx2000-based server has its own private path to the service processor; the block transfer hardware on the core cell in each nPartition is used. The service processor always reliably knows which nPartition a request comes from.
NOTE: The IPMI BT path currently is supported only for nPartitions running the Enhanced nPartition Commands. To use the IPMI BT interface, you must locally or remotely access the operating system running in the target complex. For details see “Remote Management Using
WBEM” (page 44).
In many respects, from an administrator's perspective, the IPMI BT interface behaves like the proprietary interface used in the first-generation cell-based servers. For example, a user with superuser capabilities on an nPartition can manage the entire complex, including making changes to both the local nPartition and other nPartitions in the complex.
nPartition Configuration Privilege
Because it is not always desirable to allow a user on one nPartition to make changes that affect other nPartitions, HP provides the nPartition Configuration Privilege on sx1000-based or sx2000-based servers.
You can control the nPartition Configuration Privilege by using the PARPERM command at the service processor Command menu.
The nPartition Configuration Privilege has two settings:
• Unrestricted — The default setting, which allows the behavior described above.
• Restricted — Restricts use of the IPMI BT interface to the following capabilities:
— Retrieving information about the server. Everything that is normally displayed by Partition Manager and the parstatus command is still available.
— Making changes to the local nPartition's Partition Configuration Data. (Details on local versus remote nPartitions are provided later.)
— Manipulating any of the attention indicators (LEDs).
— Powering on/off cells and I/O chassis that belong to the local nPartition.
Restricting the nPartition Configuration Privilege does not restrict deallocation of processors across nPartition boundaries.
By restricting the nPartition Configuration Privilege, you limit someone with superuser privileges on an nPartition to doing things that affect only that nPartition. However, when the nPartition Configuration Privilege is restricted, certain changes can be made only by using the nPartition management tools in the mode that utilizes IPMI over LAN.
IPMI over LAN
IPMI requests can be sent to the service processor's LAN port, thus eliminating the need to involve any of the nPartitions in the server.
IPMI LAN access to a service processor may be enabled or disabled by the SA command at the service processor Command menu.
The service processor will accept IPMI requests over its LAN port only if the request is accompanied by the correct password. To set the IPMI password use the SO command at the service processor Command menu.
Communication using IPMI over LAN is authenticated using the challenge and response protocol defined by the IPMI specification. The MD5 message digest algorithm (RFC 1321) is used so that the IPMI password is never sent in the clear, and to authenticate both the server and the client. All IPMI messages are authenticated in this manner. In addition, appropriate methods are implemented to protect against replay attacks.
The use of IPMI over LAN is not affected by setting the nPartition Configuration Privilege to restricted. When the IPMI BT interfaces are restricted, certain changes to a complex can be made only by using the nPartition management tools in the mode that utilizes IPMI over LAN.
The following list describes all the actions that can be performed using IPMI over LAN:
• Retrieving information about the server.
• Changing the Stable Complex Configuration Data, including cell local memory settings and all cell assignments (that is: creating an nPartition, assigning cells to an nPartition, unassigning cells from an nPartition, and removing an nPartition).
• Powering on/off all cells and I/O chassis in the server, including unassigned resources.
• Manipulating any of the attention indicators (LEDs).
Web-Based Enterprise Management (WBEM)
The Enhanced nPartition Commands and Partition Manager Version 2.0 are implemented as WBEM client applications.
The Enhanced nPartition Commands toolset for HP-UX and Linux also includes a WBEM agent known as the nPartition Provider.
The Windows operating system includes the Windows Management Instrumentation (WMI) software, which is the Microsoft implementation of WBEM. To support the Windows release of the Enhanced nPartition Commands, HP also provides the WMI Mapper and the WMI nPartition Provider software components for the Windows system. Together, the WMI-based nPartition tools components for Windows provide a WBEM-compliant solution.
All communication with the service processor, whether by way of the IPMI BT path [for example, using /dev/ipmi on HP-UX 11i v2 (B.11.23) and HP-UX 11i v3 (B.11.31)] or by IPMI over LAN, is done by the nPartition Provider. The nPartition Provider responds to requests sent to it by the nPartition commands and Partition Manager.
Partition Manager uses the nPartition commands to make changes to a cell-based server. Partition Manager Version 2.0 only uses WBEM directly when retrieving information about a server.
The power of WBEM is that it enables a distributed architecture. The applications (the nPartition management tools) can be running on one system and can use the WBEM infrastructure to send requests to other systems. See “Remote Management Using WBEM” (page 44) for more details.
Local Management
As previously mentioned, the Enhanced nPartition Commands and Partition Manager Version 2.0 can run on an nPartition to manage that nPartition and the complex to which it belongs. This is the default behavior of the tools when run on an nPartition.
In this scenario, the nPartition management tools send WBEM requests to the nPartition Provider running on the local nPartition (that is, the same nPartition where the tools are being run). The nPartition Provider uses /dev/ipmi to send requests to the service processor in the local complex.
If the nPartition Configuration Privilege is unrestricted, then the server can be managed from any nPartition and making changes to other nPartitions in the complex is supported. However, if the privilege is set to restricted then certain operations are supported only when using the
tools in the mode that uses IPMI over LAN (see “Remote Management Using IPMI over LAN”
(page 46)).
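For example, when the Enhanced nPartition Commands are run locally on an nPartition with none of the remote management options, they act on the local complex through /dev/ipmi. The following minimal commands (output omitted) report the local partition number and the status of all cells in the local complex:

# parstatus -w
# parstatus -C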
Local management is the only form of management supported by the older nPartition tools (the Original nPartition Commands and Partition Manager Version 1.0). Also, because the nPartition Configuration Privilege is a feature of the sx1000-based and sx2000-based servers it affects the older nPartition tools when used on nPartitions in an sx1000-based or sx2000-based server, but not when used on nPartitions in the first-generation cell-based servers.
Remote Management Using WBEM
WBEM enables one form of remote management of an nPartition complex: using nPartition management tools (WBEM client applications) that are running on one system to communicate with the nPartition Provider (a WBEM agent) running on an nPartition in the complex to be managed.
When performing remote management using WBEM the following terminology is used:
• The complex being managed is referred to as a "remote complex" because it is remote with respect to the system where the tools are being run.
• The remote complex is also the "target complex" because it is the complex that will be affected by any changes requested by the tools.
• The nPartition that the tools communicate with (using WBEM) is referred to as a "remote nPartition" because it is remote with respect to the system where the tools are being run.
• If the tools are used to retrieve information about or to make a change to a specific nPartition in the target complex, then that nPartition is the "target nPartition". The target nPartition and the remote nPartition might be the same, but do not have to be the same nPartition.
For example, the parmodify command could be used in a way where it sends requests to an nPartition in the target complex but the -p option identifies a different nPartition to be modified.
The following sections explain how to use the Enhanced nPartition Commands and Partition Manager Version 2.0 to remotely manage an nPartition complex using WBEM. The system where the tools are used could be an nPartition or other system, but where the tools are run is irrelevant when performing remote management of an nPartition complex.
NOTE: Remote management using WBEM relies on an nPartition in the target complex being booted to multi-user mode. The remote nPartition must be configured to accept remote WBEM requests.
Remote management using WBEM also requires that the Trust Certificate Store file on the local system contains a copy of the server certificate data from the SSL Certificate file on the system being managed. See “WBEM Remote Management Files” (page 44).
WBEM Remote Management Files
WBEM systems provide secure remote management using the following files as part of the SSL authentication process. Both files reside on all WBEM-enabled systems.
server.pem — WBEM SSL Certificate file
The SSL Certificate file resides on the system that is being managed and contains the local server's PRIVATE KEY and CERTIFICATE data.
On HP-UX B.11.23 systems, the SSL Certificate file is the /var/opt/wbem/server.pem file.
On a Windows system, the SSL Certificate file is in the location specified by the %PEGASUS_HOME%\cimserver_current.conf file; in this file, the sslCertificateFilePath entry specifies the SSL Certificate file location.
client.pem — WBEM Trust Certificate Store file
The Trust Certificate Store file resides on the system from which WBEM remote management commands are issued.
On HP-UX B.11.23 systems, the Trust Certificate Store file is the /var/opt/wbem/client.pem file.
On a Windows system, the Trust Certificate Store file is the %HP_SSL_SHARE%\client.pem file, where %HP_SSL_SHARE% specifies the directory where the file resides.
To remotely manage a server, the Trust Certificate Store file (client.pem) on the local system must contain a copy of the CERTIFICATE data from the SSL Certificate file (server.pem) on the remote server. The CERTIFICATE data includes all text starting with the "-----BEGIN CERTIFICATE-----" line through the "-----END CERTIFICATE-----" line.
By default the Trust Certificate Store file contains a copy of the CERTIFICATE data from the SSL Certificate data for the local system.
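For example, the following is a minimal sketch of updating the Trust Certificate Store on an HP-UX B.11.23 management system. It assumes the remote server's server.pem file has already been copied to the illustrative path /tmp/remote-server.pem; the sed command extracts only the CERTIFICATE block described above and appends it to the local client.pem:

# sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' \
    /tmp/remote-server.pem >> /var/opt/wbem/client.pem

After the CERTIFICATE data is appended, WBEM requests from this system to that remote server can pass SSL authentication.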
nPartition Commands Support for Remote Management Using WBEM
Two options supported by the Enhanced nPartition Commands result in remote management using WBEM. These options are:
-u username
The -u option specifies a valid username on the remote nPartition.
For the parstatus and fruled commands any user defined on the remote nPartition can be used, but the other commands require the username to be a user with superuser privileges on the remote nPartition.
-h hostname | IPaddress
The -h option specifies either the hostname or IP address of the remote nPartition.
When you use the -u... -h... set of options, the specified command sends the appropriate WBEM requests to the remote nPartition where the requests are handled by the nPartition Provider using /dev/ipmi to communicate with the service processor in the target complex.
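For example, the following hypothetical invocation (the username and hostname are placeholders) displays cell information for the complex containing the remote nPartition npar1:

# parstatus -u root -h npar1 -C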
Partition Manager Support for Remote Management Using WBEM
Partition Manager Version 2.0 supports remote management using WBEM in either of two ways:
• Run Partition Manager Version 2.0 on an nPartition and then select the Switch Complexes task from the Tools menu. In the resulting dialog enter the hostname or IP address of the remote nPartition, and supply a username and that user's password.
If you will use Partition Manager only to display information about the target complex, then you can specify any user defined on the remote nPartition. However, if you will use Partition Manager to make changes to the target complex, then you must specify a user with superuser privileges on the remote nPartition.
• Run Partition Manager Version 2.0 on a system that is not an nPartition, and Partition Manager will immediately display the Switch Complexes dialog.
Figure 1-1 Partition Manager Version 2.0 Switch Complexes Dialog
Remote Management Using IPMI over LAN
IPMI over LAN enables the second form of remote management of an nPartition complex: using nPartition management tools that are running on a system to communicate directly (without going through an nPartition) with the service processor in the complex to be managed.
When performing remote management using IPMI over LAN the following terminology is used:
The complex being managed is referred to as a "remote complex" because it is remote with respect to the system where the tools are being run.
The remote complex is also the "target complex" as it is the complex that will be affected by any changes requested by the tools.
If the tools are used to retrieve information about or to make a change to a specific nPartition in the target complex, then that nPartition is the "target nPartition".
Note that there is no concept of a "remote nPartition" in this scenario.
The following sections explain how to use the nPartition commands and Partition Manager to remotely manage an nPartition complex using IPMI over LAN.
The system where the tools are used could be an nPartition or other system, but where the tools are run is irrelevant when performing remote management of an nPartition complex.
nPartition Commands Support for Remote Management Using IPMI over LAN
Two options of the Enhanced nPartition Commands result in remote management using IPMI over LAN. These options are:
-g [password]
The password is the service processor's IPMI password.
-h hostname | IPaddress
The -h option specifies the hostname or IP address of the service processor in the target complex.
When you use the -g... -h... set of options, the specified command sends the appropriate WBEM requests to the local nPartition Provider, which in turn uses IPMI over LAN to communicate with the service processor in the target complex.
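For example, the following hypothetical invocation (the service processor hostname is a placeholder) displays cell information for the target complex. Because no password is supplied with -g, the command can prompt for the service processor's IPMI password:

# parstatus -g -h sp-hostname -C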
Partition Manager Support for Remote Management Using IPMI over LAN
Partition Manager Version 2.0 can be used in this mode in either of two ways:
• Run Partition Manager on an nPartition and then select the Switch Complexes task from the Tools menu. In the resulting dialog enter the hostname or IP address of the service processor in the target complex, and supply that service processor's IPMI password.
• Run Partition Manager on a system that is not an nPartition. In this situation Partition Manager immediately displays the Switch Complexes dialog.
Licensing Information: Getting Server Product Details
When you license a software product to run on an HP system, you may need to provide machine or system details to the software vendor as part of the software registration process.
This section describes how to obtain information you may need when licensing non-HP software to run on a cell-based HP server.
For complete information about software product licensing, refer to the company that manufactures or sells the software you plan to use.
Unique Machine (Complex) Identifier — /usr/bin/getconf _CS_MACHINE_IDENT
Unique nPartition Identifier — /usr/bin/getconf _CS_PARTITION_IDENT
Unique Virtual Partition Identifier — /usr/bin/getconf _CS_PARTITION_IDENT
Machine (Complex) Serial Number — /usr/bin/getconf _CS_MACHINE_SERIAL and /usr/sbin/parstatus -X
Server (Complex) Product Number — /usr/sbin/parstatus -X
Machine (Complex) Hardware Model — /usr/bin/getconf MACHINE_MODEL and /usr/bin/model
HP-UX Version and Installed Bundles — For the HP-UX version: /usr/bin/uname -r. For all bundles installed: /usr/sbin/swlist -l bundle
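For example, a small POSIX shell sketch (illustrative only) prints each unique identifier alongside its parameter name; it assumes the getconf parameters listed above are supported on the local system:

# Print each licensing identifier alongside its parameter name.
for param in _CS_MACHINE_IDENT _CS_PARTITION_IDENT _CS_MACHINE_SERIAL; do
    echo "$param: $(/usr/bin/getconf $param)"
done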
nPartition and Virtual Partition Unique Identifiers
NOTE: Use the getconf command or the confstr() call to obtain unique identifiers. Do not use the uname -i command, which does not report unique IDs for nPartition systems.
To guarantee compatibility on current and future platforms, use the getconf(1) and confstr(3C) interfaces to retrieve unique machine identifiers.
The interfaces include the _CS_PARTITION_IDENT and _CS_MACHINE_IDENT parameters:
For an nPartition-specific or virtual partition-specific unique ID, use this command:
/usr/bin/getconf _CS_PARTITION_IDENT
The unique partition identifier value for a virtual partition environment has virtual partition-specific data added that does not appear for an equivalent non-vPars environment. See the examples that follow.
For a complex-specific unique ID use this command:
/usr/bin/getconf _CS_MACHINE_IDENT
On cell-based HP PA-RISC servers, the complex, nPartition, and virtual partition unique IDs are based in part on the machine serial number.
To retrieve the machine serial through these interfaces, specify the _CS_MACHINE_SERIAL parameter to them.
Refer to the confstr(3C) manpage for details on these parameters and their use.
Example 1-1 Unique IDs for an nPartition and Complex
The following examples show nPartition-unique and complex-unique IDs returned by the
getconf command, as well as the local nPartition number and machine serial number.
# parstatus -w
The local partition number is 1.
# /usr/bin/getconf _CS_PARTITION_IDENT
Z3e02955673f9f7c9_P1
# /usr/bin/getconf _CS_MACHINE_IDENT
Z3e02955673f9f7c9
# /usr/bin/getconf _CS_MACHINE_SERIAL
USR2024FP1
#
Example 1-2 Unique IDs for Virtual Partitions (vPars)
The following example shows the virtual partition-unique ID returned by the getconf command, as well as the local nPartition number and the current virtual partition name.
# parstatus -w
The local partition number is 0.
# vparstatus -w
The current virtual partition is Shad.
# getconf _CS_PARTITION_IDENT
Z3e0ec8e078cd3c7b_P0_V00
#
For details on virtual partitions, refer to the book Installing and Managing HP-UX Virtual Partitions (vPars).
2 nPartition Server Hardware Overview
This chapter describes the cell-based HP server models, including system capacities, model strings, and differences among the cell-based server models.
Both HP 9000 servers and HP Integrity servers are discussed here.
The HP 9000 series of servers has HP PA-RISC processors.
The cell-based HP 9000 servers include three generations of servers: the first-generation models, models based on the HP sx1000 chipset, and models based on the HP sx2000 chipset.
The HP Integrity series of servers has Intel® Itanium® 2 processors.
The cell-based HP Integrity servers either are based on the HP sx1000 chipset or are based on the HP sx2000 chipset.
sx1000 Chipset for HP Servers
The second generation of cell-based servers is built around the HP sx1000 chipset. The sx1000 chipset supports both single-core and dual-core processors, including both HP PA-RISC and Intel® Itanium® 2 processors.
The following servers use the HP sx1000 chipset:
• PA-RISC servers — HP rp7420, HP rp8420, and HP 9000 Superdome (SD16A, SD32A, and SD64A models).
• Itanium® 2-based servers — HP rx7620, HP rx7620-16, HP rx8620, HP rx8620-32, and HP Integrity Superdome (SD16A, SD32A, and SD64A models).
The HP sx1000 chipset provides a scalable server architecture with high-bandwidth processor, memory, and I/O busses. The HP sx1000 chipset includes interconnecting components such as memory buffers, I/O bus adapters and host bridges, and cell controllers, which have built-in low-level error correction.
One notable administration feature of HP servers built around the HP sx1000 chipset is management processor (MP) support for access to the server using IPMI over LAN. For details see “Remote and Local Management of nPartitions” (page 41).
sx2000 Chipset for HP Servers
The third generation of cell-based HP servers is built around the HP sx2000 chipset.
The following servers use the HP sx2000 chipset:
• PA-RISC servers — HP rp7440, HP rp8440, and HP 9000 Superdome (SD16B, SD32B, and SD64B models).
• Itanium® 2-based servers — HP rx7640, HP rx8640, and HP Integrity Superdome (SD16B, SD32B, and SD64B models).
The HP sx2000 chipset provides a scalable server architecture with high-bandwidth processor, memory, and I/O busses. The HP sx2000 chipset includes new cell boards with core I/O, new system and I/O backplanes, new interconnecting components, and the addition of a redundant, hot-swappable clock source.
HP servers built around the HP sx2000 chipset include management processor (MP) support for access to the server using IPMI over LAN. For details see “Remote and Local Management of
nPartitions” (page 41).
Model Identifiers for Machine Hardware
The machine hardware model identifies the server hardware type.
A summary of the supported cell-based servers and their corresponding model identifiers appears in “Server Hardware Details: Cell-Based HP Servers” (page 51).
You can report the machine hardware model for the local server complex using the following methods:
• From HP-UX 11i, use either the /usr/bin/model command or the /usr/bin/getconf MACHINE_MODEL command.
• From the Windows command line, use the systeminfo command to report system details, including the system model.
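For example, on an HP 9000 rp7420 server both HP-UX methods report the model string listed in Table 2-1 (output shown for illustration only):

# /usr/bin/model
9000/800/rp7420
# /usr/bin/getconf MACHINE_MODEL
9000/800/rp7420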
Different methods are used to establish the machine hardware model on HP 9000 servers and HP Integrity servers.
For HP 9000 servers, the reported machine hardware model is the Model String component of the Stable Complex Configuration Data.
For HP Integrity servers, the machine hardware model is based on the Creator Manufacturer and Creator Product Name complex-wide settings.
On OEM versions of cell-based HP Integrity servers, the OEM Manufacturer and OEM Product Name are the machine hardware model, if they are set.
See “Complex Profile” (page 36) for details on the Model String and Creator complex-wide settings.
Server Hardware Details: Cell-Based HP Servers
Table 2-1 lists the cell-based HP servers. For individual server details see the sections that follow.
Table 2-1 Models of Cell-Based HP Servers
Two-Cell Servers — See “Two-Cell nPartition Server Model” (page 55).

• HP 9000 rp7405/rp7410 server (model string: 9000/800/rp7410) — Up to eight PA-RISC processor cores. Runs HP-UX B.11.11, the HP-UX B.11.23 September 2004 release and later, and HP-UX B.11.31.

• HP 9000 rp7420 server (model string: 9000/800/rp7420) — Up to eight dual-core PA-RISC processors (16 processor cores total). Uses the HP sx1000 chipset. Runs HP-UX B.11.11, the HP-UX B.11.23 September 2004 release and later, and HP-UX B.11.31.

• HP 9000 rp7440 server (model string: 9000/800/rp7440) — Up to eight dual-core PA-RISC processors (16 processor cores total). Uses the HP sx2000 chipset. Runs the HP-UX B.11.11 December 2006 release.

• HP Integrity rx7620 server (model command output: ia64 hp rx7620 server) — Up to eight Intel® Itanium® 2 processors. Uses the HP sx1000 chipset. Runs HP-UX B.11.23, HP-UX B.11.31, HP OpenVMS I64, Microsoft® Windows® Server 2003, Red Hat Enterprise Linux, and SuSE Linux Enterprise Server.

• HP Integrity rx7620-16 server (model command output: ia64 hp rx7620 server) — Up to eight HP mx2 dual-processor modules with Intel® Itanium® 2 processors (16 processor cores total). Uses the HP sx1000 chipset. Runs HP-UX B.11.23, HP-UX B.11.31, and Microsoft® Windows® Server 2003.

• HP Integrity rx7640 server (model command output: ia64 hp rx7640 server) — Up to eight Intel® Itanium® 2 processors: either single or dual-core Intel® Itanium® 2 processors (up to 16 processor cores total when using dual-core processors). Uses the HP sx2000 chipset. Runs HP-UX B.11.23, HP-UX B.11.31, and Microsoft® Windows® Server 2003. Servers with dual-core Intel® Itanium® 2 processors also can run HP OpenVMS I64 8.3, Red Hat Enterprise Linux 4 Update 4, and SuSE Linux Enterprise Server 10.

Four-Cell Servers — See “Four-Cell nPartition Server Model” (page 56).

• HP 9000 rp8400 server (model string: 9000/800/S16K-A) — Up to 16 PA-RISC processors. Runs HP-UX B.11.11, the HP-UX B.11.23 September 2004 release and later, and HP-UX B.11.31.

• HP 9000 rp8420 server (model string: 9000/800/rp8420) — Up to 16 dual-core PA-RISC processors (32 processor cores total). Uses the HP sx1000 chipset. Runs HP-UX B.11.11, the HP-UX B.11.23 September 2004 release and later, and HP-UX B.11.31.

• HP 9000 rp8440 server (model string: 9000/800/rp8440) — Up to 16 dual-core PA-RISC processors (32 processor cores total). Uses the HP sx2000 chipset. Runs the HP-UX B.11.11 December 2006 release.

• HP Integrity rx8620 server (model command output: ia64 hp rx8620 server) — Up to 16 Intel® Itanium® 2 processors. Uses the HP sx1000 chipset. Runs HP-UX B.11.23, HP-UX B.11.31, HP OpenVMS I64, Microsoft® Windows® Server 2003, Red Hat Enterprise Linux, and SuSE Linux Enterprise Server.

• HP Integrity rx8620-32 server (model command output: ia64 hp rx8620 server) — Up to 16 HP mx2 dual-processor modules with Intel® Itanium® 2 processors (32 processor cores total). Uses the HP sx1000 chipset. Runs HP-UX B.11.23, HP-UX B.11.31, and Microsoft® Windows® Server 2003.

• HP Integrity rx8640 server (model command output: ia64 hp rx8640 server) — Up to 16 Intel® Itanium® 2 processors: either single or dual-core Intel® Itanium® 2 processors (up to 32 processor cores total when using dual-core processors). Uses the HP sx2000 chipset. Runs HP-UX B.11.23, HP-UX B.11.31, and Microsoft® Windows® Server 2003. Servers with dual-core Intel® Itanium® 2 processors also can run HP OpenVMS I64 8.3, Red Hat Enterprise Linux 4 Update 4, and SuSE Linux Enterprise Server 10.

HP 9000 Superdome Servers — See “Superdome Server Models” (page 57).

• HP 9000 Superdome SD16000, SD32000, and SD64000 servers — Up to 64 PA-RISC processors. Runs HP-UX B.11.11, the HP-UX B.11.23 September 2004 release and later, and HP-UX B.11.31. Model strings: 9000/800/SD16000 (16-way), 9000/800/SD32000 (32-way), and 9000/800/SD64000 (64-way).

• HP 9000 Superdome SD16A, SD32A, and SD64A servers — Up to 64 dual-core PA-RISC processors (128 processor cores total). Uses the HP sx1000 chipset. Runs HP-UX B.11.11, the HP-UX B.11.23 September 2004 release and later, and HP-UX B.11.31. Model strings: 9000/800/SD16A (32-way), 9000/800/SD32A (64-way), and 9000/800/SD64A (128-way).

• HP 9000 Superdome SD16B, SD32B, and SD64B servers — Up to 64 dual-core PA-RISC processors (128 processor cores total). Uses the HP sx2000 chipset. Runs the HP-UX B.11.11 December 2006 release. Model strings: 9000/800/SD16B (32-way), 9000/800/SD32B (64-way), and 9000/800/SD64B (128-way).

HP Integrity Superdome Servers — See “Superdome Server Models” (page 57).

• HP Integrity Superdome SD16A, SD32A, and SD64A servers — Up to 64 processor sockets, four per cell, with each cell having either single Intel® Itanium® 2 processors or HP mx2 dual-processor modules with Itanium 2 processors (up to 128 processor cores total when using HP mx2 modules). These models use the HP sx1000 chipset. Runs HP-UX B.11.23, HP-UX B.11.31, and Microsoft® Windows® Server 2003; nPartitions with single Itanium 2 processors also can run HP OpenVMS I64, Red Hat Enterprise Linux, and SuSE Linux Enterprise Server. Model command output: ia64 hp superdome server SD16A (16-way), ia64 hp superdome server SD32A (32-way), and ia64 hp superdome server SD64A (64-way).

• HP Integrity Superdome SD16B, SD32B, and SD64B servers — Up to 64 processor sockets, four per cell, with each cell having either single or dual-core Intel® Itanium® 2 processors (up to 128 processor cores total when using dual-core processors). These models use the HP sx2000 chipset. Runs HP-UX B.11.23, HP-UX B.11.31, and Microsoft® Windows® Server 2003; servers with dual-core Intel® Itanium® 2 processors also can run HP OpenVMS I64 8.3, Red Hat Enterprise Linux 4 Update 4, and SuSE Linux Enterprise Server 10. Model command output: ia64 hp superdome server SD16B (16-way), ia64 hp superdome server SD32B (32-way), and ia64 hp superdome server SD64B (64-way).
Two-Cell nPartition Server Model
The following cell-based HP servers scale from one to two cells:
• The HP rp7405/rp7410 server has single-core HP PA-RISC processors. The model string is: 9000/800/rp7410.
• The HP rp7420 server has dual-core HP PA-RISC processors: PA-8800 processors, which provide two processor cores per processor socket. The model string is: 9000/800/rp7420.
• The HP rp7440 server has dual-core HP PA-RISC processors: PA-8900 processors, which provide two processor cores per processor socket. The model string is: 9000/800/rp7440.
• The HP rx7620 server has Intel® Itanium® 2 processors, either single-processor modules or HP mx2 dual-processor modules. Both HP mx2 dual-processor modules and single Itanium 2 processors can exist in the same complex, but they cannot be mixed in the same nPartition. The model command output is: ia64 hp rx7620 server.
• The HP rx7640 server has single-core or dual-core Intel® Itanium® 2 processors. The model command output is: ia64 hp rx7640 server.
Figure 2-1 shows a two-cell HP server cabinet.
Figure 2-1 Two-Cell HP Server Cabinet
On the two-cell HP servers you can configure a single nPartition using one or both cells, or can configure up to two separate nPartitions within the server complex. In a two-nPartition complex, you use cell 0 and its core I/O in one nPartition, and use cell 1 and its core I/O in the other nPartition.
The two-cell HP server models include these features:
• A single server cabinet that includes all cells, I/O chassis, processors, memory, PCI cards, and core I/O.
• Either one or two cells. Each cell has up to four processor sockets and up to 16 DIMMs.
• Two PCI I/O chassis that share the same chassis hardware. One I/O chassis is connected to cell 0, the other is connected to cell 1. Each I/O chassis has 8 PCI card slots, numbered from 1 to 8.
NOTE: On the first-generation and HP sx1000-based two-cell servers, two PCI slots by convention are dedicated for use by a combination LAN/SCSI card: PCI domain 0 slot 1 (the first slot on the left) and PCI domain 1 slot 8 (the last slot on the right).
On two-cell servers based on the HP sx2000 chipset, core I/O is provided in each cell.
• A total server complex capacity of 2 cells, 8 processor sockets, 32 DIMMs, and 16 PCI card slots.
Two-cell HP servers include a single server cabinet that may be rack-mounted or used as a stand-alone server.
Four-Cell nPartition Server Model
The following cell-based HP servers scale from one to four cells:
• The HP rp8400 server has single-core HP PA-RISC processors. The model string is: 9000/800/S16K-A.
• The HP rp8420 server has dual-core HP PA-RISC processors: PA-8800 processors, which provide two processor cores per processor socket. The model string is: 9000/800/rp8420.
• The HP rp8440 server has dual-core HP PA-RISC processors: PA-8900 processors, which provide two processor cores per processor socket. The model string is: 9000/800/rp8440.
• The HP rx8620 server has Intel® Itanium® 2 processors, either single-processor modules or HP mx2 dual-processor modules. Both HP mx2 dual-processor modules and single Itanium 2 processors can exist in the same complex, but they cannot be mixed in the same nPartition. The model command output is: ia64 hp rx8620 server.
• The HP rx8640 server has single-core or dual-core Intel® Itanium® 2 processors. The model command output is: ia64 hp rx8640 server.
Figure 2-2 shows an overview of a four-cell HP server cabinet.
Figure 2-2 Four-Cell HP Server Cabinet
You can configure a single nPartition using some or all cells, or can configure up to four separate nPartitions within the server complex when using an I/O expansion cabinet.
In a multiple-nPartition four-cell server complex, you would use cell 0 and its I/O chassis in one nPartition, and use cell 1 and its I/O chassis in another nPartition. The other cells (cells 2 and 3) can be assigned to either of the two nPartitions, or if connected to I/O in an expansion cabinet can be used to create additional nPartitions.
The four-cell HP servers include these features:
• A single server cabinet that includes cells, I/O chassis, processors, memory, PCI cards, and core I/O.
• Two PCI I/O chassis in the server cabinet that share the same chassis hardware. One I/O chassis is connected to cell 0, the other is connected to cell 1. Each I/O chassis has 8 PCI card slots, numbered from 1 to 8.
• An optional I/O expansion cabinet that provides an additional two core I/O cards and an additional two I/O domains, each containing eight PCI card slots (for a total of 16 more PCI card slots). Two PCI I/O chassis in the I/O expansion cabinet share the same chassis hardware. One I/O chassis is connected to cell 2, the other is connected to cell 3.
• From one to four cells. Each cell has four processor sockets and up to 16 DIMMs.
• A total server complex capacity of 4 cells, 16 processor sockets, 64 DIMMs, and either 16 or 32 PCI card slots.
Four-cell HP servers include a single server cabinet that can be rack-mounted or stand-alone. An optional I/O expansion cabinet may also be used to provide I/O connected to cell 2 and cell 3.
Superdome Server Models
HP Superdome servers scale up to 16 cells. The following types of HP Superdome servers are supported:
• The first-generation HP 9000 Superdome: SD16000, SD32000, and SD64000
• The HP sx1000-chipset-based HP 9000 Superdome: SD16A, SD32A, and SD64A
• The HP sx1000-chipset-based HP Integrity Superdome: SD16A, SD32A, and SD64A
• The HP sx2000-chipset-based HP 9000 Superdome: SD16B, SD32B, and SD64B
• The HP sx2000-chipset-based HP Integrity Superdome: SD16B, SD32B, and SD64B
A Support Management Station (SMS) is connected to each HP Superdome server through the service processor private LAN port. The SMS is either an HP-UX workstation or an HP ProLiant system running Microsoft® Windows®. The SMS primarily is used for support and service purposes. The Windows SMS supports Windows versions of Partition Manager and the nPartition commands, thus enabling remote administration of nPartitions from the SMS. For details see
“SMS (Support Management Station) for HP Superdome Servers” (page 67).
You can add up to two Superdome I/O expansion cabinets to the Superdome 32-way/64-way and Superdome 64-way/128-way server models. Each I/O expansion cabinet has up to six additional 12-slot I/O chassis.
Figure 2-3 shows an overview of an HP Superdome server compute cabinet.
Figure 2-3 HP Superdome Server Cabinet
The HP Superdome server models include:
“HP Superdome 16-/32-Way Servers: SD16000, SD16A, and SD16B” (page 58)
“HP Superdome 32-/64-Way Servers: SD32000, SD32A, and SD32B” (page 58)
“HP Superdome 64-/128-Way Servers: SD64000, SD64A, and SD64B” (page 59)
Details on these models are given in the following sections.
HP Superdome 16-/32-Way Servers: SD16000, SD16A, and SD16B
The HP Superdome 16-way/32-way server is a single-cabinet server that has from two to four cells, each with four processor sockets and up to 32 DIMMs.
The models of HP Superdome 16-way/32-way servers are SD16000, SD16A, and SD16B.
• The HP 9000 Superdome SD16000 server has single-core HP PA-RISC processors. The model string for the SD16000 server is: 9000/800/SD16000
• The HP 9000 Superdome SD16A server has dual-core HP PA-RISC processors: PA-8800 processors, which provide two processor cores per processor socket. The model string for the HP 9000 SD16A server is: 9000/800/SD16A
• The HP 9000 Superdome SD16B server has dual-core HP PA-RISC processors: PA-8900 processors, which provide two processor cores per processor socket. The model string for the HP 9000 SD16B server is: 9000/800/SD16B
• The HP Integrity Superdome SD16A server has Intel® Itanium® 2 processors, either single-processor modules or HP mx2 dual-processor modules. Both HP mx2 dual-processor modules and single Itanium 2 processors can exist in the same complex, but they cannot be mixed in the same nPartition. The model command output for the HP Integrity SD16A server is: ia64 hp superdome server SD16A
• The HP Integrity Superdome SD16B server has single-core or dual-core Intel® Itanium® 2 processors. The model command output for the HP Integrity SD16B server is: ia64 hp superdome server SD16B
The Superdome 16-way/32-way server can have up to 16 processor sockets, 128 DIMMs, and up to four 12-slot PCI I/O chassis.
HP Superdome 32-/64-Way Servers: SD32000, SD32A, and SD32B
The Superdome 32-way/64-way server is a single-cabinet server that has from two to eight cells, each with four processor sockets and up to 32 DIMMs.
The models of HP Superdome 32-way/64-way servers are SD32000, SD32A, and SD32B.
• The HP 9000 Superdome SD32000 server has single-core HP PA-RISC processors. The model string for the SD32000 server is: 9000/800/SD32000
• The HP 9000 Superdome SD32A server has dual-core HP PA-RISC processors: PA-8800 processors, which provide two processor cores per processor socket. The model string for the HP 9000 SD32A server is: 9000/800/SD32A
• The HP 9000 Superdome SD32B server has dual-core HP PA-RISC processors: PA-8900 processors, which provide two processor cores per processor socket. The model string for the HP 9000 SD32B server is: 9000/800/SD32B
• The HP Integrity Superdome SD32A server has Intel® Itanium® 2 processors, either single-processor modules or HP mx2 dual-processor modules. Both HP mx2 dual-processor modules and single Itanium 2 processors can exist in the same complex, but they cannot be mixed in the same nPartition. The model command output for the HP Integrity SD32A server is: ia64 hp superdome server SD32A
• The HP Integrity Superdome SD32B server has single-core or dual-core Intel® Itanium® 2 processors. The model command output for the HP Integrity SD32B server is: ia64 hp superdome server SD32B
The Superdome 32-way/64-way server can have up to 32 processor sockets, 256 DIMMs, up to four internal 12-slot PCI I/O chassis, plus optional I/O expansion cabinet hardware.
HP Superdome 64-/128-Way Servers: SD64000, SD64A, and SD64B
The Superdome 64-way/128-way server is a tightly interconnected dual-cabinet server that has from 4 to 16 cells, each with four processor sockets and up to 32 DIMMs.
The models of HP Superdome 64-way/128-way servers are SD64000, SD64A, and SD64B.
• The HP 9000 Superdome SD64000 server has single-core HP PA-RISC processors. The model string for the SD64000 server is: 9000/800/SD64000
• The HP 9000 Superdome SD64A server has dual-core HP PA-RISC processors: PA-8800 processors, which provide two processor cores per processor socket. The model string for the HP 9000 SD64A server is: 9000/800/SD64A
• The HP 9000 Superdome SD64B server has dual-core HP PA-RISC processors: PA-8900 processors, which provide two processor cores per processor socket. The model string for the HP 9000 SD64B server is: 9000/800/SD64B
• The HP Integrity Superdome SD64A server has Intel® Itanium® 2 processors, either single-processor modules or HP mx2 dual-processor modules. Both HP mx2 dual-processor modules and single Itanium 2 processors can exist in the same complex, but they cannot be mixed in the same nPartition. The model command output for the HP Integrity SD64A server is: ia64 hp superdome server SD64A
• The HP Integrity Superdome SD64B server has single-core or dual-core Intel® Itanium® 2 processors. The model command output for the HP Integrity SD64B server is: ia64 hp superdome server SD64B
The Superdome 64-way/128-way server can have up to 64 processor sockets, 512 DIMMs, and up to eight internal 12-slot PCI I/O chassis. (Each of the two cabinets in a Superdome 64-way/128-way server provides up to 32 processor sockets, 256 DIMMs, and up to four 12-slot PCI I/O chassis.) HP Superdome 64-way/128-way servers also can have optional I/O expansion cabinet hardware.
HP Superdome I/O Expansion Cabinet
HP Superdome 32-way/64-way and Superdome 64-way/128-way servers can include I/O expansion cabinets in addition to the server cabinet(s) in the complex.
Each I/O expansion cabinet has a cabinet number of either 8 or 9.
A Superdome I/O expansion cabinet includes up to 3 I/O bays, with two 12-slot I/O chassis in each bay. This provides for up to 6 chassis with a total of 72 PCI card slots in each I/O expansion cabinet.
The Superdome I/O expansion cabinet is a standard-size cabinet in which, space permitting, you can mount peripherals as well as I/O chassis.
Also refer to the book I/O Expansion Cabinet Guide for Superdome Servers.
3 Planning nPartitions
This chapter describes how you can plan nPartition configurations. Details include the nPartition configuration requirements and recommendations.
For procedures to create and configure nPartitions, see Chapter 6 (page 165).
nPartition Hardware Requirements for Operating Systems
Table 3-1 lists the hardware requirements for operating systems running on nPartitions.
Table 3-1 Operating System Hardware Requirements
HP-UX B.11.11 — Supports up to 64 PA-RISC processor cores.

HP-UX B.11.23, March 2004 and earlier — Supports up to 64 Intel® Itanium® 2 processor cores.

HP-UX B.11.23, September 2004 and later — Supports up to 128 PA-RISC processors. Supports up to 128 Intel® Itanium® 2 processor cores.

HP-UX B.11.31 — Supports up to 128 Intel® Itanium® 2 processor cores.

HP OpenVMS I64 8.2-1 — Supports up to 4 cells (16 processor cores) on servers based on the HP sx1000 chipset. Requires single-core Itanium 2 processors, and does not support HP mx2 dual-processor modules.

HP OpenVMS I64 8.3 — Supports up to 4 cells (16 processors, up to 32 cores) on servers based on the HP sx1000 chipset or HP sx2000 chipset. On servers based on the HP sx2000 chipset, supported only in nPartitions that have dual-core Intel® Itanium® 2 processors.

Microsoft® Windows® Server 2003 — Supports up to 64 Intel® Itanium® 2 processor cores.

Red Hat Enterprise Linux 3 Update 2 — Supports up to eight Intel® Itanium® 2 processor cores. Requires single-core Itanium 2 processors, and does not support HP mx2 dual-processor modules. Supports a maximum of two cells in an nPartition. Supports a maximum of one I/O chassis in an nPartition; requires a PCI-X I/O chassis, and does not support PCI I/O chassis. Supports a maximum of 96 GBytes of memory.

Red Hat Enterprise Linux 3 Update 3 — Supports up to eight Intel® Itanium® 2 processors. Requires single-core Itanium 2 processors, and does not support HP mx2 dual-processor modules. Supports a maximum of two cells in an nPartition. Supports a maximum of 128 GBytes of memory. Supports a maximum of two I/O chassis in an nPartition; requires a PCI-X I/O chassis, and does not support PCI I/O chassis.

Red Hat Enterprise Linux 4 Update 4 — Supports up to eight Intel® Itanium® 2 processors. On servers based on the HP sx2000 chipset, supported only in nPartitions that have dual-core Intel® Itanium® 2 processors. Supports a maximum of two cells in an nPartition. Supports a maximum of 128 GBytes of memory. Supports a maximum of two I/O chassis in an nPartition; requires a PCI-X I/O chassis, and does not support PCI I/O chassis.

SuSE Linux Enterprise Server 9 — Supports up to 16 Intel® Itanium® 2 processors. Requires single Itanium 2 processors, and does not support HP mx2 dual-processor modules. Supports a maximum of four cells in an nPartition. Supports a maximum of 256 GBytes of memory. Supports a maximum of two I/O chassis in an nPartition; requires a PCI-X I/O chassis, and does not support PCI I/O chassis.

SuSE Linux Enterprise Server 10 — Supports up to 16 Intel® Itanium® 2 processors. On servers based on the HP sx2000 chipset, supported only in nPartitions that have dual-core Intel® Itanium® 2 processors. Supports a maximum of four cells in an nPartition. Supports a maximum of 256 GBytes of memory. Supports a maximum of two I/O chassis in an nPartition; requires a PCI-X I/O chassis, and does not support PCI I/O chassis.
Configuration Requirements for nPartitions
The hardware requirements determine which cells are eligible to be assigned to an nPartition.
For configuration requirements and restrictions for Superdome hybrid servers, refer to “HP Superdome Hybrid Servers: Intel® Itanium® 2 and PA-RISC nPartition Mixing” (page 17). HP Superdome servers based on the HP sx1000 chipset can support hybrid configurations with both PA-RISC nPartitions and Intel® Itanium® 2 nPartitions in the same server complex.
Every nPartition you configure must meet the following hardware requirements:
On HP 9000 systems, all cells in an nPartition must have the same processor revision level and clock speed. That is, the IODC_HVERSION must be identical for all PA-RISC processors.
You can view processor details, including the CPU type (revision level) and speed, by using the parstatus -V -c# command or by using Partition Manager (select the Cell > Show Cell Details action, CPUs/Memory tab).
On HP Integrity servers, all cells in an nPartition must have the same compatibility value.
The cell compatibility value is reported by the parstatus -V -c# command as "CPU Compatibility" for the cell.
Partition Manager Version 2.0 reports the value as "Cell Compatibility" in the General Cell Properties view for the cell, which you can display by clicking the cell location when viewing other details about the server complex.
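For example, before assigning cell 0 to an nPartition you might check its compatibility value as follows; the grep pattern assumes the "CPU Compatibility" label reported by parstatus -V, as described above:

# parstatus -V -c0 | grep -i compatibility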
On HP Integrity servers, all cells assigned to an nPartition must have either mx2 dual-processor modules or single Itanium 2 processors.
Both HP mx2 dual-processor modules and single Itanium 2 processors can exist in the same complex, but they cannot be mixed in the same nPartition.
The same firmware revision must be present on all cells within an nPartition.
At least one cell in every nPartition must be connected to an I/O chassis that has core I/O.
Only one core I/O is active per nPartition. If an nPartition has multiple cells that are connected to I/O chassis with core I/O, only the core I/O connected to the active core cell is active.
Recommended nPartition Configurations
For best performance and availability, configure nPartitions to meet the following guidelines.
On servers based on the HP sx1000 chipset or HP sx2000 chipset, the nPartition memory configuration should meet the following guidelines:
— The number of cells participating in memory interleave should be a power of two, and each cell participating in interleave should contribute the same amount of memory.
— The total amount of memory being interleaved should be a power-of-two number of GBytes.
For example, an nPartition in which two cells each contribute 8 GBytes of interleaved memory meets both guidelines: the cell count (2) and the interleaved total (16 GBytes) are both powers of two.
When configuring cell local memory, ensure that the amount of interleaved memory meets the guidelines given here. (All memory not specified as being cell local will be interleaved.)
Also ensure that any nPartition that has cell local memory configured runs only operating systems that support cell local memory.
Cell local memory can be configured on servers based on the HP sx1000 chipset or HP sx2000 chipset.
CAUTION: Memory configured as cell local memory can be used only by operating systems that support it.
Any memory configured as cell local memory is unusable when an nPartition runs an operating system that does not support it.
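For illustration, the following is a hedged sketch of configuring CLM with parmodify, assuming the cell-specification syntax cell:[type]:[use]:[failure][:clm] described in parmodify(1M); the partition number, cell number, and percentage are placeholders:

# parmodify -p1 -m2::::50%

This example requests that 50 percent of cell 2's memory in nPartition 1 be configured as cell local memory; as with other cell changes, the new setting takes effect after the appropriate reboot of the affected nPartition.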
The I/O chassis containing the active core I/O also should have an operating system (OS) boot disk and method of installing or recovering the OS (such as a CD-ROM/DVD-ROM drive, network connection to an install server, or tape drive). This applies to first-generation cell-based servers and HP sx1000-based servers; it allows the nPartition to boot or recover the OS, even if only the core cell for the nPartition is functioning. On HP sx2000-based servers, every cell has core I/O.
Assign multiple core-capable cells to each nPartition.
This allows the nPartition to boot at least to the system boot environment (either BCH or EFI) if a core-capable cell fails to boot. On HP sx2000-based servers, every cell has core I/O.
(Disregard this recommendation if you are configuring multiple nPartitions in a cell-based server that has only two core-capable cells.)
The memory configuration of all cells in an nPartition should be identical to achieve best performance.
Each cell in an nPartition should have:
— the same number of DIMMs
— the same capacity (size) and the same locations (population) of DIMMs
This avoids cell interconnect (crossbar) "hot spots" by distributing memory evenly across all of the cells in the nPartition.
The memory configuration of each cell should include a multiple of two memory ranks (first-generation cell-based HP 9000 servers) or a multiple of two memory echelons (servers based on the HP sx1000 chipset or HP sx2000 chipset) per cell.
On the first generation of cell-based HP 9000 servers, each memory rank is 4 DIMMs. If possible, install memory in sets of 8 DIMMs: 8 DIMMs or 16 DIMMs on HP rp7405/rp7410, rp8400, and Superdome (SD16000, SD32000, SD64000) cells. On Superdome cells, you also can install 24 DIMMs or 32 DIMMs per cell.
On servers based on the HP sx1000 chipset or HP sx2000 chipset, each memory echelon is 2 DIMMs. If possible, install memory in sets of 4 DIMMs: 4, 8, 12, or 16 DIMMs. On Superdome servers, you also can install 20, 24, 28, or 32 DIMMs per cell.
This provides a performance improvement by doubling the memory bandwidth of the cell, as compared to having one memory rank or memory echelon installed.
This also can provide an availability improvement, in that if one memory rank or echelon fails the cell still has at least one functional rank of memory.
(Memory rank 0, or echelon 0, must be functional for a cell to boot.)
Each nPartition should have PRI (primary), HAA (high-availability alternate), and ALT (alternate) boot paths defined and configured, and their path flags appropriately configured for your purposes.
NOTE: On HP Integrity servers, the PRI path corresponds to the first item in the EFI boot options list, the HAA path is the second item in the boot options list, and the ALT path is the third boot option.
The PRI and HAA paths should be configured to reference disks that are connected to different cells, if possible, with HAA being a mirror of the root volume and PRI being the root volume. ALT should be the path of a recovery or install device.
Under this configuration, if the cell to which the PRI disk is connected fails or is otherwise inactive, and the cell to which the HAA disk is connected is available, then the nPartition still can boot an operating system.
Even if the PRI and HAA devices connect to the same cell (such as on a two-cell server with two nPartitions configured), the HAA device can be used to boot an operating system should the PRI device fail.
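A minimal sketch of defining these paths from HP-UX uses setboot(1M), assuming its -p (primary), -h (high-availability alternate), and -a (alternate) options; the device paths are placeholders:

# setboot -p 0/0/1/0/0.6.0
# setboot -h 0/0/2/0/0.6.0
# setboot -a 1/0/1/0/0.5.0

On HP Integrity servers these settings correspond to the first three items in the EFI boot options list, as noted above.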
Recommended HP Superdome nPartition Configurations
On HP Superdome servers, the locations of the cells you assign to each nPartition and the resulting loads on server interconnections can affect nPartition system performance within the server.
HP offers specific guidelines for configuring nPartitions on HP Superdome servers in order to ensure good system performance.
NOTE: The guidelines in this section apply to HP Superdome servers only.
These guidelines follow two basic configuration principles:
1. Avoid sharing interconnecting hardware (crossbars and crossbar links) among multiple nPartitions.
2. Minimize the number of crossbar links used by each nPartition, but do not overload crossbar links by creating nPartitions that can generate more cell communications traffic across the links than the links can support. Overloading crossbar links degrades performance.
Configuration Guidelines for HP Superdome nPartitions

Use these guidelines to help determine which cells to assign to the nPartitions you create on HP Superdome servers.
Define nPartitions in order of size.
Assign cells to the nPartition that has the largest cell count first. Then select cells for the next largest nPartition, and so on, and finally choose cells for the nPartition with the fewest cells last.
This provides more appropriate cell assignments for larger nPartitions (those with more cells). Any smaller nPartitions with fewer cells are more easily accommodated in the remaining, available cells.
Place each nPartition within an empty cabinet, if possible.
This applies to nPartitions in HP Superdome 64-way servers only.
If possible, assign each nPartition cells from a cabinet whose cells have no nPartition assignments. Do this before assigning cells from a cabinet that already has cells assigned to an nPartition.
4 Using Management Interfaces and Tools
This chapter presents the system management interfaces and tools available on Hewlett-Packard's cell-based servers. Also covered here are the nPartition boot environments, management access procedures, and detailed command references.
Management differences on HP 9000 systems and HP Integrity systems are addressed in this chapter. For a discussion of the supported cell-based server models, see Chapter 2 (page 49).
SMS (Support Management Station) for HP Superdome Servers
The Support Management Station (SMS) is a workstation or PC that is connected to an HP Superdome server through the service processor private LAN. The SMS may either be an HP-UX workstation or an HP ProLiant system running Microsoft® Windows® 2000 with Service Pack 3 or later.
The SMS primarily is used by HP-certified service and support personnel for system scan, upgrade, and hardware verification purposes.
The Windows SMS is an HP ProLiant system running the Windows operating system that has an enhanced system support toolset, including Partition Manager and the HP nPartition commands (such as parcreate and parstatus, among others).
Use of the nPartition commands from a Windows SMS requires specifying the remote management command-line options (either the -u... -h... set of options or the -g... -h... options). For details see “Specifying Remote Management Options to Commands” (page 247).
You can use the Windows versions of the nPartition commands to remotely manage servers based on the HP sx1000 chipset or HP sx2000 chipset. Remote management using IPMI over LAN is supported for all servers based on the HP sx1000 chipset or HP sx2000 chipset. Remote nPartition management using WBEM is supported for nPartitions running an operating system with the HP nPartition Provider.
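For example, a remote status query from the Windows SMS command line might look like the following hedged sketch; the service processor hostname and account name are hypothetical placeholders:

    parstatus -X -u Admin -h mp-hostname     (WBEM remote management; prompts for the account password)
    parstatus -X -g -h mp-hostname           (IPMI over LAN; prompts for the IPMI password)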
Details on the Windows SMS release of nPartition commands are given in “Commands for Configuring nPartitions” (page 19).
For more details about the SMS for Superdome servers, refer to the Service Guide for your model of Superdome server.
Overview of nPartition Service Processor (MP or GSP) Interfaces
The service processor (MP or GSP) utility hardware is an independent support system for cell-based servers. It provides a way for you to connect to a server complex and perform administration or monitoring tasks for the server hardware and its nPartitions.
The main features of the service processor include:
Command Menu
nPartition Consoles
Console Logs
Chassis Code Viewer (on HP 9000 servers with HP PA-8700 processors) or Event Log Viewer (on servers based on the HP sx1000 chipset or HP sx2000 chipset)
Virtual Front Panels (live displays of nPartition and cell states)
These features are described in more detail in “Service Processor (MP or GSP) Features” (page 68).
The service processor is available when its cabinet has standby power, even if the main (48-volt) cabinet power switch is turned off.
Access to the service processor is restricted by user accounts. Each user account is password protected and provides a specific level of access to the server complex and service processor commands.
Multiple users can independently interact with the service processor because each service processor login session is private. However, some output is mirrored: the Command menu and each nPartition console permit one interactive user at a time and mirror output to all users accessing those features. Likewise, the service processor mirrors live chassis codes to all users accessing the Live Chassis Logs feature (or the Live Events feature).
Up to 32 users can log in to the service processor simultaneously through its network (customer LAN) interface, and they can independently manage nPartitions or view the server complex hardware states.
Two additional service processor login sessions can be supported by the local and remote serial ports. These allow for serial port terminal access (through the local RS-232 port) and external modem access (through the remote RS-232 port).
In general, the service processor (MP or GSP) on cell-based servers is similar to the service processor on other HP servers, while providing enhanced features necessary for managing multiple nPartitions.
For example, the service processor manages the complex profile, which defines nPartition configurations as well as complex-wide settings for the server.
The service processor also controls power, reset, and TOC capabilities, displays and records system events (or chassis codes), and can display detailed information about the various internal subsystems.
Service Processor (MP or GSP) Features
The following list describes the primary features available through the service processor on cell-based HP servers.
Command Menu The Command menu provides commands for system service, status, and access configuration tasks.
To enter the Command menu, enter CM at the service processor Main menu. To exit the service processor Command menu, enter MA or type ^b (Control-b) to return to the service processor Main menu.
See “Command Reference for Service Processor Commands” (page 80) for details.
Service processor commands are restricted based on the three levels of access: Administrator, Operator, and Single Partition User. See “Service Processor Accounts and Access Levels” (page 69) for details.
Consoles Each nPartition in a server complex has its own console.
Enter CO at the service processor Main menu to access the nPartition consoles. To exit the console, type ^b (Control-b).
See “nPartition Console Features” (page 70) for details.
Console output for each nPartition is reflected to all users currently accessing the nPartition console.
One console user can have interactive access to each nPartition console, and all other users of the console have read-only access. To gain write access for a console, type ^e cf (Control-e c f).
Each nPartition console provides access to:
— The nPartition system boot environment: either BCH or EFI.
The BCH or EFI system boot environment is available when the nPartition is active but has not yet loaded or booted an operating system.
The Boot Console Handler (BCH) environment is provided on HP 9000 servers only (PA-RISC servers).
The Extensible Firmware Interface (EFI) is provided on HP Integrity servers only (Intel® Itanium®-based servers).
— HP-UX console for the nPartition.
The nPartition console provides console login access to HP-UX and serves as /dev/console for the nPartition.
Console Logs Enter CL from the service processor Main menu to access the console logs menu. To exit the console log, type ^b (Control-b).
Each nPartition has its own console log, which stores a history of console output for the nPartition, including boot output, system boot environment (BCH or EFI) activity, and any HP-UX console login activity.
See “Viewing Console Logs” (page 77) for details.
The console log provides a limited history; it is a circular log file that overwrites the oldest information with the most recent.
All console activity is recorded in the console log, regardless of whether any service processor users are connected to the console.
Chassis Logs and Event Logs On both HP 9000 systems and HP Integrity systems, you can view real-time (live) system events and can view prior events that have been stored in a log history. Use the SL ("show logs") option from the service processor Main Menu to view events/chassis codes.
— On cell-based HP 9000 servers with HP PA-8700 processors, SL provides the Chassis Log Viewer. The chassis log viewer includes options for viewing: activity (level 1 and greater) logs, error (level 2 and greater) logs, and live logs (which optionally may be filtered by cell, nPartition, or alert level).
— On cell-based servers based on the HP sx1000 chipset or HP sx2000 chipset, SL provides the Event Log Viewer. The event log viewer includes options for viewing: forward progress (level 1 and greater) logs, system event (level 2 and greater) logs, and live logs (which optionally may be filtered by cell, nPartition, or alert level).
See “Viewing Chassis Codes or Event Logs” (page 78) for details.
Virtual Front Panel (VFP) for an nPartition The Virtual Front Panel (VFP) for each nPartition displays real-time boot activity and details about all cells assigned to the nPartition. The VFP display automatically updates as cell and nPartition status changes. A system-wide VFP also is provided.
Enter VFP at the Main menu to access the Virtual Front Panel menu. To exit a Virtual Front Panel, type ^b (Control-b).
See “Virtual Front Panel (VFP) nPartition Views” (page 79) for details.
Service Processor Accounts and Access Levels
To access the service processor interface for a server complex, you must have a user account that enables you to login to the service processor.
Each server complex has its own set of service processor user accounts, which are defined for the server complex and may differ from accounts on other complexes.
Service processor user accounts have a specific login name, password, and access level.
The three user account access levels are:
Administrator Account Provides access to all commands, and access to all nPartition consoles and Virtual Front Panels. Can manage user accounts (using the Command menu SO command) and can reconfigure various service processor settings.
Operator Account Provides access to a subset of commands, and access to all nPartition consoles and Virtual Front Panels.
Single Partition User Account Provides access to a restricted subset of commands and to the nPartition console for a single nPartition; however, it allows the user to view the Virtual Front Panel for any nPartition. Can only execute commands that affect the assigned nPartition. Cannot execute commands that could potentially affect multiple nPartitions or affect the service processor configuration.
Each user account can either permit repeated login sessions (a "multiple use" account) or restrict the account to a single login (a "single use" account).
nPartition Console Features
The service processor Console menu provides access to all nPartition consoles within the server complex.
Enter CO from the service processor Main menu to access an nPartition console. To exit the nPartition console, type ^b (Control-b) to return to the Main menu.
Each nPartition in a complex has a single console. However, multiple connections to the console are supported, allowing multiple users to simultaneously view the console output. Only one connection per console permits write-access.
To force (gain) console write access for an nPartition console, type ^ecf (Control-e c f).
Each nPartition console can display a variety of information about the nPartition, including:
Partition startup, shutdown, and reset output.
The system boot environment: either Boot Console Handler (BCH, on HP 9000 servers) or Extensible Firmware Interface (EFI, on HP Integrity servers).
The system boot environment is available when the nPartition has not yet booted an operating system and has completed Power-On Self Tests (POST) and completed nPartition rendezvous to become active.
The HP-UX login prompt and "console shell access".
CAUTION: When you use an nPartition console connection to log in to an operating system running on the nPartition, log out from the operating system when you have finished, before you type ^b (Control-b) to disconnect from the nPartition console.
If you fail to logout from the operating system console session, then any other service processor user who has permission to access the nPartition could connect to the nPartition console and use the open login session.
Disconnecting from an nPartition console does not close any open operating system login sessions.
nPartition Console Access versus Direct OS Login
You may need to consider the following factors when deciding whether to interact with an nPartition through the service processor console interface or a direct operating system (OS) login session.
Whether you want to log your activity to the console log for the nPartition (all console activity is stored at least temporarily).
Whether the OS is installed, booted, and properly configured on the nPartition.
If the OS is not installed on an nPartition, you should access the nPartition console (through the service processor) in order to install and configure the OS.
You should use the network to login to the OS running on an nPartition when you do not need to use service processor features and do not want to record a log of your activity.
Before an OS has booted, the service processor nPartition consoles are the primary method of interacting with an nPartition.
After an nPartition has booted the OS, you should be able to connect to and log in to the nPartition by using telnet, rlogin, or ssh to remotely access HP-UX or Linux, or by using remote desktop for a remote Windows session.
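For example, once HP-UX or Linux is running on the nPartition, a remote login requires no service processor features at all. A minimal sketch, assuming a hypothetical hostname:

    ssh root@npar1.example.com     (HP-UX or Linux)
    mstsc /v:npar1.example.com     (remote desktop to a Windows nPartition, from a Windows client)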
Boot Console Handler System Boot Environment
Each nPartition in a server complex has its own Boot Console Handler (BCH) interface.
The BCH interface is available through an nPartition console interface before an operating system has booted and after the cells have booted and performed nPartition rendezvous (to make the nPartition active).
The nPartition BCH interface enables you to manage and configure the HP-UX boot process for an nPartition. You also can configure some settings for the local nPartition, get some information about the nPartition and its server complex, and perform other tasks such as reboot.
To access an nPartition console type CO from the service processor (MP or GSP) Main menu.
To force console write access, type ^ecf (Control-e c f).
To exit the console, type ^b (Control-b) to return to the Main Menu.
The BCH interface is available after one or more core-capable cells assigned to the nPartition have been powered on; their hardware has completed all Power-On Self Tests (POST); and the cells have booted past boot-is-blocked, rendezvoused, and BCH has started executing.
Once you begin the HP-UX boot process and load ISL, the BCH interface is no longer available.
The BCH menus and commands for nPartitions differ slightly from the BCH menus and commands on other HP 9000 server systems.
To display the current BCH menu and commands, type DI.
The BCH interface HELP command lists BCH command or menu details.
Main Menu: Enter command or menu > HELP MA
---- Main Menu Help ----------------------------------------------------------
The following submenus are available from the main menu:

     COnfiguration : BootID, BootTimer, CEllConfig, COreCell, CPUConfig,
                     DataPrefetch, DEfault, FastBoot, KGMemory, PathFlag,
                     PD, ResTart, TIme
     INformation   : ALL, BootINfo, CAche, ChipRevisions, ComplexID,
                     FabricInfo, FRU, FwrVersion, IO, LanAddress, MEmory,
                     PRocessor
     SERvice       : BAttery, CLEARPIM, MemRead, PDT, PIM, SCSI
...
Extensible Firmware Interface System Boot Environment
On HP Integrity servers the system boot environment is provided by the Extensible Firmware Interface (EFI).
EFI is available through an nPartition console interface before an operating system has booted and after the cells have booted and performed nPartition rendezvous (to make the nPartition active).
The EFI environment enables you to manage and configure the operating system boot process for an nPartition. You also can configure some settings for the local nPartition, get information about the nPartition and its server complex, and perform other tasks such as reboot.
The EFI boot environment has two main components:
EFI Boot Manager — A menu-driven interface that enables you to configure and select boot options. From the EFI Boot Manager you can load an operating system, reset the nPartition, and configure various system boot and console options.
EFI Shell — A command-line system boot interface that you can enter by selecting the EFI Shell option from the EFI Boot Manager Menu.
Type exit to leave the EFI Shell interface and return to the EFI Boot Manager Menu.
The EFI Shell provides much of the same functionality as the Boot Console Handler (BCH) interface on HP 9000 systems (PA-RISC systems).
For details on using the EFI Shell use the help command.
The following commands are supported for accessing and using the EFI system boot environment for an nPartition:
To access an nPartition console type CO from the service processor (MP or GSP) Main menu.
To force console write access, type ^ecf (Control-e c f).
To exit the console, type ^b (Control-b) to return to the Main Menu.
Windows Special Administration Console (SAC)
After an nPartition has successfully loaded Microsoft® Windows® Server 2003, you can access a text-based Windows administration interface at the nPartition console.
The Special Administration Console (SAC) interface lets you interact with the Windows operating system running on an nPartition by using the SAC> command prompt that is provided at the nPartition console interface. The SAC commands listed in Table 4-1 (page 73) are provided for managing Windows.
The SAC interface enables you to have administrative access to Windows on an nPartition even if Windows networking is not functional.
Tips for using the SAC interface and a table of SAC commands follow.

SAC Interface: Tips for Interacting with Windows Special Administration Console

When using the Windows SAC interface through an nPartition console, you can use the commands in Table 4-1 (page 73). You also can use the following tips to help complete tasks with the SAC.
To list all commands available from the SAC, issue the ? or help command at the SAC> prompt.
To list basic identification and boot information about the instance of Windows running on the nPartition whose console you are using, issue the id command.
To switch among the "channels" provided by the SAC interface, use the channel management command.
SAC Channel Management Commands
Esc Tab — Change channels: if multiple channels exist, typing Esc Tab switches to the next channel, and typing Space selects a channel.
ch — List all channels.
ch -? — Display the channel-management command help.
ch -si # — Switch to a channel by its number.
ch -sn name — Switch to a channel by its name.
ch -ci # — Close a channel by its number.
ch -cn name — Close a channel by its name.
To create a new Windows command prompt that you can interact with through the nPartition console, issue the cmd command (a brief sketch follows these tips).
The cmd SAC command creates a new channel for the command prompt, which you can then switch to (using Esc Tab, or other commands) to log in to Windows running on the nPartition.
When you need to type function keys and are using the SAC, type the following key sequence:
Esc #
For example, for the F3 key type Esc then 3 in quick sequence.
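As a brief, hypothetical sketch of the cmd tip above (the exact confirmation text varies by Windows release):

    SAC> cmd
    The Command Prompt session was successfully launched.
    (Type Esc Tab to switch to the new Command Prompt channel, then log in to Windows.)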
Table 4-1 lists the commands you can issue at the Windows SAC> prompt, which is provided through an nPartition console after Windows has booted.
Table 4-1 Windows SAC Commands

    ch                   Channel management commands. Use ch -? for more help.
    cmd                  Create a Command Prompt channel.
    d                    Dump the current kernel log.
    f                    Toggle detailed or abbreviated tlist (Windows process) info.
    ? or help            Display command help.
    i                    List all IP network numbers and their IP addresses.
    i # ip sub gate      Set the IP address, subnet, and gateway for an IP network number.
    id                   Display the Windows nPartition identification information.
    k pid                Kill the given process.
    l pid                Lower the priority of a process to the lowest possible.
    lock                 Lock access to Command Prompt channels.
    m pid MB-allow       Limit the memory usage of a process to MB-allow.
    p                    Toggle paging the display.
    r pid                Raise the priority of a process by one.
    s                    Display the current time and date (24-hour clock used).
    s mm/dd/yyyy hh:mm   Set the current time and date (24-hour clock used).
    t                    Display tlist info (a list of Windows processes running on the nPartition).
    restart              Restart the system immediately.
    shutdown             Shut down the system immediately. This puts the nPartition in a shutdown for reconfig (inactive) state. To boot the nPartition (make it active), use the BO command at the service processor Command menu.
    crashdump            Crash the Windows system running on the nPartition. You must have crash dump enabled.
Accessing and Using the Service Processor
This section describes how to login to the service processor (MP or GSP) for a server complex.
You can connect to the service processor for a server complex by using the following methods:
Connecting through the customer LAN port by using telnet, if login access through the customer LAN is enabled for the service processor.
On HP Superdome servers, the customer LAN hardware is labeled "Customer LAN". On HP rp8400 servers it is "GSP LAN". On HP rp7405/rp7410 servers it is the only LAN port on the core I/O.
Use telnet to open a connection with the service processor, then login by entering the account name and corresponding password.
Connecting through the local RS-232 port using a direct serial cable connection.
On HP Superdome server hardware, the local RS-232 port is labeled "Local RS-232". On HP rp8400 servers it is the "Local Console" port. On HP rp7405/rp7410 servers it is the 9-pin D-shaped connector (DB9) labeled "Console".
Connecting through the remote RS-232 port using external modem (dial-up) access, if remote modem access is configured.
On HP Superdome server hardware, the remote RS-232 port is labeled "Remote RS-232". On HP rp8400 servers it is the "Remote Console" port. On HP rp7405/rp7410 servers it is the DB9 connector labeled "Remote".
Example 4-1 Overview of a Service Processor Login Session
The following output shows a sample login session for a server whose service processor hostname is "hpsys-s".
> telnet hpsys-s
Trying...
Connected to hpsys-s.rsn.hp.com.
Escape character is '^]'.
Local flow control off

MP login: Username
MP password:

                Welcome to the

                S Class 16K-A

            Management Processor

(c) Copyright 1995-2001 Hewlett-Packard Co., All Rights Reserved.

                Version 0.23

MP MAIN MENU:

     CO: Consoles
    VFP: Virtual Front Panel
     CM: Command Menu
     CL: Console Logs
     SL: Show chassis Logs
     HE: Help
      X: Exit Connection

MP>
Procedure 4-1 Logging in to a Service Processor
This procedure connects to and logs in to the service processor (MP or GSP) for a server complex by using telnet to access the customer LAN.
If connecting through the local RS-232 port, skip Step 1 (instead establish a direct-cable connection) and begin with Step 2.
1. Use the telnet command on a remote system to connect to the service processor for the server complex.
You can connect directly from the command line, for example:
telnet sdome-g
or run telnet first, and then issue the open command (for example, open sdome-g) at the telnet> prompt.
All telnet commands and escape options are supported while you are connected to the service processor.
2. Login using your service processor user account name and password.
GSP login: Username
GSP password: Password
3. Use the service processor menus and commands as needed and log out when done.
To log out, select the Exit Connection menu item from the Main menu (enter X at the GSP> prompt or MP> prompt).
You also can terminate a login session by issuing the telnet escape key sequence ^] (type: Control-right bracket) and entering close at the telnet> prompt.
NOTE: If possible, you should log out of any consoles and menus before terminating your telnet session.
If accessing an OS on an nPartition, log out of the OS before exiting the console and service processor sessions. (Otherwise an open OS login session will remain available to any other service processor users.)
Using Service Processor Menus
The service processor (MP or GSP) has a set of menus that give you access to various commands, consoles, log files, and other features.
See “Navigating through Service Processor Menus” (page 76) for details on using these menus.
The following menus are available from the service processor Main menu (which is the menu you first access when logging in):
Console Menu—Provides access to nPartition consoles for the server.
Virtual Front Panel Menu—Provides a Virtual Front Panel for each nPartition (or for the entire server complex).
Command Menu—Includes service, status, system access, and other commands.
Console Log Viewer Menu—Allows access to the console logs for nPartitions.
Chassis Log Viewer Menu or Event Log Viewer Menu—Allows access to the server chassis code logs (on HP 9000 servers with HP PA-8700 processors) or event logs (on servers based on the HP sx1000 chipset or HP sx2000 chipset). Chassis logs and event logs are functionally equivalent: they record system activities. However, event logs are more descriptive.
Help Menu—Provides online help on a variety of service processor topics and on all service processor Command menu commands.
These menus provide a central point for managing a server complex outside of an operating system.
The service processor menus provide many tools and details not available elsewhere. More administration features also are available from the nPartition system boot environments (BCH or EFI), the nPartition tools, and various operating system commands.
Navigating through Service Processor Menus
The following list includes tips for navigating through service processor menus and using various menu features:
Control-b
Exit current console, console log, chassis log, or Virtual Front Panel.
When accessing the Command menu, an nPartition console, any log files, or any Virtual Front Panel (VFP), you can exit and return to the Main menu by typing ^b (Control-b).
Q (or lower-case q)
Exit or cancel current menu prompt.
Enter Q (or lower-case q) as response to any menu prompt to exit the prompt and return to the previous sub-menu.
You can do this throughout the service processor menus, including the console menus, various command menu prompts, and the log and VFP menus.
Note that from the Command menu prompt (GSP:CM> or MP:CM>), you must enter MA (not Q) to return to the Main menu. However, you can enter Q or q to cancel any command.
Control-]
Escape the service processor connection and return to the telnet prompt.
At any time during your telnet connection to a service processor, you can type the ^] (Control-right bracket) escape sequence.
This key sequence escapes back to the telnet prompt. When at the telnet> prompt you can use the following commands, among others: ? (print telnet command help information), close (close the current connection), and quit (exit telnet).
To return to the service processor connection, press Enter (or Return) one or more times.
Network Configuration for a Service Processor
This section gives an overview of the network settings for the service processor (MP or GSP) hardware. These settings are used for connections to the service processor and are not used for HP-UX networking.
Details on configuring service processor networking are given in the service guide for each server.
The service processor utility hardware on HP Superdome servers has two network connections: the customer LAN and private LAN.
The service processor on other (non-Superdome) cell-based servers does not have a private LAN; only a customer LAN connection is provided.
Features of service processor LANs are given in the following list.
Customer LAN for Service Processor The customer LAN is the connection for login access
to the service processor menus, consoles, commands, and other features.
All cell-based servers have a customer LAN.
On HP Superdome servers, the customer LAN port is labeled "Customer LAN". On HP rp8400 servers it is "GSP LAN". On HP rp7405/rp7410 servers it is the only LAN connection on each core I/O board.
Private LAN for Service Processor (Superdome Only) The private LAN is the connection to the Superdome service support processor (SSP) workstation, also called the Support Management Station (SMS).
Only Superdome servers have a private LAN. It typically is not used on the Superdome server models based on the HP sx1000 chipset or HP sx2000 chipset.
To list the current service processor network configuration, use the LS command. To configure service processor network settings, use the LC command from the Command menu. For procedures, refer to the service guide for your server.
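As a minimal sketch, listing and then changing the settings from the Command menu looks like the following; the exact prompts and dialog vary by server model and firmware revision:

    GSP:CM> LS     (display the current network configuration)
    GSP:CM> LC     (start the interactive LAN configuration dialog)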
Viewing Console Logs
Each nPartition in a server complex has its own console log that stores a record of the most recent nPartition console activity.
To access the console log for an nPartition, enter CL from the service processor Main menu and select which nPartition console log you want to view. To exit the console log viewer, type ^b (Control-b) to return to the Main menu.
When viewing an nPartition console log, type P to view the previous page of the console log, or type N (or Enter) to view the next page.
When you enter a console log viewer it displays the oldest data in the log first and allows you to page through the log to view the more recently recorded activity.
Each console log is a circular log file that records approximately 30 to 40 pages of data. All nPartition console activity is written to this log file, regardless of whether a user is connected to the nPartition console.
As a console log is written the oldest data in the log is overwritten by current data, as needed, so that the last 30 to 40 pages of console output always is available from the console log viewer.
Viewing Chassis Codes or Event Logs
The event log and chassis code viewers enable you to view chassis codes or event logs that are emitted throughout the entire server complex.
NOTE: On HP 9000 servers with HP PA-8700 processors, the equivalent of event logs is chassis codes.
To enter the event log viewer enter SL at the service processor Main menu. To exit the viewer type ^b (Control-b) to return to the Main menu.
Event logs are data that communicate information about system events from the source of the event to other parts of the server complex. Event log data indicates what event has occurred, when and where it happened, and its severity (the alert level).
All event logs pass from the event source through the service processor. The service processor takes any appropriate action and then reflects the event logs to all running nPartitions. If an nPartition is running event monitoring software, it may also take action based on the event logs (for example, sending notification e-mail).
System administrators, of course, may have interest in viewing various event logs—especially event logs that indicate failures or errors.
Hardware, software, and firmware events may emit event logs as a result of a failure or error, a major change in system state, or basic forward progress. For example: a fan failure, a machine check abort (MCA), the start of a boot process, hardware power on or off, and test completion all result in event logs being emitted.
NOTE: The front panel attention LED for a cell-based server cabinet is automatically turned on when one or more event logs of alert level 2 or higher have not yet been viewed by the administrator. When this attention LED is on, entering the chassis log viewer turns the LED off.
You can remotely check the on/off status of this attention LED by using the PS command, G option, from the service processor Command menu.
On cell-based servers, event logs are recorded in the server complex activity log (for events of alert level 0 or alert level 1) or the error log (for events of alert level 2 or higher).
GSP> SL
Chassis Logs available:

    (A)ctivity Log
    (E)rror Log
    (L)ive Chassis Logs
    (C)lear All Chassis Logs
    (Q)uit
GSP:VW> L
Entering Live Log display
A)lert filter C)ell filter P)artition filter U)nfiltered V)iew format selection ^B to Quit
Current filter: ALERTS only
Log Viewing Options: Activity, Error, and Live Chassis Logs

When you enter the chassis log viewer by entering SL at the service processor (MP or GSP) Main menu, you can select from these viewers:
Activity Log Viewer Allows you to browse recorded event logs of alert level 0 or 1.
Error Log Viewer Allows you to browse recorded event logs of alert level 2 or higher.
Live Chassis Logs Viewer Displays event logs in real time as they are emitted.
By default, the live event log viewer has the Alert filter enabled, which causes it to display only the events of alert level 3 or higher.
To view all event logs in real-time, type U for the Unfiltered option.
You also can filter the live codes by cell (C) or nPartition (P). Cell filter: only display event logs emitted by a specific cell in the server complex. Partition filter: only display event logs emitted by hardware assigned to a specific nPartition.
When viewing event logs, type V to change the display format. The viewers can show event logs in text format (T), keyword format (K), or raw hex format (R).
Virtual Front Panel (VFP) nPartition Views
The Virtual Front Panel (VFP) provides ways to monitor the boot or run status of each cell in an nPartition and of the nPartition itself. The VFP provides the sort of information typically displayed on the LCD of a non-partitionable server.
The VFP presents a real-time display of activity on the selected nPartition(s) and it automatically updates when cell and nPartition status change.
To access the VFP feature, enter VFP from the service processor Main menu. To exit the VFP, type ^b (Control-b) to return to the Main menu.
When you access a Virtual Front Panel, you can either select the nPartition whose VFP you want to view or select the system VFP to view summary information for all nPartitions in the server complex.
E indicates error since last boot

     Partition 0 state          Activity
     -----------------          --------
     Cell(s) Booting:           710 Logs

     #  Cell state              Activity
     -  ----------              --------
     0  Early CPU selftest      Cell firmware test       232 Logs
     1  Early CPU selftest      Processor test           230 Logs
     2  Memory discovery        Physical memory test     242 Logs
GSP:VFP (^B to Quit) >
Command Reference for Service Processor Commands
Table 4-2 lists the commands available from the service processor command menu (the MP:CM> or GSP:CM> prompt).
The following categories of commands are available:
“Service Commands — Service Processor (MP or GSP)”.
“Status Commands — Service Processor (MP or GSP)”.
“System and Access Configuration Commands — Service Processor (MP or GSP)”.
Some commands are restricted to users with Operator or Administrator authority. Also note that the available set of commands may differ depending on the utility revision level and server hardware model.
For details on these commands, use the help (HE: Help) feature at the service processor Main Menu. Enter the command name at the MP:HELP or GSP:HELP prompt for syntax, restrictions, and other information.
Table 4-2 Service Processor (MP or GSP) Command Reference

Service Commands — Service Processor (MP or GSP): commands for general server complex administration and nPartition management.

    BO         Boot an nPartition past Boot Is Blocked (BIB).
    DF         Display FRU information of an entity.
    MA         Return to the Main menu.
    MR         Modem reset.
    PCIOLAD    Activate/deactivate a PCI card.
    PE         Power entities on or off.
    RE         Reset entity.
    RR         Reset an nPartition for reconfiguration; the nPartition remains inactive, in the shutdown for reconfig state.
    RS         Reset an nPartition.
    TC         Send a TOC signal to an nPartition.
    TE         Broadcast a message to all users of the MP Command Handler.
    VM         Margin the voltage in a cabinet.
    WHO        Display a list of MP connected users.

Status Commands — Service Processor (MP or GSP): commands for displaying hardware and nPartition information.

    CP         Display nPartition cell assignments.
    HE         Display the list of available commands.
    IO         Display I/O chassis/cell connectivity.
    LS         Display LAN connected console status.
    MS         Display the status of the modem.
    PS         Display detailed power and hardware configuration status.
    SYSREV     Display revisions of all firmware entities in the complex.

System and Access Configuration Commands — Service Processor (MP or GSP): commands for managing server complex accounts, security, and nPartition configuration.

    PARPERM    Restrict/unrestrict nPartition Reconfiguration Privilege.
    PD         Modify the default nPartition for this login session.
    RL         Rekey Complex Profile locks (unlock the Complex Profile).
    SA         Display and set (enable/disable) MP remote access methods.
    SO         Configure security options and access control (user accounts and passwords).
    XD         MP diagnostics and reset.
Command Reference for EFI Shell Commands
Table 4-3 lists the commands supported by the EFI Shell interface on cell-based HP Integrity servers.
The EFI Shell is accessible from an nPartition console when the nPartition is in an active state but has not booted an operating system.
The following categories of commands are available:
“Boot Commands — EFI Shell”.
“Configuration Commands — EFI Shell”.
“Device, Driver, and Handle Commands — EFI Shell”.
“Filesystem Commands — EFI Shell”.
“Memory Commands — EFI Shell”.
“Shell Navigation and Other Commands — EFI Shell”.
“Shell Script Commands / Programming Constructs — EFI Shell”.
For details on these commands, enter help command at the EFI shell prompt.
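For instance, the following hedged EFI Shell fragment shows the help command and one common follow-up; exact syntax can vary with firmware revision:

    Shell> help bcfg        (detailed help for the bcfg command)
    Shell> bcfg boot dump   (list the entries in the boot options list)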
Table 4-3 EFI Shell Command Reference

Boot Commands — EFI Shell: commands related to nPartition booting.

    autoboot        Set/view autoboot timeout variable.
    bcfg            Display/modify the driver/boot configuration.
    boottest        Set/view BootTest bits.
    dbprofile       Display/modify direct boot profiles for use by lanboot.
    lanboot         Boot over the LAN.
    reconfigreset   Reset the system (nPartition) for reconfiguration; the nPartition remains inactive, in the shutdown for reconfig state.
    reset           Reset the system (nPartition).
    search          Connect drivers for bootable devices.

Configuration Commands — EFI Shell: commands for changing and retrieving system (nPartition) information.

    acpiconfig      Set/view ACPI configuration mode.
    cellconfig      Deconfigure/reconfigure cells. (Set cell use-on-next-boot values.)
    cpuconfig       Deconfigure/reconfigure processors and processor cores.
    date            Display the current date or set the date of the system (nPartition).
    dimmconfig      Deconfigure/reconfigure memory (DIMMs).
    err             Display/change the error level.
    errdump         View/clear logs.
    fru             View FRU data.
    info            Display hardware information.
    monarch         Set/view a monarch processor.
    palproc         Make a PAL call.
    romdrivers      Enable/disable PCI expansion ROM drivers.
    rootcell        Set/view preferred root cells. (Set nPartition core cell choices.)
    salproc         Make a SAL call.
    tftp            Perform a TFTP operation to a bootp/DHCP-enabled UNIX boot server.
    time            Display the current time or set the time of the system (nPartition). EFI time is set and presented in GMT (Greenwich mean time).
    variable        Save/restore specific EFI variables.
    ver             Display the version information.

Device, Driver, and Handle Commands — EFI Shell: commands for managing devices, drivers, and handles.

    baud            View serial port com settings.
    connect         Bind a driver to a device.
    dblk            Hex dump of BlkIo devices.
    devices         Display devices managed by EFI drivers.
    devtree         Display tree of devices.
    dh              Dump handle info.
    disconnect      Disconnect driver(s) from device(s).
    drivers         Display list of drivers.
    drvcfg          Invoke the Driver Config Protocol.
    drvdiag         Invoke the Driver Diagnostics Protocol.
    guid            Dump known GUID IDs.
    lanaddress      Display MAC address.
    load            Load EFI drivers.
    map             Map shortname to device path.
    openinfo        Display the open protocols for given handle.
    pci             Display PCI devices or PCI function configuration space.
    reconnect       Reconnect driver(s) from a device.
    unload          Unload a protocol image.

Filesystem Commands — EFI Shell: commands for managing files, directories, and attributes.

    attrib          Display/change the attributes of files/directories.
    cd              Update/view the current directory.
    comp            Compare the contents of two files.
    cp              Copy one or more files/directories to another location.
    edit            Edit an ASCII or UNICODE file in full screen.
    eficompress     Compress infile and write to outfile.
    efidecompress   Decompress infile and write to outfile.
    hexedit         Edit a file, block device, or memory region using hex.
    ls              Display a list of files and subdirectories in a directory.
    mkdir           Create one or more directories.
    mount           Mount a filesystem on a block device.
    rm              Delete one or more files/directories.
    setsize         Set the size of a file.
    touch           Update time of file/directory with current time.
    type            Display the contents of a file.
    vol             Display volume information of the file system.

Memory Commands — EFI Shell: commands for listing and managing memory, EFI variables, and NVRAM details.

    default         Set the default NVRAM values.
    dmem            Dump memory or memory mapped IO.
    dmpstore        Display all EFI variables.
    memmap          Display the memory map.
    mm              Display/modify MEM/IO/PCI.
    pdt             View/clear nPartition or cell memory page deallocation table (PDT).

Shell Navigation and Other Commands — EFI Shell: commands for basic EFI Shell navigation and customization.

    alias           Set/get alias settings.
    cls             Clear the standard output with an optional background color.
    exit            Exit the EFI Shell environment.
    getmtc          Display current monotonic counter value.
    help or ?       Display help.
    mode            Display the mode of the console output device.
    set             Set/get environment variable.
    xchar           Turn on/off extended character features.

Shell Script Commands / Programming Constructs — EFI Shell: EFI shell-script commands.

    echo            Echo message to stdout or toggle script echo.
    else            Script-only: Use with IF THEN.
    endfor          Script-only: Delimiter for FOR loop construct.
    endif           Script-only: Delimiter for IF THEN construct.
    for             Script-only: Loop construct.
    goto            Script-only: Jump to label location in script.
    if              Script-only: IF THEN construct.
    input           Take user input and place in EFI variable.
    pause           Script-only: Prompt to quit or continue.
    stall           Stall the processor for some microseconds.
Command Reference for BCH Menu Commands
Table 4-4 lists the commands available from the Boot Console Handler (BCH) menus for an
nPartition.
The BCH Menu is accessible from an nPartition console when the nPartition is in an active state but has not booted an operating system.
The following categories of commands are available:
“General Commands — Boot Console Handler (BCH)”.
“Main Menu Commands — Boot Console Handler (BCH)”.
“Configuration Menu Commands — Boot Console Handler (BCH)”.
“Information Menu Commands — Boot Console Handler (BCH)”.
“Service Menu Commands — Boot Console Handler (BCH)”.
For details on these commands, use the help (HE) command. At any BCH menu, enter HE command (where command is the command of interest) for details about that command, or enter HE alone for general help.
Table 4-4 Boot Console Handler (BCH) Command Reference

General Commands — Boot Console Handler (BCH): these BCH commands are available from all BCH menus.

    BOot [PRI|HAA|ALT|path]
        Boot from the specified path.
    REBOOT
        Restart the nPartition.
    RECONFIGRESET
        Reset the nPartition to allow Complex Profile reconfiguration; the nPartition remains inactive, in the shutdown for reconfig state.
    DIsplay
        Redisplay the current menu.
    HElp [menu|command]
        Display help for the current menu or the specified menu or command.

Main Menu Commands — Boot Console Handler (BCH): commands to find devices, set boot paths (PRI, HAA, ALT), and access other BCH menus.

    BOot [PRI|HAA|ALT|path]
        Boot from the specified path.
    PAth [PRI|HAA|ALT] [path]
        Display or modify a device boot path.
    SEArch [ALL|cell|path]
        Search for boot devices.
    ScRoll [ON|OFF]
        Display or change scrolling capability.
    COnfiguration
        Access the Configuration Menu, which displays or sets boot values.
    INformation
        Access the Information Menu, which displays hardware information.
    SERvice
        Access the Service Menu, which displays service commands.

Configuration Menu Commands — Boot Console Handler (BCH): commands to display or set boot values.

    MAin
        Return to the BCH Main Menu.
    BootID [cell [proc [bootid]]]
        Display or set the Boot Identifier.
    BootTimer [0-200]
        Seconds allowed for a boot attempt.
    CEllConfig [cell] [ON|OFF]
        Configure or deconfigure the specified cell.
    COreCell [choice cell]
        Display or set core cell choices for the nPartition.
    CPUconfig [cell [cpu [ON|OFF]]]
        Configure or deconfigure the processor (CPU) on the specified cell.
    DataPrefetch [ENABLE|DISABLE]
        Display or set data prefetch behavior.
    DEfault
        Set the nPartition to predefined (default) values.
    FastBoot [test] [RUN|SKIP]
        Display or set boot tests execution (self tests).
    KGMemory [value]
        Display or set the KGMemory requirement.
    PathFlags [PRI|HAA|ALT] [value]
        Display or set boot path flags (boot actions).
    PD [name]
        Display or set the nPartition name.
    ResTart [ON|OFF]
        Set the nPartition restart policy.
    TIme [cn:yr:mo:dy:hr:mn:[ss]]
        Read or set the real-time clock, the local nPartition date/time setting. The BCH time is set and presented in GMT (Greenwich mean time).

Information Menu Commands — Boot Console Handler (BCH): commands to display hardware information.

    MAin
        Return to the BCH Main Menu.
    ALL [cell]
        Display all of the information available for the nPartition.
    BootINfo
        Display boot-related information.
    CAche [cell]
        Display cache information.
    ChipRevisions [cell]
        Display revisions of major integrated circuits.
    ComplexID
        Display Complex information.
    FabricInfo
        Display Fabric information.
    FRU [cell] [CPU|MEM]
        Display FRU information.
    FwrVersion [cell]
        Display versions for PDC, ICM, and complex.
    IO [cell]
        Display I/O interface information.
    MEmory [cell]
        Display memory information.
    PRocessor [cell]
        Display processor information.

Service Menu Commands — Boot Console Handler (BCH): commands related to nPartition system service tasks.

    MAin
        Return to the BCH Main Menu.
    BAttery [cell]
        Display cell battery status.
    CLEARPIM [cell]
        Clear the non-volatile processor internal memory (NVM PIM) data for the nPartition.
    DimmDealloc [cell] [dimm] [ON|OFF]
        Display, deallocate, or re-allocate the DIMM identified by dimm in the cell specified by cell.
    ErrorLog [cell] [MEMORY|IO|FABRIC|CELL]
        Display error log information.
    LanAddress
        Display the Core I/O LAN station address.
    MemRead address [len]
        Read memory locations within the scope of the nPartition.
    PDT [cell] [CLEAR]
        Display or clear the memory page deallocation table (PDT).
    PIM [cell [proc]] [HPMC|LPMC|TOC]
        Display the processor internal memory (PIM) data for the nPartition.
    SCSI [path [INIT|RATE|TERM|WIDTH|DEFAULT [id]]]
        Display or set SCSI device parameters.
5 Booting and Resetting nPartitions
This chapter introduces nPartition system boot and reset concepts, configuration options, and procedures for booting and resetting nPartitions.
This chapter covers boot details for HP-UX, HP OpenVMS I64, Microsoft® Windows® Server 2003, Red Hat Enterprise Linux, and the SuSE Linux Enterprise Server operating systems.
Differences in the nPartition boot process on PA-RISC systems and Intel® Itanium®-based systems also are addressed in this chapter.
CAUTION: An nPartition on an HP Integrity server cannot boot HP-UX virtual partitions when in nPars boot mode. Likewise, an nPartition on an HP Integrity server cannot boot an operating system outside of a virtual partition when in vPars boot mode.
For details, refer to “Boot Modes on HP Integrity nPartitions: nPars and vPars Modes” (page 94).
NOTE: For details on boot and reset of nPartitions running vPars software, refer to Installing and Managing HP-UX Virtual Partitions (vPars).
Overview of nPartition System Booting
This section provides an overview of the nPartition system boot process for HP 9000 servers and HP Integrity servers.
On cell-based HP servers, system resources are configured into one or more nPartitions. Each nPartition includes the cells (with processors and memory) assigned to it and the I/O that is connected to those cells.
An nPartition can boot and reboot independently of any other nPartitions in the same server complex. Each nPartition runs its own firmware and has its own system boot environment. nPartitions provide hardware and software fault isolation: a reset, TOC, or MCA in one nPartition does not affect any other nPartition in most cases.
Each nPartition is effectively an independent system that follows the boot processes outlined in the following lists. “Boot Overview for Cell-Based HP 9000 Servers” shows an overview of the boot process on HP 9000 servers (PA-RISC systems). “Boot Overview for Cell-Based HP Integrity Servers” shows an overview of the boot process on HP Integrity servers (Itanium® 2-based systems).

Also refer to “Boot Process for Cells and nPartitions” (page 32) for details.

Boot Overview for Cell-Based HP 9000 Servers Cell-based HP 9000 servers have PA-RISC processors and have the following boot process:
1. PDC Self Test
2. PDC Boot
3. Boot Console Handler (BCH, a menu-driven boot environment)
4. Initial System Loader (ISL)
5. Secondary System Loader (hpux)
6. HP-UX Operating System
Boot Overview for Cell-Based HP Integrity Servers Cell-based HP Integrity servers have Intel® Itanium® processors and have the following boot process:
1. Processor Abstraction Layer (PAL)
2. System Abstraction Layer (SAL)
3. Extensible Firmware Interface (EFI)
4. EFI Boot Manager (menu-driven boot environment)
a. EFI Shell (command-driven boot environment)
b. EFI Scripts and Applications
EFI scripts and EFI applications can be initiated from either EFI Boot Manager or EFI Shell.
5. Operating System Loader The following OS loaders are supported on HP Integrity servers. OS loaders can be initiated from the EFI Boot Manager or the EFI Shell.
a. HPUX.EFI Loader
Loader for the HP-UX operating system.
b. ELILO.EFI Loader
Loader for Red Hat Enterprise Linux or SuSE Linux Enterprise Server.
c. vms_loader.efi Loader
Loader for HP OpenVMS I64.
d. ia64ldr.efi Loader
Loader for Microsoft Windows Server 2003. ia64ldr.efi must be initiated from EFI Boot Manager (not from the EFI Shell).
Boot Process Differences for nPartitions on HP 9000 servers and HP Integrity servers
The following lists, “HP Integrity Server Booting” and “HP 9000 Server Booting”, describe system boot features and differences on HP Integrity and HP 9000 servers.
HP Integrity Server Booting This list describes system boot features on cell-based HP Integrity servers.
The nPartition system boot environment is the Extensible Firmware Interface (EFI): the EFI Boot Manager menu and the EFI Shell.
The autoboot process is configured by the EFI autoboot setting and the order of items in the boot options list.
The boot options list can include:
— First boot option: configured using the setboot -p... or parmodify -b... command.
— Second boot option: configured using the setboot -h... or parmodify -s... command.
— Third boot option: configured using the setboot -a... or parmodify -t... command.
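For example, the first boot option can be set from HP-UX with either command. A hedged sketch; the device path and partition number below are hypothetical placeholders:

    setboot -p 0/0/2/0/0.6.0           (set the first boot option from the local nPartition)
    parmodify -p 0 -b 0/0/2/0/0.6.0    (equivalent, naming nPartition number 0 explicitly)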
Each operating system has its own OS loader.
— The HP-UX OS loader is HPUX.EFI, which supports hpux(1M) loader options. You can issue hpux loader commands from the HPUX> prompt.
— The HP OpenVMS I64 loader is vms_loader.efi.
— The Microsoft® Windows® loader is ia64ldr.efi, and it is invoked only from the EFI Boot Manager.
— The loader for Red Hat Enterprise Linux and SuSE Linux Enterprise Server is ELILO.EFI. You can issue ELILO loader commands from the "ELILO boot" prompt.
The EFI system boot environment includes an ACPI configuration setting that must be set properly for the OS being booted: either HP-UX, OpenVMS I64, Windows, or Linux. For details see “ACPI Configuration Value—HP Integrity Server OS Boot” (page 92).
HP 9000 Server Booting This list describes system boot features on cell-based HP 9000 servers.
The nPartition system boot environment is the Boot Console Handler (BCH).
The autoboot process is configured using boot device paths (PRI, HAA, ALT) and path flags.
— PRI boot path: configured using the setboot -p... or parmodify -b... command.
— HAA boot path: configured using the setboot -h... or parmodify -s... command.
— ALT boot path: configured using the setboot -a... or parmodify -t... command.
The HP-UX B.11.11 OS loaders are ISL and hpux. Issue commands from the ISL> prompt.
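For example, a hedged sketch of the ISL prompt; /stand/vmunix is the customary HP-UX kernel path:

    ISL> hpux                       (boot the default kernel, /stand/vmunix)
    ISL> hpux -is /stand/vmunix     (boot the kernel to single-user mode)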
Types of Booting and Resetting for nPartitions
HP cell-based servers provide two special types of reboot and reset for managing nPartitions: performing a reboot for reconfig, and performing a shutdown for reconfig.
The following list summarizes all types of booting, rebooting, and resetting that are supported for HP nPartition systems. See the “Reboot for Reconfig” and “Shutdown for Reconfig State” items for a discussion of these nPartition-specific boot processes.
NOTE: You can perform the Windows shutdown tasks either by using the shutdown command or by using the Start > Shut Down action.
Reboot A reboot shuts down the operating system and reboots the nPartition. On HP 9000 systems, only the active cells in the nPartition are reset. On HP Integrity systems, all cells are reset.
To perform a standard reboot of an nPartition use the HP-UX shutdown -r command, the Windows shutdown /r command, the Linux shutdown -r time command, or, on OpenVMS, @SYS$SYSTEM:SHUTDOWN with an automatic system reboot.
Halt A halt shuts down the operating system, halts all processing on the nPartition, and does not reboot.
To halt the operating system use the HP-UX shutdown -h command.
To reboot an nPartition that was halted from HP-UX use the RS command from the service processor Command menu.
Halting the system is supported only on HP 9000 servers. On HP Integrity servers the effect of the shutdown -h command or its Windows and Linux equivalents is to perform a shutdown for reconfig (see “Shutdown for Reconfig State” in this list). On HP OpenVMS servers, shutting down without rebooting halts OpenVMS but does not perform a shutdown for reconfig.
Reset A reset resets the nPartition immediately. On HP 9000 systems, only the active cells in the nPartition are reset. On HP Integrity systems, all cells are reset.
You can reset an nPartition using the REBOOT command from the BCH interface, the reset command from the EFI Shell, or the RS command from the service processor Command menu.
The RS command does not check whether the specified nPartition is in use or running an operating system—be certain to correctly specify the nPartition.
NOTE: On HP Integrity servers you should reset an nPartition only after all self tests and partition rendezvous have completed. For example, when the nPartition is inactive (all cells are at BIB) or is at EFI.
Boot an nPartition from the Service Processor (GSP or MP)
A boot initiated from the service processor boots an inactive nPartition past the shutdown for reconfig state to allow it to become active.
To boot an inactive nPartition, use the BO command from the service processor Command menu.
The cells assigned to the nPartition proceed past boot-is-blocked (BIB), rendezvous, and the nPartition boots to the system boot environment (BCH or EFI).
Reboot for Reconfig
A reboot for reconfig shuts down the operating system, resets all cells assigned to the nPartition, performs any nPartition reconfigurations, and boots the nPartition back to the system boot environment (BCH or EFI).
To perform a reboot for reconfig of the local nPartition, use the HP-UX shutdown -R command, the Windows shutdown /r command, or the Linux shutdown -r time command. To perform a reboot for reconfig from OpenVMS I64 running on an nPartition, issue
@SYS$SYSTEM:SHUTDOWN.COM from OpenVMS, and then enter Yes at the "Should an automatic system reboot be performed" prompt.
All cells—including any inactive cells and all newly added or deleted cells—reset and the nPartition is reconfigured as needed. All cells with a "y" use-on-next-boot setting participate in partition rendezvous and synchronize to boot as a single nPartition.
After you assign a cell to an nPartition, or remove an active cell from an nPartition, you can perform a reboot for reconfig of the nPartition to complete the cell addition or removal.
If an nPartition is configured to boot an operating system automatically, it can do so immediately following a reboot for reconfig.
Shutdown for Reconfig State
Putting an nPartition into the shutdown for reconfig state involves shutting down the operating system (as required), resetting all cells assigned to the nPartition, performing any nPartition reconfigurations, and keeping all cells at a boot-is-blocked (BIB) state, thus making the nPartition and all of its cells inactive.
On HP rx7620, rx7640, rx8620, and rx8640 servers, you can configure the OS shutdown for reconfig behavior for each nPartition to either power off hardware or keep cells at BIB. See
“ACPI Softpowerdown Configuration—OS Shutdown Behavior” (page 93) for details.
To put an nPartition into the shutdown for reconfig state, use the HP-UX shutdown -R -H command, the Windows shutdown /s command, or the Linux shutdown -h time command. To perform a shutdown for reconfig of an nPartition running OpenVMS I64: first issue @SYS$SYSTEM:SHUTDOWN.COM from OpenVMS and enter No at the "Should an automatic system reboot be performed" prompt, then access the MP and, from the MP Command Menu, issue the RR command and specify the nPartition that is to be shut down for reconfig.
From system firmware, to put an nPartition into the shutdown for reconfig state use the RECONFIGRESET command from the BCH interface, the reconfigreset command from the EFI Shell, or the RR command from the service processor Command menu.
To make an nPartition boot past shutdown for reconfig, use either the BO command or the PE command from the service processor Command menu.
— For an inactive nPartition whose cells are at BIB, use the BO command from the service
processor Command menu. The BO command makes the nPartition active by allowing
its cells to boot past BIB, rendezvous, and boot to the system boot environment (BCH or EFI) and, if configured, automatically boot an operating system.
— For an nPartition whose cells have been powered off, use the PE command to power
on the nPartition hardware.
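For example, a representative sequence shuts an nPartition down to the inactive (shutdown for reconfig) state from HP-UX and later reactivates it from the service processor; the grace period shown is illustrative:

    /sbin/shutdown -R -H -y 0    # HP-UX: shut down for reconfig and hold all cells at BIB

Afterward, enter the BO command at the service processor Command menu and select the nPartition to let its cells boot past BIB.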
TOC: Transfer-of-Control Reset
When you initiate a transfer-of-control reset, the service processor immediately performs a TOC reset of the specified nPartition, which resets the nPartition and allows a crash dump to be saved.
If crash dump is configured for an OS on an nPartition, then when you TOC the nPartition while it is running the OS, the nPartition performs a crash dump and lets you select the type of dump.
To perform a TOC reset, use the TC command from the service processor Command menu. HP nPartition systems do not have TOC buttons on the server cabinet hardware.
From the Windows SAC, you can initiate a crash dump by issuing the crashdump command at the SAC> prompt.
From HP OpenVMS I64, you can cause OpenVMS to dump system memory and then halt at the P00>> prompt by issuing the RUN SYS$SYSTEM:OPCRASH command. To reset the nPartition following OPCRASH, access the nPartition console and press any key to reboot.
System Boot Configuration Options
This section briefly discusses the system boot options you can configure on cell-based servers. You can configure boot options that are specific to each nPartition in the server complex.
HP 9000 Boot Configuration Options
On cell-based HP 9000 servers the configurable system boot options include boot device paths (PRI, HAA, and ALT) and the autoboot setting for the nPartition. To set these options from HP-UX, use the setboot command. From the BCH system boot environment, use the PATH command at the BCH Main menu to set boot device paths, and use the PATHFLAGS command at the BCH
Configuration menu to set autoboot options. For details issue HELP command at the appropriate BCH menu, where command is the command for which you want help.
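For example, the following representative BCH commands set the primary boot path and its path flags; the device path is illustrative, and the prompts are abbreviated:

    Main Menu: Enter command or menu > PATH PRI 0/0/2/0/0.6
    Main Menu: Enter command or menu > CO
    Configuration Menu: Enter command > PATHFLAGS PRI 2    (boot, if fail try next path)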
HP Integrity Boot Configuration Options
On cell-based HP Integrity servers you must properly specify the ACPI configuration value, which affects the OS startup process and on some servers can affect the shutdown behavior. You also can configure boot device paths and the autoboot setting for the nPartition. Details are given in the following list.
Boot Options List
The boot options list is a list of loadable items available for you to select from the EFI Boot Manager menu. Ordinarily the boot options list includes the EFI Shell and one or more operating system loaders.
The following example includes boot options for HP OpenVMS, Microsoft Windows, HP-UX, and the EFI Shell. The final item in the EFI Boot Manager menu, the Boot Configuration menu, is not a boot option. The Boot Configuration menu allows system configuration through a maintenance menu.
EFI Boot Manager ver 1.10 [14.61]

Please select a boot option

    HP OpenVMS 8.2-1
    EFI Shell [Built-in]
    Windows Server 2003, Enterprise
    HP-UX Primary Boot: 4/0/1/1/0.2.0
    Boot Option Maintenance Menu

    Use ^ and v to change option(s). Use Enter to select an option
NOTE: In some versions of EFI, the Boot Configuration menu is listed as the Boot Option Maintenance menu.
To manage the boot options list for each system use the EFI Shell, the EFI Boot Configuration menu, or operating system utilities.
At the EFI Shell, the bcfg command supports listing and managing the boot options list for all operating systems except Microsoft Windows. On HP Integrity systems with Windows installed the \MSUtil\nvrboot.efi utility is provided for managing Windows boot options from the EFI Shell. Likewise on HP Integrity systems with OpenVMS installed the \efi\vms\vms_bcfg.efi and \efi\vms\vms_show utilities are provided for managing OpenVMS boot options.
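For example, the following representative bcfg commands list and modify the boot options list from the EFI Shell; the file path, description, and option numbers are illustrative:

    Shell> bcfg boot dump                                    # list the current boot options
    Shell> bcfg boot add 1 fs0:\EFI\HPUX\HPUX.EFI "HP-UX"    # add a loader as the first option
    Shell> bcfg boot mv 3 1                                  # move option 3 to position 1
    Shell> bcfg boot rm 2                                    # remove option 2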
The EFI Boot Configuration menu provides the Add a Boot Option, Delete Boot Option(s), and Change Boot Order menu items. (If you must add an EFI Shell entry to the boot options list, use this method.)
Operating system utilities for managing the boot options list include the HP-UX setboot command and the HP OpenVMS @SYS$MANAGER:BOOT_OPTIONS.COM command.
The OpenVMS I64 installation and upgrade procedures assist you in setting up and validating a boot option for your system disk. HP recommends that you allow the procedure to do this. Alternatively, you can use the @SYS$MANAGER:BOOT_OPTIONS.COM command (also referred to as the OpenVMS I64 Boot Manager utility) to manage boot options for your system disk. The OpenVMS I64 Boot Manager (BOOT_OPTIONS.COM) utility is a menu-based utility and is easier to use than EFI. To configure OpenVMS I64 booting on Fibre Channel devices, you must use the OpenVMS I64 Boot Manager utility (BOOT_OPTIONS.COM). For more information on this utility and other restrictions, refer to the HP OpenVMS for Integrity Servers Upgrade and Installation Manual.
For details refer to the following sections.
— To set HP-UX boot options, refer to “Adding HP-UX to the Boot Options List” (page 109).
— To set OpenVMS boot options, refer to “Adding HP OpenVMS to the Boot Options List” (page 120).
— To set Windows boot options, refer to “Adding Microsoft Windows to the Boot Options List” (page 125).
— To set Linux boot options, refer to “Adding Linux to the Boot Options List” (page 130).
Autoboot Setting
You can configure the autoboot setting for each nPartition either by using the autoboot command at the EFI Shell, or by using the Set Auto Boot TimeOut menu item at the EFI Boot Option Maintenance menu.
To set autoboot from HP-UX, use the setboot command.
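For example, at the EFI Shell:

    Shell> autoboot          # display the current autoboot setting
    Shell> autoboot 60       # enable autoboot with a 60-second timeout
    Shell> autoboot off      # disable autoboot

From HP-UX, setboot -b on enables autoboot and setboot -b off disables it.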
ACPI Configuration Value—HP Integrity Server OS Boot
On cell-based HP Integrity servers you must set the proper ACPI configuration for the OS that will be booted on the nPartition.
To check the ACPI configuration value, issue the acpiconfig command with no arguments at the EFI Shell.
To set the ACPI configuration value, issue the acpiconfig value command at the EFI Shell, where value is either default, windows, or single-pci-domain. Then reset the nPartition
by issuing the reset EFI Shell command for the setting to take effect.
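For example, a representative EFI Shell sequence that prepares an nPartition to boot Windows:

    Shell> acpiconfig             # display the current ACPI configuration value
    Shell> acpiconfig windows     # set the value required for Windows
    Shell> reset                  # reset the nPartition so the new value takes effect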
The ACPI configuration settings for the supported OSes are in the following list.
— HP-UX ACPI Configuration: default
On cell-based HP Integrity servers, to boot or install the HP-UX OS, you must set the ACPI configuration value for the nPartition to default.
For details refer to “ACPI Configuration for HP-UX Must Be default” (page 110).
— HP OpenVMS I64 ACPI Configuration: default
On cell-based HP Integrity servers, to boot or install the HP OpenVMS I64 OS, you must set the ACPI configuration value for the nPartition to default.
For details refer to “ACPI Configuration for HP OpenVMS I64 Must Be default” (page 122).
— Windows ACPI Configuration: windows
On cell-based HP Integrity servers, to boot or install the Windows OS, you must set the ACPI configuration value for the nPartition to windows.
For details refer to “ACPI Configuration for Windows Must Be windows” (page 127).
— Red Hat Enterprise Linux ACPI Configuration: single-pci-domain or default
On cell-based HP Integrity servers, to boot or install the Red Hat Enterprise Linux OS, you must set the ACPI configuration value for the nPartition to either single-pci-domain or default.
On HP rx7620 servers, rx8620 servers, or Integrity Superdome (SD16A, SD32A, SD64A), the ACPI configuration must be set to single-pci-domain.
On HP rx7640 servers, rx8640 servers, or Integrity Superdome (SD16B, SD32B, SD64B), the ACPI configuration must be set to default.
For details refer to “ACPI Configuration for Red Hat Enterprise Linux Must Be single-pci-domain or default” (page 131).
— SuSE Linux Enterprise Server ACPI Configuration: single-pci-domain or default
On cell-based HP Integrity servers, to boot or install the SuSE Linux Enterprise Server OS, you must set the ACPI configuration value for the nPartition to single-pci-domain or default.
On HP rx7620 servers, rx8620 servers, or Integrity Superdome (SD16A, SD32A, SD64A), the ACPI configuration must be set to single-pci-domain.
On HP rx7640 servers, rx8640 servers, or Integrity Superdome (SD16B, SD32B, SD64B), the ACPI configuration must be set to default.
For details refer to “ACPI Configuration for SuSE Linux Enterprise Server Must Be single-pci-domain or default” (page 133).
ACPI Softpowerdown Configuration—OS Shutdown Behavior
On HP rx7620, rx7640, rx8620, and rx8640 servers, you can configure the nPartition behavior when an OS is shut down and halted. The two options are to have the hardware power off when the OS is halted, or to have the nPartition be made inactive (all cells are in a boot-is-blocked state). The normal OS shutdown behavior on these servers depends on the ACPI configuration for the nPartition.
You can run the acpiconfig command with no arguments to check the current ACPI configuration setting; however, softpowerdown information is displayed only when it differs from the normal behavior.
To change the nPartition behavior when an OS is shut down and halted, use either the acpiconfig enable softpowerdown EFI Shell command or the acpiconfig disable softpowerdown command, and then reset the nPartition to make the ACPI configuration change take effect.
— acpiconfig enable softpowerdown
When set on HP rx7620, rx7640, rx8620, and rx8640 servers, acpiconfig enable softpowerdown causes nPartition hardware to be
powered off when the OS issues a shutdown for reconfig command (for example, shutdown -h or shutdown /s).
This is the normal behavior on HP rx7620, rx7640, rx8620, and rx8640 servers with a windows ACPI configuration setting.
When softpowerdown is enabled on HP rx7620, rx7640, rx8620, and rx8640 servers, if one nPartition is defined in the server, then halting the OS powers off the server cabinet including all cells and I/O chassis. On HP rx7620, rx7640, rx8620, and rx8640 servers with multiple nPartitions, halting the OS from an nPartition with softpowerdown enabled causes only the resources on the local nPartition to be powered off.
To power on hardware that has been powered off, use the PE command at the management processor Command menu.
— acpiconfig disable softpowerdown
When set on HP rx7620, rx7640, rx8620, and rx8640
servers, acpiconfig disable softpowerdown causes nPartition cells to remain at a boot-is-blocked state when the OS issues a shutdown for reconfig command (for example, shutdown -h or shutdown /s). In this case an OS shutdown for reconfig makes the nPartition inactive.
This is the normal behavior on HP rx7620, rx7640, rx8620, and rx8640 servers with an ACPI configuration setting of default or single-pci-domain.
To make an inactive nPartition active, use the management processor BO command to boot the nPartition past the boot-is-blocked state.
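For example, a representative EFI Shell sequence that configures an nPartition so that halting the OS leaves the cells at BIB rather than powering off the hardware:

    Shell> acpiconfig disable softpowerdown    # OS halt keeps cells at boot-is-blocked
    Shell> reset                               # required for the change to take effect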
Boot Modes on HP Integrity nPartitions: nPars and vPars Modes On cell-based HP Integrity servers, each nPartition can be configured in either of two boot modes:
nPars Boot Mode
In nPars boot mode, an nPartition is configured to boot any single operating system in the standard environment. When an nPartition is in nPars boot mode, it cannot boot the vPars monitor and therefore does not support HP-UX virtual partitions.
vPars Boot Mode
In vPars boot mode, an nPartition is configured to boot into the vPars environment. When an nPartition is in vPars boot mode, it can only boot the vPars monitor and therefore it only supports HP-UX virtual partitions and it does not support booting HP OpenVMS I64, Microsoft Windows, or other operating systems. On an nPartition in vPars boot mode, HP-UX can boot only within a virtual partition (from the vPars monitor) and cannot boot as a standalone, single operating system in the nPartition.
CAUTION: An nPartition on an HP Integrity server cannot boot HP-UX virtual partitions when in nPars boot mode. Likewise, an nPartition on an HP Integrity server cannot boot an operating system outside of a virtual partition when in vPars boot mode.
To check or set the boot mode for an nPartition on a cell-based HP Integrity server, use any of the following tools as appropriate. Refer to Installing and Managing HP-UX Virtual Partitions (vPars), Sixth Edition, for details, examples, and restrictions.
parconfig EFI shell command
The parconfig command is a built-in EFI shell command. Refer to the help parconfig command for details.
\EFI\HPUX\vparconfig EFI shell command
The vparconfig command is delivered in the \EFI\HPUX directory on the EFI system partition of the disk where the HP-UX virtual partitions software has been installed on a cell-based HP Integrity server. For usage details, enter the vparconfig command with no options.
vparenv HP-UX command
On cell-based HP Integrity servers only, the vparenv HP-UX command is installed on HP-UX systems that have the HP-UX virtual partitions software. Refer to vparenv(1M) for details.
NOTE: On HP Integrity servers, nPartitions that do not have the parconfig EFI shell command do not support virtual partitions and are effectively in nPars boot mode.
HP recommends that you do not use the parconfig EFI shell command and instead use the \EFI\HPUX\vparconfig EFI shell command to manage the boot mode for nPartitions on cell-based HP Integrity servers.
Refer to Installing and Managing HP-UX Virtual Partitions (vPars), Sixth Edition, for details.
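For example, from HP-UX you might check and change the boot mode with vparenv; this is a sketch only, and the exact options are documented in vparenv(1M):

    vparenv              # display the current boot mode (nPars or vPars)
    vparenv -m vPars     # set the boot mode for the next boot to vPars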
Tools for Booting nPartitions
The tools for booting nPartitions and configuring related settings are:
Service Processor (MP or GSP) Menus
Service processor menus provide a complex-wide service interface that can allow access to all hardware and nPartitions.
See “Command Reference for Service Processor Commands” (page 80).
EFI Boot Manager and EFI Shell
On HP Integrity servers only, the EFI (Extensible Firmware Interface) Boot Manager and Shell are the methods for interacting with an nPartition before it has booted an operating system.
See “Command Reference for EFI Shell Commands” (page 81).
Boot Console Handler (BCH) Menu Commands
On PA-RISC servers, the BCH interface is the method for interacting with an nPartition before it has booted HP-UX.
See “Command Reference for BCH Menu Commands” (page 84).
nPartition Commands
HP nPartition commands allow you to configure, manage, and monitor nPartitions and hardware within a server complex.
The Enhanced nPartition Commands also can remotely manage complexes based on the HP sx1000 chipset or HP sx2000 chipset.
See “Commands for Configuring nPartitions” (page 19) for details.
Partition Manager (/opt/parmgr/bin/parmgr)
Partition Manager provides a graphical interface for managing and monitoring nPartitions and hardware within a server complex.
See “Partition Manager” (page 22).
Task Summaries for nPartition Boot and Reset
Table 5-1 describes the main nPartition boot and reset tasks and provides brief summaries and
references for detailed procedures.
You can perform the nPartition boot tasks in Table 5-1 “nPartition Boot and Reset Task
Summaries” using various tools, including the service processor (MP or GSP), Boot Console
Handler (BCH, available only on PA-RISC servers), Extensible Firmware Interface (EFI, available only on HP Integrity servers), HP-UX commands, or Partition Manager (/opt/parmgr/bin/parmgr).
See “Tools for Booting nPartitions” (page 95) for details.
Table 5-1 nPartition Boot and Reset Task Summaries
Task: “Troubleshooting Boot Problems”
This section has tips for resolving common nPartition boot issues.
See “Troubleshooting Boot Problems” (page 100).

Task: “Accessing nPartition Console and System Boot Interfaces”
Use the service processor Console Menu (CO) to access the BCH or EFI system boot environment for an nPartition.
See “Accessing nPartition Console and System Boot Interfaces” (page 101).

Task: “Monitoring nPartition Boot Activity”
Use the VFP option from the service processor Main Menu to access a Virtual Front Panel for monitoring the boot status of an nPartition.
See “Monitoring nPartition Boot Activity” (page 104).

Task: “Finding Bootable Devices”
• BCH Menu: SEARCH command.
• EFI Shell: map command.
See “Finding Bootable Devices” (page 106).

Task: “Performing a Transfer of Control Reset”
CAUTION: Under normal operation you shut down the operating system before issuing a TOC reset.
• Service Processor (MP or GSP): TC command.
See “Performing a Transfer of Control Reset” (page 107).

Task: “Booting HP-UX”
• BCH Menu: BOOT command.
• EFI Boot Manager: select an item from the boot options list.
• EFI Shell: access the EFI System Partition (for example fs0:) for a root device and enter HPUX to invoke the loader.
See “Booting HP-UX” (page 110). This section also covers booting HP-UX in single-user mode and LVM-maintenance mode.

Task: “Shutting Down HP-UX”
• Issue the /sbin/shutdown command with the desired options, such as -r to shut down and reboot automatically, or -h to shut down and halt the system.
• The -R and -H options to shutdown and reboot are used when performing nPartition reconfigurations; see the Reboot for Reconfig and Shutdown for Reconfig details in this table.
See “Shutting Down HP-UX” (page 117).

Task: “Booting HP OpenVMS”
NOTE: Only supported on HP Integrity servers.
• EFI Boot Manager: select an item from the boot options list.
• EFI Shell: access the EFI System Partition (for example fs0:) for a root device and enter vms_loader to invoke the loader.
See “Booting HP OpenVMS” (page 122).
Task: “Shutting Down HP OpenVMS”
NOTE: Only supported on HP Integrity servers.
• At the OpenVMS command line, issue the @SYS$SYSTEM:SHUTDOWN command and specify the shutdown options in response to the prompts given.
See “Shutting Down HP OpenVMS” (page 123).

Task: “Booting Microsoft Windows”
NOTE: Only supported on HP Integrity servers.
• EFI Boot Manager: select an item from the boot options list. (Windows does not support being invoked from the EFI Shell.)
See “Booting Microsoft Windows” (page 126).

Task: “Shutting Down Microsoft Windows”
NOTE: Only supported on HP Integrity servers.
• Issue the shutdown command with the desired options, such as /r to shut down and reboot automatically, /s to shut down and halt (make the nPartition inactive), or /a to abort a system shutdown. You also can select the Start→Shut Down action and either choose Restart or choose Shut down from the pull-down menu.
• The /r and /s options to shutdown also are used when performing nPartition reconfigurations; see the Reboot for Reconfig and Shutdown for Reconfig details in this table.
See “Shutting Down Microsoft Windows” (page 128).

Task: “Booting Red Hat Enterprise Linux”
NOTE: Only supported on HP Integrity servers.
• EFI Boot Manager: select an item from the boot options list.
• EFI Shell: access the EFI System Partition (for example fs0:) for a root device and enter ELILO to invoke the loader.
See “Booting Red Hat Enterprise Linux” (page 131).

Task: “Booting SuSE Linux Enterprise Server”
NOTE: Only supported on HP Integrity servers.
• EFI Boot Manager: select an item from the boot options list.
• EFI Shell: access the EFI System Partition (for example fs0:) for a root device and enter ELILO to invoke the loader.
See “Booting SuSE Linux Enterprise Server” (page 132).

Task: “Shutting Down Linux”
NOTE: Only supported on HP Integrity servers.
• Issue the /sbin/shutdown command with the desired options, such as -r to shut down and reboot automatically, or -h to shut down and halt the system. You must include the required time argument to specify when the operating system shutdown is to occur.
See “Shutting Down Linux” (page 134).
Task: “Rebooting and Resetting nPartitions”
• Service Processor (MP or GSP): RS command; under normal operation you first shut down the operating system. On HP Integrity servers you should reset an nPartition only after all self tests and partition rendezvous have completed.
• BCH Menu: REBOOT command.
• EFI Boot Manager: Boot Option Maintenance→Cold Reset.
• EFI Shell: reset command.
• HP-UX: /sbin/shutdown or /usr/sbin/reboot command.
• OpenVMS: @SYS$SYSTEM:SHUTDOWN command; enter Yes at the "Should an automatic system reboot be performed" prompt.
• Windows: shutdown /r command, or the Start→Shut Down action and the Restart pull-down menu option.
• Linux: /sbin/shutdown command. You must include the required time argument to specify when the shutdown is to occur.
See “Rebooting and Resetting nPartitions” (page 135).

Task: “Performing a Reboot for Reconfig”
NOTE: Only supported for cell-based HP servers.
• HP-UX: /sbin/shutdown -R command.
• OpenVMS: @SYS$SYSTEM:SHUTDOWN command; enter Yes at the "Should an automatic system reboot be performed" prompt.
• Windows: shutdown /r command, or the Start→Shut Down action and the Restart pull-down menu option.
• Linux: /sbin/shutdown -r time command. You must include the time argument to specify when the shutdown is to occur.
See “Performing a Reboot for Reconfig” (page 139).
Task: “Shutting Down to a Shutdown for Reconfig (Inactive) State”
NOTE: Only supported for cell-based HP servers.
• Service Processor (MP or GSP): RR command; under normal operation you first shut down the operating system.
• BCH Menu: RECONFIGRESET command.
• EFI Shell: reconfigreset command.
• HP-UX: /sbin/shutdown -R -H command.
• OpenVMS: @SYS$SYSTEM:SHUTDOWN command; enter No at the "Should an automatic system reboot be performed" prompt, then at the service processor (MP or GSP) Command menu enter the RR command and specify the nPartition.
• Windows: shutdown /s command, or the Start→Shut Down action and the Shut down pull-down menu option.
• Linux: /sbin/shutdown -h time command. You must include the time argument to specify when the shutdown is to occur.
See “Shutting Down to a Shutdown for Reconfig (Inactive) State” (page 141).

Task: “Booting an Inactive nPartition”
NOTE: Only supported for cell-based HP servers.
• Service Processor (MP or GSP): BO command.
• HP-UX: specify the -B option when using the /usr/sbin/parmodify command to reconfigure an inactive nPartition.
See “Booting an Inactive nPartition” (page 146).

Task: “Booting over a Network”
• BCH Menu: BOOT LAN... command.
• EFI Boot Manager: select Boot Option Maintenance→Boot from a File and select the "Load File" option for the LAN card that has the desired MAC address.
• EFI Shell: lanboot select command.
See “Booting over a Network” (page 147).
Task: “Booting to the HP-UX Initial System Loader (ISL)”
NOTE: Only supported on PA-RISC systems.
• BCH Menu: issue the BOOT command and reply y (for "yes") to the Do you wish to stop at the ISL prompt question.
See “Booting to the HP-UX Initial System Loader (ISL)” (page 149).

Task: “Booting to the HP-UX Loader (HPUX.EFI)”
NOTE: Only supported on HP Integrity servers.
• EFI Shell or EFI Boot Manager: start booting HP-UX and type any key to interrupt the boot process, stopping it at the HP-UX Boot Loader prompt (HPUX>).
See “Booting to the HP-UX Loader (HPUX.EFI)” (page 150).

Task: “Using HP-UX Loader Commands”
• BCH Menu: boot to the Initial System Loader prompt (ISL>), and from ISL issue HP-UX loader commands in the form hpux command. For example, enter hpux ls to issue the ls command.
• EFI Shell or EFI Boot Manager: boot to the HP-UX Boot Loader prompt (HPUX>), and issue HP-UX loader commands directly. For example, enter ls to issue the ls command.
See “Using HP-UX Loader Commands” (page 151).

Task: “Booting to the Linux Loader (ELILO.EFI)”
NOTE: Only supported on HP Integrity servers.
• EFI Shell or EFI Boot Manager: start booting Linux and type any key to interrupt the boot process, stopping it at the ELILO Linux Loader prompt ("ELILO boot").
See “Booting to the Linux Loader (ELILO.EFI)” (page 152).

Task: “Using Linux Loader (ELILO) Commands”
NOTE: Only supported on HP Integrity servers.
• EFI Shell or EFI Boot Manager: boot to the ELILO Linux Loader prompt ("ELILO boot") and issue loader commands directly.
See “Using Linux Loader (ELILO) Commands” (page 154).

Task: “Configuring Boot Paths and Options”
• BCH Menu: PATH command.
• EFI Boot Manager: use Boot Option Maintenance operations to add or delete boot options, or to change the order of items in the boot options list.
• EFI Shell: bcfg command for HP-UX options. For example: bcfg boot dump to list all boot options, or help bcfg for details on setting and reordering boot options list items. For Windows boot options, use the \MSUtil\nvrboot.efi utility.
• HP-UX: /usr/sbin/setboot or /usr/sbin/parmodify command. On HP Integrity systems, only the boot options list for the local nPartition may be displayed and modified.
See “Configuring Boot Paths and Options” (page 155).
Task: “Configuring Autoboot Options”
• BCH Menu: the PATHFLAGS command from the BCH Configuration menu sets boot-time actions for an nPartition. To set the boot action for an nPartition boot path, enter PATHFLAGS VAR action, where VAR is the boot path variable (PRI, HAA, or ALT) and action is the boot action (0 for "go to BCH", 1 for "boot, if fail go to BCH", 2 for "boot, if fail try next path", or 3 for "skip this path, try next path").
• EFI Boot Manager: Boot Option Maintenance→Set Auto Boot TimeOut operation.
• EFI Shell: autoboot command. For example: autoboot off to disable autoboot, or autoboot 60 to enable autoboot with a 60-second timeout period.
• HP-UX: setboot -b on or setboot -b off command, to turn on (enable) or turn off (disable) autoboot.
See “Configuring Autoboot Options” (page 158).

Task: “Configuring Boot-Time System Tests”
NOTE: HP recommends that all self tests be performed.
• BCH Menu: Configuration menu FASTBOOT command. Enter FASTBOOT to list settings; enter FASTBOOT RUN to enable all tests; enter FASTBOOT TEST RUN or FASTBOOT TEST SKIP to enable or disable an individual test.
• EFI Shell: boottest command to list settings; boottest on to enable all tests; boottest off to disable all tests. To configure a specific test, use the boottest test on or boottest test off command (see the example following this table).
• HP-UX B.11.11: setboot -t testname=value to configure the test for all following boots, or setboot -T testname=value to configure the test for the next boot only. Use setboot -v to list settings.
• HP-UX B.11.23 and B.11.31: setboot -t testname=value to configure the test for the next boot only. Use setboot -v to list settings.
See “Configuring Boot-Time System Tests” (page 161).
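For example, a representative EFI Shell sequence for boot-time system tests; the test name early_cpu is illustrative of an individual test:

    Shell> boottest                  # list the current boot-time test settings
    Shell> boottest early_cpu off    # disable an individual test
    Shell> boottest on               # re-enable all tests (recommended by HP)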
Troubleshooting Boot Problems
On HP cell-based servers, you might encounter different boot issues than on other HP servers.
The following boot issues are possible on cell-based servers.
Problem: On an HP Integrity server, HP-UX begins booting but is interrupted with a panic
when launching the HP-UX kernel (/stand/vmunix).
Causes: The nPartition ACPI configuration might not be properly set for booting HP-UX. To boot the HP-UX operating system, an nPartition must have its acpiconfig value set to default.
Actions: At the EFI Shell interface, enter the acpiconfig command with no arguments to list the current ACPI configuration for an nPartition. If the acpiconfig value is set to windows, then HP-UX cannot boot; in this situation you must reconfigure acpiconfig.
To set the ACPI configuration for HP-UX: at the EFI Shell interface enter the acpiconfig default command, and then enter the reset command for the nPartition to reboot with the proper (default) configuration for HP-UX.