
Redbooks Paper
© Copyright IBM Corp. 2005. All rights reserved. ibm.com/redbooks
Hardware Management Console (HMC) Case Configuration Study for LPAR Management
This IBM® Redpaper provides Hardware Management Console (HMC) configuration considerations and describes case studies about how to use the HMC in a production environment. This document does not describe how to install the HMC or how to set up LPARs; we assume you are familiar with the HMC. Rather, the case studies presented in this Redpaper provide a framework for implementing some of the more useful HMC concepts, with examples that give you ideas on how to exploit the capabilities of the HMC.
The topics discussed in this Redpaper are:
• Basic HMC considerations
• Partitioning considerations
• Takeover case study:
  – Description of the scenario
  – Setting up remote ssh connection to the HMC
  – Using the HMC to perform CoD operations
  – Examples of dynamic LPAR operations
  – Using micro-partitioning features
  – Security considerations
• Automation
• High availability considerations for HMCs

Dino Quintero
Sven Meissner
Andrei Socoliuc
Introduction and overview
The Hardware Management Console (HMC) is a dedicated workstation that allows you to configure and manage partitions. To perform maintenance operations, a graphical user interface (GUI) is provided.
Functions performed by the HMC include:
• Creating and maintaining a multiple partition environment
• Displaying a virtual operating system session terminal for each partition
• Displaying a virtual operator panel of contents for each partition
• Detecting, reporting, and storing changes in hardware conditions
• Powering managed systems on and off
• Acting as a service focal point
• Activating CoD
Although this Redpaper contains information relevant to POWER4 systems, our focus is on the HMC configuration for POWER5 systems. The case studies are illustrated with POWER5 systems only.
Basic HMC considerations
The Hardware Management Console (HMC) is based on the IBM eServer™ xSeries® hardware architecture running dedicated applications to provide partition management for single or multiple servers called managed systems. There are two types of HMCs, depending on the CPU architecture of the managed systems:
• HMC for POWER4 systems
• HMC for POWER5 systems
Table 1 shows the current list of the hardware models for HMCs supported in a POWER4 or POWER5 environment. The HMCs are available as desktop or rack-mountable systems.
Note: POWER4™ systems use a serial line to communicate with the HMC. This has changed with POWER5™. The POWER5 systems use a LAN connection to communicate with the HMC. POWER4 and POWER5 systems cannot be managed by the same HMC.
Table 1 Types of HMCs

Type                    Supported managed systems   HMC code version
7315-CR3 (rack mount)   POWER4 or POWER5 (1)        HMC 3.x, HMC 4.x, or HMC 5.x
7315-C04 (desktop)      POWER4 or POWER5 (1)        HMC 3.x, HMC 4.x, or HMC 5.x
7310-CR3 (rack mount)   POWER5                      HMC 4.x or HMC 5.x
7310-C04 (desktop)      POWER5                      HMC 4.x or HMC 5.x

1 - Licensed Internal Code (FC0961) is needed to upgrade these HMCs to manage POWER5 systems. A single HMC cannot be used to manage a mixed environment of POWER4 and POWER5 systems.

The HMC 3.x code version is used for POWER4 managed systems and HMC 4.x for POWER5 systems (iSeries™ and pSeries®). For managing POWER5 pSeries machines, HMC 4.2 code version or later is required.

Table 2 shows a detailed relationship between the POWER5 pSeries servers and the supported HMCs.

Table 2 Supported HMCs for pSeries and OpenPower platforms

Managed system   HMC model supported (3)   HMC required
p505             7310-C04 or 7310-CR3      No (1)
p510             7310-C04 or 7310-CR3      No (1)
p520             7310-C04 or 7310-CR3      No (1)
p550             7310-C04 or 7310-CR3      No (1)
p570             7310-C04 or 7310-CR3      No (1)
p575             7310-C04 or 7310-CR3      Yes (2)
p590             7310-C04 or 7310-CR3      Yes (2)
p595             7310-C04 or 7310-CR3      Yes (2)
OpenPower™ 720   7310-C04 or 7310-CR3      No (1)
OpenPower 710    7310-C04 or 7310-CR3      No (1)

1 - An HMC is not required if the system runs in full system partition. For a partitioned environment, an HMC is required.
2 - It is recommended to have two HMCs installed for high availability considerations.
3 - Previous HMC models with the latest HMC code level are also supported.
The maximum number of HMCs supported by a single POWER5 managed system is two. The number of LPARs managed by a single HMC has been increased from earlier versions of the HMC to the current supported release as shown in Table 3.
Table 3 HMC history

HMC code   No. of HMCs   No. of servers   No. of LPARs   Other information
4.1.x      1             4                40             iSeries only
4.2.0      2             16               64             p5 520, 550, 570
4.2.1      2             32               160            OpenPower 720
4.3.1      2             32               254            p5 590, 595
4.4.0      2             32               254            p5 575; HMC 7310-CR3/C04
4.5.0      2             32/48            254            48 for non-590/595
5.1.0      2             32/48            254            48 for non-590/595
HMC connections
During the installation of the HMC, you have to consider the number of network adapters required. You can have up to three Ethernet adapters installed on an HMC. There are several connections you have to consider when planning the installation of the HMC:
• HMC to the FSP (Flexible Service Processor): This is an IP-based network used for management functions of the POWER5 systems; for example, power management and partition management.

POWER5 systems have two interfaces (T1 and T2) available for connections to the HMC. It is recommended to use both of them for a redundant, highly available configuration. Depending on your environment, you have multiple options to configure the network between the HMC and the FSP.
The default mechanism for allocation of the IP addresses for the FSP ports is dynamic. The HMC can be configured as a DHCP server, which allocates the IP address at the time the managed system is powered on. Static IP address allocation is also an option: you can configure the FSP ports with a static IP address by using the Advanced System Management Interface (ASMI) menus. However, not all POWER5 servers support this mechanism of allocation. Currently, p575, p590, and p595 servers support only DHCP.

Note: It is recommended to configure this connection as a private network.

Note: Either eth0 or eth1 can be a DHCP server on the HMC.
• HMC to partitions: The HMC requires a TCP/IP connection to communicate with the partitions for functions such as dynamic LPAR and Service Focal Point.
• Service Agent (SA) connections: SA is the application running on the HMC for reporting hardware failures to the IBM support center. It uses a modem for dial-out connection or an available Internet connection. It can also be used to transmit service and performance information to IBM, and for CoD enablement and billing information.
• Remote connection to the HMC using Web-based System Manager (WebSM) or ssh: For accessing the graphical interface, you can use the WebSM Remote Client running on UNIX® (AIX® or Linux®) or Windows®. The command line interface is also available by using a secure shell (ssh) connection to the HMC. It can be used by an external management system or a partition to perform HMC operations remotely; a short sketch follows this list.
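For example, once remote command execution is enabled (see "Enabling ssh access to HMC" later in this paper), an external management node can run HMC commands over ssh. The following is a minimal sketch, assuming the HMC host name hmctot184 and the hscroot account used in the examples later in this paper:

# Query the managed systems and their current states:
ssh hscroot@hmctot184 "lssyscfg -r sys -F name:state"

# List the service processor (FSP) connections and the IP addresses
# the HMC knows about:
ssh hscroot@hmctot184 "lssysconn -r all"

# Start the HMC's virtual terminal menu to reach a partition console:
ssh -t hscroot@hmctot184 vtmenu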
When planning for the HMC installation, also consider that the distance between the HMC and the managed system must be within 8 m (26 ft), in compliance with IBM maintenance rules.
Partitioning considerations
POWER5 systems introduce greater flexibility in setting up the resources of a partition by enabling the Advanced POWER Virtualization functions, which provide:
• POWER™ Hypervisor: Supports partitioning and dynamic resource movement across multiple operating system environments.
• Shared processor LPAR (micro-partitioning): Enables you to allocate less than a full physical processor to a logical partition.
• Virtual LAN: Provides network virtualization capabilities that allow you to prioritize traffic on shared networks.
• Virtual I/O (VIO): Provides the ability to dedicate I/O adapters and devices to a virtual server, thus allowing the on demand allocation and management of I/O devices.
• Capacity on Demand (CoD): Allows system resources such as processors and memory to be activated on an as-needed basis.
• Simultaneous multi-threading (SMT): Allows applications to increase overall resource utilization by virtualizing multiple physical CPUs through the use of multi-threading. SMT is a feature supported only in AIX 5L Version 5.3 and Linux at an appropriate level.
• Multiple operating system support: Logical partitioning allows a single server to run multiple operating system images concurrently. On a POWER5 system, the following operating systems can be installed: AIX 5L™ Version 5.2 ML4 or later, SUSE Linux Enterprise Server 9 Service Pack 2, Red Hat Enterprise Linux ES 4 QU1, and i5/OS.
Additional memory allocation in a partitioned environment
Three memory regions are reserved for the physical memory allocation of a partition:

• Hypervisor
• Translation control entry (TCE) tables
• Partition page tables

When planning partition sizes, take into account that the amount of memory allocated to these three regions is not usable for the physical memory allocation of the partition.
Hypervisor and TCE
All POWER5 systems require the use of the hypervisor. The hypervisor supports many advanced functions, including shared processors, Virtual I/O (VIO), high-speed communications between partitions using Virtual LAN, and concurrent maintenance. There are many variables that dictate how much hypervisor memory you will need; it is not a fixed amount of memory, as it was with POWER4 systems.

The number of I/O drawers and the different ways I/O is used, such as in a shared environment, also affect the amount of memory the hypervisor uses.
Partition page tables
Partition page tables are set aside in additional memory in the hypervisor to handle the partition's memory addressing. The amount of memory the partition page tables reserve depends on the maximum memory value of the partition, and must be considered in your partition size planning.
Note: The number of VIOs, the number of partitions, and the number of I/O drawers affect the hypervisor memory.
Note: The larger the maximum memory value of a partition, the larger the amount of memory that is not usable for the physical memory allocation of the partition.
To calculate your desired and maximum memory values accurately, we recommend that you use the LPAR Validation Tool (LVT). This tool is available at:
http://www.ibm.com/servers/eserver/iseries/lpar/systemdesign.htm
Figure 1 shows an example of how you can use the LPAR validation tool to verify a memory configuration. In Figure 1, there are 4 partitions (P1..P4) defined on a p595 system with a total amount of 32 GB of memory.
Figure 1 Using LVT to validate the LPAR configuration
The memory allocated to the hypervisor is 1792 MB. When we change the maximum memory parameter of partition P3 from 4096 MB to 32768 MB, the memory allocated to the hypervisor increases to 2004 MB, as shown in Figure 2.
Figure 2 Memory used by hypervisor
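The cost of the larger maximum can be read directly from the two LVT results:

    2004 MB - 1792 MB = 212 MB

That is, raising the maximum memory of partition P3 from 4096 MB to 32768 MB consumes about 212 MB of additional hypervisor memory in this configuration, even though no additional memory is assigned to the partition itself.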
Figure 3 shows another example of using the LVT, this time to verify an incorrect memory configuration. Note that the total amount of allocated memory is 30 GB, but the maximum limits for the partitions require a larger hypervisor memory.
Figure 3 An example of a wrong memory configuration
Micro-partitioning
With POWER5 systems, increased flexibility is provided for allocating CPU resources by using micro-partitioning features. The following parameters can be set up on the HMC:
• Dedicated/shared mode, which allows a partition to allocate either a full CPU or partial units. The minimum CPU allocation unit for a partition is 0.1.
• Minimum, desired, and maximum limits for the number of CPUs allocated to a dedicated partition.
• Minimum, desired, and maximum limits for processing units and virtual processors, when using the shared processor pool.
• Capped/uncapped mode and weight (shared processor mode).

Table 4 summarizes the CPU partitioning parameters with their range values, and indicates whether a parameter can be changed dynamically.
Table 4 Partition parameters

Parameter                        Range                            Dynamic LPAR
Capped                           Capped/uncapped                  Yes
Weight                           0-255                            Yes
Processing mode                  Dedicated/shared                 No
Processors (dedicated CPUs)      Min-Max processors (1)           Yes
Processing units (shared CPUs)   Min-Max processing units (1)     Yes
Virtual processors               Min-Max virtual processors (2)   Yes

1 - The maximum value is limited by the number of CPUs installed in the system, including CoD.
2 - Between 1 and 64; the minimum and maximum allowed values are actually determined by the minimum and maximum of the processing units: at least 1 virtual processor for each 1.0 processing units, and a maximum value limited to 10 times the maximum processing units, or 64.

Min/Desired/Max values for CPUs, processing units, and virtual processors can be set only in the partition's profile. Each time the partition is activated, it tries to acquire the desired values. A partition cannot be activated if at least the minimum values of the parameters cannot be satisfied.

Note: Take into consideration that changes in the profile will not take effect unless you power off and start up your partition. Rebooting the operating system is not sufficient.
Capacity on Demand
Capacity on Demand (CoD) for POWER5 systems offers multiple options, including:
• Permanent Capacity on Demand:
  – Provides system upgrades by activating processors and/or memory.
  – No special contracts and no monitoring are required.
  – Purchase agreement is fulfilled using activation keys.
• On/Off Capacity on Demand:
  – Enables the temporary use of a requested number of processors or amount of memory.
  – On a registered system, the customer selects the capacity and activates the resource.
  – Capacity can be turned on and off by the customer; usage information is reported to IBM.
  – This option is post-pay; you are charged at activation.
• Reserve Capacity on Demand:
  – Used for processors only.
  – Prepaid debit temporary agreement, activated using license keys.
  – Adds reserve processor capacity to the shared processor pool, which is used if the base shared pool capacity is exceeded.
  – Requires AIX 5L Version 5.3 and the Advanced POWER Virtualization feature.
• Trial Capacity on Demand:
  – Tests the effects of additional processors and memory.
  – Partial or total activation of installed processors and/or memory.
  – Resources are available for a fixed time, and must be returned after the trial period.
  – No formal commitment required.
HMC sample scenarios
The following examples illustrate POWER5 advanced features.
Examples of using capped/uncapped, weight, dynamic LPAR and CoD features
Our case study describes different possibilities to take advantage of the micro-partitioning features and CoD, assuming a failover/fallback scenario based on two independent servers. The scenario does not address a particular clustering mechanism used between the two nodes. We describe the operations by using both the WebSM GUI and the command line interface.
Figure 4 on page 12 shows the initial configuration. Node nils, a partition of a p550 system, is a production system with 2 CPUs and 7 GB of memory. We will force node nils to fail. Node julia, also a partition of a p550 system, is the standby system for nils. The resources for julia are very small: just 0.2 processors and 1 GB of memory.
In case of takeover, On/Off CoD will be activated, making two more CPUs and 8 GB more memory available to add to a partition. We use On/Off CoD for this procedure because you pay only for the days CoD is actually active. You have to report the number of days you made use of CoD to IBM monthly; this can be done automatically by the Service Agent. For more information, refer to "APPENDIX" on page 40.
Furthermore, the resources made available by activating On/Off CoD can be assigned to dedicated as well as to shared partitions. After CoD activation, the CPU and memory resources will be assigned to julia, so that julia has the same resources as nils had.
After nils is up and running again and ready to reacquire the application, julia reduces its resources to the initial configuration and deactivates CoD.
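In outline, this takeover sequence (CoD activation followed by dynamic LPAR additions) can be driven from node julia as a single script once ssh access to the HMC is in place. The following is a minimal sketch, not a production script: it reuses the HMC host name (hmctot184), managed system name (p550_itso1), and resource quantities from this scenario, assumes julia needs one extra virtual processor to consume 2.0 processing units, and omits all error handling:

#!/bin/ksh
# Takeover sketch, run on node julia.

HMC=hscroot@hmctot184      # HMC ssh account and host name
SYS=p550_itso1             # managed system name on the HMC

# Step 1: activate 2 CPUs and 8 GB of memory for 3 days with On/Off CoD.
ssh $HMC "chcod -m $SYS -o a -c onoff -r proc -q 2 -d 3"
ssh $HMC "chcod -m $SYS -o a -c onoff -r mem -q 8192 -d 3"

# Step 2: add 1.8 processing units, 1 virtual processor, and 6656 MB (6.5 GB)
# of memory to partition julia with dynamic LPAR operations.
ssh $HMC "chhwres -m $SYS -r proc -o a -p julia --procunits 1.8 --procs 1"
ssh $HMC "chhwres -m $SYS -r mem -o a -p julia -q 6656"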
Figure 4 Initial configuration
Table 5 shows our configuration in detail. Our test system has only one 4-pack DASD available. Therefore we installed a VIO server to have sufficient disks available for our partitions.
Table 5 CPU and memory allocation table

Partition name   CPU (Min/Desired/Max)   Virtual processors (Min/Desired/Max)   Dedicated/Shared   Capped/Uncapped
nicole_vio       0.5/0.8/2.0             1/1/2                                  Shared             Capped
oli              1/1/4                   N/A                                    Dedicated          N/A
julia            0.1/0.2/2.0             1/1/4                                  Shared             Capped

It is recommended to dedicate a processor when optimal performance is required for the VIO server. However, in this section we use a shared processor to configure our VIO server, to make the best use of the resources on our test system, as shown in Table 6 on page 13.
Table 6 Memory allocation

Partition name   Min (MB)   Desired (MB)   Max (MB)
nicole_vio       512        1024           2048
oli              1024       5120           8192
julia            512        1024           8192
Enabling ssh access to HMC
By default, the ssh server on the HMC is not enabled. The following steps configure ssh access for node julia on the HMC. The procedure allows node julia to run HMC commands without providing a password.
• Enable the remote command execution on the HMC: In the management area of the HMC main panel, select HMC Management → HMC Configuration. In the right panel, select Enable or Disable Remote Command Execution, and then select Enable the remote command execution using the ssh facility (see Figure 5).
Figure 5 Enabling remote command execution on HMC
The HMC provides firewall capabilities for each Ethernet interface. You can access the firewall menu using the graphical interface of the HMC: in the Navigation Area of the HMC main panel, select HMC Management → HMC Configuration. In the right panel, select Customize Network Setting, press the LAN Adapters tab, choose the interface used for remote access, and press Details. In the new window, select the Firewall tab. Check that the ssh port is allowed for access (see Figure 6).
Figure 6 Firewall settings for eth1 interface
• Install the ssh client on the AIX node: The packages can be found on the AIX 5L Bonus Pack CD. To get the latest release packages, access the following URL:

http://sourceforge.net/projects/openssh-aix

Openssl is required for installing the Openssh package. You can install it from the AIX 5L Toolbox for Linux CD, or access the Web site:

http://www.ibm.com/servers/aix/products/aixos/linux/download.html
After the installation, verify that the openssh filesets are installed by using the lslpp command on the AIX node, as shown in Example 1.
Example 1 Check openssh filesets are installed

root@julia/.ssh>lslpp -L |grep ssh
  openssh.base.client      3.8.0.5302    C     F    Open Secure Shell Commands
  openssh.base.server      3.8.0.5302    C     F    Open Secure Shell Server
  openssh.license          3.8.0.5302    C     F    Open Secure Shell License
  openssh.man.en_US        3.8.0.5302    C     F    Open Secure Shell
  openssh.msg.en_US        3.8.0.5302    C     F    Open Secure Shell Messages
• Log in with the user account used for remote access to the HMC, and generate the ssh keys using the ssh-keygen command. In Example 2, we used the root user account and specified the RSA algorithm for encryption. The security keys are saved in the /.ssh directory.
Example 2 ssh-keygen output

root@julia/>ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (//.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in //.ssh/id_rsa.
Your public key has been saved in //.ssh/id_rsa.pub.
The key fingerprint is:
72:fb:36:c7:35:4a:20:0d:57:7f:68:ce:d0:33:be:40 root@julia
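If you want to script this step, the key pair can also be generated non-interactively. A minimal sketch, using an empty passphrase as in Example 2:

root@julia/>ssh-keygen -t rsa -N "" -f /.ssh/id_rsa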
• Distribute the public key in file id_rsa.pub to the HMC. In Example 3, we use the mkauthkeys command to register the key for the hscroot account. The key will be saved in the file authorized_keys2 in the $HOME/.ssh directory on the HMC.
Example 3 Distribute the public key to the HMC

root@julia/>cd /.ssh
root@julia/.ssh>ls -l
total 16
-rw-------   1 root     system          887 Mar 30 19:52 id_rsa
-rw-r--r--   1 root     system          220 Mar 30 19:52 id_rsa.pub
root@julia/.ssh>juliakey=`cat /.ssh/id_rsa.pub`
root@julia/.ssh>ssh hscroot@hmctot184 mkauthkeys -a \"$juliakey\"
The authenticity of host 'hmctot184 (10.1.1.187)' can't be established.
RSA key fingerprint is 00:2c:7b:ac:63:cd:7e:70:65:29:00:84:44:6f:d7:2e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hmctot184,10.1.1.187' (RSA) to the list of known hosts.
hscroot@hmctot184's password:
root@julia/.ssh>
root@julia/.ssh>ssh hscroot@hmctot184 lshmc -V
"version= Version: 4
Release: 5.0
HMC Build level 20050519.1
MH00308: Required Maintenance Fix for V4R5.0 (04-25-2005)
"
root@julia/.ssh>
Now, we force node nils to fail and prepare to start the takeover scenario (see Figure 7).
Figure 7 CoD and dynamic LPAR operations after takeover
Enabling On/Off CoD for processor and memory
Before activating the CPU and memory resources, you have to prepare the CoD environment by getting an enablement code from IBM. For more information about how to get an activation code, refer to the CoD Web site:
http://www.ibm.com/servers/eserver/pseries/ondemand/cod/
• Activating On/Off CoD using the graphical interface: From the Server Management window, highlight the managed system. Click Selected → Manage on Demand Activations → Capacity on Demand (see Figure 8 on page 17).
Figure 8 Activating the On/Off CoD
• Activating On/Off CoD using the command line interface: Example 4 shows how node julia activates 2 CPUs and 8 GB of RAM for 3 days by running the chcod command on the HMC via ssh.
Example 4 Activating CoD using command line interface

CPU:
root@julia/.ssh>ssh hscroot@hmctot184 "chcod -m p550_itso1 -o a -c onoff -r proc -q 2 -d 3"

Memory:
root@julia/.ssh>ssh hscroot@hmctot184 "chcod -m p550_itso1 -o a -c onoff -r mem -q 8192 -d 3"
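To verify that the resources were activated, the lscod command on the HMC reports the On/Off CoD state. A minimal sketch along the lines of Example 4 (treat the exact flags as an assumption; they follow the POWER5 HMC lscod conventions):

root@julia/.ssh>ssh hscroot@hmctot184 "lscod -m p550_itso1 -t cap -c onoff -r proc"
root@julia/.ssh>ssh hscroot@hmctot184 "lscod -m p550_itso1 -t cap -c onoff -r mem"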
• Perform the dynamic LPAR operations to increase the CPU units and memory capacity of the target partition.

After enabling the CoD feature for CPU, the additional processors are automatically added to the shared processor pool and can be assigned to any shared or dedicated partition.
In order for node julia to operate with the same resources as node nils had, we have to add 1.8 processing units and 6.5 GB of memory to this node.
• Allocation of processing units:

– Using the graphical user interface: In the Server and Partition panel on the HMC, right-click partition julia and select Dynamic Logical Partitioning → Processor Resources → Add. In the dialog window, enter the desired values for additional processing units and virtual processors, as shown in Figure 9.
Figure 9 Performing dynamic LPAR operation for CPU
– Using the command line interface: In Example 5, we run the lshwres command on the HMC to get the current values of the processing units and virtual processors used by node julia, before and after increasing the processing units.
Note: If you use Reserve CoD instead of On/Off CoD to temporarily activate processors, you can assign the CPUs to shared partitions only.
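A minimal sketch of such an lshwres query follows (the flags and field names follow the HMC lshwres conventions and are an assumption here, not a reproduction of Example 5):

root@julia/.ssh>ssh hscroot@hmctot184 "lshwres -m p550_itso1 -r proc --level lpar --filter lpar_names=julia -F curr_proc_units:curr_procs"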