Executive summary
Traditional IT environments are often silos in which both technology and human resources are aligned
around an application or business function. Capacity is fixed, resources are over-provisioned to meet
peak demand, and systems are complex and difficult to change. Costs are based on owning and
operating the entire vertical infrastructure—even when it is being underutilized.
Resource optimization is one of the goals of the HP Adaptive Enterprise strategy—a strategy for
helping customers synchronize business and IT to adapt to and capitalize on change. To help you
realize the promise of becoming an Adaptive Enterprise, HP provides virtualization technologies that
pool and share resources to optimize utilization and meet demands automatically.
HP-UX Workload Manager (WLM) is a virtualization solution that helps you achieve a true Adaptive
Enterprise. As a goal-based policy engine in the HP Virtual Server Environment, WLM integrates
virtualization techniques—including partitioning, resource management, utility pricing, and
clustering—and links them to your service level objectives (SLOs) and business priorities. WLM
enables a virtual HP-UX server to grow and shrink automatically, based on the demands and SLOs for
each application it hosts. You can consolidate multiple applications onto a single server to receive
greater return on your IT investment while ensuring that end-users receive the service and performance
they expect.
WLM automates many of the features of the Process Resource Manager (PRM) and HP-UX Virtual
Partitions (vPars). WLM manages CPU resources within a single HP-UX instance as well as within and
across hard partitions and virtual partitions. It automatically adapts system or partition CPU resources
(cores) to the demands, SLOs, and priorities of the running applications. (A core is the actual data
processing engine within a processor, where a single processor can have multiple cores.) On systems
with HP Instant Capacity, WLM automatically moves cores among partitions based on the SLOs in the
partitions. Given the physical nature of hard partitions, the “movement” of cores among partitions is
achieved by deactivating a core on one nPartition and then activating a core on another.
This paper presents an overview of the techniques and tools available for using WLM A.03.02 and
WLM A.03.02.02. WLM A.03.02 is available with the following operating system and hardware
combinations:
Operating Systems                Hardware
HP-UX 11i v1 (B.11.11)           HP 9000 servers
HP-UX 11i v2 (B.11.23)           HP Integrity servers and HP 9000 servers
HP-UX 11i v1 (B.11.11) and       Servers combining HP 9000 partitions and HP
HP-UX 11i v2 (B.11.23)           Integrity partitions (in such environments, HP-UX
                                 11i v1 supports HP 9000 partitions only)
WLM A.03.02.02 is available with the following operating system and hardware combinations:
Operating Systems                Hardware
HP-UX 11i v3 (B.11.31)           HP 9000 servers and HP Integrity servers
(Some of the functionality presented in this paper was available starting with WLM A.02.00.)
This paper assumes you have a basic understanding of WLM terminology and concepts, as well as
WLM configuration file syntax. The paper first gives an overview of a WLM session. Then, it provides
background information on various ways to use WLM, including how to complete several common
WLM tasks. Lastly, it discusses how to monitor WLM and its effects on your workloads.
If you prefer to configure WLM using a graphical wizard, see the white paper, “Getting started with
HP-UX Workload Manager,” available from the information library at:
http://www.hp.com/go/wlm
HP-UX Workload Manager in action
This section provides a quick overview of various commands associated with using WLM. It takes
advantage of some of the configuration files and scripts that are used in the chapter “Learning WLM
by example” in the HP-UX Workload Manager User’s Guide. These files are in the directory
/opt/wlm/examples/userguide/ and at:
http://www.hp.com/go/wlm
To become familiar with WLM, how it works, and some related commands, work through the steps in this section.
The file multiple_groups.wlm is shown below. This configuration:
– Defines two workload groups: g2 and g3.
– Assigns applications (in this case, perl programs) to the groups. (With shell/perl programs, give
the full path of the shell or perl followed by the name of the program.) The two programs loop2.pl
and loop3.pl are copies of loop.pl. The loop.pl script (available in
/opt/wlm/examples/userguide) runs an infinite outer loop, maintains a counter in the inner loop,
and shows the time spent counting.
– Sets bounds on usage of CPU resources. The number of CPU shares for the workload groups can
never go below the gmincpu or above the gmaxcpu values. These values take precedence over
the minimum and maximum values that you can optionally set in the slo structures.
– Defines an SLO for g2. The SLO is priority 1 and requests 15 CPU shares for g2.
– Defines a priority 1 SLO for g3 that requests 20 CPU shares.
# Name:
# multiple_groups.wlm
#
# Version information:
#
# $Revision: 1.10 $
#
# Dependencies:
# This example was designed to run with HP-UX WLM version A.01.02
# or later. It uses the cpushares keyword introduced in A.01.02
# and is, consequently, incompatible with earlier versions of
# HP-UX WLM.
#
# Requirements:
# To ensure WLM places the perl scripts below in their assigned
# workload groups, add "/opt/perl/bin/perl" (without the quotes) to
# the file /opt/prm/shells.
prm {
    groups = g2 : 2,
             g3 : 3;

    apps = g2 : /opt/perl/bin/perl loop2.pl,
           g3 : /opt/perl/bin/perl loop3.pl;

    gmincpu = g2 : 5, g3 : 5;
    gmaxcpu = g2 : 30, g3 : 60;
}

slo test2 {
    pri = 1;
    cpushares = 15 total;
    entity = PRM group g2;
}

slo test3 {
    pri = 1;
    cpushares = 20 total;
    entity = PRM group g3;
}
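With the configuration file saved (the example lives in /opt/wlm/examples/userguide/), it can first be tried in passive mode and then activated. Both commands are described in more detail later in this paper; the sketch below simply applies them to this example:

```shell
# Try the configuration in passive mode first; WLM reports what it
# would do without taking control of system resources:
/opt/wlm/bin/wlmd -p -a /opt/wlm/examples/userguide/multiple_groups.wlm

# When the behavior looks right, activate the configuration for real:
/opt/wlm/bin/wlmd -a /opt/wlm/examples/userguide/multiple_groups.wlm
```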
3. Note the messages that a WLM startup produces. Start another session to view the WLM message log, /var/opt/wlm/msglog.
The text in the log shows when the WLM daemon wlmd started, as well as the arguments it was
started with—including the configuration file used.
4. Check that the workload groups are in effect.
The prmlist command shows current configuration information. This HP Process Resource
Manager (PRM) command is available because WLM uses PRM to provide some of its
functionality. For more information on the prmlist command, see the “prmlist” section on page 31.
# /opt/prm/bin/prmlist
PRM configured from file: /var/opt/wlm/tmp/wmprmBAAa06335
File last modified: Thu Aug 29 08:35:23 2006
A Web interface to the prmlist command is available. For information, see the wlm_watch(1M)
manpage.
In addition, you can use the wlminfo command, which shows CPU shares and utilization (CPU
Util) for each workload group. Beginning with WLM A.03.02, the command also shows memory
utilization. Because memory records are not being managed for any of the groups in this
example, a “-” is displayed in the Mem Shares and Mem Util columns:
# /opt/wlm/bin/wlminfo group
Thu Aug 29 08:36:38 2006
Workload Group   PRMID   CPU Shares   CPU Util   Mem Shares   Mem Util   State
OTHERS               1        65.00       0.00            -          -      ON
g2                   2        15.00       0.00            -          -      ON
g3                   3        20.00       0.00            -          -      ON
5. Start the scripts referenced in the configuration file, as explained in the following:
a. WLM checks the files /etc/shells and /opt/prm/shells to ensure one of them lists each shell
or interpreter, including perl, used in a script. If the shell or interpreter is not in either of those
files, WLM ignores its application record (the workload group assignment in an apps
statement).
Add the following line to the file /opt/prm/shells so that the application manager can
correctly assign the perl programs to workload groups:
/opt/perl/bin/perl
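One way to add the line from the shell, run as root, is sketched below. The grep guard that avoids duplicate entries is defensive scripting on our part, not a WLM requirement:

```shell
# Add the perl interpreter to PRM's list of recognized shells,
# but only if it is not already listed:
grep -q '^/opt/perl/bin/perl$' /opt/prm/shells 2>/dev/null \
    || echo /opt/perl/bin/perl >> /opt/prm/shells
```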
b. Start the two scripts, loop2.pl and loop3.pl. The scripts produce output, so you might
want to start each one in its own terminal session.
These scripts start in the PRM_SYS group because you started them as the root user.
However, the application manager soon moves them (within 30 seconds) to their assigned
groups, g2 and g3. After waiting 30 seconds, run the following ps command to see that the
processes have been moved to their assigned workload groups:
# ps -efP | grep loop
The output lists the loop2.pl and loop3.pl processes along with their workload groups (the
-P option adds a PRMID column to the ps output).
The wlminfo command shows usage of CPU resources (CPU utilization) by workload group.
The command output, which might be slightly different on your system, follows:
# /opt/wlm/bin/wlminfo group
Workload Group   PRMID   CPU Shares   CPU Util   Mem Shares   Mem Util   State
OTHERS               1        65.00       0.00            -          -      ON
g2                   2        15.00      14.26            -          -      ON
g3                   3        20.00      19.00            -          -      ON
This output shows that both groups are using CPU resources up to their allocations. If the
allocations were increased, the groups’ usage would probably increase to match the new
allocations.
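For example, to test that expectation, you could raise g3’s request in a copy of the configuration and reactivate it with wlmd -a. The value 40 below is an arbitrary illustrative choice; note that it must stay within g3’s gmaxcpu of 60:

```
slo test3 {
    pri = 1;
    cpushares = 40 total;   # raised from 20; illustrative value only
    entity = PRM group g3;
}
```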
11. Stop the loop.pl, loop2.pl, and loop3.pl perl programs.
Where is HP-UX Workload Manager installed?
The following table shows where WLM and some of its components are installed.
Item                                    Installation path
WLM                                     /opt/wlm/
WLM Toolkits                            /opt/wlm/toolkits/
Manpages for WLM and its toolkits       /opt/wlm/share/man/
If you are using WLM configurations that are based on the Process Resource Manager (PRM) product,
you must install PRM.
Can I see how HP-UX Workload Manager will perform
without actually affecting my system?
WLM provides a passive mode that enables you to see approximately how WLM will respond to a
given configuration—without putting WLM in charge of your system resources. Using this mode,
enabled with the -p option to wlmd, you can gain a better understanding of how various WLM
features work. In addition, you can verify that your configuration behaves as expected—with minimal
effect on the system. For example, with passive mode, you can answer the following questions:
• How does a cpushares statement work?
• How do goals work? Is my goal set up correctly?
• How might a particular cntl_convergence_rate value or the values of other tunables affect
allocation changes?
• How does a usage goal work?
• Is my global configuration file set up as I wanted? If I used global arbitration on my production
system, what might happen to the CPU layouts?
• Is a user’s default workload set up as I expected?
• Can a user access a particular workload?
• When an application is run, which workload does it run in?
• Can I run an application in a particular workload?
• Are the alternate names for an application set up correctly?
For more information on how to use the passive mode of WLM, as well as explanations of how
passive mode does not always represent actual WLM operations, see the “PASSIVE MODE VERSUS
ACTUAL WLM MANAGEMENT” section in the wlm(5) manpage.
Activate a configuration in passive mode by logging in as root and running the command:
# /opt/wlm/bin/wlmd -p -a config.wlm
where config.wlm is the name of your configuration file.
The WLM global arbiter, wlmpard, which is used in managing SLOs across virtual partitions and
nPartitions, also provides a passive mode.
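A sketch of a passive-mode run of the global arbiter follows, assuming wlmpard accepts the same -p and -a options as wlmd (check the wlmpard(1M) manpage) and using config.wlmpar as a purely illustrative filename:

```shell
# Run the WLM global arbiter in passive mode against its own
# configuration file; no cores are actually moved among partitions:
/opt/wlm/bin/wlmpard -p -a config.wlmpar
```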
How do I start HP-UX Workload Manager?
Before starting WLM (activating a configuration), you might want to try the configuration in passive
mode, discussed in the previous section. Otherwise, you can activate your configuration by logging in
as root and running the following command:
# /opt/wlm/bin/wlmd -a config.wlm
where config.wlm is the name of your configuration file.
When you run the wlmd -a command, WLM starts the data collectors you specify in the WLM
configuration.
Although data collectors are not necessary in every case, be sure to monitor any data collectors you
do have. Because data collection is a critical link for effectively maintaining your configured SLOs,
you must be aware when a collector exits unexpectedly. One method for monitoring collectors is to
use wlminfo slo.
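For example, the following sketch polls SLO status once a minute; the exact columns in the wlminfo slo output vary by WLM version:

```shell
# Poll SLO status periodically to watch for data collectors
# that have exited unexpectedly:
while true
do
    /opt/wlm/bin/wlminfo slo
    sleep 60
done
```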
For information on creating your WLM configuration, see the “How do I create a configuration file?“
section on page 11.
WLM automatically logs informational messages to the file /var/opt/wlm/msglog. In addition, WLM
can log data that enables you to verify WLM management and fine-tune your WLM configuration file.
To log this data, use the -l option. This option causes WLM to log data to /var/opt/wlm/wlmdstats.
The following command line starts WLM, logging data for SLOs every third WLM interval:
# /opt/wlm/bin/wlmd -a config.wlm -l slo=3
For more information on the -l option, see the wlmd(1M) manpage.
How do I stop HP-UX Workload Manager?
With WLM running, stop it by logging in as root and running the command:
# /opt/wlm/bin/wlmd -k