wlm(5)                                                               wlm(5)

NAME
wlm - HP-UX WLM overview
DESCRIPTION
HP-UX Workload Manager (WLM) is an automatic resource management tool
used for goal-based workload management. A workload is a partition or a
group of processes that are treated as a single unit for the purposes
of resource management. For example, a virtual partition could be a
single workload; also, a database application that consists of multiple
cooperating processes could be considered a workload.
You can use WLM within a whole server that can be clustered in an HP
Serviceguard high availability cluster, Extended Campus Cluster, Metro‐
cluster, Continentalcluster, or in a Hyperplex configuration. You can
also use WLM on an HP Integrity Virtual Machines (Integrity VM) Host
and within any individual Integrity VM (guest). You can use WLM within
nPartitions and virtual partitions as well as across partitions.
WLM is most effective managing applications that are CPU-bound. It
adjusts the CPU allocation of the group of processes that constitute a
workload, basing adjustment on current needs and performance of that
workload's applications.
When using fair share scheduler (FSS) groups, WLM allocates CPU
resources in shares (portions or time slices) of multiple cores (a core
is the actual data-processing engine within a processor; a single pro‐
cessor might have multiple cores, and a core might have multiple execu‐
tion threads). When using WLM partition management or processor set
(PSET) management, WLM allocates CPU resources in whole cores.
Starting with HP-UX 11i v3 (11.31), WLM supports the logical CPU
(Hyper-Threading) feature for processors designed to support the fea‐
ture and that have the appropriate firmware installed. A logical CPU
is an execution thread contained within a core. With Hyper-Threading
enabled, each core can contain multiple logical CPUs. WLM supports the
Hyper-Threading feature for PSET-based groups. WLM automatically sets
the Hyper-Threading state for the default PSET to optimize performance.
(The default PSET, also known as PSET 0, is where all FSS groups
reside.) When new PSETs are created, they inherit the Hyper-Threading
state that the system had before WLM was activated (inheritance is
based on the system state before WLM activation because WLM may change
the Hyper-Threading setting for the default PSET to optimize perfor‐
mance). Cores can be moved from one partition to another and will take
on the Hyper-Threading state of their destination PSET.
You can modify the Hyper-Threading state of the system by using the
kctune command (for an example, see the wlmconf(4) manpage; for more
information, see the kctune(1) manpage).
WLM enables you to override the default Hyper-Threading state of cores
assigned to a specific PSET group. Use the LCPU attribute, specified in
the PSET group's definition in the prm structure. For more information,
see the wlmconf(4) manpage. The LCPU attribute is based on an attribute
value that you can also examine and set using the psrset command. For
more information on this command, see the psrset(1M) manpage.
Do not use the psrset or kctune command to modify a PSET while WLM is
running. Whenever possible, use WLM to control PSETs.
HP-UX WLM provides automatic resource allocation and application per‐
formance management through the use of prioritized service-level objec‐
tives (SLOs). Multiple prioritized workloads can be managed on a single
server, both within and across partitions, based on their reported per‐
formance levels.
When a workload group has no active SLOs, WLM reduces its resource
shares. (You control when SLOs are active through the WLM configuration
file.) For more information on these reductions, see the discussion of
the transient_groups keyword in the wlmconf(4) manpage.
WORKLOAD MANAGEMENT ACROSS PARTITIONS
WLM is optimized for moving cores across hosts such as nPartitions and
virtual partitions. Using hosts as workloads, WLM manages workload
allocations while maintaining the isolation of their HP-UX instances.
WLM automatically moves (or "virtually transfers") cores among parti‐
tions based on the SLOs and priorities you define for the workloads.
WLM can manage nested workloads (based on FSS groups and PSETs) inside
virtual partitions that are inside nPartitions.
For each host (nPartition or virtual partition) workload, you define
one or more SLOs in the WLM configuration file. WLM allows you to pri‐
oritize the SLOs so that an SLO assigned a higher priority is given
precedence over SLOs with a lower priority. Once configured, WLM then
automatically manages CPU resources to satisfy the SLOs for the work‐
loads. In addition, you can integrate WLM with HP Serviceguard to allo‐
cate resources in a failover situation according to defined priorities
(for more information on integrating WLM with HP Serviceguard, see the
"Serviceguard" section below).
For more information on WLM management of partitions, see the sections
"nPartitions" and "Virtual partitions" below.
WORKLOAD MANAGEMENT IN A SINGLE HP-UX INSTANCE
You can also use WLM to manage workloads to divide resources within a
single HP-UX instance by managing workloads based on FSS groups or
PSETs. Such workloads are usually referred to as workload groups.
When you configure WLM, you define workload groups for the system or
partition, and you can determine the placement of processes in workload
groups by assigning specific applications, users, and Unix groups to
each workload group. (WLM places the processes associated with the des‐
ignated applications, users, or Unix groups in the assigned workload
groups.) You can also create your own criteria for placing application
processes in specified workload groups by defining process maps. A
process map associates a specific workload group with a script or com‐
mand and its arguments that gather and output the process IDs to be
placed in that group. WLM spawns the command or script at 30-second
intervals, and at each interval, places the identified processes in the
appropriate groups. In addition, the WLM SAP Toolkit, in conjunction
with the HP Serviceguard Extension for SAP (SGeSAP) product, takes
advantage of process maps, providing a script that enables you to place
specified SAP processes in specific workload groups managed by WLM.
For more information, see the section "SAP software" below.
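As a sketch of the process-map mechanism described above, a procmap
entry might appear in a prm structure as follows; the group name and
script path are hypothetical, and the exact syntax is defined in the
wlmconf(4) manpage:

```
# Hypothetical sketch of a process map; the group name and
# script path are examples only (see wlmconf(4) for exact syntax).
prm {
    groups  = oracle_grp : 2;

    # Every 30 seconds, WLM runs the script; each process ID the
    # script prints on stdout is placed in the oracle_grp group.
    procmap = oracle_grp : /opt/scripts/list_oracle_pids.sh;
}
```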
You can assign secure compartments to workload groups, creating the
secure compartments with the HP-UX feature Security Containment.
Secure compartments isolate files and processes. WLM can then automati‐
cally allocate resources for these secure compartments.
When you configure WLM, you define one or more SLOs for each workload
group and prioritize them. To satisfy the SLOs for the workload groups,
WLM will then automatically manage CPU resources and, optionally, real
memory and disk bandwidth (WLM management of workload groups is con‐
fined within the HP-UX instance or partition; no allocation is made
across partitions.) If multiple users or applications within a workload
group are competing for resources, standard HP-UX resource management
determines the resource allocation within the workload.
With real memory, WLM allows you to specify lower and upper limits on
the amount of memory a workload group receives. Disk bandwidth shares
can be statically assigned in the configuration file.
Specifically, within a single instance of HP-UX, WLM can manage the
following resources for your workload groups:
CPU
Arbitrates CPU requests to ensure high-priority SLOs meet their
objectives. SLOs make CPU requests for workload groups. CPU
resources are allocated in CPU shares, where a CPU share is
1/100 of each CPU on the system or 1/100 of a single core of CPU
resources, depending on WLM's mode of operation. You can allo‐
cate CPU resources in:
+ Time slices on several cores
+ Whole cores used by PSET-based workload groups
+ Whole cores used by virtual partition-based workload groups
+ Whole cores used by nPartition-based workload groups (with
each nPartition using Instant Capacity, formerly known as iCOD)
A workload group might not achieve its CPU request if CPU
resources are oversubscribed and the workload group's SLOs are
low priority.
Disk bandwidth
Ensures that each workload group is allocated disk bandwidth
according to the current WLM configuration.
Memory
Ensures that each workload group is granted at least its mini‐
mum, but (optionally) no more than its capped amount of real
memory.
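As a sketch of how these memory and disk-bandwidth controls might be
expressed in a configuration (the gminmem, gmaxmem, and disks keywords
should be verified against the wlmconf(4) manpage; group names and
values are illustrative):

```
# Illustrative prm structure; verify keyword syntax in wlmconf(4).
prm {
    groups  = finance : 2, batch : 3;

    # Memory: finance receives at least 30% of real memory,
    # optionally capped at 50%.
    gminmem = finance : 30;
    gmaxmem = finance : 50;

    # Disk bandwidth: static shares per volume group.
    disks   = finance : /dev/vg01 : 60,
              batch   : /dev/vg01 : 40;
}
```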
In addition, WLM has an application manager that ensures specified
applications and their child processes run in the appropriate workload
groups.
WLM COMMANDS
WLM supports the commands listed below. For more information about a
command, see its manpage.
wlmd  Starts WLM and activates a configuration. Can also be used to
validate WLM configuration files and to log data for performance tuning.
wlminfo  Provides various WLM data.
wlmcw  This graphical configuration wizard greatly simplifies the
process of creating a WLM configuration.
Usage of the wizard requires Java Runtime Environment (JRE) version 1.5
or later be installed on your system. For PRM-based configurations, PRM
C.03.00 or later is required. (To take advantage of the latest updates
to WLM, use the latest version of PRM available.)
wlmgui  This graphical interface allows you to create, modify, and
deploy WLM configurations both locally and remotely. In addition, it
provides monitoring capabilities.
This graphical interface can be run on the local system where WLM is
running or on any remote system with the appropriate Java Runtime Envi‐
ronment (JRE) version installed. Running the graphical interface
requires JRE version 1.5 or later. For PRM-based configurations, PRM
C.03.00 or later must be installed on the system being managed by WLM.
(To take advantage of the latest updates to WLM and the GUI, use the
latest version of PRM available.)
The version of the GUI must match the version of the WLM product it
will manage. You can install multiple versions of the GUI on a Micro‐
soft Windows PC.
wlmpard  Starts the WLM global arbiter for cross-partition management
or management of Temporary Instant Capacity (TiCAP) or Pay per use
(PPU) resources.
wlmsend  Sends metric values to a named rendezvous point for wlmrcvdc
to forward to WLM. This tool provides a mechanism for writing data
collectors in scripting languages such as sh, csh, perl, and others. It
is also convenient for sending metric data from the command line.
wlmrcvdc  Receives metric values from a named rendezvous point and
forwards them to the WLM daemon. This tool is started by the WLM daemon
as requested in the tune structures in the configuration file.
wlmrcvdc can forward data from all sorts of commands to WLM. HP
provides the following commands for use with wlmrcvdc to collect the
specified types of data:
glance_app  Retrieves data for applications defined in the GlancePlus
file /var/opt/perf/parm.
glance_gbl  Retrieves a global (system) metric.
glance_prm  Retrieves general PRM data and PRM data for specific
workload groups (also known as PRM groups).
glance_prm_byvg  Retrieves PRM data regarding logical volumes.
glance_tt  Retrieves data on ARM (Application Response Measurement)
transactions for applications registered through the ARM API.
sg_pkg_active  Checks the status of a Serviceguard package.
time_url_fetch  Measures the response time for fetching a URL. You can
use this command with the WLM Apache Toolkit (ApacheTK) to manage your
Apache-based workloads.
wlmdurdc  Helps manage the duration of processes in a workload group.
wlmoradc  Produces an SQL value or an execution time (walltime) that
results from executing SQL statements against an Oracle(R) database
instance.
wlmwlsdc  Retrieves metrics on BEA WebLogic Server instances. You can
use this command with the WLM BEA WebLogic Server Toolkit (WebLogicTK)
to manage your WebLogic workloads.
Validates WLM configuration files for integration with Servicecontrol
Manager and HP Systems Insight Manager.
wlmemsmon  The WLM EMS monitor provides information on how well WLM and
the managed workload groups are performing. wlmemsmon monitors the WLM
daemon and provides EMS resources that an EMS client can monitor.
wlmcomd  Services requests from the WLM graphical user interface.
wlmcert  Manages WLM's security certificates.
Converts a PRM configuration file into a WLM configuration file.
Identifies SAP processes based on user-defined criteria. Use this util‐
ity in conjunction with WLM's process map (procmap) feature to assign
SAP processes to different workload groups.
WLM Network Operating Environment
WLM's network interfaces are designed to operate correctly and to
defend against attacks in a moderate to high threat environment, such
as a DMZ. You can use network protections, such as firewalls, to
provide an additional level of defense and to give you additional time
to react when a security loophole is found.
As of A.03.01, WLM enables secure communications by default when you
start WLM using the /sbin/init.d/wlm script. If you upgraded WLM,
secure mode might not be the default. Ensure that the secure mode
variables are enabled in /etc/rc.config.d/wlm. You also must dis‐
tribute security certificates to all systems or partitions being
managed by the same WLM global arbiter. For more information on using
security certificates and other tasks necessary to enable secure
communications, refer to the wlmcert(1M) manpage (also available at
http://www.hp.com/go/wlm).
The WLM wlmcomd and wlmpard daemons use the following port numbers by
default:
9691
9692
Make sure these ports are kept open. To change these port numbers,
refer to the wlmpard(1M) and wlmcomd(1M) manpages (also available at
http://www.hp.com/go/wlm).
HOW TO USE WLM
The following steps show how to use WLM:
1. Create a WLM configuration
The WLM configuration file is the main user interface for control‐
ling WLM. In a WLM configuration, you:
· Define workloads
· Place applications, users, Unix groups, or secure compart‐
ments in workload groups (for workloads based on PSETs or FSS
groups)
· Create one or more SLOs for each workload (For information
on SLOs, see the section "SLO TYPES" below.)
WLM provides a number of example configurations in /opt/wlm/exam‐
ples/wlmconf/ that you can modify to fit your environment. For an
overview of these examples, see the section "EXAMPLE CONFIGURATIONS"
below. The WLM Toolkits also offer a number of example configura‐
tions. For pointers to those files, see the "EXAMPLES" section in
the wlmtk(5) manpage.
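A minimal configuration of this kind might look like the following
sketch; the group names, application path, and share values are
illustrative only, and the exact syntax is defined in the wlmconf(4)
manpage:

```
# Minimal WLM configuration sketch; names and values are examples.
prm {
    groups = sales : 2, batch : 3;
    apps   = sales : /opt/sales/bin/salesd;
}

# One SLO: let WLM adjust the sales group's CPU allocation
# automatically, based on the group's actual CPU usage.
slo sales_usage {
    pri    = 1;                  # highest priority
    mincpu = 20;                 # never fewer than 20 CPU shares
    maxcpu = 80;                 # never more than 80 CPU shares
    entity = PRM group sales;
    goal   = usage _CPU;
}
```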
If you prefer not to work directly with a configuration file, you
can use the:
· WLM Configuration Wizard
Invoke the wizard with the command /opt/wlm/bin/wlmcw.
(Because the wizard is an X-windows application, be sure to
set your DISPLAY environment variable before starting it.)
The wizard does not provide all the functionality available
through a configuration file, but it does greatly simplify
the process of creating a configuration. After creating a
configuration file using the wizard, you can view the file to
learn, and become more comfortable with, the syntax and pos‐
sibly create more complex configurations.
· WLM GUI
Invoke the WLM GUI with the command /opt/wlm/bin/wlmgui. (Be
sure to set your DISPLAY environment variable before starting
the GUI.)
The GUI does require you to be familiar with the WLM
configuration file syntax. However, it provides forms and
tooltips (visible when your mouse hovers over a form field) to
simplify the configuration process. The GUI requires the WLM
communications daemon, as explained in the wlmgui(1M) manpage,
the appropriate version of the Java Runtime Environment (JRE),
and, for PRM-based configurations, the appropriate version of
PRM. (To take advantage of the latest updates to WLM and the
GUI, use the latest version of PRM available.)
2. (Optional) Set up secure WLM communications
Follow the procedure HOW TO SECURE COMMUNICATIONS in the wlmcert(1M)
manpage--skipping the step about starting/restarting the WLM dae‐
mons. You will do that later in this procedure.
3. Use the provided data collectors or create your own
Data collectors supply metrics to the WLM daemon. The daemon then
uses these metrics to:
· Determine new resource allocations to enable the workload
groups to achieve their SLOs
· Set shares-per-metric allocations
· Enable or disable SLOs
You have a number of options when it comes to data collectors:
· The easiest data collector to set up is the one for usage
goals. This data collector is automatically used when you
specify a usage goal.
· The next easiest data collectors to set up are the commands
shown above in the wlmrcvdc discussion.
· You can also set up wlmrcvdc to forward the stdout of a
data-collecting command to WLM.
· Combining wlmsend with wlmrcvdc, you can send data to WLM
from the command line, a shell script, or a perl program.
· If you are writing a data collector in C, your program can
interface directly with WLM through the WLM API.
For an overview of data collectors, see the section "ADVANCED WLM
USAGE: HOW APPLICATIONS CAN MAKE METRICS AVAILABLE TO WLM" below.
Data collectors invoked by WLM run as root and can pose a security
threat. Hewlett-Packard makes no claims of any kind with regard to
the security of data collectors not provided by Hewlett-Packard.
Furthermore, Hewlett-Packard shall not be liable for any security
breaches resulting from the use of said data collectors.
For information on creating data collectors, see the white paper
"Writing a Better WLM Data Collector" available at
/opt/wlm/share/doc/howto/perfmon.html.
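As a sketch of the wlmrcvdc pattern described above, a tune structure
can tie a metric name to the rendezvous-point collector; the metric
name and value here are illustrative:

```
# Sketch: receive values for the metric "order_rate" via wlmrcvdc.
# The metric name is an example; see wlmconf(4) for tune syntax.
tune order_rate {
    coll_argv = wlmrcvdc;
}
```

A shell script (or the command line) could then report values with,
for example, wlmsend order_rate 12.5.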
4. Activate the configuration in passive mode if desired
WLM operates in "passive mode" when you include the -p option in your
wlmd command to activate a configuration. With passive mode, you can see
how WLM will approximately respond to a particular configura‐
tion--without the configuration actually taking control of your sys‐
tem. For more information on this mode, including its limitations,
see the PASSIVE MODE section below.
Activate the WLM file configfile in passive mode as follows:
wlmd -p -a configfile
To see how WLM responds to the configuration, use the WLM utility
wlminfo.
5. Activate the configuration
Activate your configuration--putting WLM in control of your system's
resources--as follows:
wlmd -a configfile
To generate audit data, enable secure communications, and log
statistics, use the following command:
wlmd -t -s -a configfile -l all
Alternatively, you can set variables in /etc/rc.config.d/wlm to
automatically activate WLM, generate audit data, and log statistics
when the system boots. In this case, wlmd starts with a copy of the
last activated configfile.
When you start WLM by using the /sbin/init.d/wlm script, WLM runs in
secure mode by default. (If you upgraded WLM, secure mode might not
be the default. Ensure that the secure mode variables are enabled in
/etc/rc.config.d/wlm.) Running WLM in secure mode requires that you
set up security certificates and distribute them to all systems in
question. HP recommends using secure mode. If you choose not to
use secure mode, use global arbitration only on trusted local area
networks. For information on securing communications, refer to the
section HOW TO SECURE COMMUNICATIONS in the wlmcert(1M) manpage.
6. Monitor SLO compliance
Using wlminfo with its slo command, or its interactive mode, allows you
to monitor your SLOs.
Also, the WLM EMS monitor makes various status data available to EMS
clients. You can check this data to verify SLO compliance.
7. Monitor data collectors
Data collection is a critical link in the effective maintenance of
your configured service-level objectives. Consequently, you should
monitor your data collectors so you can be aware when one dies.
When using wlminfo, there are two columns that can indicate the death of a
data collector process: State and Concern. For more information on
these columns, see the wlminfo(1M) manpage.
The WLM EMS monitor also tells you when a data collector dies unex‐
pectedly. You need to configure EMS monitoring requests that notify
you on the death of a data collector.
When a data collector dies, each SLO that uses the data from the
dead collector is affected. As an indication of the problem, each
SLO's EMS resource:
/applications/wlm/slo_status/<SLONAME>
changes its value to indicate the problem.
Use the EMS configuration interface to set up monitoring requests to
watch for this situation. The EMS configuration interface is avail‐
able in the System Administration Manager (SAM) or System Management
Homepage (SMH) "Resource Management" application group. (SMH is an
enhanced version of SAM.)
8. Configure global arbitration across partitions
Besides controlling CPU allocations within a system or partitions,
WLM can migrate cores across partitions. You can even treat a parti‐
tion as a workload unto itself by not using a prm structure in the WLM
configuration. (WLM can also control CPU resources for a nested
environment with FSS and PSET-based groups inside virtual partitions
inside nPartitions.)
By default, WLM global arbitration runs in secure mode when you use
the /sbin/init.d/wlm script to start WLM. (If you upgraded WLM,
secure mode might not be the default. Ensure that the secure mode
variables are enabled in /etc/rc.config.d/wlm.) Running in secure
mode requires that you have performed the required steps to set up
security certificates and distribute them. HP recommends using
secure mode. If you choose not to use secure mode, use global arbi‐
tration only on trusted local area networks.
SLO TYPES
WLM supports the following types of SLOs:
· Goal-based SLOs
· Shares-based SLOs
Goal-based SLOs
These SLOs cause WLM to grant more CPU resources (cores) or take away
CPU resources based on reported metrics. These SLOs have either usage
goals or metric goals. Usage-goal-based SLOs specify CPU utilization
goals for a workload group, indicating how much of its allocation the
group must be using before the allocation is changed. With a usage
goal, a workload group's CPU allocation is reduced if its workload is
consuming too little of the current allocation, allowing other work‐
loads to consume more CPU resources if needed. Similarly, if the work‐
load is using a high percentage of its group's allocation, it is
granted more CPU resources. (WLM tracks the metrics for usage goals
internally; no data collector is needed.)
A usage goal has the form:
goal = usage _CPU [low_util_bound [high_util_bound]];
WLM automatically changes the CPU allocation for goal-based SLOs to
better achieve their stated goals. The actual CPU allocation granted is
based on the amount of CPU resources needed to meet the goal as deter‐
mined by WLM, the request limits placed on the SLO, and the availabil‐
ity of CPU resources after the needs of all higher priority SLOs have
been met.
Metric-goal-based SLOs are suitable for applications that can generate
metrics. For example, online transaction processing (OLTP) applications
are good candidates for metric-goal-based SLOs. HP recommends using
usage goals, as usage goals can be implemented immediately without
prior knowledge of workload performance.
A metric goal has the following form:
goal = metric met > goal_value;
or
goal = metric met < goal_value;
where you want the metric named met to be greater than or less than
goal_value.
For more information on metric goals, refer to the appendix in the WLM
User's Guide (/opt/wlm/share/doc/WLMug.pdf), entitled "Advanced WLM:
Using performance metrics." See also the section "ADVANCED WLM USAGE:
HOW APPLICATIONS CAN MAKE METRICS AVAILABLE TO WLM" below. Configura‐
tion information is included in the wlmconf(4) manpage.
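For instance, a metric goal might appear in an slo structure as in the
following sketch; the group name, metric name, and values are
illustrative only:

```
# Sketch: keep the reported response time under 2.0 seconds.
# Names and values are examples; see wlmconf(4) for exact syntax.
slo oltp_response {
    pri    = 1;
    mincpu = 10;
    maxcpu = 90;
    entity = PRM group oltp;
    goal   = metric resp_time < 2.0;
}
```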
Shares-based SLOs
This SLO type allows an administrator to specify a CPU allocation for a
workload group without specifying a goal. The allocation can be fixed
or shares-per-metric.
To have a fixed allocation of x percent of the CPU resources, use
the cpushares keyword as follows:
cpushares = x total;
You can use this same keyword to specify a shares-per-metric alloca‐
tion. With this type of allocation, the associated workload group
receives a given amount of the CPU resources per metric. For example,
with the following statement, a workload group would receive 5 shares
of the CPU resources for each process an application has running in the
group:
cpushares = 5 total per metric application_procs;
The actual CPU allocation granted to the workload group is subject to
the availability of CPU resources after the needs of higher priority
SLOs have been met.
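A shares-per-metric statement like the one above might be used in an
slo structure as in this sketch; the group name, metric name, and
priority are illustrative:

```
# Sketch: give the batch group 5 CPU shares per running process,
# as reported by the metric application_procs (names are examples).
slo batch_shares {
    pri       = 2;
    entity    = PRM group batch;
    cpushares = 5 total per metric application_procs;
}
```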
A workload group with a fixed-allocation SLO can coexist on a system
with other workload groups that have goal-based SLOs. Moreover, this
SLO type could be used to allocate resources to optional or discre‐
tionary work.
WLM allows multiple SLOs--assuming they are fixed-allocation or goal-
based SLOs--for workload groups that require more than one SLO to
accommodate a "must meet" goal and optional, lower-priority stretch
goals.
For more information on slo structures, where you define your SLOs,
see the wlmconf(4) manpage.
PASSIVE MODE
WLM provides a passive mode that allows you to see how WLM will approx‐
imately respond to a given configuration--without putting WLM in charge
of your system's resources. Using this mode, you can analyze your con‐
figuration's behavior--with minimal effect on the system. Besides being
useful in understanding and experimenting with WLM, passive mode can be
helpful in capacity-planning activities. A sampling of possible uses
for passive mode is described below. These uses help you determine:
· How does a condition statement work?
Activate your configuration in passive mode then start the
wlminfo utility. Use wlmsend to update the metric that is used
in the condition statement. Alternatively, wait for the
condition to change based on the date and time. Monitor the
behavior of the SLO in question in the output. Is it on or off?
Always wait at least 60 seconds (the default WLM interval) for
WLM's changes to resource allocations to appear in the output.
(Alternatively, you can adjust the interval using the wlm_interval
tunable in your WLM configuration file.)
· How does a cpushares statement work?
Activate your configuration in passive mode then start the
wlminfo utility. Use wlmsend to manipulate the metric used in
the statement. What is the resulting allocation shown in the
output?
· How do goals work? Is my goal set up correctly?
Activate your configuration and monitor the WLM behavior in
the output. What is the range of values for a given metric?
Does WLM have the goal set to the level expected? Is WLM
adjusting the workload group's CPU allocation?
· How might a particular tunable value or the values of other
tunables affect allocation changes?
Create several configurations, each with a different value
for the tunable in question. Activate one of the configura‐
tions and monitor the WLM behavior in the output. Observe how
WLM behaves differently under each of the configurations.
· How does a usage goal work?
In passive mode, a usage goal's behavior might not match what
would be seen in regular mode, but what is its basic behavior
if the application load for a particular workload group is
increased?
Activate your configuration and monitor the output to see how
WLM adjusts the workload group's CPU allocation in response
to the group's usage.
· Is my global configuration file set up as I wanted? If I used
global arbitration on my production system, what might happen
to the CPU layouts?
You can run wlmpard in passive mode with each partition's wlmd
daemon running in regular mode. Thus, you can run experiments
on a production system without consequence.
In addition, passive mode allows you to validate workload group, appli‐
cation, and user configuration. For example, with passive mode, you can
determine:
· Is a user's default workload group set up as I expected?
· Can a user access a particular workload group?
· When an application is run, which workload group does it run
in?
· Can I run an application in a particular workload group?
· Are the alternate names for an application set up correctly?
Furthermore, using metrics collected with passive mode can be useful
for capacity planning and trend analysis. For more information, see
glance_prm(1M).
PASSIVE MODE VERSUS ACTUAL WLM MANAGEMENT
This section covers the following topics:
· The WLM feedback loop
· Effect of mincpu and maxcpu values
· Using prmmonitor and prmlist in passive mode
· The effect of passive mode on usage goals and metric goals
WLM's operations are based on a feedback loop: System activity typi‐
cally affects WLM's arbitration of service-level objectives. This arbi‐
tration results in changes to CPU allocations for the workload groups,
which can in turn affect system activity--completing the feedback loop.
The diagram below shows WLM's normal operation, including the feedback
loop.
                                     Usage/metrics
   Normal operation: System activity ---------------------> WLM
                          ^                                  |
                          |                                  v
                          +--<--<--<--<--<--<--<--<--<--<--<-+
                                   Allocation changes
In passive mode, however, the feedback loop is broken, as shown below.
Usage/metrics
Passive operation: System activity -------------------> WLM
Thus, in passive mode, WLM takes in data on the workloads. It even
forms a CPU request for each workload based on the data received. How‐
ever, it does not change the CPU allocations for the workloads on the
system. In passive mode, WLM does use the values of keywords such as
mincpu and maxcpu to form shares requests.
However, because WLM does not adjust allocations in passive mode, it
may appear that these values are not used. Use the wlminfo utility to
monitor WLM in passive mode. Its output reflects WLM behavior and
operation. It
shows the amount of CPU resources WLM is requesting for a work‐
load--given the workload's current performance. However, because WLM
does not actually adjust CPU allocations in passive mode, WLM does not
affect the workload's performance--as reported in usage values and met‐
ric values. Once you activate WLM in normal mode, it adjusts alloca‐
tions and affects these values.
For the purposes of passive mode, WLM creates a PRM configuration with
each of your workload groups allocated one CPU share, and the rest
going to the reserved group. (If your configuration has PSET-based
workload groups, the PSETs are created but with 0 CPU resources.) In this
configuration, CPU capping is not enforced--unlike in normal WLM opera‐
tion. Furthermore, this configuration will be the only one used for the
duration of the passive mode. WLM does not create new PRM configura‐
tions, as it does in normal operation, to change resource allocations.
Consequently, you should not rely on prmmonitor or prmlist to observe
changes when using passive mode. These utilities will display the
configuration WLM used to create the passive mode. However, you can use
wlminfo to gather CPU usage data. As noted above, in passive mode,
WLM's feedback loop is not in place.
place. The lack of a feedback loop is most dramatic with usage goals.
With usage goals, WLM changes a workload group's CPU allocation so that
the group's actual CPU usage is a certain percentage of the allocation.
In passive mode, WLM does not actually change CPU allocations. Thus, an
SLO with a usage goal might be failing; however, that same SLO might
easily be met if the feedback loop were in place. Similarly, an SLO
that is passing might fail if the feedback loop were present. However,
if you can suppress all the applications on the system except for the
one with a usage goal, passive mode should give you a good idea of how
the usage goal would work under normal WLM operation.
Passive mode can have an effect on SLOs with metric goals as well.
Because an application is not constrained by WLM in passive mode, the
application might produce metric values that are not typical for a nor‐
mal WLM session. For example, a database application might be using
most of a system. As a result, it would complete a high number of
transactions per second. The database performance could be at the
expense of other applications on the system. However, your WLM configu‐
ration, if it were controlling resource allocation, might scale back
the database's access to resources to allow the other applications more
resources. Thus, the output would show WLM's efforts to reduce the
database's CPU allocation. Because passive mode prevents a reduction
in the allocation, the database's number of transactions per seconds
(and system use) remains high. WLM, believing the previous allocation
reduction did not produce the desired result, again lowers the data‐
base's allocation. Thus, with the removal of the feedback loop, WLM's
actions in passive mode do not always indicate what it would do nor‐
mally.
Because of these discrepancies, always be careful when using passive
mode as an indicator of normal WLM operation. Use passive mode to see
trends in WLM behavior--with the knowledge that the trends may be exag‐
gerated because the feedback loop is not present.
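For example, a trial configuration might be activated in passive mode
as follows (the configuration file name is illustrative; see wlmd(1M)
for the exact options):
     % wlmd -p -a /tmp/trial.wlm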
EXAMPLE CONFIGURATIONS
WLM comes with several example configuration files. These examples are
in the directory /opt/wlm/examples/wlmconf/. Here is an overview of the
examples:
distribute_excess.wlm
Example configuration file demonstrating the use of the
distribute_excess keyword. This functionality is used to manage the
distribution of excess resources among workload groups after honoring
the performance goals specified in slo structures.
enabling_event.wlm
A configuration file demonstrating the use of WLM to enable or
disable a service-level objective (SLO) when a certain event
occurs.
entitlement_per_process.wlm
A configuration file that demonstrates the use of a shares-per-
metric goal. A workload group's allocation, or entitlement, is
based directly on the number of currently active processes run‐
ning in the group.
fixed_entitlement.wlm
This simple example configuration illustrates the use of WLM in
granting a fixed allocation (entitlement) to a particular group
of users.
manual_entitlement.wlm
A configuration file to help a new WLM user characterize the
behavior of a workload. The goal is to determine how a workload
responds to a series of allocations (entitlements). For a simi‐
lar configuration that changes the number of CPU resources in
the PSET upon which a workload group is based, see
/opt/wlm/toolkits/weblogic/config/manual_cpucount.wlm.
metric_condition.wlm
Configuration file to illustrate that an SLO can be enabled
based upon the value provided by a metric (in this case, the
metric comes from a glance data collector shipped with the
WLM product). Metrics can be used in both the condition
statement and the goal statement of a single SLO.
par_manual_allocation.wlm, par_manual_allocation.wlmpar
These configuration files demonstrate WLM's ability to resize
HP-UX Virtual Partitions and/or nPartitions. With this configu‐
ration, you manually request the number of CPU resources (cores)
for a partition by using the wlmsend command to feed the request to WLM.
The way WLM manages cores depends on the software enabled on the
complex (such as Instant Capacity, Pay per use, and Virtual Par‐
titions).
Configure WLM in each partition on the system using the .wlm
file. Configure the WLM global arbiter in one partition using
the .wlmpar file.
par_usage_goal.wlm, par_usage_goal.wlmpar
These configuration files demonstrate WLM's ability to resize
HP-UX Virtual Partitions and/or nPartitions, shifting cores
across partitions on a system. A usage goal is placed on the
workload so that WLM will automatically increase the workload's
allocation of cores when more CPU resources are needed. Like‐
wise, WLM will decrease the allocation of cores when the work‐
load is less busy. The way WLM manages cores depends on the
software enabled on the complex (such as Instant Capacity, Pay
per use, and Virtual Partitions). Configure WLM in each parti‐
tion on the system using the .wlm file. Configure the WLM global
arbiter using the .wlmpar file.
performance_goal.template
This file has a different file name extension (.template vs.
.wlm). That is simply because this file distinguishes between
configuration file special keywords and user-modifiable values
by placing the items that a user would need to customize within
square brackets ([]'s). Because of the presence of the square
brackets, the sample file will not pass the syntax-checking mode
of wlmd (wlmd -c template). All of the files with names ending in
.wlm will parse correctly.
stretch_goal.wlm
Example configuration file to demonstrate how to use multiple
SLOs for the same workload (but at different priority levels) to
specify a stretch goal for a workload. A stretch goal is one
that we'd like to have met if all other higher-priority SLOs are
being satisfied and there are additional CPU resources avail‐
able.
time_activated.wlm
This configuration file demonstrates the use of WLM in granting
a fixed allocation (entitlement) to a particular group of users
only during a certain time period.
transient_groups.wlm
This configuration file demonstrates how to minimize resource
consumption when workload groups have no active SLOs.
twice_weekly_boost.wlm
A configuration file that demonstrates a conditional allocation
with a moderately complex condition.
usage_goal.wlm
This configuration demonstrates the usage goal for service-level
objectives. This type of goal is different from the typical
performance goal in that it does not require explicit metric
data.
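As a minimal sketch (the group and SLO names are hypothetical), a
usage-goal SLO looks like this in a configuration file:
     slo s_usage {
         pri = 1;
         entity = PRM group g_app;
         mincpu = 5;
         maxcpu = 50;
         goal = usage _CPU;
     }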
usage_stretch_goal.wlm
This configuration demonstrates the use of multiple SLOs at dif‐
ferent priority levels for several workloads. For each of sev‐
eral workloads, a priority 1 SLO defines a base CPU usage goal,
while a priority 10 SLO defines a stretch goal that provides
additional CPU resources after the priority 1 SLOs are satis‐
fied.
user_application_records.wlm
A configuration file that demonstrates the use of, and prece‐
dence between, user and application records in placing processes
in workload groups.
INTEGRATION WITH OTHER PRODUCTS
WLM integrates with various other products to provide greater function‐
ality. Currently, these other products are:
· Apache web server
· nPartitions
· OpenView Performance Agent for UNIX /
OpenView Performance Manager for UNIX
· Oracle databases
· Pay per use
· Processor sets
· Process Resource Manager (PRM)
· SAP(R) Software
· SAS(R) Software
· Security Containment
· Serviceguard
· HP-UX SNMP Agent
· Systems Insight Manager / Servicecontrol Manager
· Temporary Instant Capacity (TiCAP)
· Virtual partitions
· HP Integrity Virtual Machines (Integrity VM)
· BEA WebLogic Server
The integration with these products is described below.
Apache web server
WLM can help you manage and prioritize Apache-based workloads through
the use of the WLM Apache Toolkit (ApacheTK), which is part of the
freely available WLM Toolkits (WLMTK) product, installed under
/opt/wlm/toolkits/. WLM can be used with Apache processes, Tomcat, CGI
scripts, and related tools using the HP-UX Apache-based Web Server.
ApacheTK shows you how to:
· Separate Apache from Oracle database instances
· Separate Apache from batch work
· Isolate a resource-intensive CGI workload
· Isolate a resource-intensive servlet workload
· Separate all Apache Tomcat workloads from other Apache work‐
loads
· Separate two departments' applications using two Apache
instances
· Separate module-based workloads with two Apache instances
· Manage Apache CPU allocation by performance goal
For more information, see /opt/wlm/tool‐
kits/apache/doc/apache_wlm_howto.html.
nPartitions
You can run WLM within and across nPartitions. (WLM can even manage CPU
resources for nPartitions containing virtual partitions containing FSS
workload groups.) The way WLM manages CPU resources depends on the
software enabled on the complex (such as Instant Capacity, Pay per use,
and Virtual Partitions). For more information, see the wlmpard(1M) and
wlmparconf(4) manpages.
OpenView Performance Agent (OVPA) for UNIX /
OpenView Performance Manager (OVPM) for UNIX
You can treat your workload groups as applications and then track their
application metrics in OpenView Performance Agent for UNIX as well as
in OpenView Performance Manager for UNIX.
If you complete the procedure below, OVPA/OVPM will track application
metrics only for your workload groups; applications defined in the parm
file will no longer be tracked. GlancePlus, however, will still track
metrics for both workload groups and applications defined in your parm
file.
To track application metrics for your workload groups:
1. Edit /var/opt/perf/parm
Edit your /var/opt/perf/parm file so that the "log" line includes
"application=prm" (without the quotes). For example:
log global application=prm process dev=disk,lvm transaction
2. Restart the agent
With WLM running, execute the following command:
% mwa restart scope
The WLM workload groups must be enabled at the time the scopeux
collector is restarted by the mwa command. If WLM is not running, or
the transient_groups tunable is set to 1 in your WLM configuration,
data for some--or all--workload groups may be absent from OpenView
graphs and reports. This absence may also affect alarms defined in
/var/opt/perf/alarmdefs.
Now all the application metrics will be in terms of workload (PRM)
groups. That is, your workload groups will be "applications" for the
purposes of tracking metrics.
Oracle databases
The HP-UX WLM Oracle Database Toolkit simplifies getting metrics on Oracle
database instances into WLM. This allows you to better manage Oracle
instances. Benefits include the ability to:
· Keep response times for your transactions below a given level
by setting response-time SLOs
· Increase an instance's available CPU resources when a partic‐
ular user connects to the instance
· Increase an instance's available CPU resources when more than
n users are connected
· Increase an instance's available CPU resources when a partic‐
ular job is active
· Give an instance n CPU shares for each process in the
instance
· Give an instance n CPU shares for each user connection to the
instance
For more information, see wlmoradc(1M).
Pay per use (PPU)
WLM allows you to take advantage of Pay per use (v4, v7, or later)
reserves to meet your service-level objectives. For more information,
see the section "HOW TO USE wlmpard TO OPTIMIZE TEMPORARY INSTANT
CAPACITY AND PAY PER USE SYSTEMS" in the wlmpard(1M) manpage.
Processor sets (PSETs)
Processor sets allow you to group processors together, dedicating those
CPU resources (cores) to certain applications. WLM can automatically
adjust the number of cores in a PSET-based workload group in response
to SLO performance. Combining PSETs and WLM, you can dedicate CPU
resources to a group without fear of the group's needing additional CPU
resources when activity peaks or concern that the group, when less
busy, has resources that other groups could be using. For more informa‐
tion, see wlmconf(4).
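A PSET-based workload group is declared in the prm structure of the
configuration file; a minimal sketch (the group name is hypothetical):
     prm {
         groups = g_dedicated : PSET;
     }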
Support for processor sets is included with HP-UX 11i v2 (B.11.23) and
later. For HP-UX 11i v1 (B.11.11) systems, you must download software
(free of charge) to support processor sets. For more information,
refer to the WLM Release Notes (/opt/wlm/share/doc/Rel_Notes). Certain
software restrictions apply to using PSET-based groups with HP-UX Vir‐
tual Partitions (vPars), Instant Capacity, and Pay per use. These
restrictions are documented in the WLM Release Notes. When WLM is
managing PSETs, do not change PSET settings by using the psrset
command. Whenever possible, use WLM to control PSETs.
Process Resource Manager (PRM)
You can use WLM to control resources that are managed by PRM. WLM uses
PRM when a prm structure is included in the WLM configuration. With
such configurations, you can use PRM's informational and monitoring
commands, such as prmlist and prmmonitor, among others. If you use the
prmconfig command, invoke it with no options or with the -u (unlock)
option--do not use the -r (reset) option.
Ordinarily, WLM and PRM should not be used to manage resources on the
same system at the same time. In some cases, this could cause inconsis‐
tent behavior and undesirable performance. However, you can use both
products at the same time if the PRM configuration uses FSS groups only
(no PSET-based groups) and the WLM configuration is strictly host-
based. (A strictly host-based configuration is one that does not
include a prm structure; it is designed exclusively for moving cores across
HP-UX Virtual Partitions or nPartitions, or for activating Temporary
Instant Capacity (TiCAP) cores or Pay per use (PPU) cores.) You might
want to use both products to take advantage of certain features of PRM
that are not included with the latest release of WLM, such as PRM's
CPU capping mode, enabled with the prmconfig command. (In this mode, a
PRM group's upper bound for CPU resource consumption is determined by
the group's configured cap value, available on HP-UX 11i v3 and later.
For more information, see the HP Process Resource Manager User's Guide
or prmconfig(1M).)
SAP Software
The HP-UX WLM SAP Toolkit (SAPTK), in conjunction with HP Serviceguard
Extension for SAP (SGeSAP), provides a script (called wlmsapmap) that
identifies SAP processes based on user-defined criteria and uses WLM's
process maps feature to place the SAP processes in specific workloads
managed by WLM. The tool allows you to identify entire SAP instances
or just subsets of an instance's processes as a separate workload. For
example, you can use wlmsapmap to collect batch and dialog processes
and place them in separate workloads. For more information, refer to
the HP-UX Workload Manager Toolkits User's Guide
(/opt/wlm/toolkits/doc/WLMTKug.pdf) and the wlmsapmap(1M) manpage.
SAS Software
The WLM Toolkit for SAS Software (SASTK) can be combined with the WLM
Duration Management Toolkit (DMTK) to fine-tune duration management of
SAS jobs. For more information, see hp_wlmtk_goals_report(1M) and wlm‐
durdc(1M).
Security Containment
Combining WLM and Security Containment (available starting with HP-UX
11i v2), you can create "Secure Resource Partitions" that are based on
your WLM workload groups. Secure Resource Partitions provide a level of
security by protecting the processes and files in a given Secure
Resource Partition from other processes on the system. For more
information, see the wlmconf(4) manpage.
Serviceguard
WLM provides the sg_pkg_active command, which allows you to activate
and deactivate a Serviceguard package's SLOs along with the package.
For more information, see sg_pkg_active(1M).
Systems Insight Manager (SIM) / Servicecontrol Manager (SCM)
Systems Insight Manager and Servicecontrol Manager provide a single
point of administration for multiple HP-UX systems. The WLM integration
with these products allows system administrators at the SIM / SCM Cen‐
tral Management Server (CMS) to perform the following activities on
nodes in the SCM cluster that have WLM installed:
· Enable HP-UX WLM
· Disable HP-UX WLM
· Start HP-UX WLM
· Stop HP-UX WLM
· Reconfigure HP-UX WLM
· Distribute HP-UX WLM configuration files to the selected
nodes
· Retrieve currently active HP-UX WLM configuration files from
the nodes
· Check the syntax of HP-UX WLM configuration files, on either
the CMS or the selected nodes
· View, rotate, and truncate HP-UX WLM log files
For more information, see the HP-UX Workload Manager User's Guide
(/opt/wlm/share/doc/WLMug.pdf).
HP-UX SNMP Agent
WLM's SNMP Toolkit (SNMPTK) provides a WLM data collector called snm‐
pdc, which fetches values from an SNMP agent for use as metrics in your
WLM configuration. For more information, see snmpdc(1M).
Temporary Instant Capacity
WLM allows you to take advantage of Temporary Instant Capacity (v6 or
later) reserves to meet your service-level objectives. For more infor‐
mation, see the section "HOW TO USE wlmpard TO OPTIMIZE TEMPORARY
INSTANT CAPACITY AND PAY PER USE SYSTEMS" in the wlmpard(1M) manpage.
Virtual partitions
You can run WLM within and across virtual partitions. (WLM can even
manage CPU resources for nPartitions containing virtual partitions
containing FSS workload groups.) WLM provides a global arbiter that can
take input from the WLM instances on the individual partitions. The
global arbiter then moves cores across partitions, if needed, to better
achieve the SLOs specified in the WLM configuration files that are
active in the partitions. For more information, see the wlmpard(1M) and
wlmparconf(4) manpages.
HP Integrity Virtual Machines (Integrity VM)
WLM supports HP Integrity Virtual Machines. WLM can run on the
Integrity VM Host or within an Integrity VM (as guest). You can run
WLM both on the VM Host and in a VM (guest), but each WLM runs as an
independent instance.
To run WLM on the Integrity VM Host, you must use strictly host-based
configurations--WLM configurations designed exclusively for moving CPU
resources (cores) across nPartitions or for activating Temporary
Instant Capacity or Pay per use cores. (WLM will not run with FSS
groups or PSETs on Integrity VM Hosts where guests are running.) In
addition, ensure that the minimum number of cores allocated to a WLM
host is greater than or equal to the maximum number of virtual CPUs
(vCPU count) assigned to each VM guest. Otherwise, VM guests with a
vCPU count greater than or equal to WLM's minimum allocation could
receive insufficient resources and eventually crash. For example, if an
Integrity VM Host has 8 cores and three guests with 1, 2, and 4 virtual
CPUs, respectively, your WLM host should maintain an allocation of at
least 4 cores at all times. You can achieve this by using the
appropriate WLM keyword; for more information, see the wlmconf(4)
manpage.
To run WLM within an Integrity VM (guest), you cannot use Instant
Capacity, Pay per use, or vPar integration. (However, guests will take
advantage of CPU resources added to the VM host by Instant Capacity,
Temporary Instant Capacity, and Pay per use.) As noted previously, WLM
must continue allocating at least as many cores as the maximum number
of virtual CPUs in any VM guest on the system. In addition, specify a
WLM interval greater than 60 seconds. This helps ensure a fair alloca‐
tion of CPU resources for FSS groups.
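The interval is set with the wlm_interval tunable; as a sketch (the
value shown is illustrative--any value greater than 60 satisfies the
guideline):
     tune {
         wlm_interval = 120;
     }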
For more information on HP Integrity VM, refer to the following web
site and navigate to the "Solution components" page:
www.hp.com/go/vse
BEA WebLogic Server
Using WLM with WebLogic, you can move CPU resources to or from
WebLogic Server instances as needed to maintain acceptable performance.
By managing the instances' CPU resources, the instances tend to use
fewer net CPU resources over time. You can then use the freed CPU
resources for other computing tasks.
As indicated above, WLM and WebLogicTK control CPU allocation to indi‐
vidual WebLogic instances. However, the latest version of the paper
"Using HP-UX Workload Manager with BEA WebLogic" expands the methods
for controlling instances to control WebLogic Server clusters.
For more information, see /opt/wlm/tool‐
kits/weblogic/doc/weblogic_wlm_howto.html.
TRUNCATING YOUR LOG FILES
WLM has three log files: /var/opt/wlm/msglog for messages, the optional
/var/opt/wlm/wlmdstats for statistics, and the optional
/var/opt/wlm/wlmpardstats for partition statistics.
From time to time, you should truncate your log files to regain disk
space. You can truncate the message log in place while wlmd is running.
If you wish to archive the contents of the message log prior to
truncation, copy it to an archive file first.
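A minimal sketch of the archive-then-truncate sequence (the function
name and archive path are illustrative, not part of WLM):

```shell
#!/bin/sh
# rotate_log: append a log's contents to an archive, then truncate the
# log in place. Truncation with ": >" keeps the same inode, so a
# running daemon continues writing to the file without interruption.
rotate_log() {
    log=$1
    archive=$2
    cat "$log" >> "$archive"    # preserve the old contents
    : > "$log"                  # truncate without removing the file
}

# Example (paths illustrative):
#   rotate_log /var/opt/wlm/msglog /var/opt/wlm/msglog.old
```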
You can use this same approach to truncate the optional
/var/opt/wlm/wlmdstats log file. This log file is created when you
enable statistics logging with wlmd; for more information on this
option, see wlmd(1M). For information on how to enable automatic
trimming of the wlmdstats file, see the wlmconf(4) manpage.
You can also use this approach to truncate the optional
/var/opt/wlm/wlmpardstats log file, which is created when you enable
statistics logging with wlmpard; for information on this option, see
wlmpard(1M). For information on automatic trimming of the wlmpardstats
file, see the wlmparconf(4) manpage.
SUPPORT AND PATCH POLICIES
Visit http://www.hp.com/go/wlm for information on WLM's support policy
and patch policy. These policies indicate the time periods for which
this version of WLM is supported and patched. (See wlmd(1M) for how to
print the version of WLM on your system.)
The actual CPU allocation granted to the workload group is subject to
the availability of CPU resources after the needs of higher priority
SLOs have been met.
ADVANCED WLM USAGE: HOW APPLICATIONS CAN MAKE METRICS AVAILABLE TO WLM
Time metrics from instrumentable applications
If the desired metric can be measured in units of time, and the appli‐
cation can be modified, we recommend using the ARM API provided by
GlancePlus. WLM will then collect the ARM data from GlancePlus.
Adding ARM calls to an application is as simple as registering your
application with an arm_init call, marking the start of the time period
to be measured with an arm_start call, and marking the end of the time
period with an arm_stop call. For more information on ARM, see the
arm(3) manpage (if available on your system) or visit
http://www.cmg.org/regions/cmgarmw.
Other data collection techniques
If your application cannot be modified to insert ARM calls, or if your
metric does not have time units, then you should implement an external
data collector. There are three types of external data collectors to
consider:
· Independent collectors
· Stream collectors
· Native collectors
These collector types are explained below.
Independent collectors use the wlmsend command to communicate a metric
value to WLM. They are called "independent" because they are not
started by the WLM daemon and they are not required to run
continuously.
This type of collector is ideal if you want to convey event information
to WLM, such as application startup or shutdown.
One caveat of using this type of collector is that on start-up, HP-UX
WLM has no value for the metric until the collector provides one. For
this reason, the collector should be structured to report a value peri‐
odically, even if it has not changed.
If your collector runs continuously, be careful if using pipes. The
pipes may have internal buffering that must either be defeated or
flushed to ensure the data is communicated in a timely manner.
To configure an independent collector for a metric called metricIC,
place the following structure in your configuration file:
tune metricIC {
coll_argv = wlmrcvdc ;
}
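With that rendezvous point in place, the application or a wrapper
script can post values at any time with the wlmsend command; for
example (the value is illustrative):
     % wlmsend metricIC 5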
Stream collectors convey their metric values to WLM by writing them to
the stdout stream. WLM starts these data collectors when activating a
configuration, and expects them to continue to run and provide metrics
until notified of a WLM shutdown or restart.
Use this type of collector if the metric is available in a file or
through a command-line interface. In this case, the collector can sim‐
ply be a script containing a loop that reads the file or executes the
command, extracts the metric value, writes it on stdout, and sleeps for
one WLM interval. (The current WLM interval length is made available
in the environment of data collectors that WLM starts through a
coll_argv statement in the WLM configuration.)
Again, as with independent collectors, be careful if using pipes in the
data collector. These pipes may have internal buffering that must
either be defeated or flushed to ensure the data is communicated in a
timely manner.
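As a sketch, a stream collector along these lines might look like the
following shell script. The metric file path and the WLM_INTERVAL
environment variable default are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of a stream collector. emit_metric reads one numeric value
# from a (hypothetical) metric file and writes it to stdout, one value
# per line, which is how a stream collector reports to WLM.
emit_metric() {
    cat "${METRIC_FILE:-/tmp/app_metric}"
}

# The collector loop: report once per WLM interval. WLM_INTERVAL is
# assumed to hold the current interval length for collectors that WLM
# starts; default to 60 seconds if it is not set.
collector_loop() {
    while :; do
        emit_metric
        sleep "${WLM_INTERVAL:-60}"
    done
}

# To run as a collector: collector_loop
```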
Because they are started by a daemon process, stream collectors do not
have a stderr on which to communicate errors. However, WLM provides a
tunable that allows you to log each collector's stderr to syslog
(/var/adm/syslog/syslog.log) or another file. In addition, a stream
data collector can communicate using either syslog(3C) or logger(1)
with the daemon facility.
To configure a stream collector for a metric called metricSC, place the
following structure in your configuration file:
tune metricSC {
coll_argv = wlmrcvdc collector_path collector_args ;
}
Several of the collectors that come with WLM Toolkits, such as those in
ApacheTK and WebLogicTK, are examples of stream collectors.
Native collectors use the WLM API to communicate directly with the WLM
daemon. Like stream collectors, these collectors are started by WLM
when activating a configuration. WLM expects them to continue to run
and provide metrics until notified of a WLM shutdown or restart. For
tips on writing your own data collectors, see the white paper at
/opt/wlm/share/doc/howto/perfmon.html.
This type of collector is appropriate if the desired metric values are
obtained through calls to a C or C++ language API that is provided by
the source of the metric. One example of such an API is the pstat(2)
family of system calls used to obtain process statistics.
This type of collector establishes a direct connection with WLM using
a WLM API initialization function. Then, executed in a loop, the
collector calls the API functions necessary to obtain the metric value,
followed by a call to the WLM API function that passes the value on to
WLM. For details on the API, see libwlm(3).
Because they are started by a daemon process, native collectors'
output to stdout and stderr is discarded. However, WLM provides a
tunable that allows you to log each collector's stderr to syslog
(/var/adm/syslog/syslog.log) or another file. In addition, a native
data collector can communicate using either syslog(3C) or logger(1)
with the daemon facility.
To configure a native collector for a metric called metricNC, place the
following structure in your configuration file:
tune metricNC {
coll_argv = collector_path collector_args ;
}
The glance-based data collector provided with the WLM product is an
example of a native collector.
AUTHOR
HP-UX WLM was developed by HP.
FEEDBACK
If you would like to comment on the current HP-UX WLM functionality or
make suggestions for future releases, please send email to:
wlmfeedback@rsn.hp.com
FILES
Workload Manager daemon
Workload Manager global arbiter daemon
WLM communications daemon (needed by wlmgui)
WLM Configuration Wizard
WLM GUI (for monitoring and configuring)
Utility for displaying various data
Utility for managing WLM security certificates
WLM message log
EMS monitor utility
rendezvous point send utility
rendezvous point receive utility
system initialization directives
optional statistics log
optional global arbiter statistics log
Example WLM configurations and other items
white paper on writing data collectors
directory with white papers on WLM tasks
directory with HP-UX WLM user's guide and release notes
SEE ALSO
wlmd(1M), wlmcw(1M), wlmgui(1M), wlmpard(1M), wlmcomd(1M), wlminfo(1M),
wlmcert(1M), wlmckcfg(1M), wlmemsmon(1M), libwlm(3), wlmconf(4), wlm‐
parconf(4), wlmprmconf(1M), wlmrcvdc(1M), wlmsend(1M), glance_app(1M),
glance_gbl(1M), glance_prm(1M), glance_prm_byvg(1M), glance_tt(1M),
sg_pkg_active(1M), wlmoradc(1M), wlmsapmap(1M), wlmwlsdc(1M)
HP-UX Workload Manager User's Guide (/opt/wlm/share/doc/WLMug.pdf)
HP-UX Workload Manager Toolkits User's Guide (/opt/wlm/tool‐
kits/doc/WLMTKug.pdf)
Using HP-UX Workload Manager with Apache-based Applications
(/opt/wlm/toolkits/apache/doc/apache_wlm_howto.html)
Using HP-UX Workload Manager with BEA WebLogic Server (/opt/wlm/tool‐
kits/weblogic/doc/weblogic_wlm_howto.html)
HP-UX Workload Manager homepage (http://www.hp.com/go/wlm)
Application Response Measurement (ARM) API
(http://www.cmg.org/regions/cmgarmw)
wlm(5)