gwlm(1M)                                                            gwlm(1M)

NAME
gwlm - Global Workload Manager

SYNOPSIS
gwlm command [options]
AVAILABILITY
This command is available only on gWLM Central Management Servers (sys‐
tems where you run gwlmcmsd). On HP-UX systems, the command is
/opt/gwlm/bin/gwlm. On Microsoft Windows systems, it is C:\Pro‐
gram Files\HP\Virtual Server Environment\bin\gwlm by default. However,
a different path may have been selected at installation.
To run the command, you must be logged in as root on HP-UX or into an
account that is a member of the Administrators group on Windows.
DESCRIPTION
The gwlm command is a command-line interface for Global Workload Man‐
ager (gWLM). It allows an administrator on the gWLM Central Management
Server (CMS) to interact with gWLM when a web browser is unavailable or
inconvenient.
NOTE: The configuration repository mentioned below is created when you
run the vseinitconfig --initconfig command on the CMS.
Commands and options are the same on HP-UX and Windows. Valid values
for command are:
list List the contents of the configuration repository.
discover Discover potential shared resource domains.
import Import definitions for shared resource domains,
policies, or workloads into the configuration repos‐
itory.
export Export all the definitions in the configuration
repository or only the definitions for the specified
shared resource domains, policies, and workloads.
deploy Deploy a shared resource domain.
undeploy Undeploy a shared resource domain.
manage Manage an additional workload by associating it with
a deployed shared resource domain.
unmanage Stop managing a workload by removing it from its
shared resource domain.
delete Delete definitions for shared resource domains,
policies, or workloads from the configuration repos‐
itory.
rename Rename a shared resource domain, policy, or work‐
load.
license Check gWLM software license status on managed nodes.
monitor Monitor gWLM operation.
history Manage historical data.
agentinfo Display agent information.
reset Reset agent configuration so host can be managed
again.
COMMANDS
This section describes the use of each command. A summary help message
is available for each command by specifying the option --help with the
command. Note that option arguments are introduced by two dash charac‐
ters (--), not a single dash character (-).
list [--policy[=policy]] [--workload[=workload]]
[--srd[=SRD] [--verbose]] [...]
List the contents of the configuration repository.
Arguments
If you do not specify any arguments, the entire contents are listed.
--policy[=policy]
List all policies. Limit the listing to a particular policy
by indicating the policy's name.
--workload[=workload]
List all workloads. Limit the listing to a particular
workload by indicating the workload's name.
--srd[=SRD] [--verbose]
List all shared resource domains. Limit the listing to a
particular shared resource domain by indicating the SRD's
name.
Include --verbose to display the names of all the workloads
and policies used in all the SRDs (or in the specified
SRD). This option also displays the policy type for each
policy listed.
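For example, the listing can be narrowed step by step, from the whole
repository down to a single SRD with its workloads and policies (the
SRD name is a placeholder):

```shell
# gwlm list
# gwlm list --srd
# gwlm list --srd=mysystem1.srd --verbose
```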
discover [--file=file] [--type=type] [--verbose] host [...]
or
discover [--file=file] --nested --type=type [...] [--verbose] host [...]
Discover potential shared resource domains. A summary of the results
is written to stdout.
To create an SRD, specify --file=file to save detailed results in XML
format then use file as input for an import operation.
Arguments
--file=file Save the results to the specified file in XML format.
--nested Create SRDs with nested partitions, such as fss groups
inside vpars, with gWLM managing resources for all the par‐
tition types.
NOTE: No more than one deployed SRD per complex should have
nested partitions.
--type is required when you specify --nested. Specifying
--type just once manages only compartments of that type for
all the given hosts. Specifying --type for each host man‐
ages different compartments for the different hosts. (You
can indicate each type and its host by specifying --type
immediately followed by its host for each type-host combi‐
nation. Alternatively, you can enter all the --type options
followed by all the hosts, with the type and host matched
based on the order.)
For example, if a complex is divided into four npars each
with Instant Capacity installed, two of the npars could be
further divided into fss groups and the other two npars
could be divided into vpars. gWLM would manage resource
allocations for the fss groups or vpars within a given
npar, but it could also migrate resources among the npars.
Such a command might look like:
# gwlm discover --file=/tmp/mySRD --nested \
--type=fss nparA --type=fss nparB \
--type=vpar nparC --type=vpar nparD
--type=type Set discovery type to one of fss, pset, vpar, npar, or
hpvm. Compartments for your workloads are based on the
type. By default, all types are discovered. Seeing all
types allows you to better decide which type to use. Once
you decide on the type to use, this option allows you to
restrict the discovery results to that type. (gWLM manages
only one compartment type in an SRD, unless you use the
discover --nested option. You must specify --type at least
one time when you specify --nested.)
gWLM also discovers SRDs based on GiCAP groups. (Global
Instant Capacity, or GiCAP, is a feature of the HP Instant
Capacity product. With GiCAP, you can form GiCAP groups,
which enable you to share hardware usage rights among
servers, allowing resources to be deactivated in one system
and activated in another to meet changing system demands.)
Combining GiCAP with gWLM, you can form an SRD using all
the members of a GiCAP group. gWLM then moves usage rights
for cores based on the policies and resource demands of the
group members. In addition, gWLM can activate TiCAP anywhere in the
GiCAP group (assuming gWLM is enabled to use TiCAP) to meet policies,
but only after all usage rights available in the group have been used
to activate cores.
--verbose Display verbose output, showing:
+ All the discovered compartment data
+ Diagnostic informational messages about the discovery
(Warnings are displayed regardless of whether --verbose
is used.)
Use this output to better understand how a discovered SRD
was formed or to determine discovery issues.
You may see 'complex' or 'group' in this output. The group
is a GiCAP group, and the name or address given identifies
the Group Manager system.
host Hostname of a managed node to check for potential SRDs.
Specify multiple hosts separated by white space. (The
gwlmagent process must be running on each specified host.)
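For example, a typical non-nested discovery might first survey all
compartment types on a host and then restrict the saved results to the
chosen type (the hostname and file name are placeholders):

```shell
# gwlm discover myhost.example.com
# gwlm discover --type=vpar --file=/tmp/mySRD.xml myhost.example.com
```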
import [--file=file] [--clobber] [--mute]
Import a definition for a shared resource domain, policy, or workload
into the configuration repository. The input, in XML form, is taken
from stdin unless you specify the --file option. (The XML is described
in the gwlmxml(4) manpage.)
If you import the definition of an SRD that has the same name as a
deployed SRD, the newly imported SRD is deployed--taking the place of
the previously deployed SRD. Similarly, if you import a workload or
policy definition and a workload or policy of the same name is in a
deployed SRD, the new definition is used immediately.
Arguments
--file=file Read from the specified XML file.
--clobber Force an overwrite of an existing definition. This option
is required if a definition being imported modifies a defi‐
nition in the repository that is more recent.
--mute Suppress validation warnings. If there are validation
errors though, the validation errors and warnings are dis‐
played.
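For example, to load a discovery result, then overwrite an existing
definition that is newer than the one being imported while suppressing
validation warnings (file names are placeholders):

```shell
# gwlm import --file=/tmp/mySRD.xml
# gwlm import --file=/tmp/newPolicy.xml --clobber --mute
```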
export
{ --all | --policy=policy | --workload=workload | --srd=SRD } [...]
[--file=file]
Export all the definitions in the configuration repository or only the
definitions for the specified shared resource domains, policies, and
workloads. The output is in XML format and is described in the
gwlmxml(4) manpage. Export multiple items by repeating arguments.
Arguments
--all Export all the definitions in the configuration repository.
--policy=policy
Export the definition for the specified policy.
--workload=workload
Export the definition for the specified workload.
--srd=SRD Export the definition for the specified SRD.
--file=file Redirect the XML output to the specified file.
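For example, the entire repository can be backed up to a single XML
file, or selected items exported by repeating arguments (the file,
policy, and workload names are placeholders):

```shell
# gwlm export --all --file=/var/tmp/gwlm-backup.xml
# gwlm export --policy=Owns_4-Max_8 --workload=payroll --file=/tmp/subset.xml
```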
deploy --srd=SRD [...] [--force] [--mute]
Deploy a shared resource domain, in either advisory mode or managed
mode. (You set this mode in the SRD definition.) When an SRD is
deployed in advisory mode, gWLM simply reports what the resource allo‐
cations would be--without actually affecting allocations on the system.
(Advisory mode is not available if the SRD contains virtual machines,
psets, or fss groups due to the nature of these compartments.) When you
deploy an SRD in managed mode, gWLM begins managing the SRD, migrating
resources among workloads as specified in your policies.
Deploy multiple, nonoverlapping SRDs by repeating the --srd=SRD argu‐
ment.
NOTE: There are several properties you can set in the HP-UX file
/etc/opt/gwlm/conf/gwlmcms.properties that are read by gWLM only when
deploying SRDs. (On Windows, the file is C:\Program Files\HP\Virtual
Server Environment\conf\gwlmcms.properties by default, although a dif‐
ferent path may have been selected at installation.) Read the file and
set any relevant properties before you deploy any SRDs.
Arguments
--srd=SRD Deploy the named shared resource domain (SRD).
--force Force the SRD to be considered deployed in the gWLM config‐
uration repository. This option is useful when the gWLM CMS
and an agent disagree about whether an SRD is deployed or
undeployed.
--mute Suppress validation warnings. If there are validation
errors though, the validation errors and warnings are dis‐
played.
undeploy --srd=SRD [...] [--force]
Undeploy a shared resource domain. Undeploy multiple SRDs by repeating
the --srd=SRD argument.
Arguments
--srd=SRD Undeploy the specified shared resource domain.
--force Force the SRD to be considered undeployed in the gWLM con‐
figuration repository. This option is useful when the gWLM
CMS and an agent disagree about whether an SRD is deployed
or undeployed.
NOTE: When undeploying SRDs based on pset compartments or fss group
compartments, gWLM removes the compartments. gWLM does not remove vir‐
tual machine (hpvm) compartments, vpar compartments, or npar compart‐
ments.
manage --host=host --type={ fss | pset | vpar | npar }
--workload=workload [--mute]
or
manage --host=host --type=hpvm
--workload=workload [--mute]
[--id=name_or_guid]
Add an existing workload to a deployed SRD. The workload, which must
already be defined in the configuration repository, is associated with
a compartment of its own (of the specified type) on the named host. The
workload automatically becomes a part of the SRD managing the specified
compartment type on that host. To add multiple workloads, invoke gwlm
manage multiple times.
A workload can be in only one deployed SRD at a time. (The same work‐
load can be in multiple undeployed SRDs.)
NOTE: When you have workloads based on psets or fss groups: If you let
processes run in the default pset or the default fss group, they will
be competing against all the other processes that are not explicitly
placed in workloads. To ensure appropriate resource allocations for
your processes, place them in workloads by specifying <user> tags or
<application> tags when defining workloads, as explained in the
gwlmxml(4) manpage, or by using the gwlmplace command.
NOTE: Placing an additional workload in an SRD affects resource alloca‐
tions for the workloads originally in the SRD. After adding a workload,
evaluate how allocations are affected using gwlm monitor --srd=SRD
--view=policy. Then adjust the associated policies if needed, as shown
in the "ADJUSTING POLICIES AFTER A 'manage' OR 'unmanage'" section
below.
Arguments
--host=host Specify the host on which the workload will run.
--type={ fss | pset | vpar | npar | hpvm }
Specify the type of compartment in which the workload will
run. If you specify fss or pset, gWLM creates an fss group
or pset for the workload. If you specify vpar or npar, the
vpar/npar must already exist. If you specify hpvm, the vir‐
tual machine must already exist.
--workload=workload
Specify the name of the workload in the configuration
repository to add to the deployed SRD.
workload must have an associated policy (contain a policy
reference). Also, workload must not already be in a
deployed SRD.
Find names of workloads in the repository using the gwlm
list command.
--mute Suppress validation warnings. If there are validation
errors though, the validation errors and warnings are dis‐
played.
--id=name_or_guid
Specify, if desired, one of the following for the virtual
machine to be managed:
+ The name of the virtual machine (as set using HP
Integrity Virtual Machines or HP Integrity Virtual
Machines Manager)
+ The GUID of the virtual machine
(appears in the <nativeId> element in the XML)
gWLM attempts to determine this information automatically;
however, an error is generated if gWLM is not successful.
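For example, to place a workload in an fss compartment on one host, or
to manage an existing virtual machine identified by name (the host,
workload, and virtual machine names are placeholders):

```shell
# gwlm manage --host=myhost --type=fss --workload=payroll
# gwlm manage --host=vmhost1 --type=hpvm --workload=sales --id=salesvm
```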
unmanage --workload=workload [--mute]
Stop managing a workload by removing it from its SRD. (The workload
definition remains in the configuration repository, but it is no longer
associated with its SRD.) To stop managing multiple workloads, invoke
gwlm unmanage multiple times.
You cannot unmanage the last workload in an SRD. To stop managing an
SRD, use the undeploy command.
With virtual machine (hpvm) compartments, you can only unmanage stopped
virtual machines. An unmanaged virtual machine cannot be started while
gWLM remains in control of the virtual machine host.
If the workload's compartment is an fss group or a pset, gWLM destroys
the compartment and moves the processes that were in the compartment to
the default compartment. If the compartment is a vpar, npar, or virtual
machine, gWLM does not destroy the compartment. (A compartment based on
a vpar or npar must have a fixed policy to be unmanaged. A virtual
machine must be stopped to be unmanaged.)
NOTE: Unmanaging a workload affects resource allocations for the work‐
loads remaining in the SRD. After unmanaging a workload, evaluate how
allocations are affected using gwlm monitor --srd=SRD --view=policy.
Then adjust the associated policies if needed, as shown in the "ADJUST‐
ING POLICIES AFTER A 'manage' OR 'unmanage'" section below.
Arguments
--workload=workload
Stop managing the specified workload.
--mute Suppress validation warnings. If there are validation
errors though, the validation errors and warnings are dis‐
played.
delete { --policy=policy | --workload=workload | --srd=SRD } [...]
Delete a definition for a policy, workload, or shared resource domain
from the configuration repository. The definition cannot currently be
in use as part of a deployed configuration. Delete multiple items by
repeating the following arguments.
Arguments
--policy=policy
Delete the definition for the specified policy.
--workload=workload
Delete the definition for the specified workload.
--srd=SRD Delete the definition for the specified shared resource
domain (SRD).
rename { --policy | --workload | --srd } oldname newname
Rename a policy, workload, or shared resource domain. To rename multi‐
ple items, invoke gwlm rename multiple times.
Arguments
--policy Indicates that oldname and newname refer to a policy.
--workload Indicates that oldname and newname refer to a workload.
--srd Indicates that oldname and newname refer to an SRD.
oldname newname
Rename the policy, workload, or shared resource domain that
is named oldname to newname.
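For example (the policy and workload names are placeholders):

```shell
# gwlm rename --policy Owns_4 Owns_4-Max_8
# gwlm rename --workload old.mydomain.com new.mydomain.com
```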
license [--host=host] [...]
Check the status of the gWLM software licenses on the managed nodes.
Arguments
--host Check licenses for only the specified hosts.
Output
Here is sample output for this command:
SRD Host Status License
______________ _______ ______ _____________________________________________
sys1.srd sys1 OK License is unrestricted.
sys2.srd sys2 OK License will expire Thu Jun 16 14:01:43 2005.
sys3.srd sys3 Warn License will expire Fri Feb 18 13:48:12 2005.
sys4.srd sys4 Error License expired Thu Feb 17 12:48:12 2005.
In the Status column, the three possible entries are:
OK The managed node has a license installed that is either
unrestricted or expires more than seven days into the
future.
Warn The managed node has a license that expires within seven
days.
Error The managed node has an expired license.
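The thresholds above can be restated as a small sketch (an
illustration only, not gWLM source); license_status takes the number
of days until the license expires, or the word unrestricted for a
permanent license:

```shell
# Illustration of the Status column: OK, Warn within seven days of
# expiration, Error once expired.
license_status() {
    case $1 in
        unrestricted) echo OK ;;                   # never expires
        *)  if   [ "$1" -lt 0 ]; then echo Error   # already expired
            elif [ "$1" -le 7 ]; then echo Warn    # expires within 7 days
            else                      echo OK      # more than 7 days left
            fi ;;
    esac
}
license_status unrestricted   # OK
license_status 30             # OK
license_status 3              # Warn
license_status -5             # Error
```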
monitor [--count=n]
{ --policy=policy
| --workload=workload [--view={ resource | policy }]
| --srd=SRD [--view=resource] [--nested]
| --srd=SRD --view=policy
| [--srd]}
Monitor gWLM operation. When you specify no arguments, the output is
the same as if you had specified --srd. The output is described in the
section "gwlm monitor OUTPUT DESCRIPTIONS" below.
Arguments
--count=n Set the number of updates to display before exiting. By
default, monitoring continues until interrupted or until
the item being monitored is no longer part of a deployed
SRD.
--policy=policy
Monitor the specified policy in each deployed SRD in which
it is associated with a workload.
--workload=workload [--view=view]
Monitor the specified workload. See --view for information
on selecting the view.
--srd[=SRD [--view=view]]
Monitor deployed shared resource domains. Displays a sum‐
mary view of all deployed shared resource domains. If a
specific shared resource domain is given by SRD, a more
detailed view is presented. See --view for information on
selecting the view.
--nested Monitor nested partitions for the named SRD.
--view=view Choose a view, either resource or policy, for monitoring a
specific SRD or workload. The resource view monitors
resource information for a workload, including size and
utilization. The policy view focuses on how well policies
are being met. By default, the view is set to resource.
history { --truncate=CCYY/MM/DD[:HH:MM] | --purge=days | --flush[=SRD] }
Manage the historical data used in generating reports.
Arguments
--truncate=CCYY/MM/DD[:HH:MM]
Perform the following operations:
+ Locate the last successful configuration save before or
on CCYY/MM/DD[:HH:MM] and remove the configuration data
prior to that save
+ Remove any historical monitoring data on the CMS that has
a timestamp earlier than CCYY/MM/DD[:HH:MM].
For example, with CCYY/MM/DD equal to 2005/01/15 and the
last successful configuration save on 2005/01/10, all con‐
figuration data before that save on 2005/01/10 is removed.
Also, all historical data up through the end of January 14,
2005 is removed.
--purge=days
Remove from the database all historical and configuration
data that is older than the specified number of days.
--flush[=SRD]
Collect the historical data from the managed nodes in the
specified SRD, placing it in the historical database. If no
SRD is specified, collect all historical data from all the
managed nodes.
Use this command if:
+ An SRD has been running for a long period of time without
any configuration changes and you want to view historical
data from the current day
+ You are about to create an advanced report
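For example, to keep roughly the last 90 days of data, to remove
everything before a given date, or to pull the current day's data in
from one SRD's managed nodes (the SRD name is a placeholder):

```shell
# gwlm history --purge=90
# gwlm history --truncate=2005/01/15
# gwlm history --flush=mysystem1.srd
```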
agentinfo [--host=host] [--srd=SRD]
Display an agent's host, gWLM version, SRD, and CMS. (The agent must be
in a deployed SRD.)
Arguments
--host=host Display information for the agent on host.
--srd=SRD Display information for the agents in the named SRD.
With no arguments specified, agentinfo displays information for all
deployed SRDs.
reset --host=host [...]
Reset the configuration on host so the host can be managed in an SRD
again.
reset is an advanced command for clearing an SRD. The recommended
method for removing a host from management is the gwlm undeploy
command.
If gWLM is unable to re-form an SRD that includes host after the host
or its gWLM agent loses contact with the CMS, use reset to clear the
SRD on the specified host.
After using reset, you can configure the host in a new SRD.
gwlm monitor OUTPUT DESCRIPTIONS
The columns you see in the various gwlm monitor output are described
below.
Allocation
The amount of a resource, such as CPU, that gWLM sets aside for
a workload after arbitrating resource requests from the policies
for all the workloads.
A double-dash entry (--) indicates the workload contains nested
partitions. For allocation information, see each individual
nested partition.
In managed mode, gWLM makes an allocation available to a work‐
load. In advisory mode, however, gWLM simply reports what the
allocation would be--without actually affecting resource alloca‐
tions on a system. (Advisory mode is not available for SRDs
containing virtual machines, psets, or fss groups due to the
nature of these compartments.)
Measured
For fixed policies: A compartment's size is the amount of CPU
resources allocated to the compartment.
For all other policies: The current value of a metric being used
in the policy. This metric could be CPU utilization or a metric
you provide in a custom policy.
Policy The name of a policy.
Request
The amount of a system resource that a policy asks gWLM to give
to the policy's workload. (Parameters you specify in defining a
policy restrict its request.)
Shared Resource Domain
The name of a shared resource domain.
Size
The amount of a resource a compartment actually has.
A size appearing in parentheses indicates the item contains
nested partitions. Such a size corresponds to the sum of the
sizes of the nested partitions the item contains.
When an SRD is deployed in advisory mode, size may differ from the
allocation. In advisory mode, utilization is the percentage
resulting from dividing a workload's consumption (how much it is
using) by its size.
Target For fixed policies: The target CPU allocation.
For utilization and OwnBorrow policies: The target utilization
percentage.
For custom policies: The target value entered when creating the
policy.
Type
The compartment type.
Utilization
The percentage resulting from dividing a workload's consumption
(how much it is using) by its allocation (how much gWLM gave
it).
A utilization appearing in parentheses indicates the item con‐
tains nested partitions. Such a utilization corresponds to an
average of the utilizations of the nested partitions the item
contains.
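As a concrete restatement of the definition above (the numbers are
illustrative, not gWLM output):

```shell
# Utilization = consumption / allocation, expressed as a percentage.
utilization() {
    # $1: consumption (cores in use); $2: allocation (cores granted)
    awk -v c="$1" -v a="$2" 'BEGIN { printf "%.1f\n", (c / a) * 100 }'
}
utilization 2 4   # 50.0
utilization 1 8   # 12.5
```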
Workload
The name of a workload.
With --nested, workload names are indented to indicate the nest‐
ing of partitions. Items appearing in parentheses contain
nested partitions.
ADJUSTING POLICIES AFTER A 'manage' OR 'unmanage'
The manage and unmanage operations both change the set of workloads in
an SRD. Such a change can affect resource allocations for the SRD's new
set of workloads. Consequently, you should evaluate allocations in the
new SRD (using gwlm monitor --srd=SRD --view=policy) to determine
whether policy changes are needed to ensure resource allocations are as
desired.
If policy changes are needed, you have several options:
+ Change which policy is associated with a workload
1. Export the workload's definition:
gwlm export --workload=workload --file=file
2. Edit the definition to use a different policy:
Change the "<policyReference>" entry
3. Import the definition:
gwlm import --file=file
+ Edit the policy definition
1. Export a policy being used:
gwlm export --policy=policy --file=file
2. Edit the definition, as explained in the gwlmxml(4) manpage.
3. Import the definition:
gwlm import --file=file
This new definition will supersede the policy in effect for all
workloads referencing the given policy's name.
+ Create a new policy
You can create a new policy and then import it into the configura‐
tion repository. You would then change the workload's defini‐
tion--as described earlier in this example--to reference the new
policy.
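The first option, changing which policy a workload references, might
look like the following sketch (the workload and file names are
placeholders):

```shell
# gwlm export --workload=payroll --file=/tmp/payroll.xml
# vi /tmp/payroll.xml
# gwlm import --file=/tmp/payroll.xml --clobber
```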
EXAMPLES
Creating SRDs
The only way to create SRDs is through discovery, as described below:
1. Run the gwlm discover command as follows to form an SRD based on
vpars:
# gwlm discover host --file=file --type=vpar
Use the discover command without the --type option if you would like
to see a listing of the compartment types available on host before
committing to a certain type.
2. Edit file to change the SRD's generated name or mode, if desired.
3. Import the XML file into the configuration repository:
# gwlm import --file=file
4. Deploy the SRD:
# gwlm deploy --srd=SRD
where SRD is the name of the SRD as specified in file, when you are
ready for gWLM to manage the resource allocation for the workloads
in the SRD.
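Combined, the four steps might look like the following sketch,
assuming the SRD name generated in the file is myhost.srd (the
hostname and file name are placeholders):

```shell
# gwlm discover myhost --file=/tmp/srd.xml --type=vpar
# vi /tmp/srd.xml
# gwlm import --file=/tmp/srd.xml
# gwlm deploy --srd=myhost.srd
```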
Adding a new workload to an SRD
To add a workload to a deployed SRD:
1. Define a workload, as explained in the gwlmxml(4) manpage, in an XML
file, called file for example.
NOTE: When you have workloads based on psets or fss groups: If you
let processes run in the default pset or the default fss group, they
will be competing against all the other processes that are not
explicitly placed in workloads. To ensure appropriate resource allo‐
cations for your processes, place them in workloads by specifying
<user> tags or <application> tags when defining workloads or by
using the gwlmplace command.
2. Import the XML file into the configuration repository:
# gwlm import --file=file
3. Display the names of the deployed SRDs to which you can add the
workload.
Use either of the following commands:
# gwlm monitor --count=1
# gwlm list --srd
With the second command, look for SRDs with "deployed=true".
NOTE: You can go to Step 5 if you already know the name of the SRD
to which you want to add the workload.
4. Export the SRDs to determine what hosts and types of compartments
they manage:
# gwlm export --srd=SRD
Repeat this command, substituting for SRD each SRD name found in the
gwlm monitor output in the previous step, until you find the SRD to
which you want to add the workload.
5. Add the workload to a deployed SRD of the desired compartment type
using the manage command:
# gwlm manage --host=host --type=type --workload=workload
6. Adjust policies for other workloads if needed. See the section
"ADJUSTING POLICIES AFTER A 'manage' OR 'unmanage'" above for addi‐
tional information.
Removing a workload from an SRD
To remove a workload from an SRD, leaving the workload's definition in
the configuration repository:
1. Determine the name of the workload to remove:
# gwlm list --workload
2. For a workload in an npar or a vpar compartment, set its associated
policy to a fixed policy before unmanaging the workload.
3. Remove the workload from its SRD:
# gwlm unmanage --workload=workload
NOTE: For psets and fss groups, if you unmanage the corresponding
workload, any processes running in the compartment are moved. gWLM
places these processes in new compartments based on application
records or user records. If those compartments do not exist or no
records exist, gWLM places the processes in the default pset or
default fss group. (You create records with the "<user>" and
"<application>" tags in your XML file, as explained in the
gwlmxml(4) manpage. You can also create records through HP Systems
Insight Manager using gWLM's Edit Workloads window.)
4. Adjust policies for other workloads if needed. See the section
"ADJUSTING POLICIES AFTER A 'manage' OR 'unmanage'" above for addi‐
tional information.
Monitoring
The gwlm monitor command offers the output shown below. (Some of the
output has been modified for formatting purposes.)
For an explanation of the column headings, see the section above called
"gwlm monitor OUTPUT DESCRIPTIONS."
The first example lists the deployed SRDs. (You get the same output if
you enter the command gwlm monitor --srd.)
# gwlm monitor
Mon Dec 03 10:44:30 2007
Number of deployed Shared Resource Domains: 1
Shared Resource Domain Allocation Size Utilization
______________________ ____________ ______________ ___________
mysystem1.srd 8 Cores 8 Cores 3.9 %
From the previous command, we got the name of an SRD. We can use that
name to get either a resource view (the default) or policy view of the
SRD, as shown in the following two examples.
# gwlm monitor --srd=mysystem1.srd --view=resource
Mon Dec 03 10:46:15 2007
Shared Resource Domain: mysystem1.srd
Workload Type Allocation Size Utilization
__________________ ________ ____________ ______________ ___________
mysystem1.OTHER pset 4 Cores 4 Cores 19.1 %
mysystem1.prod pset 4 Cores 4 Cores 2.7 %
__________________ ________ ____________ ______________ ___________
Totals 8 Cores 8 Cores 10.9 %
# gwlm monitor --srd=mysystem1.srd --view=policy
Mon Dec 03 10:47:00 2007
Shared Resource Domain: mysystem1.srd
Policy Workload Target Measured Request
_______________ _______________ ____________ ____________ ____________
Owns_4-Max_8 mysystem1.OTHER 75.00 % 9.05 % 1.00 Cores
Owns_4-Max_8 mysystem1.prod 75.00 % 2.30 % 1.00 Cores
From the last two commands, we now have the names of workloads in the
SRD. Using one of those names, we can focus our monitoring on a single
workload in the next two examples, getting the default resource view
and the policy view.
# gwlm monitor --workload=mysystem1.prod --view=resource
Mon Dec 03 10:48:30 2007
Workload: mysystem1.prod
Shared Resource Domain Allocation Size Utilization
______________________ ____________ ______________ ___________
mysystem1.srd 4 Cores 4 Cores 4.4 %
# gwlm monitor --workload=mysystem1.prod --view=policy
Mon Dec 03 10:49:00 2007
Workload: mysystem1.prod
Policy Shared Resource Dom Target Measured Request
_______________ ___________________ ____________ ____________ __________
Owns_4-Max_8 mysystem1.srd 75.00 % 3.54 % 1.00 Cores
We can also focus on a single policy, getting a list of all the work‐
loads being affected by the policy:
# gwlm monitor --policy=Owns_4-Max_8
Mon Dec 03 11:03:00 2007
Policy: Owns_4-Max_8
Shared Resource Dom Workload Target Measured Request
___________________ ________________ _________ ____________ ____________
mysystem1.srd mysystem1.OTHER 75.00 % 4.54 % 1.00 Cores
mysystem1.srd mysystem1.prod 75.00 % 1.74 % 1.00 Cores
This next example shows output for an SRD consisting of nested parti‐
tions.
# gwlm monitor --srd=mysystem1.srd --nested
Thu May 04 10:56:50 2006
Shared Resource Domain: mysystem1.srd
Workload Type Allocation Size Utilization
__________________________ ______ ____________ ____________ ___________
mysystem1.mydomain.com npar 3.00 Cores 3.00 Cores 1.3 %
(mysystem) npar -- (8.00 Cores) (1.5 %)
mysystema.mydomain.com vpar 3.00 Cores 3.00 Cores 1.6 %
(mysystemb) vpar -- (3.00 Cores) (1.1 %)
mysystemb.OTHER fss 3.00 Cores 3.00 Cores 1.1 %
(mysystemc) vpar -- (2.00 Cores) (2.0 %)
mysystemc.OTHER fss 2.00 Cores 2.00 Cores 2.0 %
(mysystem2) npar -- (3.00 Cores) (1.5 %)
mysystem2.OTHER fss 3.00 Cores 3.00 Cores 1.5 %
__________________________ ______ ____________ ____________ ___________
Totals 14.00 Cores 14.00 Cores 1.5 %
MANUALLY ADJUSTING CPU RESOURCES
When an SRD is created, it has a certain number of cores. gWLM manages
the SRD using the same number of cores. If the SRD--or a policy used in
the SRD--is configured to use Temporary Instant Capacity (TiCAP), gWLM
can automatically activate that additional capacity to meet policies.
If neither the SRD nor its policies are configured to use TiCAP, you
may be able to temporarily provide additional resources to a deployed
SRD by:
+ Using an available core from the vpar monitor free pool.
+ Activating an iCAP core.
+ Deleting a core from an unmanaged vpar and then adding it to a
vpar in the SRD.
+ Deactivating a core in an npar and then activating one in an
npar in the SRD.
NOTE: If gWLM detects activated cores for which there is no request, it
deactivates them to avoid spending money on the unneeded capacity.
NOTE: After you manually change system resources (by modifying unman‐
aged partitions or changing bindings, for example), you might see
resize errors on one or more of the managed nodes. However, gWLM should
recover (and stop issuing errors) by the next resource allocation
interval--unless gWLM can no longer access the required resources.
NOTE: Deployed SRDs do not accept manual decreases in the available
resources. gWLM will attempt to reclaim any removed resources.
NOTE: Although a deployed SRD might recognize added resources, policy
maximum values are still in effect and can clip resource requests. Con‐
sider adjusting policy settings to use the added resources.
As already mentioned, gWLM can take advantage of the additional CPU
resources only temporarily. To take full, persistent advantage of the
extra resources using the gWLM command-line interface:
1. Undeploy the SRD containing the systems that were adjusted.
2. Re-create and re-deploy the SRD.
3. Ensure policies used in the SRD do not unintentionally limit
their associated workloads' resource requests.
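For example, the steps above might look like the following session. The SRD name mySRD, the file name mysrd.xml, and the option spellings shown are illustrative assumptions; see the deploy, undeploy, and list command descriptions for the exact syntax:

```shell
# 1. Undeploy the SRD containing the adjusted systems
#    (option syntax illustrative):
gwlm undeploy --srd=mySRD

# 2. Re-create and re-deploy the SRD from an updated definition:
gwlm deploy --srd=mysrd.xml

# 3. List the repository contents to review the policies used
#    in the SRD:
gwlm list
```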
To take full, persistent advantage of the extra resources using the
gWLM interface in HP SIM:
1. Modify the size of the SRD.
a. Select the SRD affected by the additional resources in the
Shared Resource Domain View.
b. Select the menu item Modify -> Shared Resource Domain.
c. Select the tab Workload and Policies.
d. Adjust the size of the SRD by editing the value, beneath
the table, labeled "Total Size".
e. Select the OK button.
2. Edit policies used in the SRD to ensure they do not uninten‐
tionally limit their associated workloads' resource requests.
gWLM cannot take advantage--even temporarily--of resources added by:
+ Adjustments to entitlements for virtual machines.
+ Changes to a virtual machine's number of virtual CPUs while
gWLM is managing the virtual machine.
+ Creation or deletion of a pset using psrset on a system where
gWLM is managing pset compartments.
+ Performing online cell operations using parolrad.
+ Enabling/disabling Hyper-Threading.
To make use of these additional resources using the gWLM command-line
interface:
1. Undeploy the SRD containing the systems that you want to
adjust.
2. Make your adjustments.
3. Re-create and re-deploy the SRD.
4. Ensure policies used in the SRD do not unintentionally limit
their associated workloads' resource requests.
To make use of these additional resources using the gWLM interface in
HP SIM, follow the procedure given for that interface above.
NOTE: After manually adjusting the number of cores in an SRD, always
confirm the changes after two gWLM resource allocation intervals have
passed. Changes may not be as expected due to gWLM behaviors such as
the ones described below.
* In an SRD with nested partitions, gWLM samples the inner par‐
titions for their sizes before sampling the outer partitions.
Adjusting resources between these samplings can cause gWLM to
report incorrect sizes. If you encounter this issue, try mak‐
ing your adjustment again.
* In an SRD with nested partitions that includes vpars, assume
you manually add cores from an unmanaged vpar. If you later
remove those cores--without returning them to an unmanaged
vpar before gWLM samples compartment sizes--those cores are
deactivated.
MANUALLY ADJUSTING MEMORY RESOURCES
The vparmodify command enables you to move memory from one vpar to
another. However, gWLM cannot move CPU resources while a vparmodify
operation is in progress. If a memory move takes longer than gWLM's
resource allocation interval, gWLM will not be able to satisfy CPU
resource requests for the missed intervals. gWLM resumes allocating
resources once the memory move is complete.
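For example, a manual memory move between two vpars might look like the following. The partition names and the amount of memory are illustrative; see vparmodify(1M) for the exact syntax:

```shell
# Remove 1024 MB from vpar1, then add it to vpar2 (illustrative;
# a long-running move like this can span gWLM allocation intervals):
vparmodify -p vpar1 -d mem::1024
vparmodify -p vpar2 -a mem::1024
```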
You may see SIM events indicating vparmodify commands executed by gWLM
are failing. The vparmodify commands fail with the following message:
A conflicting resource migration is in progress on this vPar.
Once the pending memory migration completes, the gWLM operations should
again succeed.
AUTHOR
gwlm was developed by HP.
FEEDBACK
If you would like to comment on the current HP gWLM functionality or
make suggestions for future releases, please send email to:
gwlmfeedback@rsn.hp.com
FILES
NOTE: The Windows path given below (C:\Program Files\HP\Virtual Server
Environment\) is the default; however, a different path may have been
selected at installation.
/opt/gwlm/bin/gwlmcmsd
(C:\Program Files\HP\Virtual Server Environment\bin\gwlmcmsd on
Windows)
gWLM daemon that runs on the gWLM CMS
/var/opt/gwlm/gwlmcommand.log.0
(C:\Program Files\HP\Virtual Server Environment\logs\gwlmcom‐
mand.log.0 on Windows)
Log* of gwlm command
* The name of the current log always ends in .log.0. Once this file
grows to a certain size, it is moved to a filename ending in .log.1
and a new .log.0 file is started. If a .log.1 file already exists, it
is renamed .log.2. If a .log.2 file already exists, it is overwrit‐
ten.
By default, the log file size is limited to 20 Mbytes and the number
of log files is limited to 3. You can change these defaults by modi‐
fying the following properties:
com.hp.gwlm.util.Log.logFileSize = 20
com.hp.gwlm.util.Log.logNFiles = 3
in /etc/opt/gwlm/conf/gwlmcms.properties on HP-UX and in C:\Pro‐
gram Files\HP\Virtual Server Environment\conf\gwlmcms.properties on
Windows.
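The rotation scheme described above can be sketched as follows. This is an illustration of the naming scheme only, not gWLM's implementation, and the base name is an example:

```shell
#!/bin/sh
# Sketch of the .log.0 -> .log.1 -> .log.2 rotation described above.
# gWLM performs this internally; names here are examples.
base=./gwlmcommand

rotate() {
    rm -f "$base.log.2"                           # an existing .log.2 is overwritten
    [ -f "$base.log.1" ] && mv "$base.log.1" "$base.log.2"
    [ -f "$base.log.0" ] && mv "$base.log.0" "$base.log.1"
    return 0
}

# Demonstrate: each rotate shifts the current log up one suffix.
echo first  > "$base.log.0"
rotate
echo second > "$base.log.0"
rotate
echo third  > "$base.log.0"
ls "$base".log.*
```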
SEE ALSO
gwlm(5), gwlmxml(4), vseinitconfig(1M), gwlmplace(1M)