© 2017 IBM Corporation
LPAR Weight, Entitlement, and Capacity
Revision 2017-06-27.1
Brian K. Wade, Ph.D.
IBM z/VM Development, Endicott
[email protected]
Agenda
What the z/VM administrator sees
Basic mode and LPAR mode
Weight, entitlement, and logical CPU count
Various kinds of capping
Ways to go wrong
Some examples of doing it right
z/VM Performance Toolkit reports
Our z/VM System Administrator, Jane
cp query proc
PROCESSOR 00 MASTER    CP
PROCESSOR 01 ALTERNATE CP
PROCESSOR 02 ALTERNATE CP
PROCESSOR 03 ALTERNATE CP
PROCESSOR 04 ALTERNATE CP
PROCESSOR 05 ALTERNATE CP
Ready; T=0.01/0.01 13:46:02
Hey, I have a six-way! I know that’s enough for my workload, so I’m golden!
In a moment we are going to find out just how wrong that conclusion is!
The Machine in Basic Mode
In the old days, VM ran right on the hardware. There was no such thing as the PR/SM hypervisor or an LPAR.
If CP QUERY PROC said you had six CPUs, you had six real, physical, silicon CPUs.
Those six CPUs were all yours, all the time.
The Machine in LPAR Mode
[Figure: the PR/SM hypervisor running partitions VM1 through VM5, each with several logical CPUs (LPUs), on eight shared physical CPUs]
Processor Resource/System Manager (PR/SM) owns the physical machine.
PR/SM carves the machine into zones called partitions.
PR/SM timeslices partitions’ logical CPUs onto physical CPUs.
A logical CPU is not a source of capacity. It is a consumer of capacity.
Poor Jane!
cp query proc
PROCESSOR 00 MASTER    CP
PROCESSOR 01 ALTERNATE CP
PROCESSOR 02 ALTERNATE CP
PROCESSOR 03 ALTERNATE CP
PROCESSOR 04 ALTERNATE CP
PROCESSOR 05 ALTERNATE CP
Ready; T=0.01/0.01 13:46:02
Jane’s six-way system is now running in a partition.
She is now competing with many other partitions for the machine’s eight CPUs’ worth of power.
Jane has no idea that she might not get six CPUs’ worth of power.

16 logical CPUs (consumers of power)
   -- running on --
8 physical CPUs (sources of power)
How Does PR/SM Decide?
The CEC administrator assigns each partition a weight.
Weight expresses relative importance in the distribution of CPU power.
The weights determine the partitions’ entitlements.
A partition’s entitlement is the minimum power it can generally expect to be able to get whenever it wants it.
Entitlements come into play only when there is not enough power to satisfy all partitions’ demands.
As long as the physical CPUs have some spare power, all partitions can use whatever they want.
S = number of shared physical CPUs = 8

my E = 100 * S * (my weight) / (sum of weights)

Notice:
1. Σ E = 100 * S. (the entitlements sum to the capacity)
2. E is not a function of the number of logical CPUs.
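The arithmetic above can be sketched in a few lines of Python (the function name is mine; remember that 100% means one physical CPU’s worth of power):

```python
def entitlement(my_weight, all_weights, shared_physical_cpus):
    """E = 100 * S * (my weight) / (sum of weights), in percent."""
    return 100 * shared_physical_cpus * my_weight / sum(all_weights)

# The 18-way example that follows: weights 35 (FRED) and 55 (BARNEY).
weights = [35, 55]
print(entitlement(35, weights, 18))   # FRED:   700.0
print(entitlement(55, weights, 18))   # BARNEY: 1100.0
```

Note that the two entitlements sum to 100 * S = 1800, and the logical CPU count never appears in the calculation.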
Entitlement: A Really Simple Example
Partition  Weight  Weight Sum  Calculation           Entitlement
FRED       35      90          100 * 18 * (35 / 90)  700%
BARNEY     55      90          100 * 18 * (55 / 90)  1100%
SUM                                                  1800%
Assume this machine has 18 shared physical processors.
Notice:
1. The entitlements sum to the capacity of the shared physical processors.
2. The number of logical CPUs is NOT a factor in calculating entitlement.
By the way: “100%” means “one physical processor’s worth of power”.
Some Notes About How I Like To Set Weights
The underlying arithmetic for calculating entitlement lets the system programmer select weights that sum
to pretty much anything he wants
Personally, I like to pick the weights so they sum to 100 x (number of shared processors in type pool)
This practice lets me see immediately, visually:
– What the LPAR’s entitlement is: entitlement = weight
– What the right number of logical processors is: right number of processors ≈ weight / 100
18 Shared Physical IFL Processors: Two Equivalent Configurations

OK                                             Better
LPAR    Weight  Entitlement  Logical procs     LPAR    Weight  Entitlement  Logical procs
FRED    35      700          7 to 9            FRED    700     700          7 to 9
BARNEY  55      1100         11 to 13          BARNEY  1100    1100         11 to 13
Sum     90                                     Sum     1800
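The “weights sum to 100 x S” convention is easy to automate. A small sketch (the function name is mine) that rescales an existing set of weights:

```python
def rescale_weights(weights, shared_cpus):
    """Rescale weights so they sum to 100 * S; then weight == entitlement."""
    total = sum(weights.values())
    return {lpar: round(100 * shared_cpus * w / total)
            for lpar, w in weights.items()}

print(rescale_weights({"FRED": 35, "BARNEY": 55}, 18))
# {'FRED': 700, 'BARNEY': 1100}
```

The rescaled configuration is equivalent to the original because entitlement depends only on the ratios of the weights.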
Aside: Cores vs. CPUs vs. Processors
Prior to z13,
– The physical CPC was equipped with physical processors
– In an LPAR’s activation profile, we equipped the LPAR with “processors” (which were of course logical processors)
– PR/SM parceled out an LPAR’s entitlement to the LPAR’s logical processors
– In a z/VM LPAR, QUERY PROCESSOR revealed the logical processors’ IDs
– The various Perfkit utilization reports were all about logical processor utilization
In the z13 family,
– The physical CPC is equipped with physical cores
– In the LPAR’s activation profile, the title on the editing window is still “Processors”, but what you are truly doing is
equipping the LPAR with logical cores
– PR/SM parcels out the LPAR’s entitlement to the LPAR’s logical cores
– If z/VM runs:
• Non-SMT: each logical core yields one logical processor
• SMT-1: each logical core yields one logical processor
• SMT-2: each IFL logical core yields two logical processors, while all other types of logical core yield one logical
processor
• The logical processors of a logical core are always dispatched together on a single physical core
– The Perfkit utilization reports now have divergent meanings:
• FCX126 LPAR, FCX302 PHYSLOG, FCX202 LPARLOG, and FCX306 LSHARACT cite core utilization
• FCX100 CPU, FCX225 SYSSUMLG, FCX144 PROCLOG, and FCX304 PRCLOG cite processor utilization
• Strongly suggested reading: http://www.vm.ibm.com/perf/tips/smtutil.html
The rest of this presentation is a little loose with the core/processor/CPU vocabulary
– I have tried to tighten it up where it really makes a difference
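The core-to-processor yield rules can be captured in a small sketch (the function name is mine, simplified from the bullets on this chart):

```python
def logical_processors(logical_cores, core_type, smt_level):
    """Logical processors yielded by an LPAR's logical cores under z/VM.

    Non-SMT and SMT-1: one processor per core.
    SMT-2: two processors per IFL core; one per core of any other type.
    """
    if smt_level == 2 and core_type == "IFL":
        return 2 * logical_cores
    return logical_cores

print(logical_processors(8, "IFL", 2))  # 16
print(logical_processors(8, "CP", 2))   # 8
```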
Entitlement (E) vs. Consumption (C)
WILMA and BETTY can use over their entitlements only because FRED is
using under his entitlement.
This can happen whether or not the
physical CPC is saturated.
FRED can use his entitlement whenever he wants.
If there is not enough spare power available to let FRED increase to his entitlement, PR/SM will divert power
away from WILMA and BETTY to satisfy FRED.
[Chart: utilization bars showing entitlement (E) and consumption (C) for three partitions: FRED, WILMA, and BETTY]
Mixed-Engine Configurations
[Figure: partitions VM1 through VM5 sharing a mixed pool of physical CP and IFL cores under the PR/SM hypervisor]

Weight and entitlement are type-specific attributes.
Each LPAR having logical CPs has a CP weight and entitlement.
Each LPAR having logical IFLs has an IFL weight and entitlement.
Dedicated Partitions

[Figure: under the PR/SM hypervisor, VM1's logical cores are bound to dedicated physical cores; VM2 through VM5 share the remaining CP and IFL cores]
VM1’s logical cores are directly assigned to physical cores.
Those physical cores are not used for any other partitions.

If you’re running in VM1, life is really good! E = 100% * n for all of your types.
(Q: what are the differences between this and a shared logical N-way with E = 100% * N?)
Entitlement and Consumption within a Partition (Horizontal-Mode Partitions: z/VM 6.2 or earlier)

Suppose E = 175% and the partition has 5 logical cores.
Each logical core is entitled to (175% / 5) or 35% of a physical core.
The logical cores might actually consume more, depending on the availability of spare power.
[Chart: logical cores 0 through 4, each with E = 35% of a physical core; consumption (C) varies per core]

Within a single partition, the entitlement is distributed equally across the logical cores.
Entitlement and Consumption within a Partition (Vertical-Mode Partitions: z/VM 6.3 or later)

Suppose E = 175% and the partition has 5 logical cores.
PR/SM will create 1 Vh @ 100%, 1 Vm @ 75%, and 3 Vl @ 0% entitlement.
The logical cores might actually consume more, depending on the availability of spare power.
[Chart: core 0 (Vh) with E = 100%, core 1 (Vm) with E = 75%, cores 2 through 4 (Vl) with E = 0%; consumption (C) varies per core]

Within a single partition, the entitlement is distributed unequally across the logical cores.
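A simplified model of the vertical split described above (the real PR/SM algorithm can differ in detail, e.g., it may spread the medium entitlement across two Vm cores; the names here are mine):

```python
def vertical_split(entitlement_pct, logical_cores):
    """Fill Vh cores at 100% each, give one Vm core the remainder, rest are Vl."""
    vh = min(int(entitlement_pct // 100), logical_cores)
    remainder = entitlement_pct - 100 * vh
    vm = 1 if remainder > 0 and vh < logical_cores else 0
    vl = logical_cores - vh - vm
    return {"Vh": vh, "Vm": vm, "Vm_entitlement": remainder, "Vl": vl}

print(vertical_split(175, 5))
# {'Vh': 1, 'Vm': 1, 'Vm_entitlement': 75, 'Vl': 3}
```

With E = 175% and 5 cores this reproduces the chart: one Vh, one Vm at 75%, and three Vl.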
What if the Partition is Capped? (Horizontal-Mode Partitions: z/VM 6.2 or earlier)

Suppose E = 175% and the partition has 5 logical cores and is capped.
Each logical core is entitled to (175% / 5) or 35% of a physical core.
No logical core will ever run more than 35% busy. Availability of excess power is irrelevant.

In mixed-engine environments, capping is a type-specific concept.
For example, a partition’s logical CP cores can be capped and its logical IFL cores not capped.
[Chart: logical cores 0 through 4, each pinned at E = 35%; consumption (C) never exceeds 35% on any core]

CAPPED: every logical core is held back to its share of the partition’s entitlement.
What if the Partition is Capped? (Vertical-Mode Partitions: z/VM 6.3 or later)
[Chart: core 0 (Vh), core 1 (Vm), cores 2 through 4 (Vl); aggregate consumption is capped at the LPAR's 175% entitlement]
CAPPED: the logical cores together are held back to their LPAR’s entitlement.
Suppose the LPAR has entitlement 175% and five logical cores.
PR/SM keeps track of the amount of core dispatch time the LPAR’s cores use altogether.
When utilization transiently exceeds entitlement, all logical cores get held back momentarily.
LPAR Absolute Capping: New for zEC12 and zBC12
In the old days, there was merely a “capped” checkbox, as described on previous chart
– This capped the partition at its entitlement
With zEC12 at Version 2.12.1 at Driver Level 15, IBM introduced something called “absolute
capping”
– Applies only to shared LPARs, of course
– Expressed as a number of cores’ worth of consumption, e.g., 2.05, 10.68, etc.
– The cap is type-specific
– The sum of the logical cores’ consumptions cannot exceed the absolute cap
– When the LPAR bumps up against its cap, all of its logical cores are suspended for a
period of time
Legitimate logical-core percent-busy values in a four-core LPAR with absolute cap 2.6:

Logical cores:   0     1     2     3
%Busy:          40%   80%   50%   90%     Σ = 260% = 2.6
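A quick sanity check of the cap arithmetic (the function name is mine):

```python
def respects_absolute_cap(core_busy_pcts, cap_in_cores):
    """True if the logical cores' combined busy stays within the absolute cap."""
    return sum(core_busy_pcts) <= 100 * cap_in_cores

print(respects_absolute_cap([40, 80, 50, 90], 2.6))  # True: 260% = 2.6 cores
print(respects_absolute_cap([40, 80, 50, 95], 2.6))  # False: 265% > 2.6 cores
```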
LPAR Group Capping: New for z13
Very similar to LPAR absolute capping, except…
– There exists the notion of a “group” of LPARs, and
– It’s the “group” that has the cap, and
– When the group bumps up against the cap, all the group’s LPARs are held back
momentarily
This became available on HMC 2.13.1
Legitimate aggregate logical-core percent-busy values for four LPARs with absolute group cap 2.6:

LPARs:    FRED   WILMA  BARNEY  BETTY
%Busy:    40%    80%    50%     90%      Σ = 260% = 2.6
Some Guidelines to Use
1. Know your workloads. Especially, know how much power they need!
If you need help, get help from workload sizing experts.
2. For each shared partition, what is its workload’s bare minimum power requirement? Use
that requirement as the partition’s entitlement E.
3. Sum of the E values = bare minimum physical cores needed.
4. Add in some spare physical cores: for PR/SM itself and for comfort, growth, or
emergencies.
5. If in addition you want some dedicated LPARs, add in for those.
6. Set the shared partitions’ weights in proportion to the entitlements you calculated above.
7. For each partition, what is the maximum power you want it ever to be able to consume?
8. Using those maxima, set the logical core counts.
9. Keep track of actual utilization and manage the LPARs’ weights and online core counts
accordingly, using the HMC or SE and CP VARY PROCESSOR or CP VARY CORE.
A More Complete Example
Partition  Shr/Ded    E needed  Weight  E calculation    E achieved  Logical cores
FRED       shared     425%      43      2300 * (43/191)  518%        8
WILMA      shared     675%      68      2300 * (68/191)  819%        11
BARNEY     shared     800%      80      2300 * (80/191)  963%        10
BETTY      dedicated  -         -       -                -           4
From thorough study of our workloads we determined:
1. FRED needs at least 4.25 cores’ worth of power, and never more than 8,
2. WILMA needs at least 6.75 cores’ worth of power, and never more than 11,
3. BARNEY needs at least 8.00 cores’ worth of power, and never more than 10.

Sum of the needs = 4.25 + 6.75 + 8.00 = 19.00 cores. We chose a safety factor of 20% => 23 shared cores.
Also we have partition BETTY, a 4-way dedicated.
So we bought a CEC with 23+4 = 27 physical cores and then did this:
Logical core overcommit ratio = (8+11+10) / 23 = 1.26.
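The sizing arithmetic on this chart can be replayed in Python (variable names are mine; the weights are the chart’s 43/68/80):

```python
import math

# Minimum power each shared partition needs (in cores' worth), from workload study:
minimum_power = {"FRED": 4.25, "WILMA": 6.75, "BARNEY": 8.00}
maximum_cores = {"FRED": 8, "WILMA": 11, "BARNEY": 10}   # logical core counts
dedicated_cores = 4                                      # BETTY, a 4-way dedicated

# Sum the minima and add the 20% safety factor:
shared = math.ceil(sum(minimum_power.values()) * 1.20)   # 23 shared cores
total = shared + dedicated_cores                         # 27 physical cores

# Weights in proportion to the needs (sum 191), then the achieved entitlements:
weights = {"FRED": 43, "WILMA": 68, "BARNEY": 80}
wsum = sum(weights.values())
achieved = {lpar: round(100 * shared * w / wsum) for lpar, w in weights.items()}

overcommit = sum(maximum_cores.values()) / shared        # logical:physical ratio
print(total, achieved, round(overcommit, 2))
```

This reproduces the chart: 27 physical cores, achieved entitlements 518/819/963, and a 1.26 overcommit ratio.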
Guidance on Number of Shared Logicals(Shared Logicals Related to Shared Physicals, Within A Type Pool)
[Chart: Shared Logical Cores as f(Shared Physical Cores), for 1 to 64 shared physical cores, showing Green Limit and Yellow Limit curves from 0 to 180 logical cores]
Per Gary King, DE, System z Performance, IBM Poughkeepsie:

G = R * 15.1 * R^(-0.4598)
Y = R * 29.7 * R^(-0.5968)

For example, for R=12, G=58, Y=81.
The zCP3000 tool knows about these formulas.
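The two limit formulas are easy to evaluate (function names are mine):

```python
def green_limit(r):
    """Suggested (green) max shared logical cores for r shared physical cores."""
    return r * 15.1 * r ** -0.4598

def yellow_limit(r):
    """Upper (yellow) limit on shared logical cores."""
    return r * 29.7 * r ** -0.5968

print(round(green_limit(12)), round(yellow_limit(12)))  # 58 81
```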
Guidance on L:R Ratio(Ratio of Shared Logicals to Shared Physicals, Within A Type Pool)
[Chart: L:R Ratio as f(Shared Physical Cores), for 1 to 64 shared physical cores, showing Green Limit and Yellow Limit curves from 0 to 35]
Same formulas as the previous chart, just divided by R.
For R=12, G=4.8 and Y=6.7.
Vertical-low Cores: Guidance from PR/SM Development
An LPAR should consume its computing power on its entitled logical cores
There should be no more than two vertical-low (Vl) cores per LPAR
Failure to observe these guidelines will result in excess PR/SM overhead
Poor: 3 Vh, 1 Vm, 10 Vl, running 1135% core busy (Perfkit FCX306 LSHARACT)
Better: 9 Vh, 2 Vm, 2 Vl, running 1135% core busy (Perfkit FCX306 LSHARACT)
Become adept at changing the LPARs’ weights and online logical core counts to align the
LPARs’ configurations to their current demands for computing power
– To change the LPARs’ weights, use the HMC or SE
– Give an LPAR enough logical cores to satisfy its maximum utilization, but also
– Bring logical cores into or out of a z/VM LPAR’s configuration according to its
demand, using:
• For non-SMT, CP VARY PROCESSOR
• For SMT-x, CP VARY CORE
Ways Things Go Wrong, part 1
Failure to set entitlement high enough.
– 4-member z/OS virtual sysplex running on z/VM
– Each z/OS guest is a virtual 2-way
– How much power does each z/OS guest realistically minimally require?
– What will happen if the partition’s entitlement is well below the workload’s
requirement?
Answer: if correct operation of the workload requires that the partition consume beyond its entitlement, the workload is exposed to failing if the CEC becomes constrained.
Ways Things Go Wrong, part 2
Shared partition with 12 logical cores

Potential:   1200%
Entitlement:  150%

When other partitions are quiet, this partition could run 1200% core busy.
When other partitions are active, this partition might get as little as 150%.

The system might perform erratically. Users might be confused and unhappy.
Some workloads might fail.
Ways Things Go Wrong, part 3
Consider a CEC with 12 shared physical cores. S = 12
22 partitions. 205 logical cores. L = 205

What are the problems?

Q1: If the weights are about equal, about how much entitlement would each partition get?
A1: About (12/22) or about 50%.

Q2: About how many logical cores are in each partition?
A2: About (205/22) or about 10.

Q3: Do you see anything wrong with a logical 10-core having entitlement 50%?

High L/S is a cause of high overhead in PR/SM and of suspend time for the logical cores.
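The dilution arithmetic behind Q1-Q3 can be checked quickly (assuming roughly equal weights and equal core counts across the 22 partitions):

```python
S, partitions, logical_cpus = 12, 22, 205

entitlement_each = 100 * S / partitions         # each partition's E, equal weights
cores_each = logical_cpus / partitions          # average logical cores per partition
per_core_share = entitlement_each / cores_each  # entitlement per logical core

print(round(entitlement_each, 1), round(cores_each, 1), round(per_core_share, 1))
```

Each 10-core partition's logical cores are each entitled to only a few percent of a physical core, which is exactly the problem Q3 is pointing at.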
Ways Things Go Wrong, part 4
Consider this shared partition:
1. 12 logical cores
2. Entitlement 150%
3. Horizontal
4. Capped
Each logical core has an entitlement of (150%/12) = 12.5% of a physical core.
Because of the cap, each logical core will be held back to 12.5% busy.
Q1: What if a virtual 1-way guest wants to run 20% busy? Can it do so?
Q2: What if a virtual 2-way guest wants to run 30% busy? Can it do so?
Q3: What if there are many such 2-way guests on the system? What would happen?
z/VM Performance Toolkit Reports
These reports are your friends:
– LSHARACT report: tabulates entitlements
– PHYSLOG report: tabulates physical core use
– TOPOLOG report: displays partition topology
– LPARLOG report: tabulates use by LPARs
Causes for Concern
1FCX306 Run 2013/12/13 09:12:36 LSHARACT Logical Partition Share
From 2013/12/13 Initial To 2013/12/13 09:10:56 For (Not applicable) Result of LPARS Run ________________________________________________________________________
LPAR Data, Collected in Partition EPRF1
Physical PUs, Shared: CP- 34 ZAAP- 2 IFL- 16 ICF- 1 ZIIP- 3 Dedicated: CP- 8 ZAAP- 0 IFL- 0 ICF- 0 ZIIP- 0
Proc Partition LPU   LPAR             <LPU Total,%>  LPU
Type Name      Count Weight Entlment TypeCap Busy  Excess Conf
CP   ECT2      4     10     188.9    ---     ...   ...    o
CP   EPLX1     8     60     1133.3   ---     ...   ...    u    why?
CP   EST1      8     10     188.9    ---     ...   ...    o
CP   EST2      6     10     188.9    ---     ...   ...    o
CP   EST3      6     30     566.7    ---     ...   ...    -
CP   FCFT      16    40     755.6    ---     ...   ...    o    exposed?
CP   K4        6     10     188.9    ---     ...   ...    o
CP   PHOS      5     10     188.9    ---     ...   ...    o
IFL  EPLX1     2     50     313.7    ---     ...   ...    u
IFL  EPLX2     8     45     282.4    ---     ...   ...    o
IFL  EPLX3     6     45     282.4    ---     ...   ...    o
IFL  ESTL1     7     50     313.7    ---     ...   ...    o    exposed?
IFL  EST3      4     25     156.9    ---     ...   ...    o
IFL  FCFT      2     40     251.0    ---     ...   ...    u
The LSHARACT report tabulates entitlements. New in z/VM 6.3 Perfkit!
Changing the weights of EPLX1 and FCFT would move entitlement to FCFT without harming EPLX1.
More Causes for Concern
There are only 12 shared CPs but there are 22 partitions and 205 logical CPUs.
(For R=12, the King formula suggests L between 58 and 81.)
Entitlement is heavily diluted because there are so many partitions for only 12 shared physical CPs.
In addition, logical CPU counts are high, which dilutes entitlement within partitions and also causes excess PR/SM overhead.
Partition xxxx1A is especially in danger because it is capped (see FCX126 LPAR).
1FCX306 Run 2013/11/15 13:43:01 LSHARACT Logical Partition Share
From 2013/11/13 23:09:54 To 2013/11/13 23:39:04 For 1751 Secs 00:29:11 Result of xxxxxxxx Run _________________________________________________________________________
LPAR Data, Collected in Partition xxxx1A
Physical PUs, Shared: CP- 12 ZAAP- 0 IFL- 0 ICF- 4 ZIIP- 0 Dedicated: CP- 0 ZAAP- 0 IFL- 0 ICF- 7 ZIIP- 0
Proc Partition LPU   LPAR             <LPU Total,%>  LPU
Type Name      Count Weight Entlment TypeCap Busy  Excess Conf
CP   xxxx0A    12    20     12.4     ---     25.4  13.0   o
CP   xxxx0B    7     100    61.8     ---     53.4  .0     o
CP   xxxx0E    7     20     12.4     ---     27.8  15.4   o
CP   xxxx01    12    20     12.4     ---     2.8   .0     o
CP   xxxx04    7     10     6.2      ---     6.8   .6     o
CP   xxxx05    12    200    123.6    ---     130.4 6.8    o
CP   xxxx07    12    4      2.5      ---     4.0   1.5    o
CP   xxxx08    7     10     6.2      ---     4.5   .0     o
CP   xxxx09    12    100    61.8     ---     76.1  14.3   o
CP   xxxx1A    12    250    154.5    ---     23.6  .0     o    CAPPED
CP   xxxx11    12    500    309.0    ---     95.4  .0     o
CP   xxxx14    7     10     6.2      ---     9.8   3.6    o
CP   xxxx15    7     10     6.2      ---     18.5  12.3   o
CP   xxxx16    7     6      3.7      ---     4.2   .5     o
CP   xxxx17    12    500    309.0    ---     49.0  .0     o
CP   xxxx18    1     13     8.0      ---     3.5   .0     -
CP   xxxx19    7     20     12.4     ---     16.6  4.2    o
CP   xxxx21    9     75     46.3     ---     15.6  .0     o
CP   xxxx22    12    10     6.2      ---     8.5   2.3    o
CP   xxxx24    12    30     18.5     ---     7.8   .0     o
CP   xxxx25    7     4      2.5      ---     4.3   1.8    o
CP   xxxx28    12    30     18.5     ---     .0    .0     o
Effect of High L:S Ratio
1FCX302 Run 2013/11/15 13:37:38 PHYSLOG Real CPU Utilization Log
From 2013/11/14 21:09:24 To 2013/11/14 21:20:14 For 650 Secs 00:10:50 Result of xxxxxxxx Run ____________________________________________________________________________
Interval  <PU Num>                                                Total
End Time  Type Conf Ded Weight %LgclP %Ovrhd LpuT/L %LPmgt %Total TypeT/L
>>Mean>>  CP   12   0   1942   764.42 13.447 1.018  40.548 818.41 1.071   <- 7% PR/SM overhead on CPs
>>Mean>>  ICF  11   7   1249   727.70 .811   1.001  7.389  735.90 1.011   <- 1% PR/SM overhead on ICFs
>>Mean>>  >Sum 23   7   3191   1492.1 14.258 1.010  47.937 1554.3 1.042

21:09:44  CP   12   0   1942   667.86 14.968 1.022  45.666 728.49 1.091
21:09:44  ICF  11   7   1249   716.72 1.229  1.002  8.435  726.38 1.013
21:09:44  >Sum 23   7   3191   1384.6 16.197 1.012  54.100 1454.9 1.051

21:10:06  CP   12   0   1942   758.51 15.297 1.020  46.163 819.97 1.081
21:10:06  ICF  11   7   1249   748.41 1.238  1.002  8.027  757.68 1.012
21:10:06  >Sum 23   7   3191   1506.9 16.535 1.011  54.190 1577.7 1.047
On CPs, this CEC’s L:S ratio is (205/12) or 17.1.
Usually we see ratios in the neighborhood of 1.5 to 2.0.

New in z/VM 6.3 Perfkit!
You can see CEC busy, by physical CPU type, as a function of time.
High L:S Ratio Plus Capping
1FCX126 Run 2013/11/15 13:37:38 LPAR Logical Partition Activity
From 2013/11/14 21:09:24 To 2013/11/14 21:20:14 For 650 Secs 00:10:50 Result of xxxxxxxx Run ____________________________________________________________________________________________________
LPAR Data, Collected in Partition xxxx1A
Processor type and model    : 2097-712
Nr. of configured partitions: 45
Nr. of physical processors  : 23
Dispatch interval (msec)    : dynamic
Partition Nr. Upid #Proc Weight Wait-C Cap %Load CPU %Busy %Ovhd %Susp %VMld %Logld Type TypeCap
xxxx1A    16  26   14    250    NO     YES 2.9    0  6.0   .2    13.2  5.7   6.6    CP   ---
                         250           YES ...    1  5.0   .1    14.0  4.9   5.7    CP   ---
                         250           YES ...    2  5.1   .1    15.1  5.0   5.9    CP   ---
                         250           YES ...    3  4.8   .1    12.1  4.7   5.4    CP   ---
                         250           YES ...    4  4.8   .1    14.4  4.7   5.5    CP   ---
                         250           YES ...    5  4.5   .0    13.8  4.4   5.1    CP   ---
                         250           YES ...    6  4.2   .1    10.7  4.1   4.5    CP   ---
                         250           YES ...    7  3.9   .0    11.4  3.8   4.3    CP   ---
                         250           YES ...    8  4.0   .1    11.7  3.9   4.4    CP   ---
                         250           YES ...    9  4.1   .1    12.3  4.0   4.6    CP   ---
                         250           YES ...   10  4.0   .0    11.7  3.9   4.5    CP   ---
                         250           YES ...   11  4.1   .0    12.0  4.0   4.6    CP   ---
                         250           NO  ...   12  6.3   .2    .3    6.1   6.1    ICF  ---
                         250           NO  ...   13  6.5   .1    .2    6.4   6.4    ICF  ---
Nicely Done!
1FCX306 Run 2013/11/20 11:20:55 LSHARACT Logical Partition Share
From 2013/11/20 15:01:48 To 2013/11/20 15:14:48 For 780 Secs 00:13:00 Result of xxxxxx Run ________________________________________________________________________________
LPAR Data, Collected in Partition xxxxxxxx
Physical PUs, Shared: CP- 0 ZAAP- 0 IFL- 7 ICF- 0 ZIIP- 0 Dedicated: CP- 0 ZAAP- 0 IFL- 10 ICF- 0 ZIIP- 0
Proc Partition LPU   LPAR             <LPU Total,%>  LPU
Type Name      Count Weight Entlment TypeCap Busy  Excess Conf
IFL  xxxxxx    1     10     9.9      ---     .5    .0     -
IFL  xxxxxxxx  3     300    295.8    ---     159.7 .0     -
IFL  xxxxxxxx  5     400    394.4    ---     335.8 .0     o
IFL  xxxxxxxx  10    DED    ...      ---     ...   ...    .
LPU counts match up well to entitlements.
L/S = 9/7 = 1.29.
I Thought This Was Brilliant
1FCX306 Run 2013/08/27 11:02:00 LSHARACT Logical Partition Share
From 2013/08/24 23:52:00 To 2013/08/25 00:50:00 For 3480 Secs 00:58:00 Result of PP2 Run ____________________________________________________________________________
LPAR Data, Collected in Partition PP2
Physical PUs, Shared: CP- 3 ZAAP- 0 IFL- 11 ICF- 1 ZIIP- 0 Dedicated: CP- 0 ZAAP- 0 IFL- 0 ICF- 0 ZIIP- 0
Proc Partition LPU   LPAR             <LPU Total,%>  LPU
Type Name      Count Weight Entlment TypeCap Busy  Excess Conf
IFL  PPU       1     2      22.0     ...     1.8   .0     -
IFL  PP1       2     8      88.0     ...     8.7   .0     o
IFL  PP2       7     60     660.0    ...     374.5 .0     -
IFL  PP6       11    30     330.0    ...     246.8 .0     o
PPU, PP1, and PP2 have fairly tight leashes… small distance from E to potential.
PP6 has a small requirement, but it gets to use everything no one else is using.
In other words, PP6 runs mostly on spare power.
What Are the Partitions Using?
1FCX202 Run 2013/11/20 11:20:55 LPARLOG Logical Partition Activity Log
From 2013/11/20 15:01:48 To 2013/11/20 15:14:48 For 780 Secs 00:13:00 Result of xxxxxxxx Run ___________________________________________________________________________________________________________
Interval <Partition->                              <- Load per Log. Processor -->
End Time Name     Nr. Upid #Proc Weight Wait-C Cap %Load %Busy %Ovhd %Susp %VMld %Logld Type TypeCap
>>Mean>> xxxxxxx1 10  27   1     10     NO     NO  .0    .5    .0    ...   ...   ...    IFL  ---
>>Mean>> xxxxxxx2 12  24   3     300    NO     NO  9.4   53.2  .5    ...   ...   ...    IFL  ---
>>Mean>> xxxxxxx3 18  25   10    DED    YES    NO  58.8  100.0 .0    ...   ...   ...    IFL  ---
>>Mean>> xxxxxxx4 19  23   5     400    NO     NO  19.8  67.2  .8    1.8   66.1  67.3   IFL  ---
>>Mean>> Total    ..  ..   17    710    ..     ..  88.3  78.7  .3    ...   ...   ...    ..   ---

15:02:18 xxxxxxx1 10  27   1     10     NO     NO  .0    .5    .0    ...   ...   ...    IFL  ---
15:02:18 xxxxxxx2 12  24   3     300    NO     NO  12.2  69.3  .4    ...   ...   ...    IFL  ---
15:02:18 xxxxxxx3 18  25   10    DED    YES    NO  58.8  100.0 .0    ...   ...   ...    IFL  ---
15:02:18 xxxxxxx4 19  23   5     400    NO     NO  22.7  77.1  .5    7.2   76.5  82.4   IFL  ---
15:02:18 Total    ..  ..   17    710    ..     ..  94.0  83.9  .2    ...   ...   ...    ..   ---

15:02:48 xxxxxxx1 10  27   1     10     NO     NO  .0    .5    .1    ...   ...   ...    IFL  ---
15:02:48 xxxxxxx2 12  24   3     300    NO     NO  9.9   56.0  .3    ...   ...   ...    IFL  ---
15:02:48 xxxxxxx3 18  25   10    DED    YES    NO  58.8  100.0 .0    ...   ...   ...    IFL  ---
15:02:48 xxxxxxx4 19  23   5     400    NO     NO  17.1  58.0  .9    1.2   56.8  57.5   IFL  ---
15:02:48 Total    ..  ..   17    710    ..     ..  86.1  76.7  .3    ...   ...   ...    ..   ---
Notes:
1. %Load: what fraction of the machine’s physical capacity is being used by this partition?
2. %Busy: how busy is the average logical CPU of this partition?
3. This report is not terribly useful in mixed-engine environments. Use FCX126 LPAR.
Summary
Know your workloads’ needs.
Translate those needs into entitlements.
Plan enough physical cores to fulfill the entitlements.
Add in a little spare.
Add in your dedicated LPARs.
Calculate those weights correctly.
Follow configuration guidelines.
Be careful with capping!
Do change the LPARs’ weights and logical core counts as needed, so as to let the workloads run on entitled power and to keep the number of online vertical-low logical cores to two or fewer per LPAR.
Use z/VM Performance Toolkit. It is your friend!