• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 873
  • Last Modified:

question about lpar2rrd..

wmp, as you know I have lpar2rrd on our IVMs, but a question has come to my mind..

I have attached, taken from the lpar2rrd site, 3 images:

1- Used CPU is greater than the assigned CPU. [borrowed.png]
2- Used CPU is smaller than the assigned CPU. [assigned.png]
3- Used CPU is almost equal to the assigned CPU. [just_used_assigned.png]

My question is:
What is the recommended setup on the IVMs (HMC) for assigning CPU units to my LPARs?
Do I have to assign exactly what they are using?
Or should I give fewer CPU units to all LPARs and let them borrow from the pool?


3 Solutions

Although I can't find any attachments, here are my thoughts.

It depends a bit on whether there is contention for CPU on your managed system.
If you have sufficient CPU resources to satisfy the needs of all LPARs at any time, it's quite
meaningless whether the actual CPU consumption exceeds the assigned value or not!

If LPARs can get into contention for CPU, however, you should keep in mind that in such situations an LPAR will just get the assigned quantity and not more, because there is nothing left to "borrow"!

"Borrowing" is not quite correct in any case - Pavel obviously is a bit sloppy here in his lpar2rrd vocabulary. Borrowing in a narrow sense means the possibility of taking away CPU cycles from LPARs running in dedicated mode. The normal case is just "shared processing" where CPU cycles are freely distributed between LPARs (but see my notes about contention above!)

Distribution of CPU cycles between "shared processor" LPARs (nearly) doesn't cost any CPU cycles, so there is no noticeable loss of processing power, whereas "borrowing" is a bit more expensive.

So the rule of thumb if you have dedicated LPARs should be giving them as much CPU as they need (averaged over a longer time period), so that even under heavy load the LPARs can do their work, and "borrowing" doesn't happen too often.

I assume, however, that we're talking about "shared" partitions here.
OK, the above rule is not wrong in such an environment either, but at least you shouldn't care about (even numerous) peaks in CPU consumption; handling them is what the concept of shared processors was meant for.

Even in contention situations we still have the instrument of "Uncapped Weight", so you can influence how "free" CPU cycles are distributed among LPARs competing for them.
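To make that concrete, here is a hypothetical sketch of the proportional split (the LPAR names, the weights, and the 2.0 spare units are invented for illustration, not taken from this thread): uncapped LPARs competing for spare cycles receive them in proportion to their uncapped weight.

```shell
# Invented scenario: 2.0 spare processing units contested by three
# uncapped LPARs with weights 128, 64 and 64.
spare=2.0
weights="lparA:128 lparB:64 lparC:64"

# Sum the weights (128 + 64 + 64 = 256).
total=0
for w in $weights; do
  total=$((total + ${w#*:}))
done

# Each LPAR's share of the spare units: spare * weight / total.
for w in $weights; do
  name=${w%%:*}
  share=$(awk -v s="$spare" -v w="${w#*:}" -v t="$total" \
    'BEGIN { printf "%.2f", s * w / t }')
  echo "$name gets $share extra processing units"
done
```

With these made-up numbers lparA (weight 128) gets twice the spare capacity of lparB or lparC (weight 64 each).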


sminfoAuthor Commented:

wmp, really sorry... I knew I needed some rest, because I didn't attach the images.. I'll attach them on Wednesday.. I want you to see the images.. or wait..

[borrowed.png] is http://lpar2rrd.sourceforge.net/demo/hmc08/server3-9119-595/pool/d.png

[assigned.png] is http://lpar2rrd.sourceforge.net/demo/hmc08/server51-9117-570/pool/w.png

[just_used_assigned.png] is http://lpar2rrd.sourceforge.net/demo/hmc08/server56-9118-575/pool/d.png

And yes, all cpu is "shared"

I reread your mail, but I'm not sure at all whether I should give on the IVM the exact CPU processing units the LPAR is going to use, or give fewer CPU PUnits and let the rest of the LPARs use the "borrowed" CPU PUs.

All PS700 blades have 4 CPU PUnits for some LPARs, so I'm confused about how many CPU units I should give them, now that I can see the real usage of all LPARs in lpar2rrd.

I'm now at home and have to leave now for a couple of hours..



For the purpose of a somewhat "clean" setup you could of course try to assign a sensible amount of processing units to each of your LPARs.

It's all a matter of contention, and a matter of how important it is for you that critical LPARs have always sufficient CPU to do their work timely.

In times of CPU under-utilization it's indeed not much more than an aesthetic problem whether the red line is above or below the green-yellow frontier.

However, you should always keep in mind that in times of heavy CPU load it will happen that the LPARs won't get more than their assigned share.

Let's say that your system has four 4GHz CPUs and that you assigned 4 virtual CPUs and 0.4 PUs to each of 10 LPARs.
You will find that in times of full throttle an LPAR will still see 4 CPUs, but that these will behave as if they were 400MHz chips (yet "lsattr" will still show the 4GHz value, don't get confused!).
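The arithmetic behind that, using the same invented numbers (10 LPARs x 0.4 PU exactly exhaust the 4-core pool, so each of an LPAR's 4 virtual CPUs gets 0.4/4 = 0.1 of a physical core), can be checked like this:

```shell
# 0.4 entitled processing units spread over 4 virtual CPUs means each
# virtual CPU runs on 0.1 of a physical 4000 MHz core under full load.
entitled_pu=0.4
virtual_cpus=4
core_mhz=4000

effective_mhz=$(awk -v pu="$entitled_pu" -v vc="$virtual_cpus" -v mhz="$core_mhz" \
  'BEGIN { printf "%d", mhz * pu / vc }')
echo "Each virtual CPU behaves like a ${effective_mhz} MHz processor"
```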

It's up to you to decide for which LPARs you could afford such decrease in throughput and for which you couldn't.

To repeat it: All this is of virtually no importance when there is sufficient CPU to allot!

Please let me know whether I could make this a bit clearer or not.
I'm always here for a further discussion!



sminfoAuthor Commented:
hi wmp.. really busy.. tomorrow I'll be back!!! (like Terminator) :-)
sminfoAuthor Commented:
ok.. I don't fully understand yet this part: "You will find that in times of full throttle an LPAR will still see 4 CPUs, but that these will behave as if they were 400MHz chips (yet "lsattr" will still show the 4GHz value, don't get confused!)."

But I think I MUST always have a yellow bar in the "CPU pool" on all blades so that, if needed, an LPAR that needs more CPU can "borrow" it from the yellow bar. Isn't it?

In our case, we've assigned all CPU units (no yellow bar), and I think that's not good. Correct me if I'm wrong.
So the final idea is to assign the exact CPU units to our LPARs and always leave a yellow part on all VIOS, isn't it??
The yellow part just tells you how many processing units are left for configuring new LPARs.

If there is nothing left you can't create new LPARs, because there are no more units to assign to them.

This is not related to how cycles are distributed. It doesn't matter whether these cycles are taken from unassigned CPUs (yellow) or from the rest of the shared pool.

Under heavy load all cycles will get used up, and the red line cannot cross the upper border of the coloured area, whether part of it is yellow or not.

I admit, these colour games are a bit misleading.

There is a third area of unused, yet assigned units, which we can only see somewhat indirectly - it's the part above the red line up to the yellow area's lower boundary.

With lparstat you can see the "app" (Available Pool Processors) column, which is the unused part of the shared pool (this pool contains all available CPU cycles, assigned and unassigned).

To actually see this value you must authorize the respective LPAR to access it - on IVM you must use the command line for this:

chsyscfg -r lpar -i "name=lparname,allow_perf_collection=1"
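Two companion commands, as a sketch (treat the exact option availability as an assumption to verify on your IVM/HMC firmware level):

```shell
# List the current setting for all partitions:
lssyscfg -r lpar -F name,allow_perf_collection

# From inside an authorized LPAR, watch the "app" column
# (5-second interval, 3 samples):
lparstat 5 3
```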

The "4GHz/400MHz" part was not meant to explain the "CPU Pool" view, but to explain the single LPAR view.
Here is a green area meaning "Assigned" and a red line showing the actual consumption.
That's what I've been calling an "aesthetic" problem - only for reasons of a clean setup the red line should most of the time stay in the green area - but that's not mandatory at all - see my explanations in the previous comment.


sminfoAuthor Commented:
Umm.. understand now..

I checked our IVMs and I see some LPARs with allow_perf_collection=1 and some with allow_perf_collection=0, but I don't see differences in the graphs on lpar2rrd.. is that normal?

So, this third area should be minimal, I mean the assigned CPU units should be very close to the real "normal" used CPU units, no?

Then, what should I do? Monitor the CPU units of every LPAR for 1 month to see the average, and then assign them later according to the data..
sminfoAuthor Commented:
oops.. I forgot.. should I change allow_perf_collection to 1 on all LPARs?

And who changed this parameter in our case? Because most of the LPARs don't have this value set to 1.
allow_perf_collection has no meaning for lparutil data - these come via HMC/IVM from the machine's firmware.

The setting allows a partition's access to special registers which are not needed for normal LPAR operation, just for performance monitoring directly from the partition, outside lparutil.

Seems you're an aesthete - yes, it looks somewhat "better" when the red line does not leave the green area too often.
But please - do not take every peak into account, most probably you will not have enough CPU to cover them all.
That's exactly what micropartitioning was invented for: Taking unused CPU away from "idle" partitions to satisfy LPARs which need it, thus increasing the overall utilization of your system and saving your money - no need for "reserve" power just for a few peaks.

If you don't use nmon or lparstat for monitoring (or the topas CEC panel) you don't need allow_perf_collection.

As I said, lpar2rrd does not need "app", because the lparutil data contain the CPU consumption of all LPARs and "app" is just the difference of available and used units.
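As a toy illustration of that difference (the pool size and consumption figures below are invented):

```shell
# A 4-core shared pool in which all LPARs together consume 2.75
# processing units leaves app = 4.00 - 2.75 unused units.
pool_pu=4.00
consumed_pu=2.75

app=$(awk -v p="$pool_pu" -v c="$consumed_pu" 'BEGIN { printf "%.2f", p - c }')
echo "app (Available Pool Processors) = $app"
```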

A single LPAR does not see the consumption of other partitions so it can't calculate a difference. That's why the absolute value must be supplied if some kind of overall monitoring is desired directly from an LPAR.

Who changed it?

Well, when using an HMC you have the opportunity to enable perf_collection during LPAR creation (or even later, in the partition's "properties" panel).

Under IVM you can't enable perf_collection during LPAR creation nor via GUI afterwards, but only via CLI as I described above.

So I really can't tell you who changed it, sorry.

sminfoAuthor Commented:
ok wmp.. thanks for your time...

BTW, "aesthete".. never heard it in my life...

See ya!

