In this thread, wmp explained the difference between load average and CPU idle time.
I am a bit confused by what this means: "load average is the number of runnable processes."
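My rough understanding is that a "runnable" process is one that is ready to run right now (sitting on the run queue), as opposed to sleeping or waiting on I/O. On a Linux box I would sanity-check that with a quick Python sketch like the one below -- AIX's ps fields and state codes differ, and os.getloadavg may not be available there, so this is just to illustrate the idea, not something I have run on our system:

    import os
    import subprocess

    # Count processes currently in the runnable state and compare that
    # against the load averages. Uses Linux-style ps output, where the
    # "state" column shows R for runnable; AIX's flags/codes differ.
    states = subprocess.run(
        ["ps", "-eo", "state"], capture_output=True, text=True
    ).stdout.split()
    runnable = sum(1 for s in states if s.startswith("R"))

    load1, load5, load15 = os.getloadavg()  # not available on every Unix
    print(f"runnable now: {runnable}, load averages: {load1} {load5} {load15}")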
Here is the situation. We do telephone interviews. Each of 250 or so interviewers has a terminal session open (some telnet, some dumb terminals), and in that session they run the process from our software that handles displaying questions and recording answers. This process is called "survent", and for each interviewer a single survent process runs for the entire shift.
Usually each survent process takes 0.1-0.3% of CPU -- and the load average is relatively low (3-8).
Every once in a while the load average jumps up to 20-30, and lots of these survent processes start using much more CPU. I see a lot of them dancing around 1-5% each in topas.
Obviously I need to work with the vendor who makes the survent software -- but none of the explanations they have given so far for why it would use more CPU really fit what is happening.
So I am hoping that if I can understand better what "runnable process" means, I can point the vendor in the right direction of where to look.
NOTE: There is not really any change in what the interviewers are doing when all this happens. They are reading screens and entering codes for answers, as they do all the time. So I don't see why the number of runnable processes would suddenly increase.
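In case it helps, here is a rough watcher script I am thinking of running so I can grab a snapshot the next time the load spikes. It is an untested sketch: the threshold, check interval, and log path are numbers I made up, and it assumes uptime and ps -ef print in the usual formats:

    import re
    import subprocess
    import time

    THRESHOLD = 15.0   # load average we treat as a "spike" (my guess)
    INTERVAL = 30      # seconds between checks

    def load_1min():
        # Parse the 1-minute figure out of uptime, e.g.
        # "... load average: 22.10, 18.04, 9.73"
        out = subprocess.run(["uptime"], capture_output=True, text=True).stdout
        match = re.search(r"load averages?:\s*([\d.]+)", out)
        return float(match.group(1)) if match else 0.0

    while True:
        if load_1min() > THRESHOLD:
            # Log every survent line from ps so we can see what those
            # processes were doing during the spike.
            snapshot = subprocess.run(
                ["ps", "-ef"], capture_output=True, text=True
            ).stdout
            survents = [l for l in snapshot.splitlines() if "survent" in l]
            with open("/tmp/survent_spike.log", "a") as log:
                log.write(f"--- load spike at {time.ctime()} ---\n")
                log.write("\n".join(survents) + "\n")
        time.sleep(INTERVAL)

That way I would at least have ps output from during a spike to show the vendor.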
Any input/ideas would be greatly appreciated.
Thanks, wmp! Since you are surely the one who is going to be answering this. :)