Performance Monitor Counters

Can someone please help me interpret the following counters:

%Processor Time (scale 1.000) avg 10.00, min 0.00, max 46.00

%Disk Time (scale 1.000) avg 100.00, min 0.00, max 2039.00

Avg. Disk Read Queue Length (scale 100.000) avg 0.700, min 0.000, max 20.787

Avg. Disk Write Queue Length (scale 100.000) avg 0.020, min 0.000, max 0.093

Available MBytes (scale 1.000) avg 241.00, min 240.00, max 243.00

Pages/Sec (scale 1.000) avg 7.485, min 0.000, max 26.006
trevally8 asked:
 
Brian Pierce commented:
It all really depends on what you mean by "interpret".

I see no cause for concern in any of the counters, but as with most things of this nature the results are pretty meaningless unless there is something to compare them with. It's normally recommended that you "baseline" a machine soon after building it and note the values of various counters such as those above. You can then re-examine the counters at some later date, compare them with the baseline results, and begin to look for significant changes.
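The baseline-and-compare workflow described above can be sketched as a simple diff of counter averages. This is only an illustration: the counter names, the numbers, and the 25% change threshold are made-up values for the sketch, not recommendations.

```python
# Illustrative baseline: counter averages noted soon after building the machine
baseline = {
    r"\Processor(_Total)\% Processor Time": 10.0,
    r"\Memory\Available MBytes": 241.0,
    r"\Memory\Pages/sec": 7.485,
}

# Averages from a later monitoring run on the same machine
current = {
    r"\Processor(_Total)\% Processor Time": 38.0,
    r"\Memory\Available MBytes": 95.0,
    r"\Memory\Pages/sec": 8.1,
}

def significant_changes(baseline, current, threshold=0.25):
    """Return counters whose average moved more than `threshold`
    (as a fraction of the baseline value)."""
    changes = {}
    for name, base in baseline.items():
        now = current[name]
        if base != 0 and abs(now - base) / base > threshold:
            changes[name] = (base, now)
    return changes

for name, (base, now) in significant_changes(baseline, current).items():
    print(f"{name}: {base} -> {now}")
```

With these made-up numbers, % Processor Time and Available MBytes would be flagged for a closer look, while Pages/sec stays within its baseline range.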
 
trevally8 (Author) commented:
I understand what you are saying.

There is no basis for comparison.

Is it possible for anyone to tell me in detail what each counter and its corresponding results mean?

I think it will benefit other Experts Exchange members.

Please do not flame me if anyone thinks I am taking the easy way out.

I have already read a lot on the Internet about this topic but just cannot decipher what the results mean.

 
dovidmichel commented:
They really are meaningless.

As previously stated you need a baseline to work from.

Task Manager is a simple, easy-to-use tool for judging performance, and it is typically sufficient. Performance Monitor is great because of its full spectrum of objects and counters.

Task Manager can show you a lot more if you go to the Processes tab and choose View, then Select Columns.
 
In either case you really do need that baseline.
Start when the system is running great, chart it.
Then recreate the problem and chart it.
Now compare the two.

Regarding your stats, the best I can say is they all look normal. The Disk Time went pretty high, but that would be normal for heavy disk activity. Basically, though, it is meaningless on its own. It works mainly off two values, count and time. The counter for an object can spike off the chart, but if it did not happen at the time of the problem, who cares. Scale also comes into play: if the scale is incorrect, it can make a problem look like nothing, or nothing look like a problem.
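On the Disk Time spike specifically: % Disk Time is commonly described as being derived from Avg. Disk Queue Length multiplied by 100, which would explain why it can exceed 100% (the 2039.00 max in the posting). A rough sketch of that relationship, assuming that derivation holds:

```python
def pct_disk_time(avg_disk_queue_length):
    # % Disk Time is (reportedly) Avg. Disk Queue Length expressed as a
    # percentage, so an average of more than one queued I/O during a
    # sample interval pushes it past 100%.
    return avg_disk_queue_length * 100.0

# The posted read-queue peak of 20.787 lands in the same ballpark as
# the posted % Disk Time peak of 2039.00.
print(round(pct_disk_time(20.787), 1))
```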

 
trevally8 (Author) commented:
As I have already mentioned, I know the stats are meaningless unless there is a baseline to compare against.

What I am asking is eg:

%Processor Time (scale 1.000) avg 10.00, min 0.00, max 46.00

%Disk Time (scale 1.000) avg 100.00, min 0.00, max 2039.00

Avg. Disk Read Queue Length (scale 100.000) avg 0.700, min 0.000, max 20.787

Avg. Disk Write Queue Length (scale 100.000) avg 0.020, min 0.000, max 0.093

Available MBytes (scale 1.000) avg 241.00, min 240.00, max 243.00

Pages/Sec (scale 1.000) avg 7.485, min 0.000, max 26.006

What do the counters above mean?

I am not asking whether the stats are good or bad.
 
dovidmichel commented:
The scale is 1, well duh. Seriously though, you can move the scale. As posted here it is meaningless, but when a PerfMon log is loaded in PerfMon you can move the scale up or down. If the scale is too low, the line for that counter goes off the top of the chart; if the scale is too high, it just looks like a straight line. Picture this: you are looking at a graph and see a line running straight across the bottom. You move the scale, and now that straight line has peaks and dips. That is what the scale is all about.

The rest is just what it seems, no tricks there: you have the average, the minimum, and the maximum values that counter reached during the monitoring period.
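In other words, the stats line reports the raw average, minimum, and maximum over the monitoring period, while the scale factor only multiplies what gets drawn on PerfMon's 0-100 chart. A minimal sketch of that arithmetic, using made-up samples plus the 20.787 peak from the posting:

```python
# Hypothetical raw samples for Avg. Disk Read Queue Length
samples = [0.0, 0.3, 0.7, 1.2, 20.787, 0.5]
scale = 100.0  # the "scale 100.000" shown next to the counter

# What the value bar reports: raw stats, unaffected by scale
stats = {
    "avg": sum(samples) / len(samples),
    "min": min(samples),
    "max": max(samples),
}

# What gets drawn on the chart: raw value times scale, so with this
# scale the 20.787 peak would plot at 2078.7 -- far off a 0-100 graph
plotted = [s * scale for s in samples]

print(stats["max"], max(plotted))
```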

There are some docs on PerfMon at the Microsoft site, but it really does take a lot to know it: not only how to use it, but which objects and counters are best for a given problem. Often you are not sure what is causing the problem in the first place (well duh, that's why you're using it to begin with), so you add a lot of objects and counters to get a more complete picture of what is going on with the system when the problem happens.

Great, now you have a graph with a hundred or so counters and it just looks like a mess. So you go through each in turn to see whether it is normal or relates to the problem, until only those relating to the problem are left. As you can imagine, it can take hours.

For some things it can really be great, and some apps add their own counters, which can also be very helpful. For the most part, though, I hardly ever use it any more; too much work. These days I mostly just use Task Manager.
 
trevally8 (Author) commented:
NO
Question has a verified solution.

Are you are experiencing a similar issue? Get a personalized answer when you ask a related question.

Have a better answer? Share it in a comment.

All Courses

From novice to tech pro — start learning today.