The formula for Memory Pages Per Second

I am watching some of our servers and I am starting to notice an odd trend. Maybe not a trend so much as a memory overload, I guess you would call it. We have some servers with 512 MB of RAM and one dual-core 3.6 GHz Xeon CPU where Pages/sec averages 46 with a max of about 976. Then we have other servers with 4 GB of RAM and two quad-core 2.0 GHz Xeon CPUs where Pages/sec averages about 920 and maxes out at about 3,833. These servers are just the extreme ends; we have multiple locations.

I have alarms set up to email me when memory usage gets too high, and I have been getting a lot of those lately. The function of each server is almost identical at every location. There is no SQL or Exchange or anything like that on any server. The software they run is, unfortunately, AVG and CounterSpy; for the rest, they are generally just file servers. The emails I receive are based on a formula I found that says the Pages/sec alert threshold should be set in the range of 50% to 70% of the RAM installed in the server. The trend I am seeing is that the more RAM a server has, the higher its Pages/sec, and consequently the more emails I get saying there is a problem.

Now that the background is in place, the main question is this: in relation to the RAM installed, at what level of Pages/sec should I become concerned? This whole thing started when a drive failed due to heavy disk thrashing on a low-memory, low-CPU server that was the only server at a critical location. The reason we were lax about upgrading that server was that it is a site that could not be down. And we PAID for that decision.
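For concreteness, here is a minimal sketch of what that quoted rule works out to for the two extreme servers above, assuming "50% to 70% of the RAM" means the RAM size in MB read directly as a Pages/sec band (the rule as quoted never says, which is part of the problem):

servers = {
    "512 MB, 1x dual-core 3.6 GHz Xeon": {"ram_mb": 512,  "avg_pps": 46,  "max_pps": 976},
    "4 GB, 2x quad-core 2.0 GHz Xeon":   {"ram_mb": 4096, "avg_pps": 920, "max_pps": 3833},
}

for name, s in servers.items():
    # Reading the quoted rule literally: alert band = 50-70% of RAM in MB,
    # treated as a Pages/sec figure. The units don't actually line up,
    # which is the crux of the question being asked here.
    low, high = 0.50 * s["ram_mb"], 0.70 * s["ram_mb"]
    print(f"{name}: alert band {low:.0f}-{high:.0f} pages/sec; "
          f"observed avg {s['avg_pps']}, max {s['max_pps']}")

Under that literal reading, both servers' observed maximums blow past their bands, which would explain the flood of alert emails.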
mehherc Asked:
 
PlaceboC6 Commented:
A busy file server can run low on kernel memory (pool) resources, which will cause loss of access and a really slow console.

Here is some information on that as well:

http://blogs.technet.com/askperf/archive/2007/03/07/memory-management-understanding-pool-resources.aspx
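If you want to check that without opening perfmon, here is a minimal sketch that samples the two pool counters with the built-in Windows typeperf tool. The counter paths are the standard Memory object ones, but the parsing assumes typeperf's default CSV output layout (a quoted header row, then quoted data rows):

import subprocess

COUNTERS = [
    r"\Memory\Pool Paged Bytes",
    r"\Memory\Pool Nonpaged Bytes",
]

# One sample; typeperf writes CSV to stdout.
out = subprocess.run(
    ["typeperf", *COUNTERS, "-sc", "1"],
    capture_output=True, text=True, check=True,
).stdout

rows = [line for line in out.splitlines() if line.startswith('"')]
values = [v.strip('"') for v in rows[1].split('","')][1:]  # rows[0] is the header; drop the timestamp field
for counter, value in zip(COUNTERS, values):
    print(f"{counter}: {float(value) / 2**20:.1f} MB")

On 32-bit systems the pools are capped at a few hundred MB, so a busy file server creeping toward those limits is worth catching early.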
 
PlaceboC6 Commented:
Maybe this will help you.  Pages/sec doesn't mean you are definitely running out of memory.

http://support.microsoft.com/kb/139609
 
mehherc (Author) Commented:
Thanks for the quick response. I do understand it does not necessarily mean I am running out of memory; it is just indicative of something accessing the memory. I know everything accesses memory, true, but I am trying to monitor the activity to prevent the thrashing. The correlation is this: when Pages/sec is high, the HDD disk queue increases, which makes the platters and arms work harder to catch up, creating undue work for the drives. When I sit at a server and watch those values and listen to the HDDs (yes, I put my ear to the cabinet), you can hear them churning harder and harder. I am basically trying to figure out a formula to guesstimate when a problem might occur. It just seems that when Pages/sec goes up, issues and problems occur.
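To put numbers behind that ear-to-the-cabinet correlation, a rough sketch along these lines could sample Pages/sec next to the disk queue length (again via the built-in typeperf tool); the 5-second interval and the flag thresholds are illustrative guesses, not established limits:

import subprocess

COUNTERS = [
    r"\Memory\Pages/sec",
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
]

# 12 samples at 5-second intervals; adjust to taste.
out = subprocess.run(
    ["typeperf", *COUNTERS, "-si", "5", "-sc", "12"],
    capture_output=True, text=True, check=True,
).stdout

rows = [line for line in out.splitlines() if line.startswith('"')][1:]  # skip header
for row in rows:
    ts, pages, queue = (field.strip('"') for field in row.split('","'))
    # Illustrative thresholds only -- not vendor guidance.
    flag = "  <-- possible thrashing" if float(pages) > 1000 and float(queue) > 2 else ""
    print(f"{ts}  Pages/sec={float(pages):8.1f}  DiskQueue={float(queue):5.2f}{flag}")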
 
PlaceboC6 Commented:
Here's the problem: a high Pages/sec could simply mean that process A is accessing data in the memory space of process B.

If you want to narrow down what is causing the high I/O, you can display all instances (in perfmon) of these counters:

Process
----
IO Read Bytes/sec
IO Write Bytes/sec

Then scroll through the many processes to see which ones have the highest values.
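A scripted take on the same idea, for anyone who prefers not to scroll the instance list by hand: this sketch assumes the third-party psutil package (pip install psutil) and ranks processes by I/O bytes per second between two snapshots. Run it elevated so system processes are visible.

import time
import psutil

def io_snapshot():
    """Capture cumulative read/write bytes for every visible process."""
    snap = {}
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            io = proc.io_counters()
            snap[proc.info["pid"]] = (proc.info["name"], io.read_bytes, io.write_bytes)
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            pass  # skip processes we can't read
    return snap

INTERVAL = 10  # seconds between snapshots
before = io_snapshot()
time.sleep(INTERVAL)
after = io_snapshot()

# Convert the deltas to bytes/sec and keep the top ten.
rates = []
for pid, (name, r1, w1) in after.items():
    if pid in before:
        _, r0, w0 = before[pid]
        rates.append((name, pid, (r1 - r0) / INTERVAL, (w1 - w0) / INTERVAL))

for name, pid, read_bps, write_bps in sorted(rates, key=lambda x: x[2] + x[3], reverse=True)[:10]:
    print(f"{name:<25} pid={pid:<6} read={read_bps/1024:8.1f} KB/s  write={write_bps/1024:8.1f} KB/s")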

This link may help as well:

http://www.microsoft.com/downloads/details.aspx?familyid=09115420-8C9D-46B9-A9A5-9BFFCD237DA2&displaylang=en

As far as correlating Pages/sec going up with a problem... I suppose I would need to know what sort of problem. It is possible that if the server is REALLY busy, kernel resources (on x86) could become exhausted, which could lead to a slow console and a lack of network connectivity.

Truth is, you really need to look at a lot of counters together to know what's going on. Pages/sec by itself is not a good indicator.

This link, although written for an Exchange server, pretty much applies to everything.  It is very useful and I use it as a guideline when troubleshooting performance.

http://technet.microsoft.com/en-us/library/bb124328.aspx
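As a starting point for watching several counters together, something like this sketch could log a baseline set to CSV with typeperf; the counter list and the one-hour window are only suggestions, not taken from the linked articles:

import subprocess

COUNTERS = [
    r"\Memory\Pages/sec",
    r"\Memory\Available MBytes",
    r"\Memory\Pool Paged Bytes",
    r"\Memory\Pool Nonpaged Bytes",
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
    r"\Processor(_Total)\% Processor Time",
    r"\Paging File(_Total)\% Usage",
]

# 240 samples at 15-second intervals = one hour of baseline, logged to CSV.
# -y answers the overwrite prompt if baseline.csv already exists.
subprocess.run(
    ["typeperf", *COUNTERS, "-si", "15", "-sc", "240",
     "-f", "CSV", "-o", "baseline.csv", "-y"],
    check=True,
)

Reviewing an hour of those side by side tells you far more than any single Pages/sec spike.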


 
mehherc (Author) Commented:
PlaceboC6,
I will have to update you later on what I find. I just wanted to describe the issues that occur: when Pages/sec goes high, the disk queue length goes up, and then users complain about the "sluggish" servers; accessing files with a high queue on the HDDs will of course feel sluggish. The high queues cause heavy, unnecessary disk usage and increase the possibility of disk failure. I am badly in need of streamlining these servers, and this is just one step in the process of completing that task.
 
mehherc (Author) Commented:
I apologize for taking so long to get back. Per the two links supplied on 1-31, the servers appear to fall within tolerable ratings/settings. The link posted on 2-01 actually jogged my memory. It has been such a long time since I looked at virtual memory settings for servers that I had forgotten about streamlining the virtual memory; I do it normally on workstations but usually leave servers at the defaults. My page file usage has dropped from about midpoint on the Task Manager graph to close to the bottom now. My CPU utilization has dropped, disk access has dropped some, and it has even helped drop my kernel memory a little (it shouldn't have, but it did). This all takes me back to the K.I.S.S. method. In case you don't know it: Keep It Stupid Simple, or the ever-famous insulting way, Keep It Simple, Stupid. Thanks for helping kick-start some gray matter that hasn't been used for a while. I am going to make the adjustments on all servers and monitor. You gotta love watching a heartbeat graph.
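For watching that page file graph from a script instead of Task Manager, a small sketch like this (again assuming the third-party psutil package) polls page file usage; whether psutil's percentage matches the Task Manager graph exactly is an assumption:

import time
import psutil

# Six readings, ten seconds apart.
for _ in range(6):
    swap = psutil.swap_memory()  # reports the page file on Windows
    print(f"page file: {swap.used / 2**20:7.1f} / {swap.total / 2**20:7.1f} MB "
          f"({swap.percent:.1f}%)")
    time.sleep(10)

Run before and after the virtual-memory changes on each server, it gives a quick apples-to-apples comparison across all the locations.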
 
mehherc (Author) Commented:
I appreciate the mental jump start.