Solved

Page Faults

Posted on 2002-04-01
248 Views
Last Modified: 2010-04-13
Hi,
Can a large number of page faults slow the CPU down to a significant extent? I noticed that the Microsoft SQL Server 2000 Service Manager had caused more than 10 million page faults, while CPU load seemed normal (after 54 days of server uptime). Is this normal? When I restart the service, the CPU load drops and responsiveness improves. I need some comprehensive information on the paging mechanism of the NT kernel to explain this. I'm looking forward to your comments.
Thanks
Question by:omavideniz
3 Comments
 

Expert Comment

by:AugCH
If I remember correctly, page faults occur when a page your server needs is not in RAM and has to be brought in from disk (or written out to make room). If that happens a lot, it slows the server down.

Information about this is in the Microsoft NT 4 Server course (the section on using Performance Monitor).
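
You can also read the same counter from code. Here is a minimal sketch, assuming the Win32 PSAPI interface (link with psapi.lib); the PID command-line argument is just for illustration:

    /* Read the cumulative page-fault count of a process, given its PID.
       This is the same figure Task Manager and Performance Monitor show. */
    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        DWORD pid = (DWORD)atoi(argv[1]);
        HANDLE proc = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                                  FALSE, pid);
        if (!proc) {
            fprintf(stderr, "OpenProcess failed (%lu)\n", GetLastError());
            return 1;
        }

        PROCESS_MEMORY_COUNTERS pmc;
        pmc.cb = sizeof(pmc);
        if (GetProcessMemoryInfo(proc, &pmc, sizeof(pmc))) {
            printf("page faults : %lu\n", pmc.PageFaultCount);
            printf("working set : %lu KB\n",
                   (unsigned long)(pmc.WorkingSetSize / 1024));
        }
        CloseHandle(proc);
        return 0;
    }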

CHA
 
LVL 44

Expert Comment

by:CrazyOne
It could be a bad RAM module or two. Try pulling the modules one at a time and testing to see what happens. Perhaps the CPU is overheating; you might want to monitor it and the fan or fans on the CPU.


The Crazy One
 
LVL 14

Accepted Solution

by:AvonWyss earned 100 total points
The page faults are nothing to worry about. When memory-mapped files are used as the data I/O mechanism, there will by design be many page faults: one for every page that is accessed in the mapping but is not yet (or no longer) in physical memory. Databases usually use memory-mapped files because they are very efficient and require less overhead (fewer OS calls) than normal file I/O when lots of small data chunks are being read and written (and while I don't have the source code of SQL Server, I'm pretty sure this applies to it as well).
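
To illustrate the mechanism, here is a generic Win32 sketch (not SQL Server's actual I/O code; the file name data.bin is hypothetical). Mapping a file and then touching its pages raises a page fault on each first access to a page that is not resident, and the memory manager resolves it by reading the page in from the file:

    /* Minimal memory-mapped file read on Win32.
       Each first touch of a page that is not yet resident raises a
       page fault; the memory manager pages it in transparently. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE file = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ,
                                  NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
        if (!mapping) { CloseHandle(file); return 1; }

        const unsigned char *view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        DWORD size = GetFileSize(file, NULL);

        /* Touch one byte per 4 KB page: every access to a non-resident
           page is counted as a page fault, even though nothing fails. */
        unsigned long sum = 0;
        for (DWORD off = 0; view && off < size; off += 4096)
            sum += view[off];
        printf("checksum %lu over %lu bytes\n", sum, (unsigned long)size);

        if (view) UnmapViewOfFile(view);
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }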

The improved responsiveness is more likely due to the huge amount of memory that SQL Server allocates for in-memory records, indexes, and caches. The increased memory load leaves little (or virtually no) free physical RAM, so the memory of less-used applications gets swapped out to disk. When that memory has to be loaded back into RAM, it takes time and gives the impression of a less responsive system.
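
You can see this memory pressure directly; here is a minimal sketch using the Win32 GlobalMemoryStatus call. Very low available physical RAM is the sign that other processes' working sets are being trimmed and paged out:

    /* Report physical RAM and pagefile headroom. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUS ms;
        ms.dwLength = sizeof(ms);
        GlobalMemoryStatus(&ms);
        printf("memory load : %lu%%\n", ms.dwMemoryLoad);
        printf("physical RAM: %lu of %lu KB free\n",
               (unsigned long)(ms.dwAvailPhys / 1024),
               (unsigned long)(ms.dwTotalPhys / 1024));
        printf("pagefile    : %lu of %lu KB free\n",
               (unsigned long)(ms.dwAvailPageFile / 1024),
               (unsigned long)(ms.dwTotalPageFile / 1024));
        return 0;
    }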

Good practice suggests keeping the system partition, the swap-file partition, and the data partition (databases, for instance) on separate physical (!) drives. This greatly improves performance, since most disk operations then do not have to be queued behind one another.
