SQL Server 2005 Memory Utilization

Posted on 2008-10-24
Last Modified: 2012-05-05
Hi Experts, I am relatively new to SQL. These are the counters; can you please explain the values, how they can be compared so a conclusion can be drawn, and how they are related to each other?

1. Process: Working Set: 3.78 GB
2. SQL Server: Buffer Manager: Buffer Cache Hit Ratio: 99.88%
3. SQL Server: Buffer Manager: Total Pages: 202608 (Perfmon counter)
4. SQL Server: Memory Manager: Total Server Memory (KB): 1.62 GB
Question by:Sandeepiii

Expert Comment

by:Ted Bouskill
ID: 22801725
With the 'Buffer Cache Hit Ratio', higher is better. At nearly 100%, SQL is hardly reading from disk; nearly all of the data is coming from the RAM cache. That is very good.

Total Pages is the number of 8 KB data pages in memory, which works out to roughly 1.5 GB, and of course Total Server Memory is the total amount SQL is using.

If your server has 3.5 GB of RAM you are in good shape.
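As a cross-check on that arithmetic: SQL Server data pages are a fixed 8 KB, so the Total Pages counter converts directly to buffer cache size. A quick sketch using the 202608 figure from the question:

```sql
-- SQL Server pages are always 8 KB, so Total Pages * 8 = buffer cache size in KB.
SELECT 202608 * 8                    AS buffer_cache_kb,  -- 1620864 KB
       202608 * 8 / 1024.0           AS buffer_cache_mb,  -- ~1583 MB
       202608 * 8 / 1024.0 / 1024.0  AS buffer_cache_gb;  -- ~1.55 GB
```

That lines up closely with the reported Total Server Memory of 1.62 GB, since the buffer pool dominates SQL Server's memory use.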

Expert Comment

by:Mark Wills
ID: 22801890

1) Process: Working Set: if close to the VM figure in Task Manager, then there are no external pressures on SQL, meaning it is using available memory; the higher it is, the less it is being swapped out (though VM and physical memory are different). Ideally this matches physical memory.

2) Buffer Cache Hit Ratio: how often pages are served from the memory cache; should be > 96%, and the higher the better.

3) Total Pages: just an indication of size, not really all that useful on its own (though the Buffer Manager measures are important, see below); a high count simply reflects a larger buffer cache. Page life expectancy and cache hit ratios are more important.

4) Total Server Memory: amount of physical memory SQL is using.

I have found that most statistics follow the 80/20 model, where more than 80% is either very good or very bad depending on what you are measuring. In terms of efficiency measures, > 80% is a good thing; in terms of resource availability, > 80% might indicate "pressures", ie competition for resources.

The above would indicate that memory is running OK. To really check, you need to gather a few more stats:

1) % Processor Time: if > 80% then SQL needs a lot of "think" time: query plans, memory, poorly written queries (ie tons of loops and table scans)
2) Processor Queue Length: if > 20 (or 5 times the number of processors) then look at other applications on the system, or recompiles and compiles (and auto-stats on frequently changing data sets). Might then want to check batches per sec and compiles / recompiles per sec.
3) Page Life Expectancy: the higher the number the better; if less than, say, 200 then the cache is being flushed far too frequently. Look at checkpoint processing and logs, or add memory; OK if it dips occasionally, but not if it is consistently low.
4) Lazy Writes/sec: should flow consistently; things like flushing a large data cache from buffers to disk can cause added CPU pressure. Also check Checkpoint Pages/sec; combined with 3) these indicate buffer management health, mainly trend-type information, and probably best looked at as a group...
5) Buffer Cache Hit Ratio: as mentioned above, > 80%
6) Cache Hit Ratio: SQL Plans: should be > 80%, otherwise it indicates low plan re-use (time for stats updates)
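If setting up Perfmon is a hassle, most of these counters can also be read from inside SQL Server 2005 via the sys.dm_os_performance_counters DMV. A sketch (the exact object_name prefix depends on the instance name, hence the LIKE; ratio counters must be divided by their matching "base" counter):

```sql
-- Buffer cache hit ratio: the raw counter must be divided by its base counter.
SELECT (a.cntr_value * 100.0) / NULLIF(b.cntr_value, 0) AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON  b.object_name  = a.object_name
  AND b.counter_name = 'Buffer cache hit ratio base'
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND a.object_name LIKE '%Buffer Manager%';

-- Page life expectancy, lazy writes and checkpoint pages are plain values.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy', 'Lazy writes/sec', 'Checkpoint pages/sec');
```

Note that the /sec counters in this DMV are cumulative totals; to get a per-second rate you have to sample twice and divide by the elapsed time.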

Arguably the "easiest" litmus test for SQL Server health is the Buffer Manager object; the biggest problems are:
Low Buffer cache hit ratio
Low Page life expectancy
High number of Checkpoint pages/sec
High number Lazy writes/sec

Then % Processor Time, Processor Queue Length and context switching: competing resources for "think" / "compute" time.

Then there are disk IO measures, but that is a whole different story, compounded by RAID weightings etc (e.g. RAID 5 is roughly twice as slow on writes compared to RAID 10):
Avg. Disk Queue Length: the lower the better (think single thread, ie < 2)
Avg. Disk Sec/Read: the lower the better, think in terms of < 20ms; same again for writes
Physical Disk: % Disk Time: the lower the better (ideally around 20%); if > 50% then problems (fragmentation / IO bottlenecks)
Avg. Disk Reads/Sec (and same again for writes): these numbers deteriorate rapidly once disks are > 80% capacity (ie disk access time increases exponentially)
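For the disk side, per-file latency can also be sampled from inside SQL Server via the sys.dm_io_virtual_file_stats function, which gives the < 20ms yardstick above without touching Perfmon. A sketch (figures are cumulative since the last service start, so look at trends, not single samples):

```sql
-- Average IO latency (ms) per database file, cumulative since SQL Server started.
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY avg_read_ms DESC;
```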

And then there is more: knowing what to do with those measures, where to fix, etc. Generally, performance problems are most often SQL code / application, then hardware (physical), and so on...


Expert Comment

by:Mark Wills
ID: 22801895
"resource availability then > 80%" should have said "resource consumption then > 80%"

Author Comment

ID: 22802486
Thanks. "Total server memory: amount of physical memory SQL is using": I am a little confused. Does physical memory mean RAM only, or something else? There is 3.5 GB of RAM, so does this mean it is not fully utilizing RAM? Is that what the Total Server Memory counter means?
Can you please explain the 80/20 model a little more?

Accepted Solution

Mark Wills earned 450 total points
ID: 22803742
SQLServer:Memory Manager: Total Server Memory (KB): well, technically speaking it is actually the committed memory from the buffer pool being consumed at the time, not really the total memory used by SQL Server; it excludes things like connection memory, optimizer memory, locks and a few other bits... It needs to be less than physical memory (80% comes to mind).

SQLServer:Memory Manager: Target Server Memory (KB): tells you how much dynamic memory SQL Server can consume. If the SQLServer:Memory Manager: Total Server Memory (KB) counter is greater than or equal to the SQLServer:Memory Manager: Target Server Memory (KB) counter (or above, say, 80% of it), then your machine most definitely could use more physical memory. In fact, check the working set (process) memory as a more accurate indicator of memory consumption. The SQL Server memory measures need to be combined to get the real picture (and then there are some special considerations for AWE).
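That Total vs Target comparison can be made with one query against the performance counter DMV; a sketch (the instance-name prefix in object_name varies, hence the LIKE):

```sql
-- If Total Server Memory is pressing up against Target Server Memory,
-- SQL Server wants more memory than it is currently able to get.
SELECT counter_name, cntr_value AS kb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');
```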

SQL Server Memory Manager measures are also impacted by the low and high memory settings, and really need to be left alone, unless you know what is really happening in terms of memory allocation and distribution and are knowingly and knowledgeably making deliberate adjustments.

Physical memory is RAM, whereas virtual memory is the swap file; dynamic memory is the memory allocation immediately required by an application. Virtual memory is really the swap file, where the least often used pages of memory are not flushed but swapped out of RAM onto disk. While disk IO is generally a bad thing, the way virtual memory works, it is probably the fastest disk IO you will see. All disk IO is kernel mode, so essentially serial in operation; however, a swap file IO will escalate ahead of other outstanding IO requests, effectively becoming the next IO request. Some machines have on-board caching and separate IO channels to help make this operation even faster (but those specialised chips and controllers get very expensive very quickly). Virtual memory should be at least the same size as physical memory; better still, look at the numbers Windows would prefer to allocate. SQL Server uses the same "notification" services as the Windows operating system regarding High or Low memory (ie for virtual memory / page swapping).

Could write a book on it and still not quite explain everything... For example, Virtual Address Space (VAS) is the number of addresses that can be consumed; 64-bit obviously has more than 32-bit, but an application can run out of it while physical memory may still be available. Go figure... And then try to explain to someone with AWE active that yes, technically it does increase addressable memory, but SQL Server only really offsets hash space into those areas. Memory that is allocated through the AWE mechanism cannot be paged out, which could have a detrimental effect for some applications; in 64-bit it is known as locked pages (even though AWE is kind of an oxymoron in 64-bit parlance). So, in order to use AWE, the Lock Pages in Memory privilege must be enabled. But that is a different story...

So, you can see that there is actually a heck of a lot behind your question. Often the "easy" explanation is to say that SQLServer:Memory Manager: Total Server Memory is the amount of physical memory (RAM) being consumed; technically that is not correct, but it avoids lengthy explanations. You are quite correct to question it. The total use of physical memory on the system is more accurately the combination of:
1) Process object :
1.a) Working Set counter for each process
2) Memory object : three major indicators
2.a) Cache Bytes counter for system working set
2.b) Pool Nonpaged Bytes counter for size of unpaged pool
2.c) Available Bytes

The 80/20 rule is my own "rule of thumb"; it just seems to work that way. When a disk gets 80% full, read / write performance starts to deteriorate considerably. When processor time is 80% or more, the CPU is having problems (which could be due to a wide number of things). 80% of performance problems are normally SQL code / management problems, like indexes (sometimes over-indexed), statistics, unoptimised code, table design, caching, or architectural problems like file allocation (ie database files on physical disks), log management, database sizing etc. Then there is this: 80% of problems can be fixed with 20% of the effort, in terms of identification (it might take a long time to actually execute those findings). The last 20% of problems will take 80% of the effort and are just not worth the pain; you really have to know the performance measures and problems backwards to even venture into those waters, and then the money thrown at the problem could readily be spent on more physical attributes to achieve the same outcomes with less effort (probably 20%).

There is a lot more to your question than the 250 points would indicate, given the depth of detail you wanted to know about ;)

Why are you asking? Do you think you have performance problems? Then start gathering some of the other metrics; the numbers you have really are insufficient to diagnose, but on the surface they look good...

Assisted Solution

by:Ted Bouskill
Ted Bouskill earned 50 total points
ID: 22803817
Just to be clear, the O/S always needs at least 1 GB available for its own use. For example, on your system I'd recommend you cap SQL to a maximum of 2.5 GB. If SQL usage spikes, paging to disk is a performance killer. It's better to cap memory usage and have SQL clear some objects out of memory and read from disk than to serialize memory and page to disk.
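A cap like the 2.5 GB suggested above is set with sp_configure; a sketch (the value is in MB, and 'max server memory' is an advanced option, so it has to be exposed first):

```sql
-- Cap the buffer pool at 2.5 GB (2560 MB) so the O/S keeps its headroom.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2560;
RECONFIGURE;
```

The change takes effect without a restart, though SQL Server releases memory gradually rather than all at once.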

Author Comment

ID: 22803972
How do I increase the points?

Author Closing Comment

ID: 31509698
The answer was very explanatory, absorbing and very detailed. Excellent.

Expert Comment

by:Mark Wills
ID: 22804064
Well, in reality, it does kind of depend on your operating system and version of SQL Server.

Windows 2003 (Standard) will support 4 gig, but in reality the lower 2 gig is reserved for "private" memory allocation (application space) and the upper 2 gig for the operating system. To enable SQL Server 2005 Standard to use more than the 2 gig, you do have to use the /3GB switch and configure appropriately. Then there are things like Hot Swap Memory auto-setting the /PAE switch, and NUMA-aware memory, where it automatically assumes the AWE options are available.

Then the min server memory and max server memory settings really refer to the buffer pool and how SQL Server is going to request (or stop requesting) memory. If it is a dedicated server with a dedicated SQL Server engine (not multiple instances), then I still think it is best left to the operating environment before you start playing with those settings. Have a look at : to better understand what min and max are really all about.

If you are running Enterprise, or 64-bit, and have shared applications or multiple instances, then yes, it becomes more important (if not critical with multiple instances of SQL Server) to intervene and manually set some of those memory settings...

So, with SQL Server 2005 Standard Edition, unless you are suffering, I still reckon: don't touch. In reality those settings are designed to restrict / control the amount of memory allocated to the buffer pool in SQL Server, not really designed to dynamically allocate more memory than can be natively managed.

Have a look at Books Online: search for memory (SQL Server) and look at the topic on servers (it describes memory, and the "must do" of setting min and max if running multiple instances); have a look at NUMA, which is interesting (if that way inclined); have a look at the max server memory option (which describes the buffer pool), then follow the link to Optimizing Server Performance Using Memory Configuration Options down the bottom. Some of the other links kind of pass over the definition of min and max, referring generically to "SQL Server memory", which is not really 100% technically accurate and often leads to a misunderstanding of what the min and max server settings are used for...


Expert Comment

by:Mark Wills
ID: 22804106
Thanks for your kind words; I didn't see the notification until after I had posted the above. Thank you for increasing the points as well (although the kind words are more appreciated)...
