Sandeep rathore (India)

asked on

SQL Server 2005 Memory Utilization

Hi Experts, I am relatively new to SQL. These are the counters; can you please explain the values, how they can be compared, how a conclusion can be drawn, and how they relate to each other?

1. Process: Working Set: 3.78 GB
2. SQL Server: Buffer Manager: Buffer Cache Hit Ratio: 99.88%
3. SQL Server: Buffer Manager: Total Pages: 202608 (Perfmon counter)
4. SQL Server: Memory Manager: Total Server Memory (KB): 1.62 GB
Thanks
Ted Bouskill (Canada)

With the Buffer Cache Hit Ratio, higher is better. At nearly 100%, SQL is hardly reading from disk; nearly 100% of the data is coming from the RAM cache. That is very good.

Total Pages is the number of data pages in memory, which is roughly 1.5 GB, and of course Total Server Memory is the total amount SQL is using.
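Since each SQL Server data page is 8 KB, the Total Pages counter converts directly to memory. A quick sketch of the arithmetic, using the values from the question:

```python
# Each SQL Server buffer pool data page is 8 KB.
PAGE_SIZE_KB = 8

total_pages = 202608  # SQL Server: Buffer Manager: Total Pages (from the question)

buffer_pool_kb = total_pages * PAGE_SIZE_KB
buffer_pool_gb = buffer_pool_kb / (1024 * 1024)

print(f"Buffer pool: {buffer_pool_kb} KB (~{buffer_pool_gb:.2f} GB)")
# ~1.55 GB, which lines up with the 1.62 GB Total Server Memory counter
```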

If your server has 3.5 GB of RAM you are in good shape.

1) Process: Working Set: if this is close to the VM size in Task Manager, then there is no external pressure on SQL, meaning it is using the available memory; the higher it is, the less it is being swapped out (though VM and physical memory are different things). Ideally this matches physical memory.

2) Buffer Cache Hit Ratio: how often pages are served from the memory cache; should be > 96%, and the higher the better.

3) Total Pages: just an indication of size, not really all that useful on its own (though the Buffer Manager measures are important; see below). High counts might indicate smaller page sizes; Page Life Expectancy and the cache hit ratios are more important.

4) Total Server Memory: the amount of physical memory SQL is using.


I have found that most statistics follow an 80/20 model, where more than 80% is either very good or very bad depending on what you are measuring. For efficiency measures, > 80% is a good thing; for resource availability, > 80% might indicate "pressure", i.e. competition for resources.

The above would indicate that memory is running OK... but to really check, you need to gather a few more stats:

1) % Processor Time: if > 80%, SQL needs a lot of "think" time: query plans, memory, poorly written queries (i.e. tons of loops and table scans).
2) Processor Queue Length: if > 20 (or 5 times the number of processors), suspect other applications on the system, or recompiles/compiles (and auto-stats on frequently changing data sets). You might then want to check batches per sec and compiles/recompiles per sec.
3) Page Life Expectancy: the higher the number the better; if less than, say, 200, the cache is being flushed far too frequently. Look at checkpoint processing and logs, or add memory. Occasional dips are OK, but it should not be consistently low.
4) Lazy Writes/sec: should flow consistently; things like flushing a large data cache from the buffers to disk can add CPU pressure. Also check Checkpoint Pages/sec; combined with 3) it indicates buffer management health. Mainly trend-type information, and probably best looked at as a group...
5) Buffer Cache Hit Ratio: as mentioned above, > 80%.
6) Cache Hit Ratio (SQL Plans): should be > 80%, otherwise it indicates low plan re-use (time for stats updates).
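The checks above can be sketched as one function. This is only a rough illustration: the cut-offs are the rules of thumb from this thread, not official limits, and the function and field names are mine, not any real API.

```python
# Rule-of-thumb health checks for the CPU and memory counters listed above.
# Thresholds are the heuristics from this thread, not hard limits.
def check_counters(c, num_cpus):
    warnings = []
    if c["pct_processor_time"] > 80:
        warnings.append("CPU: lots of 'think' time (plans, memory, poorly written queries)")
    if c["processor_queue_length"] > max(20, 5 * num_cpus):
        warnings.append("CPU queue: contention, or compiles/recompiles")
    if c["page_life_expectancy"] < 200:
        warnings.append("Memory: buffer cache flushed far too frequently")
    if c["buffer_cache_hit_ratio"] < 80:
        warnings.append("Memory: too many reads going to disk")
    if c["plan_cache_hit_ratio"] < 80:
        warnings.append("Plans: low plan re-use (time for stats updates)")
    return warnings

# Hypothetical sample, using the cache hit ratio from the question:
sample = {
    "pct_processor_time": 35,
    "processor_queue_length": 4,
    "page_life_expectancy": 1200,
    "buffer_cache_hit_ratio": 99.88,
    "plan_cache_hit_ratio": 92,
}
print(check_counters(sample, num_cpus=2))  # -> [] (all healthy)
```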

Arguably the "easiest" litmus test for SQL Server health is the Buffer Manager object, biggest problems are :
Low Buffer cache hit ratio
Low Page life expectancy
High number of Checkpoint pages/sec
High number of Lazy writes/sec

Then % Processor Time, Processor Queue Length and context switching: competing resources for "think" / "compute" time.

Then there are disk IO measures, but that is a whole different story, compounded by RAID weightings etc. (e.g. RAID 5 writes are roughly twice as slow as RAID 10):
Avg. Disk Queue Length: the lower the better (think single thread, i.e. < 2).
Avg. Disk Sec/Read: the lower the better; think in terms of < 20 ms. Same again for writes.
Physical Disk: % Disk Time: the lower the better (ideally around 20%); if > 50% then problems (fragmentation / IO bottlenecks).
Avg. Disk Reads/sec (and again for writes): these numbers will go belly-up when disks are > 80% capacity (disk access time increases exponentially).
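The disk measures lend themselves to the same kind of sketch. Again, the thresholds are only the rough rules of thumb above, and the function name and units (milliseconds for sec/read and sec/write) are my own framing:

```python
# Rule-of-thumb checks for the disk counters listed above.
# Thresholds are the heuristics from this thread, not hard limits.
def check_disk(avg_queue_length, avg_read_ms, avg_write_ms, pct_disk_time):
    issues = []
    if avg_queue_length >= 2:
        issues.append("queue length high (possible IO backlog)")
    if avg_read_ms >= 20 or avg_write_ms >= 20:
        issues.append("slow reads/writes (>= 20 ms)")
    if pct_disk_time > 50:
        issues.append("% disk time > 50% (fragmentation / IO bottleneck)")
    return issues

# Hypothetical healthy disk:
print(check_disk(avg_queue_length=1, avg_read_ms=8, avg_write_ms=12, pct_disk_time=18))
# -> [] (healthy)
```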

And then there is more: knowing what to do with those measures, where to fix things, etc. Generally, performance is first SQL code / application, then hardware (physical), and so on...

[Attachment: performance-gains.jpg]
"resource availability then > 80%" should have said "resource consumption then > 80%"
Sandeep rathore (Asker)
Thanks. On "total server memory: amount of physical memory SQL is using", I am a little confused: does physical memory mean RAM only, or something else? There is 3.5 GB of RAM, so does this mean it is not utilizing all the RAM? Is that what the Total Server Memory counter means?
Can you please explain the 80/20 model a little more?
ASKER CERTIFIED SOLUTION
Mark Wills (Australia)
How do I increase points? The answer was very explanatory, absorbing and very detailed. Excellent.
Well, in reality, it does kind of depend on your operating system and version of SQL Server.

Windows 2003 (Standard) will support 4 GB, but in reality the lower 2 GB is reserved for "private" memory allocation (application space) and the upper 2 GB for the operating system. To enable SQL Server 2005 Standard to use more than the 2 GB, you do have to use the /3GB switch and configure appropriately. Then there are things like Hot Swap Memory automatically setting the /PAE switch, and NUMA-aware memory, where it automatically assumes the AWE options are available.

Then the min server memory and max server memory settings really refer to the buffer pool and how SQL Server is going to request (or stop requesting) memory. If it is a dedicated server with a dedicated SQL Server engine (not multiple instances), then I still think it is best left to the operating environment before you start playing with some of those settings. Have a look at http://msdn.microsoft.com/en-us/library/ms180797(SQL.90).aspx to better understand what min and max are really all about.

If you are running Enterprise, or 64-bit, and have shared applications or multiple instances, then yes, it becomes more important (if not critical, with multiple instances of SQL Server) to intervene and manually set some of those memory settings...

So, with SQL Server 2005 Standard Edition, unless you are suffering, I still reckon: don't touch. In reality those settings are designed to restrict/control the amount of memory allocated to the buffer pool in SQL Server; they are not really designed to dynamically allocate more memory than can be natively managed.

Have a look at Books Online: search for "memory (SQL Server)" and look at "servers" (describes memory, and the "must do" of setting min and max if running multiple instances); have a look at NUMA, which is interesting (if you are that way inclined); and have a look at the "max server memory" option (describes the buffer pool), then follow the link to "Optimizing Server Performance Using Memory Configuration Options" at the bottom. Some of the other links kind of gloss over the definition of min and max, referring generically to "SQL Server memory", which is not really 100% technically accurate and often leads to a misunderstanding of what the min and max server settings are used for...



Thanks for your kind words; I didn't see the notification until after I had posted the above. Thank you for increasing the points as well (although the kind words are more appreciated)...