
Very bad buffer hit ratio

I have an unbelievably bad buffer hit ratio and I have no clue as to why it is so bad. I keep increasing db_block_buffers, and for a few days the hit ratio sits at >95%. Then it suddenly drops off to about 55%. I have also increased the db_file_multiblock_read_count parameter to 32 (my db_block_size is 2k, unfortunately, and I can't rebuild the db as it is a live environment; I also can't keep bouncing the db to increase db_block_buffers).

I have also had a look in the v$sqltext and v$sqlarea for bad queries with large disk reads. To be honest there weren't any bad offenders. Most queries had a hit ratio of about 97%. One query was particularly bad due to a missing index which I have now fixed.  I did notice that a procedure I run to estimate stats has a massive amount of disk reads. Would this cause my buffer hit ratio to seem artificially low?

Any ideas would be much appreciated.
1 Solution
Why don't you look at the shared pool area? Because of increasing db_block_buffers there may be swapping and paging, which in turn will affect performance. Also check the sort area and try to tune it, because you have to balance the memory allocation evenly.

Srikant Sarda
1- Look at the operating system swapping/paging activity. If you have Unix you can use vmstat (the pi, po, and fr columns must be low) and swap -l.
2- Then you can check whether your db_block_buffers is big enough by setting DB_BLOCK_LRU_EXTENDED_STATISTICS in init.ora. If you set the value of the parameter to 100, Oracle will collect 100 rows of statistics, each row reflecting the addition of one buffer, up to 100 extra buffers. After some time the view X$KCBRBH shows you how performance would change if you had more database buffers.
3- Look at the value for "table scans (long tables)" in v$sysstat. If it is too high, a low hit ratio is to be expected.
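As a sketch of step 2, the extra-buffer statistics could be summarized like this (assuming you can query as SYS, since the X$ tables are only visible to SYS; the 250-buffer bucket size is arbitrary):

```sql
-- Predicted additional cache hits if the buffer cache were enlarged.
-- In X$KCBRBH, INDX is the number of extra buffers and COUNT is the
-- number of extra hits that buffer would have provided.
SELECT 250 * TRUNC(indx / 250) + 1
       || ' to '
       || 250 * (TRUNC(indx / 250) + 1) AS extra_buffers,
       SUM(count)                       AS additional_cache_hits
FROM   x$kcbrbh
GROUP  BY TRUNC(indx / 250)
ORDER  BY TRUNC(indx / 250);
```

Buckets with large additional_cache_hits suggest that adding that many buffers would pay off; near-zero buckets suggest the cache is already big enough.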
efarquhaAuthor Commented:
The value for "table scans (long tables)" is 75283. Is this a high value? The operating system is VMS I'm afraid. As far as I'm aware the Oracle processes are never swapped out on VMS. You have to reserve an area of memory for the SGA. I haven't used up all that memory yet.

There is hard page faulting but I think that is related to user connections.

I will try using the setting DB_BLOCK_LRU_EXTENDED_STATISTICS and see what happens.

I think you answered your own question. Most likely your estimate-stats job is skewing your buffer hit ratio. Your long table scans figure seems high, but it's difficult to judge without knowing the total number of requests.
I would stop running this job for a while and see if your buffer hit ratio improves. Actually, if you know that your queries are fine, you don't need to update stats frequently; you really only need to do this when you know your volumes have changed significantly.
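One way to put the long-scan count in context (a sketch; the exact statistic names can vary between Oracle versions) is to compare it with the short-scan count:

```sql
-- Compare long vs. short table scans to judge whether 75K is high.
-- A large long/short ratio points at full scans of big tables.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('table scans (long tables)',
                'table scans (short tables)');
```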

Sun Tzu would have had you ask yourself first whether it's really important. Do your users complain about the database performance?

I've noticed on some databases that the buffer hit ratio does not really reflect the perceived performance. Sometimes it's just a matter of thinking about what the DB does for most of the day. Perhaps 90% of the time the buffer hits are great, and you have an overnight routine that destroys the validity of the statistic (like your estimate statistics job).

75K is not bad for the long table scans statistic; it depends on whether it's OLTP or reporting. If it's OLTP and a mission-critical app, you need to track down the culprit queries. If it's reporting, 75K is low.


If it is not an OLAP database, table scans (long tables) is too high. You should investigate that. I have a 220GB database used for SAP. The database has been open for a month and the long table scans figure is just 658. As you may already know, there is a lot of reporting in SAP.
Your buffer hit ratio can get skewed if you have batch processes doing reporting-type SQL, or a few one-time "bad queries", because they go into the stats that Oracle keeps. Also, if your hit ratio is falling suddenly, you may have users running ad-hoc queries.
To get a better view of the buffer hit ratio, you should save the values of gets and get misses at the beginning of the measuring period (say 9:00 am), let the database run for the day, and sample gets and get misses again in the evening (say 4:00 pm). Subtract the beginning values from their corresponding ending values and use these delta values to find the actual buffer hit ratio.
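The delta calculation above could be sketched with these counters (sample once in the morning, save the values, sample again in the evening, and compute from the differences):

```sql
-- Raw counters behind the buffer cache hit ratio:
--   hit ratio = 1 - physical reads / (db block gets + consistent gets)
-- Always compute this from the deltas between two samples, not from
-- the cumulative since-startup values, or one bad batch job will
-- drag the ratio down for the lifetime of the instance.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('db block gets', 'consistent gets', 'physical reads');
```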
First of all, I have found that a block size of 2K is a killer; it is best only for stress testing Oracle on your hardware.
Some downtime to rebuild with a larger block size would definitely help.

*  Have you looked at chained/migrated rows? If you have a high number, you could be doing multiple block reads to retrieve a single row. A small block size might be an indicator that many of your rows are too big for the block, or that sufficient PCTFREE has not been set to allow for row growth.
Use this query to look at table fetch by continued rows (selecting by name, since STATISTIC# varies between Oracle versions):
  select name, value from v$sysstat where name = 'table fetch continued row';

*  Bumping up db_block_buffers sometimes does help, but there is a point of diminishing returns.
Run these queries to figure out whether you're waiting for the system to find free buffers:
  select * from v$system_event where upper(event) like 'BUF%';
  select * from v$waitstat;

You did not specify the Oracle version. If you are on version 8.x, look at using the Multiple Buffer Pools (KEEP, RECYCLE & DEFAULT) -- some large tables can skew the MRU/LRU list with data blocks that are really transitory in nature. Specifying a large table to use the RECYCLE pool does help! Very frequently accessed segments are not affected by large segment reads, because their buffers are warmed frequently enough that they do not age out of the cache. The main trouble occurs with "warm" segments that are not accessed frequently enough to survive the buffer flushing caused by the large segment reads.
You need to investigate this option.
Basically, setting it up requires some additional init.ora parameters and altering the table's default pool. Example:
init.ora changes:
db_block_buffers        = 1500
db_block_lru_latches    = 8
buffer_pool_keep        = (buffers:250, LRU_LATCHES:1)
buffer_pool_recycle     = (buffers:500, LRU_LATCHES:2)

(the above sets the cache to 1500 blocks, and dedicates 750 to the alternate pools)
To change the table's default pool would be:
ALTER TABLE table_name... STORAGE (buffer_pool_clause);
Possibly change some indexes as well (remember that they are all fighting for space on the MRU/LRU list too):
ALTER INDEX index_name... STORAGE (buffer_pool_clause);
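As a concrete sketch of the buffer_pool_clause (the table and index names here are made up; the syntax applies to Oracle 8.x with the pools configured as above):

```sql
-- Push a large, transiently-scanned table into the RECYCLE pool so its
-- blocks age out quickly, and pin a hot lookup index in the KEEP pool
-- (segment names are examples only).
ALTER TABLE big_history_tab STORAGE (BUFFER_POOL RECYCLE);
ALTER INDEX hot_lookup_idx  STORAGE (BUFFER_POOL KEEP);
```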

These suggestions should help.
One additional item:   Statistics  --

Which optimizer are you using? The Oracle version also matters here (my experience: forget the cost-based optimizer before version 8).

1)  If your init.ora parameter says:
optimizer_mode          = choose

then you could be in trouble with your hit rates if stale statistics exist. The optimizer would be choosing poor execution paths.
Either change it back to RULE, refresh the stats, or drop the statistics.
To refresh the stats, use the DBMS_UTILITY.ANALYZE_SCHEMA procedure:

DBMS_UTILITY.ANALYZE_SCHEMA (
   schema           VARCHAR2,
   method           VARCHAR2,
   estimate_rows    NUMBER   DEFAULT NULL,
   estimate_percent NUMBER   DEFAULT NULL,
   method_opt       VARCHAR2 DEFAULT NULL);

For example:
   EXEC DBMS_UTILITY.ANALYZE_SCHEMA('SYSADM', 'ESTIMATE', NULL, 40);

(this would do a 40% estimate analysis of the SYSADM schema).

efarquhaAuthor Commented:
I'll accept your suggestions as the answer. I still haven't got to the bottom of my problems. I have been analyzing my stats regularly using the method you suggested. I have used many of your suggestions and they are useful. Thanks