

Very bad buffer hit ratio

Posted on 2000-05-09
Medium Priority
Last Modified: 2008-02-01
I have an unbelievably bad buffer hit ratio and I have no clue as to why it is so bad. I keep increasing db_block_buffers, and for a few days the hit ratio sits at >95%; then it suddenly drops off to about 55%. I have also increased the db_file_multiblock_read_count parameter to 32 (my db_block_size is unfortunately 2k, and I can't rebuild the db as it is a live environment; I also can't keep bouncing the db to increase db_block_buffers).

I have also had a look in v$sqltext and v$sqlarea for bad queries with large disk reads. To be honest, there weren't many bad offenders; most queries had a hit ratio of about 97%. One query was particularly bad due to a missing index, which I have now fixed. I did notice that a procedure I run to estimate stats does a massive amount of disk reads. Would this make my buffer hit ratio seem artificially low?
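(For anyone doing the same check: a query along these lines against v$sqlarea surfaces the heavy disk readers. The 1000-read cutoff is arbitrary; adjust it for your system.)

```sql
-- Statements doing the most physical I/O, with reads per execution.
SELECT sql_text,
       disk_reads,
       executions,
       ROUND(disk_reads / GREATEST(executions, 1)) AS reads_per_exec
  FROM v$sqlarea
 WHERE disk_reads > 1000
 ORDER BY disk_reads DESC;
```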

Any ideas would be much appreciated.
Question by:efarquha
Expert Comment

ID: 2792070
Why don't you check the shared pool area? Increasing db_block_buffers too far can cause swapping and paging, which in turn hurts performance. Also check the sort area and tune it if necessary; you have to balance the memory allocation evenly.

Srikant Sarda

Expert Comment

ID: 2793014
1- Look at the operating system swapping/paging activity. On Unix you can use vmstat (the pi, po and fr columns should be low) and swap -l.
2- Then check whether your db_block_buffers is big enough by setting DB_BLOCK_LRU_EXTENDED_STATISTICS in init.ora. If you set the parameter to 100, Oracle collects 100 rows of statistics, each row reflecting the addition of one buffer, up to 100 extra buffers. After some time, the view X$KCBRBH shows you how performance would change if you had more database buffers.
3- Look at the value of "table scans (long tables)" in v$sysstat. If it is very high, a low hit ratio is to be expected.
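Queries for points 2 and 3 would look something like the following (X$KCBRBH is an internal view and must be queried as SYS; column names are the classic INDX/COUNT layout):

```sql
-- Point 3: full scans of long vs. short tables.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('table scans (long tables)', 'table scans (short tables)');

-- Point 2: after setting DB_BLOCK_LRU_EXTENDED_STATISTICS, estimate the
-- extra cache hits you would gain from 100 additional buffers.
SELECT SUM(count) AS extra_hits_with_100_more_buffers
  FROM x$kcbrbh
 WHERE indx < 100;
```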

Author Comment

ID: 2793225
The value for "table scans (long tables)" is 75283. Is this a high value? The operating system is VMS I'm afraid. As far as I'm aware the Oracle processes are never swapped out on VMS. You have to reserve an area of memory for the SGA. I haven't used up all that memory yet.

There is hard page faulting but I think that is related to user connections.

I will try using the setting DB_BLOCK_LRU_EXTENDED_STATISTICS and see what happens.

Expert Comment

ID: 2793789
I think you answered your own question. Most likely your estimate-stats job is skewing your buffer hit ratio. Your long table scans figure seems high, but it's difficult to judge without knowing the total number of requests.
I would stop running this job for a while and see if your buffer hit ratio improves. Actually, if you know that your queries are fine, you don't need to update stats frequently; you really only need to do it when you know your data volumes have changed significantly.

Expert Comment

ID: 2795721

Sun Tzu would have had you ask yourself first whether it really matters. Do your users complain about the database performance?

I've noticed that on some databases the buffer hit ratio does not really reflect the perceived performance. Sometimes it's just a matter of thinking about what the DB does for most of the day. Perhaps 90% of the time the buffer hits are great, and you have an overnight routine (like your estimate-statistics job) that destroys the validity of the statistic.

75K is not bad for the long table scans statistic, depending on whether the system is OLTP or reporting. If it's OLTP and a mission-critical app, you need to track down the culprit queries. If it's reporting, 75K is low.



Expert Comment

ID: 2795784
If it is not an OLAP database, table scans (long tables) is too high and you should investigate it. I have a 220GB database used for SAP; it has been open for a month and the long table scans count is just 658. As you may already know, there is a lot of reporting in SAP.

Expert Comment

ID: 2808846
Your buffer hit ratio can get skewed if you have batch processes doing reporting-type SQL, or by a few one-time "bad queries", because they go into the statistics that Oracle keeps. Also, if your hit ratio is falling suddenly, you may have users running ad-hoc queries.
To get a better view of the buffer hit ratio, save the values of gets and get misses at the beginning of the measurement period (say 9:00 am), let the database run for the day, and sample gets and get misses again in the evening (say 4:00 pm). Subtract the beginning values from their corresponding ending values and use these deltas to compute the actual buffer hit ratio.
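The sampling itself is manual, but the query you would run at both ends of the window is standard v$sysstat fare; the hit ratio from the deltas is 1 - phys / (db_gets + cons_gets):

```sql
-- Run once at the start and once at the end of the measurement window,
-- then compute the ratio from the differences between the two samples.
SELECT SUM(DECODE(name, 'db block gets',   value, 0)) AS db_gets,
       SUM(DECODE(name, 'consistent gets', value, 0)) AS cons_gets,
       SUM(DECODE(name, 'physical reads',  value, 0)) AS phys
  FROM v$sysstat
 WHERE name IN ('db block gets', 'consistent gets', 'physical reads');
```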

Accepted Solution

dbalaski earned 300 total points
ID: 2809257
First of all, I have found that a blocksize of 2K is a killer -- it is good only for stress-testing Oracle on your hardware.
Some downtime to rebuild with a larger blocksize would definitely help.

*  Have you looked at chained/migrated rows? If you have a high number, you could be doing multiple block reads to retrieve a single row. A small blocksize might mean that many of your rows are too big for the block, or that insufficient PCTFREE has been set to allow for row growth.
Use this query to look at table fetches by continued row (looking the statistic up by name is safer than by STATISTIC#, which varies between versions):
  select name, value from v$sysstat where name = 'table fetch continued row';

*  Bumping up db_block_buffers sometimes does help, but there is a point of diminishing returns.
Run these queries to figure out whether you're waiting for the system to find free buffers:
  select * from v$system_event where upper(event) like 'BUF%';
  select * from v$waitstat;

You did not specify the Oracle version. If you are on 8.x, look at using the multiple buffer pools (KEEP, RECYCLE and DEFAULT) -- large tables can skew the MRU/LRU list with data blocks that are really transitory in nature. Assigning such a table to the RECYCLE pool does help! Very frequently accessed segments are not affected by large segment reads, because their buffers are warmed often enough that they do not age out of the cache. The main trouble occurs with "warm" segments that are not accessed frequently enough to survive the buffer flushing caused by the large segment reads.
You need to investigate this option.
Basically, setting it up requires some additional init.ora parameters and altering the table's default pool. Example:
init.ora changes:
db_block_buffers        = 1500
db_block_lru_latches    = 8
buffer_pool_keep        = (buffers:250, LRU_LATCHES:1)
buffer_pool_recycle     = (buffers:500, LRU_LATCHES:2)

(the above sets the cache to 1500 blocks and dedicates 750 of them to the alternate pools)
To change the table's default pool would be:
ALTER TABLE table_name... STORAGE (buffer_pool_clause);
Possibly change some indexes as well (remember that they are all fighting for MRU/LRU space too)...
ALTER INDEX index_name... STORAGE (buffer_pool_clause);
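Concretely, the buffer_pool_clause is just BUFFER_POOL followed by the pool name. A sketch with made-up segment names:

```sql
-- Send a big, transiently-scanned table to the RECYCLE pool and keep a
-- hot lookup index in the KEEP pool (8.x syntax; names are examples).
ALTER TABLE big_history_tab STORAGE (BUFFER_POOL RECYCLE);
ALTER INDEX lookup_pk STORAGE (BUFFER_POOL KEEP);
```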

These suggestions should help.

Expert Comment

ID: 2809281
One additional item:   Statistics  --

which optimizer are you using? The Oracle version matters here too (my experience: forget the cost-based optimizer before version 8)

1)  If your init.ora parameter says:
optimizer_mode          = choose

Then you could be in trouble with your hit rates if stale statistics exist: the optimizer would be choosing poor execution paths.
Either change it back to RULE, refresh the stats, or drop the statistics.
To refresh the stats, use the DBMS_UTILITY.ANALYZE_SCHEMA procedure:
  DBMS_UTILITY.ANALYZE_SCHEMA (
   schema           VARCHAR2,
   method           VARCHAR2,
   estimate_rows    NUMBER   DEFAULT NULL,
   estimate_percent NUMBER   DEFAULT NULL,
   method_opt       VARCHAR2 DEFAULT NULL)

For example:
  EXECUTE DBMS_UTILITY.ANALYZE_SCHEMA('SYSADM', 'ESTIMATE', NULL, 40);

(this would do a 40% estimate analysis of the SYSADM schema).
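To see whether stats are stale in the first place, a query like this against the DBA views helps (assumes your version's DBA_TABLES has the LAST_ANALYZED column, i.e. 8.x; 'SYSADM' is just the example schema from above):

```sql
-- Row counts and last-analyzed dates for a schema; a NULL last_analyzed
-- means the table has never been analyzed under the CBO.
SELECT table_name, num_rows, last_analyzed
  FROM dba_tables
 WHERE owner = 'SYSADM'
 ORDER BY last_analyzed;
```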


Author Comment

ID: 2816702
I'll accept your suggestions as the answer. I still haven't got to the bottom of my problems, but I have been refreshing my stats regularly using the method you suggested, and many of your other suggestions have been useful. Thanks.
