Solved

Analyze Table made performance worse!  -- help!

Posted on 2008-06-16
1,900 Views
Last Modified: 2013-12-19
I have a relatively large database that had some minor performance issues.  I had been going through it, analyzing tables and computing statistics.  This improved overall efficiency, and the number of buffer gets went down dramatically.


That is, until I analyzed a table whose statistics were really out of whack.  The table stats claimed about 400,000 rows, but a select count(*) showed about a million.  So I analyzed the table and computed statistics.
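
For reference, that kind of drift shows up when you compare the dictionary's row count against a live count (MY_TABLE here is just a stand-in for the real table name):

select num_rows, last_analyzed from user_tables where table_name = 'MY_TABLE';
select count(*) from my_table;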

When I did that, a query that was only using about 30 buffer gets per transaction now uses over 1,000 buffer gets for the same transaction.

The explain plan shows that the index is no longer being used.  Instead it is doing a full table scan.

What's up with that?  How do I fix it?
Question by:_TAD_
11 Comments
 
LVL 73

Expert Comment

by:sdstuber
ID: 21795666
how did you analyze?
analyze or dbms_stats?

did you analyze the indexes with the table?
Did you gather histograms?  (Should you have? Usually no, but sometimes you must -- a quick check is sketched after these questions.)
Is the table partitioned?  Did you gather partition-level statistics?


what is the query?
and what are the original and new plans?
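
For example, to check whether histograms already exist on the table's columns (MY_TABLE is a placeholder; on 9i, num_buckets > 1 is the tell, since the histogram column only appears in 10g):

select column_name, num_buckets, last_analyzed
  from user_tab_col_statistics
 where table_name = 'MY_TABLE';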
 
LVL 22

Author Comment

by:_TAD_
ID: 21795873
I ran the following SQL:

Analyze Table <TableName> Compute Statistics;

The last-analyzed timestamp on the indexes matches the timestamp of the table.
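
For reference, a query along these lines shows those timestamps (<TableName> stands in for the real name, as above):

select index_name, last_analyzed
  from user_indexes
 where table_name = '<TableName>';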

no histograms were gathered

The table is not partitioned.

Further, overall query performance against this table has improved tremendously when other indexes are used.  The problem is that the one query that relied on this particular index is now performing much, much worse -- and it runs several hundred times a day.
 
LVL 29

Expert Comment

by:MikeOM_DBA
ID: 21796182

As sdstuber said, you need to analyze the indexes also.
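
For example, per index (the index name is a placeholder):

analyze index <IndexName> compute statistics;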

 
LVL 22

Author Comment

by:_TAD_
ID: 21796406
indexes analyzed.  No change
 
LVL 18

Expert Comment

by:rbrooker
ID: 21796685
Do not use ANALYZE; it is deprecated for gathering optimizer statistics in favor of dbms_stats.

What I use is:

dbms_stats.gather_table_stats( 'owner', 'table_name',
                               block_sample     => true,
                               cascade          => true,
                               estimate_percent => 10,
                               method_opt       => 'FOR ALL COLUMNS SIZE AUTO' );

try that and see what happens.

 
LVL 22

Author Comment

by:_TAD_
ID: 21797226
While I understand that ANALYZE is, in some circumstances, not as efficient as dbms_stats, I have not heard that it is being deprecated.  It is my understanding that some statistics, like the high-water mark, cannot be gathered through dbms_stats and instead require ANALYZE.

That being said, I did a full compute, not an estimate.  dbms_stats must be a really awesome tool if it produces more accurate statistics from a 10% sample than a full analysis of 100% of the data.
 
LVL 18

Accepted Solution

by:rbrooker (earned 250 total points)
ID: 21797347
You can use whatever estimate percent you choose; if you want 100%, then use 100%.  That's not the point.
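
For example, the same call with a full sample ('owner' and 'table_name' are placeholders, as before):

dbms_stats.gather_table_stats( 'owner', 'table_name',
                               cascade          => true,
                               estimate_percent => 100,
                               method_opt       => 'FOR ALL COLUMNS SIZE AUTO' );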

Read this article; it's long but good.

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4347359891525
 
LVL 73

Assisted Solution

by:sdstuber (earned 250 total points)
ID: 21797438
what is the query along with the old and new execution plans?

Also, what about histograms?  You said you did not gather them -- should you have?
Were they there previously?
 
LVL 2

Expert Comment

by:vorb1s
ID: 21824190
Hi,
  the fastest solution to try is deleting the statistics:
analyze table TABLE_NAME delete statistics;

Painful, but the right solution (in 9i only):
  think about changing the SQL SELECTs, using index hints, etc.
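
For example, a hint forcing a particular index (table, column, and index names are made up for illustration):

select /*+ index(t my_status_idx) */ *
  from my_table t
 where status = 'PENDING';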

You can also try changing the index cost for your session (values range from 1 to 10000; 100 is the default), like this:
alter session set optimizer_index_cost_adj = 100;
Try some lower values and watch how the cost in the explain plan changes.

Vorbis
 
LVL 22

Author Comment

by:_TAD_
ID: 21826667


I wound up rewriting the daemon procedures instead.

The original process:

Count the number of records that match <condition>.
If count > 0, then run the process.
   Process: load a temp table, and use a cursor with the previous count to iterate through the items one at a time.


The new process:

Select 1 where exists <condition>.
Only if a row comes back:
   Count the number of records that match <condition>.
   If count > 0, then run the process.
      Process: load a temp table, and use a cursor with the previous count to iterate through the items one at a time.
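
A minimal PL/SQL sketch of that gating pattern (table and column names are made up for illustration):

declare
  l_has_work pls_integer;
  l_count    pls_integer;
begin
  -- Cheap gate: stops at the first matching row instead of counting them all.
  select count(*)
    into l_has_work
    from dual
   where exists (select 1 from work_queue where status = 'PENDING');

  if l_has_work = 1 then
    -- Only pay for the full count when there is actually work to do.
    select count(*)
      into l_count
      from work_queue
     where status = 'PENDING';

    -- ... load the temp table, then use l_count to iterate one item at a time ...
    null;
  end if;
end;
/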



The original code used to need only 21 buffer gets right up until the processing.  After the compute of statistics, it started to use over 1,000.  Not a big deal normally, but the process runs a few hundred times a day (running the count SQL each and every time).

Now the process does the "where exists" check first, which adds 11 buffer gets, and the expensive count (currently 1,000+ buffer gets) only runs when there is actually something to process.

In a nutshell, I am still taking a big performance hit, but instead of having the highest number of buffer gets in a single day, the new code is somewhere in the middle -- and I can live with that.
 
LVL 73

Expert Comment

by:sdstuber
ID: 21827148
can you post your before and after code with the before and after execution plans?

we'll all help, but we need something to work from.
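
If it helps, the plans can be generated with something along these lines (the query shown is only a stand-in for yours; dbms_xplan is available from 9.2 on):

explain plan for
select * from my_table where status = 'PENDING';

select * from table(dbms_xplan.display);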
