stevetheski (United States) asked:

oracle DBMS_STATS slow

All, I am wondering why the following gather-stats command takes 24 hours on serverA, yet if I export the table and import it (including indexes) to serverB, the stats are collected in 10 minutes.

ServerA and ServerB are identical VMs on the same physical box.

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname      => 'SIEBEL',
    tabname      => 'S_OPTY',
    block_sample => FALSE,
    method_opt   => 'FOR ALL COLUMNS SIZE SKEWONLY',
    granularity  => 'ALL',
    cascade      => FALSE);
END;
/

Also, with the above settings, should index stats be gathered as well?

Oracle Database

slightwv (䄆 Netminder)

You might have a lot more extents on ServerA.  They might be all over the place in the files.

The blocks in the extents on serverA might be all jumbled around (scientific term) with a lot of chained rows etc...

All that said, I wouldn't expect a 24-hour to 10-minute difference.  You might want to also open an SR with Oracle Support on this one.
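A quick way to compare the two servers on this point is to look at the chained-row count and extent spread (a sketch; it assumes SELECT access to the DBA views, and note that CHAIN_CNT is only populated by old-style ANALYZE, not by DBMS_STATS):

```sql
-- Fragmentation indicators for the table on each server.
-- Run ANALYZE TABLE SIEBEL.S_OPTY COMPUTE STATISTICS first if CHAIN_CNT is null.
SELECT table_name, num_rows, blocks, empty_blocks, chain_cnt, avg_row_len
  FROM dba_tables
 WHERE owner = 'SIEBEL' AND table_name = 'S_OPTY';

-- Extent count and how many datafiles the segment is spread across
SELECT COUNT(*) AS extents, COUNT(DISTINCT file_id) AS files
  FROM dba_extents
 WHERE owner = 'SIEBEL' AND segment_name = 'S_OPTY';
```

If serverA shows a large CHAIN_CNT or far more extents than serverB, that supports the fragmentation theory.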

>>also with the above stats should index stats be run also?

The docs have this info:

This procedure gathers table and column (and index) statistics.

How big is the table? Are you running the same DBMS_STATS call on both servers?

Is your table partitioned?  Try estimate_percent => 20:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SIEBEL',
    tabname          => 'S_OPTY',
    estimate_percent => 20,
    cascade          => FALSE);
END;
/

I have read the docs and other blogs. To me, cascade => FALSE has to do with partitioning, but somewhere I read that it pertains to indexes as well. Just looking for clarification.
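(For reference: per the DBMS_STATS documentation, cascade controls whether index statistics are gathered in the same call; the partition-related parameter is granularity. A minimal sketch:)

```sql
-- cascade controls index statistics, not partitioning:
-- cascade => TRUE is equivalent to also running
-- DBMS_STATS.GATHER_INDEX_STATS on each of the table's indexes.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SIEBEL',
    tabname => 'S_OPTY',
    cascade => TRUE);   -- gather index stats in the same call
END;
/
```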

I agree with the blocks/extents being all over the place, especially considering the table has 276 columns and the imported table is mostly organized thanks to the import re-org.

If I move the table to a new tablespace/datafile, the size of the table on disk will most likely shrink. Will that also help with performance? I came from a DW/DSS role to a relational role on this project (where I have no real access), and 2 or 3 times a year we migrated tables. The tables that were updated a lot got smaller, but the performance pretty much stayed the same. Then again, that was a heavily partitioned DW with mostly bulk inserts, other than a few of the "smaller" tables. Just thinking out loud: if the table is less fragmented, it should be faster. But Tom Kyte says indexes see little to no performance gain when they are reorganized, so that's why I am asking your opinion.

The table is about 3 GB and has millions of rows. How many exactly, I can't say: I am the development DBA and do not have access to the prod server. I have noticed a lot of things I would have done differently based on AWR reports. I also suggested changing the estimate percent to 20, but that really doesn't answer the question. The table is not partitioned; partitioning is something I have been looking into for both archiving and performance enhancements.

The worst part is I do not know, and haven't received an answer to, "what is the DEFAULT_DEGREE on each server?" Knowing these guys, it may be serial on the prod server :-o
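Whoever does have access could check the table-level setting with a one-liner (assuming DBA-view access):

```sql
-- DEGREE of 'DEFAULT' means the instance default; '1' means serial
SELECT owner, table_name, degree, instances
  FROM dba_tables
 WHERE owner = 'SIEBEL' AND table_name = 'S_OPTY';
```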

Would either of you ever use method_opt => 'FOR ALL COLUMNS SIZE SKEWONLY' on a table with 276 columns? I know if you had your way you wouldn't have a table with 276 columns, and neither would I. :)
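For what it's worth, SKEWONLY makes Oracle examine the data distribution of every one of those 276 columns to decide where histograms are warranted, which is expensive on a wide table; a narrower method_opt is one common way to cut that cost (a sketch, not a recommendation for this specific schema):

```sql
-- Limit histogram work instead of scanning all 276 columns for skew
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SIEBEL',
    tabname    => 'S_OPTY',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');  -- or 'FOR ALL INDEXED COLUMNS SIZE AUTO'
END;
/
```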

Thanks for all the info so far, especially considering we are all on the same page. Who created the table? Only one thing can be worse than old mainframe folks: THE DEVELOPERS.

I have a lot of proposals to "clean up" the database. One of the funny things I did was suggest a few bitmap indexes on the DSS db. The queries went from hours to about a minute; however, it increased load time from 2 hrs to 2.5 hrs, and they said no. So to save about 100 processing hours a day running reports, they would rather save less than 30 minutes loading data.

Yeah, it's GOVT contracting at its best: do nothing, blame everyone else.
slightwv (䄆 Netminder)

>>so to save about 100 processing hours a day running reports they would rather save less than 30 mins loading data.

WOW!!!!  Did you try to force them to justify that position?   Aside from 'job security'...

TRUST me:  I've had my share of dealing with Gov't contractors on both sides of the fence.

There is always enough time to do it wrong but never enough time to correct previous mistakes!  NEVER go backwards!!! ALWAYS forwards!!!

I'm trying. Being the new guy on the block, I am also trying not to ruffle feathers.

Going to leave this open for another week or so to see if anyone else chimes in about method_opt => 'FOR ALL COLUMNS SIZE SKEWONLY', or anything else.
slightwv (䄆 Netminder)

You might ask about method_opt as a new question.  Most Experts that monitor Oracle questions have probably already seen this one and didn't feel they had anything to offer on the performance.

Are we sure it is really taking that much time?  What is it waiting for, if anything?  I have seen this before, and the issue was that the stats process is waiting for a lock on the table.  Some careless interactive user left an uncommitted transaction on the table and the lock could not be acquired to do the stats.
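A quick way to check for that (a sketch, assuming access to the V$ views; the bind variable is a placeholder for the SID of the session running DBMS_STATS):

```sql
-- What is the stats-gathering session waiting on, and who is blocking it?
SELECT sid, event, blocking_session, seconds_in_wait
  FROM v$session
 WHERE sid = :stats_sid;

-- Which sessions hold DML locks on the table?
SELECT l.session_id, l.locked_mode, o.object_name
  FROM v$locked_object l
  JOIN dba_objects o ON o.object_id = l.object_id
 WHERE o.object_name = 'S_OPTY';
```

An "enq: TM - contention" wait event with a non-null BLOCKING_SESSION would confirm the uncommitted-transaction theory.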