Oracle 12c

gs79 used Ask the Experts™
It's a broad question. Queries and PL/SQL jobs are running slower on a 12c database upgraded from 11g. All things being the same (hardware, parameters, etc.), is 12c slower than 11g out of the box? Are there any extra parameters I need to turn on/off in 12c? Please let me know where I should be looking to make the jobs as fast as they were in 11g.


David VanZandtOracle Database Administrator III
All things being the same, no. Therefore, it's likely that something is not the same as you presume. The answer could be as simple as a different block size for the data files, or perhaps the table statistics haven't been optimized since the live data was laid down.

One way I would deconstruct the complexity is to take representative statements, whether queries or updates, capture and compare their execution times and the resources they use, and then see whether that gives a quick insight into where the differences may be.
Most Valuable Expert 2011
Top Expert 2012
is your database a pluggable inside a container?  If so, are you sharing resources that you weren't previously?

You can always try changing the compatibility and optimizer settings back to 11g if you don't like the 12c optimizer.  

Look at the advisor views, anything there that should be changed?
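
A quick way to check both points (multitenant status and advisor output); the row limit on the findings query is arbitrary, just a first look:

```sql
-- Is this a container database, and which container is this session in?
select name, cdb, con_id from v$database;
select sys_context('USERENV', 'CON_NAME') as container from dual;

-- Any advisor findings worth acting on?
select task_name, type, message
from dba_advisor_findings
where rownum <= 20;
```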
Geert GOracle dba
Top Expert 2009
It's the stupid SQLs!

And the resource they consume

A PL/SQL job's time is the sum of the time for the queries inside,
so you needn't bother looking at the jobs directly

unless you have nested to the 143rd level ... or even deeper

v$sql contains the current slow (and fast) statements being run.
I only ever use 2 queries to find the slow ones:
The SessionWait query > to see the active ones
The TopBf query > to see the highest resource consumers by buffer gets
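
Those two queries aren't reproduced in the thread; rough equivalents (sketches, not Geert's exact scripts) might look like:

```sql
-- "SessionWait"-style: what the active user sessions are doing right now
select s.sid, s.sql_id, s.event, s.seconds_in_wait, q.sql_text
from v$session s
join v$sql q
  on q.sql_id = s.sql_id
 and q.child_number = s.sql_child_number
where s.status = 'ACTIVE'
  and s.type = 'USER';

-- "TopBf"-style: top 10 statements by buffer gets
select *
from (select sql_id, buffer_gets, executions, substr(sql_text, 1, 80) sql_text
      from v$sql
      order by buffer_gets desc)
where rownum <= 10;
```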

Abhimanyu SuriDatabase Engineer
Top Expert 2016
1. Do you have historical data available from the 11g instance?
    If so, compare the SQL execution statistics for SQLs in general, especially for the top SQLs returned by Geert Gruwez's query above

2. Compare explain plans

select * from table(dbms_xplan.display_awr('SQL_ID')) will give you all historical plans for a SQL, provided the plan was changed and the historical data is still available (note: display_awr is a table function, so query it with SELECT rather than EXEC).

3. Review AWR

4. Check statistics on tables / indexes
    12c collects extended statistics by default when a SQL has plan directives and is eligible for optimization (extended stats have existed since 11gR1, but collection was largely a manual process). Oracle gathers collective information for a group of columns.


Also, each column group will be visible as a virtual column in dba_tab_col_statistics
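
Those system-generated column groups can be spotted like this (the schema name is a placeholder):

```sql
-- Extended-statistics column groups appear as SYS_ST...-named virtual columns
select table_name, column_name, num_distinct, last_analyzed
from dba_tab_col_statistics
where owner = 'YOUR_SCHEMA'        -- placeholder: your schema
  and column_name like 'SYS\_ST%' escape '\';

-- The SQL plan directives that drive the automatic collection
select d.directive_id, d.type, d.state, d.reason
from dba_sql_plan_directives d
join dba_sql_plan_dir_objects o
  on o.directive_id = d.directive_id
where o.owner = 'YOUR_SCHEMA';
```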

5. Check top wait events from ASH data and then drill down further, for example:

select to_char(sample_time,'mmddrr hh24:mi'), event, sum(time_waited)/1000, count(1)
from gv$active_session_history   -- or v$active_session_history on a single instance
where sample_time << between the duration of test or peak load >>
group by to_char(sample_time,'mmddrr hh24:mi'), event;

Once you have top events , you can modify the same SQL by including SQL_ID , program etc. as per your environment.
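
For example, to go from a top event to the statements behind it (the event name here is only an example; substitute the top event from your data):

```sql
-- Which SQL_IDs accumulated the most samples on a given wait event
select sql_id, count(*) as samples
from gv$active_session_history
where event = 'db file sequential read'   -- example event
group by sql_id
order by samples desc;
```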
Geert GOracle dba
Top Expert 2009

for 5, just run the swe query in sqlplus while the load is running

you'll visually see which sql's are running
if the same query keeps showing, then that's the one you need to tune

you don't really need those expensive, licensed features like ASH and AWR from the diagnostics pack
Abhimanyu SuriDatabase Engineer
Top Expert 2016

Hi Geert,

Since your SQL is a join against v$session, it won't show data for the recent past.
That's why I suggested the ASH query.

It is a good way to drill down to the absolute second, which suggests one correction: to_char(sample_time,'mmddrr hh24:mi:ss')


Thanks all.

I am trying out the various solutions posted here. Will get back to you soon
Most Valuable Expert 2012
Distinguished Expert 2018
First thing I would probably do is tell the optimizer to go back to your 11g version.

I've read about some "bad" behaviors with 12c in other systems.


Use the HINT or ALTER method.

If your 11g was
select  /*+ optimizer_features_enable('') */  blah blah blah...
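
Filled in with a version string (11.2.0.4 is only an example; use the exact version you upgraded from), the two methods look like:

```sql
-- Hint form, per statement (table name is a placeholder)
select /*+ optimizer_features_enable('11.2.0.4') */ *
from your_table;

-- Session form
alter session set optimizer_features_enable = '11.2.0.4';

-- System-wide (needs DBA privileges)
alter system set optimizer_features_enable = '11.2.0.4' scope = both;
```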


Sorry for the delay in getting back. I got sidetracked from this.

@sdstuber I think the database is a pluggable database; as per v$database, the CDB value is set to NO. But there are no other databases connected, so I am assuming all the resources are used by this database?

@slightvw I modified the session to set the optimizer features to the previous version. Didn't see any significant improvements there.

Also, we are collecting statistics every day after the batch completes in both environments (12c and 11g), so the stats on the tables are up to date. Is there anything else I should look into? The program I am using to benchmark uses a parallel pipelined function to load 24 million rows into the current-year partition of a table with 130 columns. It bulk collects the source rows and uses associative arrays for lookups. The partition index is made unusable before the start of the process and rebuilt after loading. The process takes 16-18 min in 11g, whereas it takes 28-30 min in 12c.
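
For reference, a skeleton of the load pattern described above (all object names are hypothetical, not the OP's actual code):

```sql
-- 1. Disable index maintenance on the target partition before the load
alter index sales_part_idx modify partition p_2017 unusable;

-- 2. Direct-path insert through a parallel-enabled pipelined function
insert /*+ append */ into sales_2017
select *
from table(load_pkg.transform_rows(
       cursor(select * from staging_sales)));

-- 3. Rebuild the index partition after the load
alter index sales_part_idx rebuild partition p_2017 parallel 8;
```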

Please let me know if there is any other place I need to look.



I will open a new thread as a continuation of this, with my last comment as the OP. The issue is still there and I do not know what could be causing it.
