Julie Kurpa asked:

Imported 12c schema data uses more tablespace in the 19c instance.

We have recently migrated from Oracle 12c to 19c using Datapump to copy application schemas to the 19c instance. 


We used Datapump to export from Oracle 12.1.0 on Windows Server 2012 and import into Oracle 19.12 on Windows Server 2016.


When setting up the 19c instance, I created the tablespaces with the same names and sizes as those in the 12c environment.


When importing the data, the tablespaces had to grow to accommodate it.  In fact, they grew quite a bit.


Does anyone have an idea why the data needs more space in the 19c instance than it did in the 12c instance?



Tags: Windows OS, Windows Server 2012, Oracle Database

Kent Olsen

Check to see if the tables have the same block size and same pct_free value.

If they're the same, compare the table details (in all_tables, user_tables, etc.) to see what's different on the two systems.  A check of all_objects might be in order, too, but I'd expect all_tables to tell you where the differences are.
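For example, a query along these lines, run on both instances and diffed, would surface most of those attributes (a sketch; the schema names are placeholders for your application schemas):

-- Run on both source and target, spool the output, and compare.
SELECT owner, table_name, pct_free, ini_trans, initial_extent, next_extent, compression
FROM dba_tables
WHERE owner IN ('APP_SCHEMA1', 'APP_SCHEMA2')   -- hypothetical schema names
ORDER BY owner, table_name;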



DrSQL - Scott Anderson

A few other things to look for:
1) Compression - if the old DB used compression and the new one does not
2) Direct parallel loads - these are faster, but each parallel thread uses only NEW extents, so it might “waste” some space
3) Indexes created by default in a different tablespace - in your old database you may have created your indexes manually and put them in a specific tablespace. Depending on the metadata being used, some primary key indexes might be created automatically and land in the default tablespace.
4) Materialized view logs - if you have any, they might normally be pretty small, but this operation could have made them quite large
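Quick checks for items 1 and 4, as a sketch using the standard dictionary views:

-- 1) Any compressed tables?
SELECT owner, table_name, compression, compress_for
FROM dba_tables
WHERE compression = 'ENABLED';

-- 4) Any materialized view logs, and on which master tables?
SELECT log_owner, master, log_table
FROM dba_mview_logs;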

Good luck!
Julie Kurpa

ASKER
Thanks!  I will start comparing.

I used Datapump to do the initial creation of the schema tables with the assumption that it would import the tables with exactly the same block size, free space, and tablespace as the source database.
Wouldn't Datapump export and import the tables exactly?
DrSQL - Scott Anderson

Block size is set at database creation (or tablespace creation, although it’s generally a bad idea to mix block sizes, since it means extra caching issues), not at table creation. And a lot depends on how the objects were created (there are a few options, like having a single extent) and whether you also pre-created all the tablespaces.
Julie Kurpa

ASKER
Thanks. :)
I pulled the DDL for all the tablespaces from the source DB and ran the DDL in the target DB.  
Then created the schemas using DDL pulled from the source DB.  
Did schema export from the source DB and imported into the destination DB.

Running show parameter db_block_size in both DBs returns 8192.

Here is the DDL used to create one of the tablespaces:

  CREATE TABLESPACE MYTBSP DATAFILE
  SIZE 12884901888
  AUTOEXTEND ON NEXT 262144000 MAXSIZE 32767M,
  SIZE 10485760000
  AUTOEXTEND ON NEXT 262144000 MAXSIZE 32767M
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;
 
So far that seems like an exact setup, right?

I just queried the tables in dba_tables in both source and target DBs.  The PCT_FREE, INITIAL_EXTENT, and COMPRESSION values all match between the two.

This is very perplexing!  :)


DrSQL - Scott Anderson

How about pct_increase like Kent mentioned? Or any of the other possibilities I mentioned earlier?

Maybe you could post an example table create from the Datapump dmp file? Just run it to list only and rows=no and grab a table create or two - maybe one with a primary key?
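With Datapump specifically, the equivalent of the old show=y/rows=n trick is the SQLFILE parameter, which writes the DDL to a file without loading anything. A sketch (the dump file name and directory are placeholders for yours):

impdp impuser/xxxxxxxx@my19cDB directory=dpumpdir dumpfile=YOUR_DUMP.DMP sqlfile=ddl_only.sql include=TABLE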
Julie Kurpa

ASKER
I'm working my way down the list.  :)

Is compression set at a DB level?  I see there is no compression set at the table level.  It shows "disabled" in dba_tables.

PCT_Increase for the tables is blank when I query them in dba_tables.  

Not sure about Direct Parallel Loads.  I used Datapump with minimum frills.  No parallel parameter set.

Here's the export:

expdp impuser/xxxxxxxx@my12cDB  directory=dpumpdir schemas={comma separated list of schemas}  dumpfile=exp_for_19c.DMP logfile=exp_for_19c_log.LOG

Here's the import:
impdp impuser/xxxxxxxx@my19cDB directory=dpumpdir dumpfile=EXP_FOR_19C.DMP logfile=IMP_FOR_19C_log.log table_exists_action=truncate


For indexes, I'm thinking that the schema import would put them all in the same tablespace they were exported from if the tablespace existed.  And they all do.

Materialized view logs?  Hmmm I'm checking this. I know there are 2 materialized views  but not sure about the logging.  

This didn't just happen on one database, I just finished migrating 6 databases from 12c to 19c and they all had the same behavior.  I was thinking it must be something set by default in 19c that is different than 12c.  
There are a lot of queue tables though.

DrSQL - Scott Anderson

If it’s not specified at the table level, pct_increase uses the tablespace default. You can see it in user_tablespaces when logged on as sys or / as sysdba. The queue table comment is also interesting if there’s nothing dequeuing (no subscribers active), although that wouldn’t have been an immediate issue.

There’s always some variation in block header size and the per-block transaction entry area, etc., but usually it’s not a lot.

Compression is not active if your tables say disabled.

A final issue is the O/S blocking or clustering factor. Just because Oracle works with 8k “blocks” doesn’t mean it aligns perfectly with your storage. It’s pretty unusual for that to be a big deal, but it happens.

I still don’t know how much storage the old database versus the new one uses.
Julie Kurpa

ASKER
I'm checking tablespaces now.

All the databases I migrated to 19c so far work fine except for one.  We've been plagued with performance issues where ORACLE.EXE (on Windows) will use all the CPU.

We increased CPU from 8 to 16 which at least allows the users to get their work done.  

There are a lot of parse errors being written to the alert log from the application (that did not appear when running the application in 12c). After 4 months, the vendor finally made some changes to correct the parse errors.  We haven't reduced CPU back to 8 yet because things are still kind of wonky and I think there may be some 19c issues that we are dealing with as well.

For example, a nightly batch job that normally runs 5 hours suddenly ran 10 hours last night.  No reason: same wait events as prior runs, same data, same explain plans.  Ugh.

This is the database with all the queue tables.  

I have two SRs open with Oracle and haven't gotten anywhere with them.  None of their suggested parameter changes have helped and they don't seem to listen.

This is just me venting.  I may open a question on this to see if any experts can help me with tuning 19c.  Unfortunately I don't have the old 12c of this particular database available any more.  
Julie Kurpa

ASKER
I just checked dba_tablespaces: all tablespaces have 8192 for block size and nothing for PCT_INCREASE, except for TEMP which has a zero.

For your comment "I still don’t know how much storage the old database versus the new one uses.", is there a quick query I can run to provide what you're looking for?
Julie Kurpa

ASKER
Here's a query that may tell you what you want.


DrSQL - Scott Anderson

I strongly suggest you do open a question about performance on that database, because it sounds like a plan migration or cache issue. Look up plan versioning and how to compare execution plans, and compare what you have on the old db (if that’s still possible) with the new db. If that’s the issue, great, but if not then you might have a library cache or object versioning issue.

But for your current issue, the big mystery is MYDATA1. Try comparing the objects and segments in that tablespace. Maybe we can narrow it down.

select owner, segment_name, segment_type, sum(bytes)/1024/1024 mb
from dba_segments
where tablespace_name = 'MYDATA1'
group by owner, segment_name, segment_type;
Julie Kurpa

ASKER
There are quite a few objects.  Instead of posting it, can you tell me what I'm looking for with the bytes?
DrSQL - Scott Anderson

I would suggest changing my “bytes/1024/1024” to “trunc(bytes/1024/1024)”, adding an “order by 1,2,3”, and then putting the two results in a file comparison tool (Notepad++ can do this). See which objects are off by the largest amount, or whether there’s anything completely new. If that’s still too many, you could load the data into a couple of tables, join them together, and sort by the size difference descending.
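A sketch of that load-and-join variant, assuming the two result sets have been loaded into hypothetical staging tables OLD_SEGS and NEW_SEGS with the same columns (owner, segment_name, segment_type, mb):

-- Largest growth first; old_segs/new_segs are hypothetical staging tables.
SELECT n.owner, n.segment_name, n.segment_type, n.mb - o.mb AS growth_mb
FROM new_segs n
JOIN old_segs o
  ON o.owner = n.owner
 AND o.segment_name = n.segment_name
 AND o.segment_type = n.segment_type
ORDER BY growth_mb DESC;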
Julie Kurpa

ASKER
ok I will.  
Much appreciation for your help.  It's going to take me a little while to compare these, as there are over 2,000 results.
Will probably load it into a couple of tables as you suggest to help with the comparison.  

I'll update on Monday.  
Kent Olsen

I'm traveling and haven't been able to keep up.  But you're in good hands!!

One more thing to check is the default character set to see if they are the same.  The new database may use more 16-bit characters.  It's a long-shot, but an easy check.
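An easy way to check, as a sketch (run on both instances and compare; a move from a single-byte set to a multi-byte one such as AL32UTF8 can make character data take more space):

SELECT parameter, value
FROM nls_database_parameters
WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');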
DrSQL - Scott Anderson

One more query suggestion:
select owner, segment_name, segment_type from newDBtables
minus
select owner, segment_name, segment_type from oldDBtables;

This will show you anything that’s in the new database’s MYDATA1 tablespace but wasn’t there in the old one.
Julie Kurpa

ASKER
I finished comparing between 12c and 19c.  Everything matched up perfectly except for a handful of tables.  Their differences are very small.  


DrSQL - Scott Anderson

And the “minus” query I shared over the weekend?
Kent Olsen

Let's make sure that nothing else was written to that tablespace.

SELECT * FROM all_tables WHERE tablespace_name = 'MYDATA1';

Can you compare the results of that query from both servers?  There are some odd things in the numbers you posted above, but the first thing to do is make sure that the tablespace holds the same tables in both.

Also make sure that the recycle_bin isn't holding deleted objects.
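A sketch of that recycle bin check (the SPACE column is in database blocks):

SELECT owner, COUNT(*) AS objects, SUM(space) AS blocks
FROM dba_recyclebin
GROUP BY owner;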


Julie Kurpa

ASKER
I ran this query on both the 12c and 19c databases and compared them line by line using Notepad++.  The output matched perfectly except for those tables that I posted.

select owner,segment_name,segment_type,trunc(bytes/1024/1024) from dba_segments where tablespace_name='ICARE01' group by owner,segment_name,segment_type,BYTES;

I should have posted the question back when I populated the database, before it became a production environment where the data is refreshed through the application (it's a weekly web load).  Now I think queries won't show anything helpful.  I was a bit surprised that I got such great responses.

Perhaps what I can do is set up another 19c DB and follow the same steps I did to migrate the data.  I can document that here as I move along. 
DrSQL - Scott Anderson

It actually looks like the 19c tables are mostly smaller in the exception set. I was expecting at least one object to appear in the “select-from-new minus select-from-old”, but you’re probably right about it being a little late for a good analysis.
Julie Kurpa

ASKER
I've not closed this question because I am going to set up an environment to test this.  
Kent Olsen

Do you still have the original environment available?

If so, you might create your test environment, and a database link to the original environment.

Then load the table in the test environment with INSERT into newtable SELECT * FROM oldtable@link;

If the size looks correct, there's something in the import process.  If the size is again too large, there's something in the 19C storage process that Oracle needs to explain.

If you don't have the original environment, you can always do the test from the new environment to the test environment, too.
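A sketch of that test; the link name, connection details, and table names below are placeholders:

-- Run in the test (19c) database; assumes a TNS alias for the source DB.
CREATE DATABASE LINK old12c
  CONNECT TO impuser IDENTIFIED BY xxxxxxxx
  USING 'my12cDB';

-- Conventional-path insert, then compare segment sizes afterward.
INSERT INTO newtable SELECT * FROM oldtable@old12c;
COMMIT;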
DrSQL - Scott Anderson

Sounds like a good idea
Julie Kurpa

ASKER
All 12c environments have been decommissioned except for two.  

I'll install 19c on a server, migrate the DB from the 12c server to the 19c server, and see if it repeats.  Both are on Windows Server 2012.

If it does not repeat, I'll upgrade the OS to Windows Server 2016 (which is what the new 19c environments were installed on) and try it again.  

Perhaps it has to do with 19c or perhaps it's Windows.  
Kent Olsen

My money's on it being 19C.  Oracle manages the space in the containers, not the O/S.

DrSQL - Scott Anderson

I mostly agree with Kent, although disk configuration can have an impact if you’re not using exactly the same specs. One O/S might also have more file system overhead than another. Clustering factor, RAID, and striping can affect total storage used. This is especially true if you also had different Oracle block sizes (although we already ruled that out). But it still seems like there were a lot of anomalies in sizes for your analysis, so I hope you can limit the variables a little more this time.

Although, to clarify, any O/S impact will only show up in the file system. Bytes/blocks in the data dictionary are strictly Oracle storage.
Julie Kurpa

ASKER
Getting poked by EE to update the question.  No progress yet in setting up the environment for the experiment.
Kent Olsen


It's just EE's way to verify that the question isn't abandoned.  :)

Julie Kurpa

ASKER
I've finally got a test environment set up to test the export/import and tablespace usage.  Here's the detail:

Source DB:  12c (12.1.0.2.171130) database (MYDB) on ServerA; Windows Server 2012 R2
Target DB:  19c (19.12) database (ORCL) on ServerB; Windows Server 2012 R2

The Target DB is empty right now.  I'll generate the tablespace DDL from the Source DB and create the tablespaces on the Target DB.

Then export data from the Source DB and import into the Target DB.

Is there anything I should look at or take note of before beginning the process?  




Kent Olsen

Check the size of the container files (tablespace files) before and after the import.
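A sketch for snapshotting the file sizes (run before and after the import, then diff the output):

SELECT tablespace_name, file_name, ROUND(bytes/1024/1024) AS mb, autoextensible
FROM dba_data_files
ORDER BY tablespace_name, file_name;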

Julie Kurpa

ASKER
Son of a gun.  I think I see what happened... yup, it appears that I was not paying attention when I generated the DDL for the tablespaces.

I used dbms_metadata.get_ddl to extract the DDL for the tablespaces.
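A minimal sketch of that kind of extraction, assuming SQL*Plus:

-- The LONG settings keep the generated DDL from being truncated.
SET LONG 1000000 LONGCHUNKSIZE 1000000 PAGESIZE 0
SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name)
FROM dba_tablespaces;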
 
For some tablespaces, dbms_metadata.get_ddl uses the initial create size and then does an "alter database datafile...resize" to the size it actually is.  

It must be doing that for tablespaces that have extended since their initial size.  

Here's what the dbms_metadata.get_ddl just generated for one of the tablespaces:

CREATE TABLESPACE MYTBSP DATAFILE
  SIZE 12884901888
  AUTOEXTEND ON NEXT 262144000 MAXSIZE 32767M,
  SIZE 10485760000
  AUTOEXTEND ON NEXT 262144000 MAXSIZE 32767M
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
  NOCOMPRESS SEGMENT SPACE MANAGEMENT AUTO;

ALTER DATABASE DATAFILE
  '+DATA/MYDB/DATAFILE/icare01.268.970588515' RESIZE 34358689792;



I recall the ALTER statement was failing because the OMF file name was wrong, so I removed it.

But I never actually looked closely at the sizing, assuming it was setting the files to their correct current size.  So no wonder the tablespaces had to extend when I was importing the data.  What a dope.
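Going forward, a sketch of a sanity check: pull the actual current datafile sizes from the source so the generated CREATE TABLESPACE sizes can be corrected up front (the generated RESIZE statements wouldn't work as-is anyway, since OMF file names differ on the target):

SELECT tablespace_name, file_id, ROUND(bytes/1024/1024) AS current_mb
FROM dba_data_files
ORDER BY tablespace_name, file_id;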

  
Kent Olsen

I do love a simple explanation.

Congratulations on finding that!
Julie Kurpa

ASKER
Thanks everyone for hanging in with me and for your helpful suggestions.  
ASKER CERTIFIED SOLUTION
Julie Kurpa
