Solved

How do I improve performance during Data Pump import?

Posted on 2014-09-22
25
1,415 Views
Last Modified: 2014-10-07
I am importing data from tables that have a huge number of partitions, from 200 to over 500 and growing. It is terabytes of data, and I need to migrate the database from a 10g RAC database to an 11g RAC database in a virtual environment. I used Data Pump to export the tables, which took several hours. However, when I tried to import a table it had taken 5 days, and before it could complete it was killed in order to shut down the server for another reason. I need to resume the import as well as import the other similar partitioned tables. I don't know if there is anything I can do to improve the amount of time for the imports. Network links are not possible. I have multiple dump files for each table: one table has a minimum of 12 dump files and the largest table has 28 dump files. I am using the parallel=4 parameter. Is there anything else I can do that would improve the amount of time it takes to import?
0
Comment
Question by:sikyala
25 Comments
 
LVL 18

Assisted Solution

by:sventhan
sventhan earned 100 total points
If you want to use expdp/impdp, then try using the PARALLEL clause to improve the speed.

But it's not recommended practice to use exp/imp when you're dealing with a multi-terabyte database.

You can consider an RMAN clone and then upgrade to 11g, or try using transportable tablespaces.
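For example, the parallel clause looks something like this on both sides (the table, directory and file names, and the parallel setting, are just placeholders):

expdp <username>/<password> directory=<dir_object> dumpfile=bigtab_%u.dmp parallel=8 tables=bigtab logfile=bigtab_exp.log
impdp <username>/<password> directory=<dir_object> dumpfile=bigtab_%u.dmp parallel=8 tables=bigtab logfile=bigtab_imp.log

The %u wildcard matters: on import the effective parallelism is limited by the number of dump files, and the usual rule of thumb is to keep PARALLEL at no more than about twice the CPU count.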

Thanks,
sve.
0
 

Author Comment

by:sikyala
I am using datapump and the parallel clause. Are there any initialization parameters that I could change?
0
 
LVL 18

Expert Comment

by:sventhan
Nope.

Can you post your import script with the parfiles?

How big is the table? What is your CPU usage when you run the import? Is there anything else running on this server?

Find out what it is waiting on when you are doing the import.
0
 

Author Comment

by:sikyala
I execute it from the command line:

impdp <username>/<password> directory=weekly dumpfile=amfer_data_%u.dmp parallel=4 tables=amfer_data content=data_only logfile=logs:amfer_data.log
0
 
LVL 29

Assisted Solution

by:MikeOM_DBA
MikeOM_DBA earned 200 total points
Have you tried doing it using transportable tablespaces? Or RMAN?

Or clone the database and upgrade.
0
 
LVL 76

Assisted Solution

by:slightwv (Netminder)
slightwv (Netminder) earned 100 total points
Some things to look at:

Try two imports: first with no indexes, then import the indexes.

Is the database in archive log mode? I wonder if the long delays are caused by excessive log archiving.

Were the tablespaces pre-created with appropriate sizing? If they were pre-created small and set to auto-extend, this could cause problems. Extending a tablespace/datafile is a very expensive process.
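A hedged sketch of the two-pass idea with Data Pump, reusing the names from the command above (the log file names are made up):

Pass 1 - load everything except the indexes:
impdp <username>/<password> directory=weekly dumpfile=amfer_data_%u.dmp parallel=4 tables=amfer_data exclude=index logfile=logs:amfer_data_noidx.log

Pass 2 - build just the indexes:
impdp <username>/<password> directory=weekly dumpfile=amfer_data_%u.dmp tables=amfer_data include=index logfile=logs:amfer_data_idx.log

This assumes the dump files contain metadata as well as rows; with a content=data_only load into tables whose indexes already exist, the equivalent is to drop the indexes before the load and rebuild them afterwards.

Also, about the run that was killed: if it was not killed with KILL_JOB and its master table still exists, the job should still show up in DBA_DATAPUMP_JOBS and can be re-attached and restarted instead of starting over (the job name below is just the default naming pattern; use whatever the view reports):

select owner_name, job_name, state from dba_datapump_jobs;

impdp <username>/<password> attach=SYS_IMPORT_TABLE_01
Import> start_job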
0
 

Assisted Solution

by:HNC24
HNC24 earned 100 total points
As suggested above, you have to do two imports.

Import the dump with the parameter EXCLUDE=INDEX,STATISTICS (statistics will need to be re-analyzed).

Create the index definitions by setting INDEXFILE to a filename on the import, so no data will be imported; but you must edit this file and provide passwords on the CONNECT statements.

Use the parameter ANALYZE=N, which will ignore analyze statements.

Set the BUFFER parameter to a large value.

You can try the hidden parameter _disable_logging=true, which can reduce the redo (not advisable).
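(Note: INDEXFILE, ANALYZE and BUFFER are parameters of the original imp utility; since this job uses impdp, the rough equivalents are EXCLUDE=STATISTICS in place of ANALYZE=N and SQLFILE in place of INDEXFILE, while BUFFER has no Data Pump counterpart. For example, reusing the names from the command above:)

impdp <username>/<password> directory=weekly dumpfile=amfer_data_%u.dmp tables=amfer_data exclude=index exclude=statistics logfile=logs:amfer_data.log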
0
 
LVL 76

Expert Comment

by:slightwv (Netminder)
>>You can try with hidden parameter _disable_logging = true to which can reduce the redo (not advisable)

DO NOT DO THIS!!!

You can corrupt the database.  Tom Kyte has an extensive thread on this:
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3387504900346765531

In that same thread he also points out why there isn't much of a performance gain from using it.
0
 

Expert Comment

by:HNC24
Yes, he should not use _disable_logging on a production DB.
But you can test it on a test DB and see the difference :).
0
 
LVL 76

Expert Comment

by:slightwv (Netminder)
>>But you can test on test db you can see the difference :).

To what end?  Just to prove what Tom Kyte has already stated:  You will only get a SMALL performance gain.

I have no desire to potentially corrupt even a TEST database.

Besides, it is wasted effort.

This is along the same lines as:
I wonder if backups will be a lot faster if I delete my largest datafile. Let's try it on TEST to confirm it. Hey, it worked. It's a shame I cannot do this in production. Time to rebuild the test database.
0
 

Expert Comment

by:HNC24
As I said, using that parameter is not advisable, and I stick to that.

But I do that kind of testing when I get time, to check out what kind of errors come up and so on.
0
 
LVL 18

Expert Comment

by:sventhan
>> need to migrate the database from a 10g RAC database to a 11g RAC

If it's a migration effort, I would say use the Transportable Tablespace method. It would take around 3 hours for 1 TB of data on decent hardware.

Export/import is not the preferred method.

Oracle Support has a step-by-step document on this.
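Roughly, the flow looks like this (the tablespace, directory and file names are placeholders; the Oracle Support note has the full details):

On the 10g source:
exec dbms_tts.transport_set_check('APP_DATA', true);
select * from transport_set_violations;
alter tablespace APP_DATA read only;
expdp <username>/<password> directory=<dir_object> dumpfile=app_data_tts.dmp transport_tablespaces=APP_DATA logfile=app_data_tts_exp.log

Copy the datafiles to the 11g target, then:
impdp <username>/<password> directory=<dir_object> dumpfile=app_data_tts.dmp transport_datafiles='/u01/oradata/app_data01.dbf' logfile=app_data_tts_imp.log
alter tablespace APP_DATA read write;

Only the metadata goes through the dump file, so it is fast, but the tablespace set has to be self-contained and stay read only while the datafiles are copied.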
0

 
LVL 29

Expert Comment

by:MikeOM_DBA
Finally someone agrees with me...Thanks.
0
 
LVL 18

Expert Comment

by:sventhan
Mike -

That's what I've been saying since the beginning (my first post). At the same time, I'm in the process of migrating a 40 TB database to 11g on Linux 6.

Thanks!
0
 
LVL 29

Expert Comment

by:MikeOM_DBA
Oh, yeah! Now I noticed.

Last year I did several "migrations" that included an upgrade from 10g to 11g, and I found the fastest method was either transportable tablespaces or "cloning" the old-fashioned way: copying the dbf files.
If you do it right, you don't even need 10g installed on the target server for the upgrade.

Fortunately it was the same OS (AIX), just a higher version.
Good luck!
0
 

Author Comment

by:sikyala
>> Have you tried doing it using transportable tablespaces? or RMAN? Or, clone the  database and upgrade.

Unfortunately, due to restrictions and the instability of production, I am unable to use any of these options.
0
 

Author Comment

by:sikyala
>> Is the database in archive log mode? precreate tablespaces....

No, I disabled archivelog mode 2 weeks ago in order to do the imports. The tablespaces were created with more than enough disk space before the import. I imported the schema and all of the DB users' objects without the data first. Then I used expdp/impdp for the data only on the partitioned tables.
0
 

Author Comment

by:sikyala
>> Create the index definitions by setting INDEXFILE to a filename on the import, so no data will be imported; but you must edit this file and provide passwords on the CONNECT statements

I am not familiar with this option. Would I need to modify the export parameters to include only index information? Or, if I am understanding what you are suggesting, how would I import just the indexes and then just the data?
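(With the original imp utility, INDEXFILE writes the CREATE INDEX statements to a text file instead of running them; the impdp equivalent is SQLFILE, which needs no change to the export as long as the dump files contain metadata, i.e. the export was not content=data_only. A hedged sketch, reusing the names from the earlier command, with the output file name made up:)

impdp <username>/<password> directory=weekly dumpfile=amfer_data_%u.dmp tables=amfer_data include=index sqlfile=amfer_indexes.sql

This writes the index DDL into amfer_indexes.sql in the directory without importing anything; the data can then be loaded with exclude=index (or content=data_only) and the indexes created afterwards by running that script.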
0
 

Author Comment

by:sikyala
>> Yes, he should not use _disable_logging on a production DB. But you can test it on a test DB and see the difference :).

Yes, I am trying to import production data into our test environment.
0
 

Author Comment

by:sikyala
>> If it's a migration effort, I would say use the Transportable Tablespace method. It would take around 3 hours for 1 TB of data on decent hardware.

The production database is 9 terabytes. The hardware is not decent; it is old and past warranty on parts. In fact, there is nowhere to store a full backup of the database due to lack of disk space, even in ASM. The data in the database was analyzed to identify the critical data that would need to be restored in case the production database becomes unavailable. I don't know how I could specify only that data in a transportable tablespace export. Is that possible?
0
 

Expert Comment

by:HNC24
If you cannot use transportable tablespaces or RMAN,

you will have to use the Data Pump utility (EXPDP/IMPDP). It is as easy to use as export/import.
Check this link for the same: EXPDP IMPDP

Data Pump is fast and flexible compared to the older tools, especially on bigger jobs.
Datapump VS Exp/Imp
0
 
LVL 76

Expert Comment

by:slightwv (Netminder)
>>You must have to use datapump utility  (EXPDP/IMPDP )

They are already using datapump.  The original question is about improving the response time of a datapump import.
0
 

Expert Comment

by:HNC24
It's my mistake... wrong place.
0
 
LVL 29

Accepted Solution

by:MikeOM_DBA
MikeOM_DBA earned 200 total points
>> Yes, I am trying to import production data into our test environment

I have a similar situation where I have to create/maintain QA and Training environments in 10% of the production space.

Solution:

1) Pre-create the tables
2) Full copy of all non-transactional (support) tables like codes, activities, products, etc.
3) Using the Data Pump "query" clause, choose a sub-set of the data (for example, in my case only some company "locations" are selected).
4) Use Data Pump over a network link to avoid writing physical dump files.
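For item 3, the QUERY clause is easiest to handle in a parfile because of the quoting. A hedged sketch, where the filter column and values are made up purely for illustration:

Contents of a parfile, say amfer_subset.par:
directory=weekly
dumpfile=amfer_data_%u.dmp
tables=amfer_data
content=data_only
query=amfer_data:"WHERE location_id IN (101, 102)"
logfile=logs:amfer_subset.log

Then run:
impdp <username>/<password> parfile=amfer_subset.par

For item 4, NETWORK_LINK=<db_link_name> (with no dumpfile at all) pulls the rows straight across a database link, though as noted in the question, network links are not possible here.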

HTH

PS: Do not try to kill two birds with the same stone...
a) Create the test database as non-RAC
b) Upgrade to 11g
c) Then convert to RAC
0
 

Author Closing Comment

by:sikyala
Thanks. I will try the clone and transportable tablespace options when I migrate from test to production in the virtual environment.
0

