Solved

Slow Export using DATAPUMP with BLOBS.

Posted on 2011-02-21
Medium Priority
4,289 Views
Last Modified: 2016-10-03
I am trying to export a schema that is 170GB. One of the tables in that schema is about 100GB, most of which is blobs.

When I do the whole schema, it takes about 7-8 hours to export.

When I do a partial export by excluding the 100GB table, it takes 40 minutes.
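(For reference, the partial export just adds an exclude clause along these lines to the parfile; the table name here is only a placeholder.)

exclude=TABLE:"IN ('BIG_BLOB_TABLE')"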

I ran the Segment Advisor and shrank some tables, but that did not help.

Is there any other way I can improve performance?

Oracle version is 10.2.0.5.0.

Thanks
Question by: Waqasulhaq
5 Comments
 
LVL 4

Expert Comment

by: pinkuray
ID: 34948099
What parameters are you using for Data Pump?
 
LVL 48

Expert Comment

by: schwertner
ID: 34951144
It is pretty normal for it to take 8 hours to export 170 GB.

The way to speed this up is to use multiple dump files and parallel operations:
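On the expdp command line the equivalent would be a parameter file roughly like this (the directory object, file name prefix, sizes and schema name are only illustrative; the %U wildcard lets each parallel worker write to its own dump file):

directory=DATA_PUMP_DIR
dumpfile=schema_exp_%U.dmp
filesize=1500M
parallel=4
schemas=your_schema
logfile=schema_exp.log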

Using the DBMS_DATAPUMP package from PL/SQL:

-- Add a dump file to the job; with a size cap, Data Pump rolls over to additional files as needed.
DBMS_DATAPUMP.add_file(
    handle    => l_dp_handle,
    filename  => v_file_dump_name,
    filesize  => '1500M',
    directory => v_dir,
    filetype  => DBMS_DATAPUMP.ku$_file_type_dump_file);

-- Run the job with four parallel workers.
DBMS_DATAPUMP.set_parallel(
    handle    => l_dp_handle,
    degree    => 4);


You can find the appropriate parameters in the Data Pump export parameter list and on the net.

The snippet above is an excerpt from a package I created for a company I worked for in the past.
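For completeness, a minimal end-to-end sketch of such a job might look roughly like this (the job name, schema name and directory object are assumptions, not taken from the original package):

DECLARE
  l_dp_handle NUMBER;
BEGIN
  -- Open a schema-mode export job (the job name is an assumption).
  l_dp_handle := DBMS_DATAPUMP.open(
    operation => 'EXPORT',
    job_mode  => 'SCHEMA',
    job_name  => 'SCHEMA_EXP_JOB');

  -- %U makes Data Pump generate one file per parallel worker,
  -- each capped at 1500 MB (directory object name is an assumption).
  DBMS_DATAPUMP.add_file(
    handle    => l_dp_handle,
    filename  => 'schema_exp_%U.dmp',
    directory => 'DATA_PUMP_DIR',
    filesize  => '1500M',
    filetype  => DBMS_DATAPUMP.ku$_file_type_dump_file);

  -- Restrict the job to the schema being exported (name is a placeholder).
  DBMS_DATAPUMP.metadata_filter(
    handle => l_dp_handle,
    name   => 'SCHEMA_EXPR',
    value  => 'IN (''YOUR_SCHEMA'')');

  -- One worker per dump file.
  DBMS_DATAPUMP.set_parallel(
    handle => l_dp_handle,
    degree => 4);

  -- Start the job; after detach it keeps running in the background.
  DBMS_DATAPUMP.start_job(l_dp_handle);
  DBMS_DATAPUMP.detach(l_dp_handle);
END;
/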

If you experience trouble, let me know.
 

Author Comment

by: Waqasulhaq
ID: 34966978
Hello, sorry I could not post earlier.

Below is what my parfile looks like.
////////////////////////////
dumpfile=file1,file2,file3,file4
Filesize=45000M
parallel=4
schemas=schema_name
/////////////////////////////
The server is a quad core server, so it should be able to handle the parallel processing.

I was just confused as to why it takes so long for Data Pump to export BLOBs compared to normal data: 20 GB across 600 tables takes 20 minutes, while 110 GB of BLOBs takes 6+ hours.
 
LVL 48

Accepted Solution

by: schwertner (earned 2000 total points)
ID: 34968042
Yes, 20 GB across 600 tables takes 20 minutes.
But it is so quick because a big chunk of those 20 GB is indexes.
Oracle does not export the indexes physically.
It exports the DDL statement CREATE INDEX ... instead.
During the import this statement is executed, and if you pay attention to the log you will see that importing the indexes takes a long time, more than the creation of the tables.
In your case the BLOBs have to be physically exported. There is no alternative way.
This is the reason the export takes so long.
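If you want to see the breakdown for yourself, a query along these lines against DBA_SEGMENTS shows how much of the schema is table data, index data and LOB data (the owner name is a placeholder):

-- Break down a schema's on-disk size by segment type.
-- LOBSEGMENT rows are the BLOB data that Data Pump must export physically;
-- INDEX rows are exported only as CREATE INDEX DDL, so they cost little at export time.
SELECT segment_type,
       ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) AS size_gb
FROM   dba_segments
WHERE  owner = 'YOUR_SCHEMA'
GROUP  BY segment_type
ORDER  BY size_gb DESC;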
 

Expert Comment

by: hsd xx145
ID: 41826317
Looking for a solution.
