Slow export using Data Pump with BLOBs

I am trying to export a schema that is 170GB. One of the tables in that schema is about 100GB, most of which is blobs.

When I export the whole schema, it takes about 7-8 hours.

When I do a partial export by excluding the 100GB table, it takes 40 minutes.

I ran the Segment Advisor and shrank some tables, but that did not help either.

Is there any other way I can improve performance?

Oracle version is 10.2.0.5.0.

Thanks
Asked by Waqasulhaq
 
schwertner commented (accepted answer):
Yes, 20 GB across 600 tables takes 20 minutes.
But it is that quick because a big chunk of those 20 GB is indexes.
Oracle does not export the indexes physically; it exports only the DDL statement CREATE INDEX ...
During the import that statement is executed, and if you pay attention to the log you will see that importing the indexes takes a long time, more than the creation of the tables.
In your case the BLOBs must be physically exported; there is no alternative. That is why the export takes so long.
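If you want to verify that split, a rough breakdown of the schema by segment type shows how much of it is table data, LOB data, and indexes. This is only a sketch; SCHEMA_NAME is a placeholder for the actual owner:

-- Hypothetical check: replace SCHEMA_NAME with the actual schema owner.
-- LOBSEGMENT rows are the BLOB data that must be physically exported;
-- INDEX and LOBINDEX rows are exported only as DDL.
SELECT segment_type,
       ROUND(SUM(bytes) / 1024 / 1024 / 1024, 1) AS size_gb
FROM   dba_segments
WHERE  owner = 'SCHEMA_NAME'
GROUP  BY segment_type
ORDER  BY size_gb DESC;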
 
pinkuray commented:
What parameters are you using for Data Pump?
 
schwertner commented:
It is pretty normal for a 170 GB export to take 8 hours.

The way to speed it up is to use multiple dump files and parallel operations.

Using the DBMS_DATAPUMP package from PL/SQL:

-- Add a dump file to the job (call once per file, or use a %U
-- wildcard in the file name so files are created on demand)
DBMS_DATAPUMP.add_file(
    handle    => l_dp_handle,
    filename  => v_file_dump_name,
    directory => v_dir,
    filesize  => '1500M',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

-- Allow up to four worker processes for the job
DBMS_DATAPUMP.SET_PARALLEL(
    handle    => l_dp_handle,
    degree    => 4);


You can find the appropriate parameters in the Data Pump export parameter reference and on the net.

The above is an excerpt from a package I created for a company I worked for in the past.

If you run into trouble, let me know.
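For reference, here is a minimal, self-contained sketch of how those calls fit together into a complete schema export job. It is only a sketch under my assumptions: the job name, dump-file name, directory object, and schema are placeholders, not the names from the original package.

DECLARE
  l_dp_handle NUMBER;
  l_job_state VARCHAR2(30);
BEGIN
  -- Open a schema-mode export job (job name is a placeholder)
  l_dp_handle := DBMS_DATAPUMP.open(
    operation => 'EXPORT',
    job_mode  => 'SCHEMA',
    job_name  => 'EXP_BLOB_SCHEMA');

  -- %U makes Data Pump create numbered files (exp_schema_01.dmp,
  -- exp_schema_02.dmp, ...) as each 1500M file fills, so the parallel
  -- workers are never starved for dump files
  DBMS_DATAPUMP.add_file(
    handle    => l_dp_handle,
    filename  => 'exp_schema_%U.dmp',
    directory => 'DATA_PUMP_DIR',   -- placeholder directory object
    filesize  => '1500M',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

  -- Restrict the job to the one schema (placeholder name)
  DBMS_DATAPUMP.metadata_filter(
    handle => l_dp_handle,
    name   => 'SCHEMA_EXPR',
    value  => '= ''SCHEMA_NAME''');

  -- Allow up to four worker processes
  DBMS_DATAPUMP.set_parallel(
    handle => l_dp_handle,
    degree => 4);

  -- Start the job and block until it finishes
  DBMS_DATAPUMP.start_job(l_dp_handle);
  DBMS_DATAPUMP.wait_for_job(
    handle    => l_dp_handle,
    job_state => l_job_state);

  DBMS_OUTPUT.put_line('Job finished with state: ' || l_job_state);
END;
/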
 
Waqasulhaq (author) commented:
Hello, sorry I could not post earlier.

Below is what my parfile looks like.
////////////////////////////
dumpfile=file1,file2,file3,file4
filesize=45000M
parallel=4
schemas=schema_name
/////////////////////////////
The server is a quad-core server, so it should be able to handle the parallel processing.

I was just confused as to why Data Pump takes so long to export BLOBs compared to normal data: 20 GB across 600 tables takes 20 minutes, while 110 GB of BLOBs takes 6+ hours.
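One variation of that parfile worth trying, sketched here with placeholder names: use the %U wildcard so Data Pump creates numbered dump files on demand instead of being capped at four fixed names.

# Sketch only; file name and schema are placeholders.
# %U expands to 01, 02, 03, ... as each 45000M file fills up.
dumpfile=exp_schema_%U.dmp
filesize=45000M
parallel=4
schemas=schema_name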
 
hsd xx145 commented:
Looking for a solution.