Waqasulhaq
asked on
Slow Export using DATAPUMP with BLOBS.
I am trying to export a schema that is 170GB. One of the tables in that schema is about 100GB, most of which is blobs.
When I do the whole schema, it takes about 7-8 hours to export.
When I do a partial export by excluding the 100GB table, it takes 40 minutes.
I ran the segment advisor and shrank some tables, but that did not help either.
Is there any other way I can improve performance?
Oracle version is 10.2.0.5.0.
Thanks
What are the parameters that you are using for Data Pump?
It is pretty normal for a 170 GB export to take 8 hours.
The way to speed it up is to use multiple dump files and parallel operations:
Using DBMS_DATAPUMP package with PL/SQL:
DBMS_DATAPUMP.add_file(
  handle    => l_dp_handle,
  filename  => v_file_dump_name,
  filesize  => '1500M',
  directory => v_dir,
  filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

DBMS_DATAPUMP.set_parallel(
  handle => l_dp_handle,
  degree => 4);
You can find the appropriate parameters in the list of Data Pump export parameters and on the net.
The above is an excerpt from a package I wrote for a company I worked at in the past.
If you experience troubles let me know.
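For context, a minimal sketch of how calls like the excerpt above fit into a complete DBMS_DATAPUMP job might look like the following. The job name, directory object, file name pattern, and schema name are all illustrative placeholders, not values from this thread; the `%U` substitution variable in the file name lets each parallel worker write to its own file.

```plsql
DECLARE
  l_dp_handle NUMBER;
BEGIN
  -- Open a schema-mode export job (job name is illustrative)
  l_dp_handle := DBMS_DATAPUMP.open(
    operation => 'EXPORT',
    job_mode  => 'SCHEMA',
    job_name  => 'EXP_SCHEMA_JOB');

  -- %U expands to 01, 02, ... so parallel workers can each write a file
  DBMS_DATAPUMP.add_file(
    handle    => l_dp_handle,
    filename  => 'exp_file_%U.dmp',
    directory => 'DATA_PUMP_DIR',
    filesize  => '1500M',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

  -- Restrict the job to one schema (schema name is a placeholder)
  DBMS_DATAPUMP.metadata_filter(
    handle => l_dp_handle,
    name   => 'SCHEMA_EXPR',
    value  => 'IN (''MY_SCHEMA'')');

  DBMS_DATAPUMP.set_parallel(
    handle => l_dp_handle,
    degree => 4);

  DBMS_DATAPUMP.start_job(l_dp_handle);
  DBMS_DATAPUMP.detach(l_dp_handle);
END;
/
```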
ASKER
Hello, sorry I could not post earlier.
Below is what my parfile looks like.
////////////////////////////
dumpfile=file1,file2,file3,file4
filesize=45000M
parallel=4
schemas=schema_name
////////////////////////////
The server is a quad-core server, so it should be able to handle the parallel processing.
I was just confused as to why it takes so long for Data Pump to export BLOBs compared to normal data. 20 GB across 600 tables takes 20 minutes; 110 GB of BLOBs takes 6+ hours.
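One thing worth checking in a parfile like the one above: with four fixed file names, a worker that needs a new file once all four exist can stall, whereas the `%U` substitution variable lets Data Pump create as many files as the parallel workers need. A possible variant, keeping the same placeholder schema name from the parfile:

```
dumpfile=exp_file_%U.dmp
filesize=45000M
parallel=4
schemas=schema_name
```

Note also that the `PARALLEL` parameter only takes effect on Enterprise Edition, and it mainly parallelizes across table data segments, which limits how much it helps a single very large LOB-heavy table.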