Solved

Slow Export using DATAPUMP with BLOBS.

Posted on 2011-02-21
3,773 Views
Last Modified: 2016-10-03
I am trying to export a schema that is 170 GB. One of the tables in that schema is about 100 GB, most of which is BLOBs.

When I do the whole schema, it takes about 7-8 hours to export.

When I do a partial export that excludes the 100 GB table, it takes 40 minutes.

I ran the segment advisor and shrank some tables, but that did not help either.

Is there any other way I can improve performance?

Oracle version is 10.2.0.5.0.

Thanks
Question by:Waqasulhaq
5 Comments
 
LVL 4

Expert Comment

by:pinkuray
ID: 34948099
What parameters are you using for Data Pump?
 
LVL 48

Expert Comment

by:schwertner
ID: 34951144
It is pretty normal for 170 GB to take 8 hours to export.

The way to speed it up is to use multiple dump files and parallel operations:

Using the DBMS_DATAPUMP package with PL/SQL:

DBMS_DATAPUMP.add_file(
    handle    => l_dp_handle,
    filename  => v_file_dump_name,
    filesize  => '1500M',
    directory => v_dir,
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

DBMS_DATAPUMP.set_parallel(
    handle    => l_dp_handle,
    degree    => 4);


You can find the appropriate parameters in the list of Data Pump export parameters and on the net.

The above is an excerpt from a package I created for a company I worked for in the past.

If you experience trouble, let me know.
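
For context, here is a minimal self-contained sketch of how those two calls fit into a complete export job; the job name, directory object, schema name, and file name below are placeholders, and the %U wildcard lets Data Pump number the dump files itself as each one fills up:

DECLARE
  l_dp_handle NUMBER;
BEGIN
  -- open a schema-mode export job (job name is a placeholder)
  l_dp_handle := DBMS_DATAPUMP.open(
      operation => 'EXPORT',
      job_mode  => 'SCHEMA',
      job_name  => 'EXP_MYSCHEMA');

  -- %U numbers the dump files automatically; DATA_DIR must be an
  -- existing directory object the exporting user can write to
  DBMS_DATAPUMP.add_file(
      handle    => l_dp_handle,
      filename  => 'exp_myschema_%U.dmp',
      directory => 'DATA_DIR',
      filesize  => '1500M',
      filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

  -- restrict the job to the one schema being exported
  DBMS_DATAPUMP.metadata_filter(
      handle => l_dp_handle,
      name   => 'SCHEMA_EXPR',
      value  => 'IN (''MYSCHEMA'')');

  DBMS_DATAPUMP.set_parallel(
      handle => l_dp_handle,
      degree => 4);

  DBMS_DATAPUMP.start_job(l_dp_handle);
  DBMS_DATAPUMP.detach(l_dp_handle);
END;
/

After DETACH the job keeps running in the background; you can monitor it through the DBA_DATAPUMP_JOBS view.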
 

Author Comment

by:Waqasulhaq
ID: 34966978
Hello, sorry I could not post earlier.

Below is what my parfile looks like.
////////////////////////////
dumpfile=file1,file2,file3,file4
filesize=45000M
parallel=4
schemas=schema_name
/////////////////////////////
The server is a quad-core server, so it should be able to handle the parallel processing.

I was just confused as to why it takes so long for Data Pump to export BLOBs compared to normal data. 20 GB across 600 tables takes 20 minutes; 110 GB of BLOBs takes 6+ hours.
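
For what it's worth, a sketch of a revised parfile along the same lines; the directory object and schema name are placeholders, and the %U wildcard spares you naming each file individually (note that the expdp parameter is SCHEMAS):

# hypothetical parfile sketch; DATA_DIR and schema_name are placeholders
directory=DATA_DIR
dumpfile=exp_%U.dmp
filesize=45000M
parallel=4
schemas=schema_name

One caveat, if memory serves: in 10g Data Pump assigns the rows of a single non-partitioned table to one worker, so PARALLEL spreads the 600 small tables across workers but cannot split the one 100 GB BLOB table, which would explain why that table dominates the elapsed time.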
 
LVL 48

Accepted Solution

by:schwertner (earned 500 total points)
ID: 34968042
Yes, 20 GB across 600 tables exports in 20 minutes.
But it goes that quickly because a big chunk of those 20 GB is indexes.
Oracle does not export indexes physically.
It exports the DDL statement CREATE INDEX ... instead.
During the import that statement is executed, and if you pay attention to the log you will see that importing the indexes takes a long time, longer than the creation of the tables.
In your case the BLOBs must be physically exported. There is no alternative.
This is the reason the export takes so long.
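
If you want to verify that breakdown, a query along these lines against DBA_SEGMENTS should show how much of the schema is LOB segments (physically exported) versus indexes (recreated from DDL on import); the owner value is a placeholder:

-- space per segment type for one schema; LOBSEGMENT rows are the BLOB data
-- Data Pump must physically unload, while INDEX rows travel as DDL only
SELECT segment_type,
       ROUND(SUM(bytes) / 1024 / 1024 / 1024, 1) AS gb
FROM   dba_segments
WHERE  owner = 'SCHEMA_NAME'
GROUP  BY segment_type
ORDER  BY gb DESC;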
 

Expert Comment

by:hsd xx145
ID: 41826317
Looking for a solution.
