Solved

Slow Export using DATAPUMP with BLOBS.

Posted on 2011-02-21
5
2,864 Views
Last Modified: 2016-10-03
I am trying to export a schema that is 170GB. One of the tables in that schema is about 100GB, most of which is blobs.

When I export the whole schema, it takes about 7-8 hours.

When I do a partial export by excluding the 100GB table, it takes 40 minutes.

I ran the segment advisor and shrank some tables, but that did not help either.

Is there any other way I can improve performance?

Oracle version is 10.2.0.5.0.

Thanks
0
Comment
Question by:Waqasulhaq
5 Comments
 
LVL 4

Expert Comment

by:pinkuray
ID: 34948099
What parameters are you using for Data Pump?
0
 
LVL 47

Expert Comment

by:schwertner
ID: 34951144
It is pretty normal for a 170 GB export to take 8 hours.

The way to speed it up is to use multiple dump files and parallel operations:

Using the DBMS_DATAPUMP package with PL/SQL:

-- l_dp_handle is the job handle returned by DBMS_DATAPUMP.OPEN;
-- v_file_dump_name and v_dir hold the dump file name and the directory object name.
DBMS_DATAPUMP.add_file(
    handle    => l_dp_handle,
    filename  => v_file_dump_name,
    filesize  => '1500M',
    directory => v_dir,
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

-- Run the job with 4 parallel worker processes.
DBMS_DATAPUMP.SET_PARALLEL(
    handle    => l_dp_handle,
    degree    => 4);


You can find the appropriate parameters in the list of Data Pump export parameters and on the net.

The above is an excerpt from a package I created for a company I worked for in the past.

If you run into trouble, let me know.
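
For context, a minimal end-to-end sketch of such a job could look like the following. The directory object DATA_PUMP_DIR, the schema name MYSCHEMA and the job name are placeholders, not taken from the original package:

DECLARE
    l_dp_handle  NUMBER;
    l_job_state  VARCHAR2(30);
BEGIN
    -- Create a schema-mode export job.
    l_dp_handle := DBMS_DATAPUMP.open(
        operation => 'EXPORT',
        job_mode  => 'SCHEMA',
        job_name  => 'EXP_MYSCHEMA_JOB');

    -- Dump files: %U is replaced with 01, 02, ... so each parallel
    -- worker can write to its own file.
    DBMS_DATAPUMP.add_file(
        handle    => l_dp_handle,
        filename  => 'exp_myschema_%U.dmp',
        directory => 'DATA_PUMP_DIR',
        filesize  => '1500M',
        filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

    -- Restrict the job to the schema being exported.
    DBMS_DATAPUMP.metadata_filter(
        handle => l_dp_handle,
        name   => 'SCHEMA_EXPR',
        value  => 'IN (''MYSCHEMA'')');

    -- Use 4 parallel worker processes.
    DBMS_DATAPUMP.set_parallel(
        handle => l_dp_handle,
        degree => 4);

    DBMS_DATAPUMP.start_job(l_dp_handle);
    DBMS_DATAPUMP.wait_for_job(l_dp_handle, l_job_state);
END;
/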
0
 

Author Comment

by:Waqasulhaq
ID: 34966978
Hello, sorry I could not post earlier.

Below is what my parfile looks like.
////////////////////////////
dumpfile=file1,file2,file3,file4
filesize=45000M
parallel=4
schemas=schema_name
/////////////////////////////
The server is a quad core server, so it should be able to handle the parallel processing.

I was just confused as to why it takes so long for Data Pump to export BLOBs compared to normal data: 20 GB across 600 tables takes 20 minutes, while 110 GB of BLOBs takes 6+ hours.
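
For reference, a variant of that parfile using the %U substitution variable, so the parallel workers are not limited to the four named files, might look like this (the directory object, dump file prefix and schema name are placeholders):

directory=DATA_PUMP_DIR
dumpfile=exp_schema_%U.dmp
filesize=45000M
parallel=4
schemas=schema_name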
0
 
LVL 47

Accepted Solution

by:
schwertner earned 500 total points
ID: 34968042
Yes, 20 GB across 600 tables takes 20 minutes.
But it is that quick because a big chunk of those 20 GB is indexes.
Oracle does not export the indexes physically.
It exports the DDL statement CREATE INDEX ... instead.
During the import that statement is executed, and if you pay attention to the log you will see that importing the indexes takes a long time, longer than the creation of the tables.
In your case the BLOBs have to be physically exported. There is no alternative.
This is the reason the export takes so long.
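
If you want to confirm how much of the schema is really LOB data, a query along these lines against the data dictionary shows the LOB segment sizes per table (MYSCHEMA is a placeholder for the schema owner):

SELECT l.table_name,
       l.column_name,
       ROUND(s.bytes / 1024 / 1024 / 1024, 1) AS lob_gb
FROM   dba_lobs l
       JOIN dba_segments s
         ON  s.owner        = l.owner
         AND s.segment_name = l.segment_name
WHERE  l.owner = 'MYSCHEMA'
ORDER  BY s.bytes DESC;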
0
 

Expert Comment

by:hsd xx145
ID: 41826317
Looking for a solution.
0
