structuredweb

asked:

What is the best way to move big tables with CLOBs between databases?

We have around 20 big tables in our Oracle 9.2.0.8 database that have a CLOB field. We want to upgrade to Oracle 11g, using a new installation of the 11g database and importing the data from the old database. The problem is that export/import takes so much time for the CLOB tables that we don't have the time window for it. (All other tables, even very large ones, don't have this performance issue.)

What is the best way of moving large tables with a CLOB field between databases?

Thanks
Sean Stuber

You can try pushing the data across a database link, or parallelize your export/import: run one export/import per table instead of bundling them.
structuredweb (Asker)

Thanks for the quick response.

I just tried pushing CLOBs (or any LOB) over a dblink, and it is unsupported;
see note 119897.1 on Metalink.
I tried a few workarounds (selecting only the CLOB from the remote table, creating a view over the remote table, etc.), none of which worked.

As for parallelizing the import, that might help, but a table with millions of records with CLOBs will still take a lot of time to import.

Any other ideas? (SQL*Loader, unloading the CLOBs to files and reloading them, a procedure that converts them to character chunks and rebuilds them on the other end, as sketched below...)
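
For the chunking idea, I am picturing something like this (a rough sketch only; the table, column, and link names are made up):

-- On the old 9i side: a function that returns one VARCHAR2 chunk of a
-- CLOB, since plain character data can cross a dblink even when LOBs
-- cannot.
CREATE OR REPLACE FUNCTION get_clob_chunk (
    p_id     IN NUMBER,
    p_offset IN INTEGER
) RETURN VARCHAR2
IS
    v_chunk VARCHAR2(4000);
BEGIN
    SELECT DBMS_LOB.SUBSTR(clob_col, 4000, p_offset)
      INTO v_chunk
      FROM big_table
     WHERE id = p_id;
    RETURN v_chunk;
END;
/

-- On the new 11g side: rebuild each CLOB chunk by chunk over the link
-- (old_db is the dblink pointing back at the 9i database).
DECLARE
    v_chunk  VARCHAR2(4000);
    v_offset INTEGER;
BEGIN
    FOR r IN (SELECT id FROM big_table) LOOP
        v_offset := 1;
        LOOP
            v_chunk := get_clob_chunk@old_db(r.id, v_offset);
            EXIT WHEN v_chunk IS NULL;
            -- append the chunk; a NULL CLOB concatenated with the
            -- chunk just yields the chunk
            UPDATE big_table
               SET clob_col = clob_col || v_chunk
             WHERE id = r.id;
            v_offset := v_offset + LENGTH(v_chunk);
        END LOOP;
    END LOOP;
    COMMIT;
END;
/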

Thanks
ASKER CERTIFIED SOLUTION
Sean Stuber
You are right, it does work with the syntax you provided.
I speed-tested it, and got an even worse time than bringing the whole table over with import:
updating 30K CLOB field records, out of a 3M-record table, is taking more than an hour, which is the time it took to bring the entire 3M-row table across using import.
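
For reference, what I ran was along these lines (all names changed):

-- A plain SELECT of a LOB across a dblink fails with ORA-22992, but an
-- INSERT ... SELECT run on the local side materializes the CLOBs
-- locally, per the note's workaround. big_table and old_db are
-- placeholder names.
INSERT INTO big_table (id, clob_col)
    SELECT id, clob_col
      FROM big_table@old_db;
COMMIT;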

How many records did you move? Did you have any SQL*Net configuration (I can see SQL*Net wait events for the transfer) and/or any other settings to make it faster?
I never moved anywhere close to 3M records; the most was maybe a few thousand. The CLOBs I moved ranged from a few dozen bytes to a few megabytes, but the total volume was still probably less than what you are trying to move.

I did not make any special SQL*Net configuration.
You could explore transportable tablespaces.

There is an introduction here:
http://www.oracle-base.com/articles/9i/TransportableTablespaces9i.php
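
The rough flow, assuming the CLOB tables sit in their own self-contained tablespace (called lob_data here) and both platforms match (a transport restriction in 9i), would be:

-- 1. On the 9.2 source, make the tablespace read only:
ALTER TABLESPACE lob_data READ ONLY;

-- 2. From the OS prompt, export just the metadata, e.g.:
--      exp userid='sys/pwd as sysdba' transport_tablespace=y
--          tablespaces=lob_data file=lob_data.dmp
-- 3. Copy the tablespace's datafiles to the new server.
-- 4. On the 11g target, plug them in from the OS prompt, e.g.:
--      imp userid='sys/pwd as sysdba' transport_tablespace=y
--          datafiles='/u01/oradata/NEW/lob_data01.dbf' file=lob_data.dmp

-- 5. Make the tablespace writable again on the target:
ALTER TABLESPACE lob_data READ WRITE;

Since the datafiles are copied rather than reloaded, the time cost is basically a file transfer, not a row-by-row import.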
I'll try to help more if the asker has further ideas or issues, but I don't see it getting a lot faster.

30K per row x 3M rows = 90G of data. Moving that in one hour is really not too bad a time. I understand you might not want that long a lag, but that's a lot of data to move in a short period of time.
You can help the import by preallocating the target tablespace.
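
For example, on the 11g side, something like this, sized up front so the import never waits on file extension (names and sizes are only illustrative):

-- Preallocate roughly the full 90G before starting the import.
CREATE TABLESPACE clob_data
    DATAFILE '/u01/oradata/NEW/clob_data01.dbf' SIZE 30G,
             '/u01/oradata/NEW/clob_data02.dbf' SIZE 30G,
             '/u01/oradata/NEW/clob_data03.dbf' SIZE 30G
    EXTENT MANAGEMENT LOCAL
    SEGMENT SPACE MANAGEMENT AUTO;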

Another option is to import everything except the big data, then import just the rows that will actually be needed for the first few hours of user activity. While users go about their business, you can backfill the remainder of the data.
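
With the dblink approach, and assuming a created date column on the hypothetical big_table, the split could look like:

-- Phase 1 (inside the outage window): pull only the rows users will
-- need right away; the cutoff and all names are made up.
INSERT INTO big_table (id, created, clob_col)
    SELECT id, created, clob_col
      FROM big_table@old_db
     WHERE created >= DATE '2010-01-01';
COMMIT;

-- Phase 2 (after go-live, while users work): backfill the history.
INSERT INTO big_table (id, created, clob_col)
    SELECT id, created, clob_col
      FROM big_table@old_db
     WHERE created < DATE '2010-01-01';
COMMIT;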