Improve performance of a DB2 export/import procedure

I need help improving performance on an export/import procedure.

I need to export all content of a table, delete all the content, and import it again, into the same table.

Exporting goes alright (not super fast but ok).
When trying to import my computer crashes!

jwp81Author Commented:
I am using DB2.

My table contains 1000000 rows.

The export method I used:
db2 => export to myfile.ixf of ixf      
db2 (cont.) => select * from person;
SQL3132W  The character data in column "PASSWORD" will be truncated to size

(this last SQL notification worries me!)

The import method I tried:
db2 import from myfile.ixf of ixf replace into person

My table creation statement:
create table Person (
  person_id int not null,
  username varchar(30),
  password blob,
  first_name varchar(30),
  last_name varchar(30),
  home_street varchar(100),
  home_city varchar(30),
  home_zip_code int,
  home_state varchar(30),
  business_street varchar(100),
  business_city varchar(30),
  business_zip_code int,
  business_state varchar(30),
  home_phone varchar(10),
  business_phone varchar(10),
  email varchar(20),
  language varchar(12),
  PRIMARY KEY (person_id)
);


db2 "export to 'clob_test_export.txt' OF DEL LOBS TO '/data/bin/test/lobs/'

That does not result in the expected output... and then:
In clob_test_export.txt it says:

And ls displays:
-rw-r----- 1 db2inst1 db2iadm1 144309 Aug 3 08:44 my_clob_export.001
So the export just handles it slightly differently.
It has created a file "my_clob_export.001" containing two CLOBs:
one starts at offset 0 with a length of 72184 bytes, and the other starts at
offset 72184 and is 72125 bytes long.
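For reference, a complete export command along these lines might look like the sketch below (the select clause and paths are assumptions based on the table and directories shown above; the lobsinfile modifier writes LOB values to separate files under the LOBS TO path instead of truncating them inline):

db2 "export to clob_test_export.txt of del lobs to /data/bin/test/lobs/ modified by lobsinfile select * from person"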


Kent OlsenDBACommented:
Hi jwp,

Is this an exercise or do you have a specific reason for wanting to dump/load the same table?  If the table has been fragmented, you might consider a REORG on the table instead of dump/load.

Good Luck,
To import, use commitcount so that the import is done in smaller transactions:

db2 import from myfile.ixf of ixf commitcount 1000 replace into person

tangchunfeng's recommendation to use lobsinfile is also worth following, to avoid truncation.


Kent OlsenDBACommented:
A commit at regular intervals is a good idea, but you might be able to speed it up even more.

Since you're loading this table "from scratch", you'll get better performance by turning off logging.  That means that if an error occurs, DB2 won't be able to roll back, and it will mark the table as corrupt.  But that's not much different from using a commit at regular intervals: there, an error rolls back only to the last commit, leaving the table partly populated and not really much good to you either.  And since DB2 isn't writing to the log file, it saves time.

This will be one of the fastest ways to import the data:

--  Fastest way to empty a table in DB2:
db2 "alter table person activate not logged initially with empty table"

--  Disable logging (stays in effect only until the unit of work commits):
db2 "alter table person activate not logged initially"

--  Import the data:
db2 "import from myfile.ixf of ixf replace into person"
Good Luck,

jwp81Author Commented:

Thanks for your answers.
I tried it all and got somewhat better performance (and, first of all, no crash!) by turning off the log: from 14 minutes down to 12. Smart idea. (After that, commitcount didn't make any further difference.)

But I just tried using 'load' instead of 'import' (still with logging turned off, still using lobs), and it improved my performance DRAMATICALLY: from 12 minutes down to 1 minute!

Do you have any good explanations for this?
Kent OlsenDBACommented:
Load and Import are completely different processes.  Import is closer to a SQL INSERT statement in functionality and inner workings than it is to Load.  Load is a batch (offline) process that reads and writes the table directly, without most of the overhead associated with SQL.
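For comparison, a load equivalent of the earlier import command might look like the sketch below (file and table names are taken from the commands above; the lobs clause is an assumption, matching the lobsinfile export discussed earlier):

db2 "load from myfile.ixf of ixf lobs from /data/bin/test/lobs/ modified by lobsinfile replace into person"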

jwp81Author Commented:
right. But what is the tradeoff from using load instead of import?
jwp81Author Commented:
the 'load' was the key to my performance improvement, which I figured out on my own.
Kent OlsenDBACommented:
Hi jwp,

Sorry for not getting back sooner -- I was out of the office all afternoon.

There are actually quite a few differences.  IBM has even gone to the trouble to list them "side by side" on their "infocenter" website.

A general rule (that's mostly accurate) is that IMPORT treats the items it's dealing with as a database and ensures that all of them adhere to the rules of the database, while LOAD treats the items as data and skips most of the single-row DBMS overhead.  One good example: when IMPORT inserts a row, it fires all necessary triggers and updates all the appropriate indexes before moving to the next row.  LOAD does not fire triggers, and it rebuilds the table's indexes after all rows are loaded.  If you're dealing with a large volume of data, the performance differences can be huge.
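One practical consequence of that tradeoff: because LOAD bypasses per-row constraint checking, a table with referential or check constraints is left in set-integrity-pending state after the load, and queries against it will fail until integrity is checked. A typical follow-up (assuming the person table from above) is:

db2 "set integrity for person immediate checked"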
