Here is the question.
I load a large amount of data into a single table in DB2.
This happens daily. If I want to load the new data, I have a few options:
One is to delete all the existing rows; the other is to drop and recreate the whole table.
Each has its own trade-offs.
If I decide to delete, keeping row-level locks is not good: it takes a long time.
I could lock the whole table instead and then delete, something like the sketch below.
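This is roughly what I mean (a rough sketch, with a made-up table name DAILY_DATA):

    -- Take an exclusive table lock up front, instead of accumulating row locks
    LOCK TABLE DAILY_DATA IN EXCLUSIVE MODE;

    -- Remove all existing rows; the lock is released at commit
    DELETE FROM DAILY_DATA;
    COMMIT;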
If I drop the table and recreate it, it is fast.
However, the indexes are lost too.
For a single table that does not seem to matter much, since the indexes can be recreated as part of the same script, as sketched below.
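Something like this (the column names and the index are made up just to illustrate):

    DROP TABLE DAILY_DATA;

    CREATE TABLE DAILY_DATA (
        ID      INTEGER NOT NULL,
        LOADED  DATE,
        PAYLOAD VARCHAR(200)
    );

    -- Indexes disappear with the old table, so they have to be rebuilt here
    CREATE INDEX DAILY_DATA_IX1 ON DAILY_DATA (ID);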
However, I want to generalize this loading process into a mechanism that is driven by different configuration settings, so I have to handle more complex cases: more than one table may be involved, there may be relationships between the tables, and maybe only some of those tables need to lose their data.
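For example, with two hypothetical tables linked by a foreign key, the generic mechanism would have to know that the child rows must be cleared before the parent rows:

    -- ORDER_LINE references ORDERS, so it has to be cleared first
    DELETE FROM ORDER_LINE;
    DELETE FROM ORDERS;
    COMMIT;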
Any ideas?