Hi,
We are developing a program which reads from a table, processes the data in a COBOL/RPG program, and stores the records in an array; once a certain record count is reached (let's say 2,500 records) we INSERT those records as a block into the target table. We COMMIT after every block INSERT.
For error handling, if one record fails during the COBOL/RPG processing of a block of 2,500 (say, for instance, that the 2200th record failed), we want to COMMIT all prior records and not the record that is being processed (the one that failed). To do this efficiently, I was hoping to set a SAVEPOINT after every successfully processed record. When a record in a block fails, I roll back to the last SAVEPOINT and then COMMIT, so everything up to that point is kept.
My question is: will it impact my program's performance if I add a SAVEPOINT after every successfully processed record? We are anticipating this program to process around 200 million records.
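To be concrete, this is roughly what I had in mind for the SAVEPOINT approach in embedded SQL (just a sketch; the savepoint name LAST_GOOD is made up, and the COBOL form is shown, RPG would use the same SQL statements):

           *> After each record is processed successfully, (re)set the
           *> savepoint; reusing the same name just moves it forward:
           EXEC SQL
               SAVEPOINT LAST_GOOD ON ROLLBACK RETAIN CURSORS
           END-EXEC

           *> When a record fails, undo only the work done since the
           *> last successful record, then harden the rest of the block:
           EXEC SQL
               ROLLBACK TO SAVEPOINT LAST_GOOD
           END-EXEC
           EXEC SQL
               COMMIT
           END-EXEC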
An alternate approach would be to not use SAVEPOINTs at all. I would COMMIT after every block INSERT, and when there is an error processing an enrollee I would have to go back into the target table and remove the changes for the last partially processed record, as sketched below.
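The cleanup in that case would be something along these lines (a sketch only; ENROLLEE_ID and WS-FAILED-ID are made-up names for whatever key identifies the partially processed record):

           *> Remove whatever was already written for the record that
           *> failed part-way through, then commit the cleanup:
           EXEC SQL
               DELETE FROM TARGET_TABLE
               WHERE  ENROLLEE_ID = :WS-FAILED-ID
           END-EXEC
           EXEC SQL
               COMMIT
           END-EXEC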
Any pointers?
Regards
Ali.
You might want to look into the MULTI ROW INSERT feature of DB2.
You can declare the operation there to be non-atomic, which means that if some rows fail, all the other rows are still inserted, which sounds like what you are looking for.
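In embedded SQL it looks roughly like this (a sketch; the table, columns and host-variable arrays are made up, and the exact syntax varies a bit between DB2 for z/OS, LUW and DB2 for i):

           *> Insert the whole block from host-variable arrays in one
           *> statement; NOT ATOMIC CONTINUE ON SQLEXCEPTION skips the
           *> rows that fail and still inserts the rest:
           EXEC SQL
               INSERT INTO TARGET_TABLE (COL1, COL2)
               VALUES (:HVA-COL1, :HVA-COL2)
               FOR :WS-ROW-COUNT ROWS
               NOT ATOMIC CONTINUE ON SQLEXCEPTION
           END-EXEC

           *> NUMBER tells you how many error conditions (failed rows)
           *> the statement raised:
           EXEC SQL
               GET DIAGNOSTICS :WS-ERR-COUNT = NUMBER
           END-EXEC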