We have been provided with a database utility (a set of scripts) which runs very slowly in our environment.
In the main transaction table we need to update four fields: the account number and branch number details.
Approach taken by the utility
All the update information (old branch number, new branch number, old account number, new account number) is first stored in a work table. This step runs in a reasonable time, as there are only 500,000 records to be updated.
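Since the utility later joins against TDB.tb_rf_rainbow_crossref, I assume the work table looks something like the following (the column names and types here are my guesses, not the utility's actual definitions):

-- Hypothetical sketch of the work table; the utility's real
-- column names and types may differ.
CREATE TABLE TDB.tb_rf_rainbow_crossref (
    account_sk          NUMBER        NOT NULL,  -- join key against TB_TX_TXN
    old_branch_number   VARCHAR2(10),
    new_branch_number   VARCHAR2(10),
    old_account_number  VARCHAR2(20),
    new_account_number  VARCHAR2(20)
);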
The main transaction table is partitioned into 26 partitions. The approach taken is:
1) For every partition, an empty temp table with the same structure is created.
-- Empty structural copy of the main table (WHERE 1=0 returns no rows)
CREATE TABLE TDB.temp_TB_1538418432_425 AS
SELECT * FROM TDB.TB_TX_TXN T WHERE 1=0;
ALTER TABLE TDB.temp_TB_1538418432_425 NOLOGGING;
2) This temp table is populated with every record from the partition: rows that have a match in the cross-reference work table get the updated values, and all other rows are copied unchanged.
INSERT /*+ APPEND */ INTO TDB.temp_TB_1538418432_425 SELECT
<ALL OTHER VALUES FROM MAIN TABLE>
FROM TDB.TB_TX_TXN partition(TB_1538418432_425) T, TDB.tb_rf_rainbow_crossref xref
WHERE T.account_sk = xref.account_sk (+);
An outer join is used so that every record is retained (roughly 25 million records per partition).
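The <ALL OTHER VALUES FROM MAIN TABLE> part is elided in the script above; presumably it substitutes the new values where a cross-reference row matched, along these lines (account_number, branch_number, txn_date, and txn_amount are hypothetical column names used only for illustration):

INSERT /*+ APPEND */ INTO TDB.temp_TB_1538418432_425
SELECT NVL(xref.new_account_number, T.account_number),  -- new value where matched, else original
       NVL(xref.new_branch_number,  T.branch_number),
       T.txn_date,                                      -- remaining columns copied unchanged
       T.txn_amount
FROM TDB.TB_TX_TXN partition(TB_1538418432_425) T,
     TDB.tb_rf_rainbow_crossref xref
WHERE T.account_sk = xref.account_sk (+);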
3) The main partition is then swapped with the temp table (which now holds the updated values).
ALTER TABLE TDB.TB_TX_TXN EXCHANGE PARTITION TB_1538418432_425 WITH TABLE TDB.temp_TB_1538418432_425;
The same process runs for all 26 partitions and takes an extremely long time to finish.
I cannot understand why, if the ultimate target is to update only 500,000 records, the approach reads and rewrites all of the partition data (on the order of 650 million rows across the 26 partitions) in the first place.
Could someone please comment on the above design and suggest a less time-consuming alternative?
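What I had in mind was a direct in-place update that touches only the 500,000 matching rows instead of rewriting every partition. A minimal sketch, assuming the same hypothetical column names as above:

-- Sketch of a direct update: only rows with a cross-reference match
-- are touched, so roughly 500,000 rows instead of ~650 million.
MERGE INTO TDB.TB_TX_TXN t
USING TDB.tb_rf_rainbow_crossref xref
ON (t.account_sk = xref.account_sk)
WHEN MATCHED THEN UPDATE SET
    t.account_number = xref.new_account_number,
    t.branch_number  = xref.new_branch_number;

I realise whether this beats the exchange approach depends on indexes and redo volume, but at least it avoids reading and rewriting the rows that never change.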