
  • Status: Solved

Database utility (script): running extremely slowly in NFT environment


We have been provided a database utility (a set of scripts) which runs very slowly in our environment.


Requirement:

In the main transaction table we need to update four fields: the account number and branch number details.


Approach taken by the utility

All the update information (old branch number, new branch number, old account number, new account number) is first stored in a work table. This step runs in a reasonable time, as there are only 500,000 records to be updated.
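For reference, a minimal sketch of what the work (cross-reference) table presumably looks like, inferred from the join in step 2 below; the exact column names and types here are assumptions:

CREATE TABLE TDB.tb_rf_rainbow_crossref (
    account_sk            NUMBER,  -- join key to the main transaction table (seen in step 2)
    new_account_bucket_sk NUMBER,  -- new account value applied in step 2
    new_branch            NUMBER,  -- new branch value applied in step 2
    old_account_bucket_sk NUMBER,  -- assumed: the old account number, per the description above
    old_branch            NUMBER   -- assumed: the old branch number, per the description above
);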

The main transaction table is partitioned into 26 partitions. The approach taken for each partition is:
1. A temp table is created with the same structure as the main table:

-- Create an empty copy of the main table's structure (WHERE 1=0 returns no rows)
CREATE TABLE TDB.temp_TB_1538418432_425 AS SELECT 
<All Fields> 
FROM TDB.TB_TX_TXN T WHERE 1=0;
-- NOLOGGING to minimize redo during the bulk load in step 2
ALTER TABLE TDB.temp_TB_1538418432_425 NOLOGGING;




2. The temp table is populated with every record from the partition: rows that match the cross-reference (work) table receive the updated values, and all other rows are copied unchanged:
INSERT /*+ APPEND */ INTO TDB.temp_TB_1538418432_425 SELECT 
-- Take the new values where the cross-reference matches, else keep the originals
NVL(xref.NEW_ACCOUNT_BUCKET_SK, T.ACCOUNT_BUCKET_SK), 
NVL(xref.NEW_BRANCH, T.BRANCH), 
<ALL OTHER VALUES FROM MAIN TABLE> 
FROM TDB.TB_TX_TXN partition(TB_1538418432_425) T, TDB.tb_rf_rainbow_crossref xref 
WHERE T.account_sk = xref.account_sk (+);



The outer join ensures every record in the partition is returned (about 25 million records per partition).

3. The main table's partition is then exchanged with the temp table (which holds the updated values):
-- Swap the partition's data segment with the temp table's segment
ALTER TABLE TDB.TB_TX_TXN EXCHANGE PARTITION TB_1538418432_425 WITH TABLE TDB.temp_TB_1538418432_425;



The same process runs for all 26 partitions and takes an extremely long time to finish.

What I cannot understand is this: if the ultimate target is to update 500,000 records, why does the approach rewrite all of the partition data (millions of rows per partition) in the first place?

Could someone please comment on the above design and suggest a less time-consuming alternative?
Asked by: n78298

1 Solution
mrjoltcola Commented:
At this point, with 25 million records per partition, using "exchange partition" may actually be slower than just loading/updating the main table directly, because when you exchange partitions you force global indexes to be rebuilt.
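For what it's worth, Oracle can also maintain global indexes during the exchange instead of invalidating them, at the cost of doing that maintenance inline. A sketch of that variant, reusing the partition and table names from the question:

ALTER TABLE TDB.TB_TX_TXN
  EXCHANGE PARTITION TB_1538418432_425
  WITH TABLE TDB.temp_TB_1538418432_425
  UPDATE GLOBAL INDEXES;  -- keep global indexes usable instead of forcing a rebuild

That avoids the rebuild, but it still rewrites all 25 million rows per partition, so it does not fix the underlying design problem.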

Perhaps it used to work ok with smaller data, or perhaps a DBA read about partition exchange and thought it sounded applicable.

I'd do away with the exchange step and just push the updates into the live partition. But the only way to know for sure is to build a test environment and test the alternative.
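To make that concrete, here is a minimal sketch of the direct-update alternative, assuming the table and column names shown in the question, and assuming the cross-reference's new values are non-null for matched rows (otherwise wrap them in NVL as the utility does):

MERGE INTO TDB.TB_TX_TXN t
USING TDB.tb_rf_rainbow_crossref xref
ON (t.account_sk = xref.account_sk)
WHEN MATCHED THEN UPDATE SET
    t.account_bucket_sk = xref.new_account_bucket_sk,  -- only the ~500,000 matched rows are written
    t.branch            = xref.new_branch;

A correlated UPDATE with a WHERE EXISTS filter would work just as well; either way only the matching rows are touched, instead of rewriting every row in every partition.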
