Database utility (script) : Running extremely slow in NFT environment

Posted on 2011-04-30
Last Modified: 2012-05-11

We have been provided a database utility (a set of scripts) that runs very slowly in our environment.


In the main transaction table we need to update four fields: the account number and branch number details.

Approach taken by the utility

All the update information (old branch number, new branch number, old account number, new account number) is first stored in a work table. This step completes in a reasonable time, as there are only 500,000 records to be updated.
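For illustration, the work table might look something like the sketch below. Only the table name and the `account_sk` join key appear in the utility's SQL; the other column names are assumptions based on the description above.

```sql
-- Hypothetical sketch of the cross-reference work table.
-- Only account_sk is confirmed by the snippets below; the
-- remaining column names are assumed for illustration.
CREATE TABLE TDB.tb_rf_rainbow_crossref (
  account_sk     NUMBER PRIMARY KEY,  -- join key used in the utility's outer join
  old_branch_no  NUMBER,
  new_branch_no  NUMBER,
  old_account_no NUMBER,
  new_account_no NUMBER
);
```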

The main transaction table is partitioned into 26 partitions. The approach taken is:
1)      For every partition, a temp table is created:

CREATE TABLE TDB.temp_TB_1538418432_425 AS SELECT 
<All Fields> 
ALTER TABLE TDB.temp_TB_1538418432_425 NOLOGGING;


2)      This temp table is populated with all the records from the partition: rows that match the work table get the updated values, and all other rows are copied unchanged.
INSERT /*+ APPEND */ INTO TDB.temp_TB_1538418432_425 SELECT 
FROM TDB.TB_TX_TXN partition(TB_1538418432_425) T, TDB.tb_rf_rainbow_crossref xref 
WHERE T.account_sk = xref.account_sk (+);


An outer join is used so that all the records are returned (about 25 million records per partition).

3)      Now the main partition is replaced with the temp table (which holds the new, updated values).

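The SQL for step 3 is not shown, but the accepted answer refers to "exchange partition", so it is presumably a statement along these lines (the index clauses are an assumption):

```sql
-- Sketch of a partition exchange, assuming this is what step 3 does.
-- The swap itself is a fast data-dictionary operation, but it
-- invalidates global indexes on the main table unless they are
-- rebuilt or maintained.
ALTER TABLE TDB.TB_TX_TXN
  EXCHANGE PARTITION TB_1538418432_425
  WITH TABLE TDB.temp_TB_1538418432_425
  INCLUDING INDEXES
  WITHOUT VALIDATION;
```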

The same process runs for all 26 partitions and takes an extremely long time to finish.

I am not able to understand why, if the ultimate target is to update 500,000 records in the database, the approach copies all the partition data (which runs into the hundreds of millions of rows) in the first place.

Could someone please comment on the above design and suggest an alternative, less time-consuming design?
Question by:n78298

    Accepted Solution

    At this point, with 25 million records per partition, using "exchange partition" may actually be slower than loading/updating the main table directly, because when you exchange partitions you force global indexes to be rebuilt.

    Perhaps it used to work ok with smaller data, or perhaps a DBA read about partition exchange and thought it sounded applicable.

    I'd do away with the exchange step and just push the updates into the live partition directly. But the only way to know for sure is to build a test environment and test the alternative solution.
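    A direct update could be a single MERGE driven by the work table, touching only the ~500,000 matching rows instead of rewriting all 26 partitions. This is only a sketch: the target and source column names (`branch_no`, `account_no`, `new_branch_no`, `new_account_no`) are assumptions, since only `account_sk` appears in the original snippets.

```sql
-- Hedged sketch of a direct update via MERGE; column names other
-- than account_sk are assumed for illustration.
MERGE INTO TDB.TB_TX_TXN t
USING TDB.tb_rf_rainbow_crossref xref
   ON (t.account_sk = xref.account_sk)
 WHEN MATCHED THEN UPDATE SET
      t.branch_no  = xref.new_branch_no,
      t.account_no = xref.new_account_no;
```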
