slow update

Dear Experts -

I have an update process that takes 2 hours to update 100K rows. The table involved is partitioned and has 50 million rows. Is there a better way of doing this?

Thanks for your time.

~sve
WHENEVER SQLERROR EXIT SQL.SQLCODE;

SET TIMING ON;
SET ECHO ON;
SET TERMOUT ON;
SET FEEDBACK ON;
SET SERVEROUTPUT ON;

DECLARE
   L_TOTAL       NUMBER := 0;                  -- rows updated so far
   L_COUNT       NUMBER := 1;                  -- rows updated in the last pass
   L_UPDATE_MAX  NUMBER := NVL(&&1, 10000);    -- batch size per pass
   L_TIME_MAX    NUMBER := NVL(&&2, 60);       -- time limit, in minutes
   L_ROW_MAX     NUMBER := NVL(&&3, 1000000);  -- overall row limit
   L_SEQUENCE    NUMBER;
   L_SQL_CODE    NUMBER;
   L_SQL_MSG     VARCHAR2(200);
   L_START_TIME  DATE := SYSDATE;
   L_TIMEOUT     CHAR(1) := 'N';
BEGIN
   -- GET SEQUENCE ID
   SELECT SEQ_MDN_PROC_ID.NEXTVAL
     INTO L_SEQUENCE
     FROM DUAL;




   WHILE (L_COUNT > 0 AND L_TIMEOUT = 'N') 
   LOOP
      UPDATE LCACHE -- partition table about 50 million rows
         SET STATUS  = DECODE(RC,'PB','P','V'),
             PROC_ID = L_SEQUENCE
       WHERE EXPIRATION_DATE > TO_DATE('2019-01-01','YYYY-MM-DD') 
         AND RC in ('PB','VB')
         AND STATUS = 'D' 
         AND ROWNUM <= L_UPDATE_MAX;

      L_COUNT := SQL%ROWCOUNT;
      L_TOTAL := L_TOTAL + L_COUNT;
      

      IF ((SYSDATE - L_START_TIME)*24*60 >= L_TIME_MAX OR L_TOTAL >= L_ROW_MAX)
      THEN
         L_TIMEOUT := 'Y';
      END IF;  
   END LOOP;

   DBMS_OUTPUT.PUT_LINE ('Updated a total of ' || L_TOTAL || ' rows.');


EXCEPTION
   WHEN OTHERS
   THEN
      L_SQL_CODE := SQLCODE;
      L_SQL_MSG  := SUBSTR(SQLERRM, 1, 200);  
      

      -- RAISE EXCEPTION
      RAISE_APPLICATION_ERROR( -20001, 'SQL CODE: ' || L_SQL_CODE ||'. SQL MSG: '|| L_SQL_MSG );

END;
/


sventhan Asked:
sdstuber Commented:
What are you using for L_UPDATE_MAX? 10,000 rows isn't very many; I'd use 100,000 or more for a simple update.

Also, what indexes and partitioning do you have? Is it something your WHERE clause can take advantage of?

And finally, are you sure the update is spending that time doing the work? Or are you waiting on other sessions to release locks, so you're not really executing the whole time?
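One quick way to check for that (a sketch, assuming your account can query the V$ views) is to look for sessions that are currently blocked by someone else:

```sql
-- Sessions waiting on a lock held by another session;
-- if your updating session shows up here, the time is going to lock waits.
SELECT sid,
       serial#,
       blocking_session,
       event,
       seconds_in_wait
  FROM v$session
 WHERE blocking_session IS NOT NULL;
```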
 
slightwv (Netminder) Commented:
What is the purpose of the loop? If I read it right, you can easily run the same update many times.
 
sventhan (Author) Commented:
There is a non-unique index on EXPIRATION_DATE. No other indexes on the columns involved in the WHERE clause. There is a PK, but it's on different columns.
 
sdstuber Commented:
If the only index you have is on EXPIRATION_DATE, is that index helpful?

How many blocks will the query have to read to find all of your data?
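You can get the optimizer's estimate of that with an explain plan. A sketch, using the table and predicates from the posted script (the PROC_ID assignment is omitted since it binds a PL/SQL variable):

```sql
EXPLAIN PLAN FOR
UPDATE lcache
   SET status = DECODE(rc, 'PB', 'P', 'V')
 WHERE expiration_date > TO_DATE('2019-01-01', 'YYYY-MM-DD')
   AND rc IN ('PB', 'VB')
   AND status = 'D';

-- Shows whether the EXPIRATION_DATE index is used and the estimated row/cost figures
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```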

 
sventhan (Author) Commented:
Thanks experts.

@slightwv

You're right. We can run this many times; it's a controlled update.

@SD

It's a range partition based on a range of numbers, with only 10 to 15 partitions.

I'll check for locks and get back to you. Do you think adding indexes will help?

 
sdstuber Commented:
Will adding indexes help? You tell me: how many rows/blocks can you narrow the result set down to?

Also, your loop doesn't have a COMMIT, so breaking the update into chunks that each update a subset of the rows doesn't actually help you conserve rollback/undo. In fact, you're simply consuming more.

Either add a commit on each iteration, or remove the loop altogether and update all of the rows in a single pass.
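A sketch of the loop body with the commit added, using the same variables as the posted script:

```sql
   WHILE (L_COUNT > 0 AND L_TIMEOUT = 'N')
   LOOP
      UPDATE LCACHE
         SET STATUS  = DECODE(RC,'PB','P','V'),
             PROC_ID = L_SEQUENCE
       WHERE EXPIRATION_DATE > TO_DATE('2019-01-01','YYYY-MM-DD')
         AND RC IN ('PB','VB')
         AND STATUS = 'D'
         AND ROWNUM <= L_UPDATE_MAX;

      L_COUNT := SQL%ROWCOUNT;
      L_TOTAL := L_TOTAL + L_COUNT;

      COMMIT;  -- release undo after each batch instead of accumulating it

      IF ((SYSDATE - L_START_TIME)*24*60 >= L_TIME_MAX OR L_TOTAL >= L_ROW_MAX)
      THEN
         L_TIMEOUT := 'Y';
      END IF;
   END LOOP;
```

Note that committing inside the loop trades a single consistent transaction for smaller undo usage; if the script fails partway, some batches will already be committed.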
 
slightwv (Netminder) Commented:
>>You're right. We can run this many times, its a controlled update.

But you aren't committing between loop iterations.
 
sventhan (Author) Commented:
Nice to see you both. Thanks for all the help.
 
sventhan (Author) Commented:
The problem is that there is a row-level trigger on this big table. Without the trigger, the update takes 2 minutes for 100K rows.

There was a commit in the loop; I dropped it by mistake when I trimmed the code for posting. Sorry!
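If the trigger's work can be safely deferred or reproduced afterwards, one option is to disable it for the duration of the batch run. A sketch; the trigger name TRG_LCACHE_ROW is hypothetical, and disabling it affects all sessions, so this belongs in a maintenance window:

```sql
ALTER TRIGGER trg_lcache_row DISABLE;

-- run the batched update script here,
-- then apply whatever the trigger would have done in one set-based pass

ALTER TRIGGER trg_lcache_row ENABLE;
```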