ajaybelde

asked:

Long-running UPDATE

Hi,

We are using Oracle 10g and trying to update 60 million rows on a big table (140 million rows), which is running very, very slowly.
We disabled triggers and the related foreign key constraints, and made the indexes unusable. I am not sure why the update is taking so long.

Any suggestions?
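
For illustration, the preparation steps described above might look something like this (the table, constraint, and index names here are placeholders, not the real schema):

ALTER TABLE specimen_container DISABLE ALL TRIGGERS;
ALTER TABLE child_table DISABLE CONSTRAINT fk_child_specimen;  -- hypothetical FK name
ALTER INDEX specimen_container_ix1 UNUSABLE;                   -- hypothetical index name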
awking00

Can you post the update statement?
Agreed, post the update statement, which should at minimum be driven by a cursor that defines only the rows that should be updated. One of the worst things you can do (OK, not the worst, but still pretty bad) is search for rows that aren't even in your table: that can force a full table scan just to come back with no rows found, and that alone can take 3 to 5 minutes just to tell you the search criteria matched nothing.

The optimal situation is a cursor that provides only the rows that should be updated; that can shave off some 97% of the time you spend waiting. I have knocked poorly written PL/SQL update blocks down from 15 hours to 7 minutes. You can too.
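
As a sketch of that approach (all table and column names here are hypothetical), the cursor itself does the filtering, so the loop only ever touches rows that actually need the update:

DECLARE
   -- staging_table is a hypothetical driving table holding the keys and new values;
   -- the join guarantees every fetched row has a match, so no lookups are wasted
   CURSOR c_upd IS
      SELECT s.rowid AS rid, t.new_value
        FROM specimen_container s
        JOIN staging_table t
          ON t.specimen_container_pk = s.specimen_container_pk;
BEGIN
   FOR r IN c_upd LOOP
      UPDATE specimen_container
         SET collected_on = r.new_value
       WHERE rowid = r.rid;   -- rowid access avoids a second index probe
   END LOOP;
   COMMIT;
END;
/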
Good suggestion by negrilsand.
And by using this method you can even commit after every x number of rows and thereby manage your undo well, without having to size it up just for this one update.
Or it could be that the update is committing after every row, adding to the performance degradation, which is why we need to see the update statement.
ajaybelde

ASKER

This is the update statement I am able to see in the database. It has been running for the last 3 days and has updated 17 million rows so far. The ETL team said that the commit frequency is 100,000 records.


The execution plan for this statement shows an index access on SPECIMEN_CONTAINER_PK, but I am not sure why the index lookup is also slow.
UPDATE "SPECIMEN_CONTAINER" SET "COLLECTED_ON" = :B5 , "UPDATE_DB_U
SER" = :B4 , "UPDATE_OS_USER" = :B3 , "UPDATE_DB_DATE" = :B2 WHERE ( "SPECIMEN_C
ONTAINER_PK" = :B1 )
UPDATE "SPECIMEN_CONTAINER"
SET "COLLECTED_ON" = :B5 ,
 "UPDATE_DB_USER" = :B4 ,
 "UPDATE_OS_USER" = :B3 ,
 "UPDATE_DB_DATE" = :B2
WHERE ( "SPECIMEN_CONTAINER_PK" = :B1 )


Thanks, but what is the block that this statement exists in?
If it's anything like this pseudocode:

FOR c_petri_dish IN petri_dish_cur LOOP

   SELECT b0
     INTO found_flag
     FROM specimen_container a
    WHERE c_petri_dish.b_flag = a.flag;

   IF found_flag IS NOT NULL THEN
      -- perform the update here
      NULL;
   END IF;

END LOOP;

then this is bad (setting the pseudocode itself aside), because you are looping through every row of petri_dish and checking whether it exists in specimen_container before performing your update. If a row doesn't exist in specimen_container, that's wasted time and cycles.

Any variation of the above code should perform that existence test inside the declared cursor itself, if possible, and then loop through the updates. A look at the actual code block would help.
ASKER CERTIFIED SOLUTION
ajaybelde

(The accepted solution is only visible to Experts Exchange members.)
One issue I see is the commit statement:
>>IF (commit_count >= 10000) THEN<<
It appears that this means you are issuing a commit after every single row once 10,000 is reached. For 60 million rows, that's 59,990,000 commits. You might want to change that to something like
IF MOD(commit_count, 10000) = 0 THEN ...
You might also consider increasing that number, since 10,000 rows is not all that many.
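
A minimal, self-contained sketch of that counter logic (the driving query and names are placeholders):

DECLARE
   commit_count PLS_INTEGER := 0;
BEGIN
   FOR r IN (SELECT specimen_container_pk FROM specimen_container) LOOP
      -- ... the UPDATE would go here ...
      commit_count := commit_count + 1;
      IF MOD(commit_count, 10000) = 0 THEN  -- fires once per 10,000 rows, not on every row
         COMMIT;
      END IF;
   END LOOP;
   COMMIT;  -- pick up any remainder
END;
/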
I agree with awking00; the IF commit logic is not right for the intent.

I would likely change the above to:

DECLARE

   commit_count PLS_INTEGER := 0;

   CURSOR c_cvn IS
      SELECT A.COLLECTED_ON           COLLECTED_ON,
             A.CVNCODS                UPDATE_DB_USER,
             'DC-8952'                UPDATE_OS_USER,
             SYSDATE                  UPDATE_DB_DATE,
             A.SPECIMEN_CONTAINER_PK  SPECIMEN_CONTAINER_PK
        FROM CVNCODS.TEMP_8952 A,
             SPECIMEN_CONTAINER B
       WHERE A.SPECIMEN_CONTAINER_PK = B.SPECIMEN_CONTAINER_PK;

BEGIN

   FOR v_cvn IN c_cvn LOOP

      UPDATE SPECIMEN_CONTAINER
         SET COLLECTED_ON   = v_cvn.COLLECTED_ON,
             UPDATE_DB_USER = v_cvn.UPDATE_DB_USER,
             UPDATE_OS_USER = v_cvn.UPDATE_OS_USER,
             UPDATE_DB_DATE = v_cvn.UPDATE_DB_DATE
       WHERE SPECIMEN_CONTAINER_PK = v_cvn.SPECIMEN_CONTAINER_PK;

      commit_count         := commit_count + 1;
      :processed_row_count := :processed_row_count + SQL%ROWCOUNT;

      IF commit_count >= 10000 THEN
         COMMIT;
         commit_count := 0;
      END IF;

   END LOOP;

   COMMIT;
END;


Essentially the same, but the cursor is built once and is static; everything is updated once the loop is entered. Check the syntax, I may have missed something. :)
Sorry, I missed the part about resetting commit_count back to 0 after the commit.
My suggestion is to use bulk load:
DECLARE
   TYPE rec_array IS TABLE OF table1%ROWTYPE;

   rec_to_update rec_array := rec_array();

   CURSOR c1 IS
      SELECT *
        FROM table1
       WHERE <any filter condition>;
BEGIN
   OPEN c1;

   LOOP
      FETCH c1
        BULK COLLECT INTO rec_to_update
      LIMIT 20000;

      EXIT WHEN rec_to_update.COUNT = 0;  -- guard: the last fetch can return zero rows, and FORALL errors on an empty collection

      FORALL i IN rec_to_update.FIRST .. rec_to_update.LAST
         UPDATE table1
            SET field1 = rec_to_update(i).field1
          WHERE key_field = rec_to_update(i).key_field
            AND effdt BETWEEN rec_to_update(i).effdt AND rec_to_update(i).enddt;

      COMMIT;

      rec_to_update.delete;

      EXIT WHEN c1%NOTFOUND;
   END LOOP;

   CLOSE c1;
   COMMIT;
END;
/



Hope this helps.
slightwv (䄆 Netminder)

>>My suggestion is to use bulk load:

An in-memory table of 60 million rows?
If he has the computing power available, why not?

But you are correct, this may be a problem. It is doubtful, though, that the resultant set is actually 60 million rows.