Oracle Delete very slow

Hi,
I have these two DELETE statements:

DELETE FROM DOC.T_LAY
WHERE  ID_LAY IN (100, 101);

DELETE FROM PROD.T_R_LAY
WHERE  ID_R_LAY IN (100, 101);



but these deletes are very slow (more than 3 hours).

Can someone tell me how I can rewrite these deletes more efficiently, perhaps with a function or procedure?

Thanks in advance!
ralph_rea asked:

Alexander Eßer [Alex140181], Software Developer, commented:
I suppose both tables lack the "right" indexes (on ID_LAY and ID_R_LAY).
Could you provide us with the corresponding execution plans (EXPLAIN PLAN...)?
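
For example, the plan could be captured like this (a sketch using the statement from the question):

```sql
EXPLAIN PLAN FOR
DELETE FROM DOC.T_LAY WHERE ID_LAY IN (100, 101);

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```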
Gerwin Jansen, EE MVE, Topic Advisor, commented:
Can you describe the doc and prod tables? How many records are in them, and what percentage of the records are you deleting? It may be faster to create a new table based on what remains.
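
A rough sketch of that approach (the new table name is a placeholder; indexes, constraints and grants would have to be recreated before the swap):

```sql
CREATE TABLE DOC.T_LAY_KEEP AS
  SELECT * FROM DOC.T_LAY
  WHERE  ID_LAY NOT IN (100, 101);

-- after recreating indexes/constraints on T_LAY_KEEP:
-- DROP TABLE DOC.T_LAY;
-- ALTER TABLE DOC.T_LAY_KEEP RENAME TO T_LAY;
```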
ralph_rea (Author) commented:
The Tables have 100 million rows and I'm deleting 30 million rows.

What do you think if I use this procedure?

DECLARE
  count  NUMBER := 0;
  total  NUMBER := 0;
  CURSOR del_record_cur IS
    SELECT rowid
    FROM   DOC.T_LAY
    WHERE  ID_LAY IN (100, 101);
BEGIN
  FOR rec IN del_record_cur LOOP
    DELETE FROM DOC.T_LAY
    WHERE  rowid = rec.rowid;
    total := total + 1;
    count := count + 1;
    IF (count >= 2000) THEN
      COMMIT;  -- commit every 2000 rows
      count := 0;
    END IF;
  END LOOP;
  COMMIT;
  DBMS_OUTPUT.PUT_LINE('deleted ' || total || ' rows from DOC.T_LAY');
END;
/



My fear is getting the error:
ORA-01555 Snapshot Too Old

Gerwin Jansen, EE MVE, Topic Advisor, commented:
2000 is not much for one commit; set it to 20000 or more, I would think. The check can then be (count = 20000) instead.

You could also exit the procedure after one commit, time one run, wait a few seconds (TBD), and run it again. With a 20000-row commit limit, you would have to run it 1500 times.
ralph_rea (Author) commented:
Hi Gerwin Jansen,
Could you rewrite my procedure so that it exits after one commit and waits a few seconds (TBD)?
Gerwin Jansen, EE MVE, Topic Advisor, commented:
I usually implement the wait in a script that runs the procedure; the updated procedure could look like this:

DECLARE
  count  NUMBER := 0;
  rec    ROWID;  -- was missing in the original: the fetch target must be declared
  CURSOR del_record_cur IS
    SELECT rowid
    FROM   DOC.T_LAY
    WHERE  ID_LAY IN (100, 101);
BEGIN
  OPEN del_record_cur;
  LOOP
    FETCH del_record_cur INTO rec;
    EXIT WHEN del_record_cur%ROWCOUNT > 20000 OR del_record_cur%NOTFOUND;
    DELETE FROM DOC.T_LAY WHERE rowid = rec;
    count := count + 1;
  END LOOP;
  CLOSE del_record_cur;
  COMMIT;
  DBMS_OUTPUT.PUT_LINE('deleted ' || count || ' rows from DOC.T_LAY');
END;
/
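
The driver script could look something like this (an untested sketch: the script name, credentials and run count are placeholders; it assumes the block above is saved as a SQL*Plus script):

```shell
#!/bin/sh
# Run the 20,000-row batch delete repeatedly, pausing between runs.
# 30M rows / 20,000 per batch = 1500 runs.
i=1
while [ "$i" -le 1500 ]; do
  sqlplus -s scott/tiger @delete_batch.sql   # placeholder credentials/script
  sleep 5                                    # "wait a few seconds (TBD)"
  i=$((i + 1))
done
```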


Alexander Eßer [Alex140181], Software Developer, commented:
Why don't you just put this one into a DBMS_JOB or DBMS_SCHEDULER job?
declare

  cursor del_record_cur is
    select rowid
      from DOC.T_LAY
     where ID_LAY in (100, 101)
       and rownum <= 20000;

begin

  for rec in del_record_cur loop
    delete from DOC.T_LAY
     where rowid = rec.rowid;
  end loop;

  commit;

end;
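
The DBMS_SCHEDULER variant could be sketched like this (job name and interval are placeholders; the job would need to be dropped once all rows are gone):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PURGE_T_LAY',   -- placeholder name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN
                          DELETE FROM DOC.T_LAY
                           WHERE ID_LAY IN (100, 101)
                             AND ROWNUM <= 20000;
                          COMMIT;
                        END;',
    repeat_interval => 'FREQ=MINUTELY; INTERVAL=5',
    enabled         => TRUE);
END;
/
```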



BUT, this doesn't solve the core problem of why this SQL is running "too slow"! What are the indexes on these tables? If an appropriate index exists (if any), what about the statistics? Are they up to date? In the end, we DO need more information ;-)
ralph_rea (Author) commented:
Below are the table description and indexes:

CREATE TABLE DOC.T_LAY
(
  ID_DOC_LAY  INTEGER,
  ID_DOC      INTEGER,
  ID_FOR      INTEGER,
  ID_LAY      INTEGER
);

CREATE INDEX DOC.IDX_T_LAY ON DOC.T_LAY (ID_DOC);

CREATE UNIQUE INDEX DOC.PK_T_LAY ON DOC.T_LAY (ID_DOC_LAY);
awking00 commented:
You might try creating an index on ID_LAY, since that's what your WHERE clause filters on.
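
Something like this (the index name is a placeholder; building it on a 100-million-row table will itself take some time):

```sql
CREATE INDEX DOC.IDX_T_LAY_ID_LAY ON DOC.T_LAY (ID_LAY);
```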
Alexander Eßer [Alex140181], Software Developer, commented:
"You might try creating an index on ID_LAY, since that's what your WHERE clause filters on."
Plus: check the other table for appropriate index(es) and create one if necessary ;-)
Mark Geerlings, Database Administrator, commented:
One other important thing to check: is either of these tables a "master" table with dependent records in detail tables related by a foreign key? If yes, that could be where the majority of the time is being spent.
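
A quick way to check for referencing foreign keys is a dictionary query like this (owner and table name taken from the question; adjust as needed):

```sql
SELECT c.owner, c.table_name, c.constraint_name
FROM   all_constraints c
JOIN   all_constraints p
  ON   p.owner = c.r_owner
 AND   p.constraint_name = c.r_constraint_name
WHERE  c.constraint_type = 'R'      -- referential (foreign key) constraints
  AND  p.owner = 'DOC'
  AND  p.table_name = 'T_LAY';
```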
Alexander Eßer [Alex140181], Software Developer, commented:
Good point, markgeer. That leads to another possible problem: there could be DELETE triggers firing that cause considerable overhead during processing...
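
Triggers can be checked in the dictionary as well (same placeholder owner/table as above):

```sql
SELECT trigger_name, triggering_event, status
FROM   all_triggers
WHERE  table_owner = 'DOC'
  AND  table_name  = 'T_LAY';
```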
Question has a verified solution.
