Oracle Delete running slow

Experts -

This simple SQL delete is taking forever to complete, even though it only has about 200K rows to delete. It ran fine in TEST but not in PROD.

delete FROM customer a WHERE exists (SELECT 'x' FROM purge_table b WHERE a.customer_number = b.customer_number)

The explain plan uses the proper indexes on customer_number, and we've already deleted all the child rows.

Thanks,

sve

David (Senior Oracle Database Administrator) commented:
1. Possible row lock contention (some other process is updating the table, for example). Can you quiesce the instance from all other sessions and jobs? One option is to stop and restart the instance in exclusive mode.
1a. Similar day and time of day as the TEST run?
2. Possibly missing or stale CBO statistics.
3. Take a reality check, just for fun, and run the statement as a SELECT COUNT(1) instead of a DELETE; obviously the number of rows should be close to what you expect (see the sketch after this list).
4. Can you please attach screenshots of the two explain plans, so that we can eyeball them?
5. Anything relevant in the alert logs when the jobs were run?
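
A minimal sketch of checks 1 and 3, using the table names from this thread; adjust filters and privileges for your environment:

-- Check for sessions blocked by row-lock contention (assumes access to V$SESSION)
SELECT sid, serial#, blocking_session, event, seconds_in_wait
  FROM v$session
 WHERE blocking_session IS NOT NULL;

-- Reality check: count the rows the DELETE would touch
SELECT COUNT(1)
  FROM customer a
 WHERE EXISTS (SELECT 'x'
                 FROM purge_table b
                WHERE a.customer_number = b.customer_number);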

Wasim Akram Shaik commented:
See the explain plan of the query below in both instances.

The stats may differ in PROD and TEST (I believe this is the same possibility dvz suggested in point 2).

If they differ, check out the possibility of gathering the stats (a sketch follows the query below); otherwise, check whether rephrasing your query might help (assuming customer_number is an indexed column):

delete from customer where customer_number in (select customer_number from customer a, purge_table b where a.customer_number = b.customer_number)
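
If the statistics do turn out to be stale, a minimal sketch of regathering them with DBMS_STATS; the owner and table names here are assumptions taken from later posts in this thread, so adjust for your schema:

BEGIN
  -- Gather fresh optimizer statistics on the parent table and its indexes
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'DM',                           -- assumed schema owner
    tabname          => 'DM_CUSTOMER_DIM',              -- table being deleted from
    cascade          => TRUE,                           -- include index statistics
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/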
sventhan (Author) commented:
Thanks DVZ. That's a lot of information.

If I run the delete in small batches it works. This table has multiple constraints and relationships. The stats are up to date.

The SELECT COUNT(*) runs fine. The table has 5 million rows, but I noticed the delete reads 500 million rows from the buffer cache when it runs.

I'll run this again and will get you the details.

Thanks Wasim.
jtrifts (MI and Automation) commented:
Lots of good info already from the other DBAs... a few comments.

Database transactions generate redo and rollback.
In the case of deletes, the more rows you delete, the more rollback is needed.
Sometimes your rollback segments are not large enough to cope with the amount of rollback activity being generated.
So, assuming you have already checked for table/row lock contention and ruled it out, you might consider either:
a) increasing the size of your rollback segments; or
b) chunking up your transaction so that you commit after every X rows (see the sketch after this list). X might be 100, 1,000, or 10,000 depending on your DB. Too low a number and you spend more time processing commits; too high and you might still suffer from the apparent hanging.
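
A minimal sketch of that chunked approach in PL/SQL, using the table names from the original question and an arbitrary batch size; this is illustrative only, not the thread's eventual fix:

DECLARE
  c_batch_size CONSTANT PLS_INTEGER := 10000;  -- tune per the comment above
BEGIN
  LOOP
    -- Delete at most one batch of matching rows per iteration
    DELETE FROM customer a
     WHERE EXISTS (SELECT 'x'
                     FROM purge_table b
                    WHERE a.customer_number = b.customer_number)
       AND ROWNUM <= c_batch_size;

    EXIT WHEN SQL%ROWCOUNT = 0;  -- nothing left to delete
    COMMIT;                      -- release undo and locks between batches
  END LOOP;
  COMMIT;
END;
/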
David (Senior Oracle Database Administrator) commented:
I concur with 40311967. For further reading, I chanced upon the option of changing the COMMIT statement from immediate to batch, and will play with this when I can: http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_4010.htm#i2060233
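
A one-line illustration of that batch option from the linked 10.2 SQL Reference; whether it actually helps here depends on the workload:

-- Queue the redo write instead of waiting for it at each commit
COMMIT WRITE BATCH NOWAIT;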
sventhan (Author) commented:
Thanks again.

I did try the controlled delete and it takes 10 minutes for 500 rows.

It uses both indexes but goes for a fast full scan. Still looking...
sventhan (Author) commented:
This issue is not solved yet.

The problem is the explain plan. It does an index (fast) full scan on both indexes on the join column, which is customer_number.

How can I change that to index access by rowid? If I could change that, it would execute much faster.

---------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name                        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT        |                             |       |       |       | 10353 (100)|          |
|   1 |  DELETE                 | DM_CUSTOMER_DIM             |       |       |       |            |          |
|   2 |   MERGE JOIN SEMI       |                             | 41295 |  1532K|       | 10353   (4)| 00:02:05 |
|   3 |    INDEX FULL SCAN      | DM_CUSTOMER_DIM_AK3         |  3812K|   112M|       |  9430   (3)| 00:01:54 |
|   4 |    SORT UNIQUE          |                             |   204K|  1397K|  6424K|   923   (7)| 00:00:12 |
|   5 |     INDEX FAST FULL SCAN| IDX2_CUSTOMER_PURGE         |   204K|  1397K|       |   174   (6)| 00:00:03 |
---------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$3BA1AD7C
   3 - SEL$3BA1AD7C / A@DEL$1
   5 - SEL$3BA1AD7C / B@SEL$1

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
      DB_VERSION('11.2.0.3')
      OPT_PARAM('_replace_virtual_columns' 'false')
      OPT_PARAM('_optimizer_eliminate_filtering_join' 'false')
      OPT_PARAM('star_transformation_enabled' 'true')
      OPT_PARAM('_fix_control' '4728348:0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$3BA1AD7C")
      UNNEST(@"SEL$1")
      OUTLINE(@"DEL$1")
      OUTLINE(@"SEL$1")
      INDEX(@"SEL$3BA1AD7C" "A"@"DEL$1" ("DM_CUSTOMER_DIM"."CUSTOMER_NUMBER"))
      INDEX_FFS(@"SEL$3BA1AD7C" "B"@"SEL$1" ("DM_CUSTOMER_PURGE"."CUSTOMER_NUMBER"))
      LEADING(@"SEL$3BA1AD7C" "A"@"DEL$1" "B"@"SEL$1")
      USE_MERGE(@"SEL$3BA1AD7C" "B"@"SEL$1")
      END_OUTLINE_DATA
  */
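
For reference, one thing to experiment with is driving the join from the small purge set with nested loops into the customer index, so each row is reached by rowid. This is only a hedged sketch built from the query-block names and object names shown in the outline above; the optimizer may refuse the hints or still judge the merge join cheaper:

delete /*+ leading(@"SEL$3BA1AD7C" "B"@"SEL$1" "A"@"DEL$1")
           use_nl(@"SEL$3BA1AD7C" "A"@"DEL$1")
           index(@"SEL$3BA1AD7C" "A"@"DEL$1" ("DM_CUSTOMER_DIM"."CUSTOMER_NUMBER")) */
  from dm_customer_dim a
 where exists (select 'x'
                 from dm_customer_purge b
                where a.customer_number = b.customer_number);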
sdstuber commented:
It looks like you are getting rowid lookups but only after combining the index scans.

That means your indexes, by themselves, are not useful for the optimizer - or it doesn't think so anyway.

What are your indexes on that table?
sventhan (Author) commented:
SD -

Thanks for looking and your time.

The indexes file is attached.

Looking at the 10046 trace file in PROD, the execution spent almost 5 minutes in the recursive query below. This does not happen in TEST. The table DM_CUSTOMER_PRODUCT_SEGM_FACT has a foreign-key constraint to DM_CUSTOMER_DIM through column DM_CUSTOMER_DKEY.
 
select /*+ all_rows */ count(1) from "DM"."DM_CUSTOMER_PRODUCT_SEGM_FACT" where "DM_CUSTOMER_DKEY" = :1
END OF STMT
PARSE #4574288328:c=94681,e=1633996,p=365,cr=1304,cu=14,mis=1,r=0,dep=1,og=1,plh=0,tim=97283574143143
.
.
.
EXEC #4574288328:c=2052,e=30604,p=3,cr=12,cu=0,mis=1,r=0,dep=1,og=1,plh=2795380233,tim=97283574173917
WAIT #4574288328: nam='Disk file operations I/O' ela= 64 FileOperation=2 fileno=588 filetype=2 obj#=3475408 tim=97283574174103
WAIT #4574288328: nam='db file sequential read' ela= 1188 file#=588 block#=543379 blocks=1 obj#=3475408 tim=97283574175314
WAIT #4574288328: nam='Disk file operations I/O' ela= 29 FileOperation=2 fileno=464 filetype=2 obj#=3475408 tim=97283574175426
WAIT #4574288328: nam='db file sequential read' ela= 21325 file#=464 block#=652103 blocks=1 obj#=3475408 tim=97283574196770
.
.
.
WAIT #4574288328: nam='db file sequential read' ela= 525 file#=414 block#=3335987 blocks=1 obj#=3475408 tim=97283859585625
WAIT #4574288328: nam='db file sequential read' ela= 968 file#=414 block#=3336003 blocks=1 obj#=3475408 tim=97283859586638
WAIT #4574288328: nam='db file sequential read' ela= 1538 file#=414 block#=3336020 blocks=1 obj#=3475408 tim=97283859588272
FETCH #4574288328:c=8277712,e=285414391,p=100083,cr=100083,cu=0,mis=0,r=1,dep=1,og=1,plh=2795380233,tim=97283859588350 <<<--- 285 s. = 4.75 min.
STAT #4574288328 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=100083 pr=100083 pw=0 time=285414386 us)'
STAT #4574288328 id=2 cnt=0 pid=1 pos=1 obj=3475408 op='INDEX SKIP SCAN IX1_CUSTOMER_PROD_SEGM_FACT (cr=100083 pr=100083 pw=0 time=285414368 us cost=845 size=5700 card=950)'
CLOSE #4574288328:c=3,e=4,dep=1,type=3,tim=97283859588526
.
indexes.txt
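
For context on that recursive statement: when rows are deleted from a parent table, Oracle checks each referencing child table, and here that check is doing an INDEX SKIP SCAN of IX1_CUSTOMER_PROD_SEGM_FACT, which suggests DM_CUSTOMER_DKEY is not the leading column of that index. One standard remedy for this pattern (not what the thread ultimately did; the index name below is purely hypothetical) is an index led by the foreign-key column:

-- Hypothetical index so the FK check during the parent delete can use a
-- range scan instead of a skip scan; name and ONLINE option are assumptions.
CREATE INDEX dm.ix_cust_prod_segm_fk
  ON dm.dm_customer_product_segm_fact (dm_customer_dkey)
  ONLINE;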
sventhan (Author) commented:
It's strange. We just refreshed TEST from PROD, and this delete ran fine in TEST in minutes using the same plan.

But in PROD it hung. After a little tuning/altering I got the delete moving, but it was still slow.

We found recursive SQL from one of the referenced tables mentioned above, which took most of the time. After disabling the FK reference I was able to complete the delete in minutes.

Thanks for all your help.
sventhan (Author) commented:
Thanks for all your help.