
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1270

Oracle Delete running slow

Experts -

This simple SQL delete runs forever without completing. It only has about 200K rows to delete. It ran fine in TEST but not in PROD.

DELETE FROM customer a WHERE EXISTS (SELECT x FROM purge_table b WHERE a.customer_number = b.customer_number)

The explain plan is using the proper indexes on the customer_number. We've deleted all the child rows.


4 Solutions
DavidSenior Oracle Database AdministratorCommented:
1.  Possible row lock contention (some other process is updating the table, for example).  Can you quiesce the instance from all other sessions and jobs? One option is to stop and restart the instance in exclusive mode.
1a.  Similar day and time of day?
2.  Possible missing / stale CBO statistics
3.  As a reality check, just for fun, run the statement as a SELECT COUNT(1) instead of a DELETE.  Obviously the row count should be close to the expected 200K.
4.  Can you please attach screenshots of the two explain plans, so that we can eyeball them?
5.  Anything relevant in the alert logs when the jobs were run?
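Points 2 and 3 could be sketched like this (table names follow the question; the DBMS_STATS call and its options are illustrative, not a prescription):

```sql
-- Point 3: same predicate as the DELETE, but read-only
SELECT COUNT(*)
  FROM customer a
 WHERE EXISTS (SELECT 1
                 FROM purge_table b
                WHERE a.customer_number = b.customer_number);

-- Point 2: refresh CBO statistics in case they are missing or stale
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                tabname => 'CUSTOMER',
                                cascade => TRUE);  -- include the indexes
END;
/
```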
Wasim Akram ShaikCommented:
See the explain plan of the query below in both instances.

The stats may differ between PROD and TEST (I believe this is the same possibility dvz suggested in point 2).

If they differ, look at gathering fresh stats;
otherwise, check whether rephrasing your query helps (assuming customer_number is an indexed column).

DELETE FROM customer WHERE customer_number IN (SELECT customer_number FROM customer a, purge_table b WHERE a.customer_number = b.customer_number)
sventhanAuthor Commented:
Thanks DVZ. That's a lot of information.

If I run the delete in small groups it works. This table has multiple constraints and relationships. The stats are up to date.

The SELECT COUNT(*) runs fine. The table has 5 million rows, but I noticed it reads 500 million rows from the buffer when the delete runs.

I'll run this again and will get you the details.

Thanks Wasim.

Lots of good info already from the other DBAs... a few comments:

Database transactions generate redo and rollback.
With deletes, the more rows you delete, the more rollback you generate.
Sometimes your rollback segments are not large enough to cope with the amount of rollback activity being generated.
So, assuming you have already checked for table/row lock contention and ruled it out, you might consider either:
a) increasing the size of your rollback segments; or
b) chunking up your transaction so that you commit after every X rows. X might be 100, 1,000, or 10,000 depending on your DB. Too low a number and you spend more time processing commits; too high and you may still see the apparent hang.
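Option (b) can be sketched in PL/SQL, assuming the table and column names from the question; the 10,000-row batch size is illustrative and should be tuned for your DB:

```sql
BEGIN
  LOOP
    DELETE FROM customer a
     WHERE EXISTS (SELECT 1
                     FROM purge_table b
                    WHERE a.customer_number = b.customer_number)
       AND ROWNUM <= 10000;       -- delete at most 10,000 rows per pass
    EXIT WHEN SQL%ROWCOUNT = 0;   -- nothing left to delete
    COMMIT;                       -- free rollback/redo between batches
  END LOOP;
END;
/
```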
DavidSenior Oracle Database AdministratorCommented:
I concur with 40311967.  For further reading, I came across changing the COMMIT statement from immediate to batch, and will play with this when I can.  http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_4010.htm#i2060233
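From the linked 10gR2 docs, the batched, non-waiting variant looks like this (a sketch of the syntax only, not tested here):

```sql
COMMIT WRITE BATCH NOWAIT;  -- buffer the redo write instead of waiting for it
```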
sventhanAuthor Commented:
Thanks again.

I did try the controlled delete and it takes 10 minutes for 500 rows.

It uses both indexes and goes for a fast full scan. Still looking...
sventhanAuthor Commented:
This issue is not solved.

The problem is the explain plan. It does an index (fast) full scan on both indexes on the join column, which is customer_number.

How can I change that to access by ROWID? If I could change that, it would execute much faster.

| Id  | Operation               | Name                        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
|   0 | DELETE STATEMENT        |                             |       |       |       | 10353 (100)|          |
|   1 |  DELETE                 | DM_CUSTOMER_DIM             |       |       |       |            |          |
|   2 |   MERGE JOIN SEMI       |                             | 41295 |  1532K|       | 10353   (4)| 00:02:05 |
|   3 |    INDEX FULL SCAN      | DM_CUSTOMER_DIM_AK3         |  3812K|   112M|       |  9430   (3)| 00:01:54 |
|   4 |    SORT UNIQUE          |                             |   204K|  1397K|  6424K|   923   (7)| 00:00:12 |
|   5 |     INDEX FAST FULL SCAN| IDX2_CUSTOMER_PURGE |   204K|  1397K|       |   174   (6)| 00:00:03 |

Query Block Name / Object Alias (identified by operation id):

   1 - SEL$3BA1AD7C
   3 - SEL$3BA1AD7C / A@DEL$1
   5 - SEL$3BA1AD7C / B@SEL$1

Outline Data

      OPT_PARAM('_replace_virtual_columns' 'false')
      OPT_PARAM('_optimizer_eliminate_filtering_join' 'false')
      OPT_PARAM('star_transformation_enabled' 'true')
      OPT_PARAM('_fix_control' '4728348:0')
      LEADING(@"SEL$3BA1AD7C" "A"@"DEL$1" "B"@"SEL$1")
      USE_MERGE(@"SEL$3BA1AD7C" "B"@"SEL$1")
It looks like you are getting ROWID lookups, but only after combining the index scans.

That means your indexes, by themselves, are not useful to the optimizer - or at least it doesn't think so.

What are your indexes on that table?
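If the optimizer insists on merging full index scans, one way to force index-probe access while you investigate is to drive the delete from the small purge list. A hedged PL/SQL sketch, using the names from the original statement:

```sql
BEGIN
  FOR r IN (SELECT customer_number FROM purge_table) LOOP
    -- a single-value predicate should use an INDEX RANGE SCAN
    -- on the customer_number index, then access the table by ROWID
    DELETE FROM customer WHERE customer_number = r.customer_number;
  END LOOP;
  COMMIT;
END;
/
```

Row-by-row is slower per row than one set-based DELETE, but it makes the access path predictable, which helps isolate whether the plan or something else (locks, recursive SQL) is the real bottleneck.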
sventhanAuthor Commented:
SD -

Thanks for looking and your time.

The indexes file is attached.

Looking at the 10046 trace file in PROD, the execution spent almost 5 minutes in the recursive query. This is not happening in TEST. This table DM_CUSTOMER_PRODUCT_SEGM_FACT has a foreign-key constraint to DM_CUSTOMER_DIM through column DM_CUSTOMER_DKEY.
select /*+ all_rows */ count(1) from "DM"."DM_CUSTOMER_PRODUCT_SEGM_FACT" where "DM_CUSTOMER_DKEY" = :1
PARSE #4574288328:c=94681,e=1633996,p=365,cr=1304,cu=14,mis=1,r=0,dep=1,og=1,plh=0,tim=97283574143143
EXEC #4574288328:c=2052,e=30604,p=3,cr=12,cu=0,mis=1,r=0,dep=1,og=1,plh=2795380233,tim=97283574173917
WAIT #4574288328: nam='Disk file operations I/O' ela= 64 FileOperation=2 fileno=588 filetype=2 obj#=3475408 tim=97283574174103
WAIT #4574288328: nam='db file sequential read' ela= 1188 file#=588 block#=543379 blocks=1 obj#=3475408 tim=97283574175314
WAIT #4574288328: nam='Disk file operations I/O' ela= 29 FileOperation=2 fileno=464 filetype=2 obj#=3475408 tim=97283574175426
WAIT #4574288328: nam='db file sequential read' ela= 21325 file#=464 block#=652103 blocks=1 obj#=3475408 tim=97283574196770
WAIT #4574288328: nam='db file sequential read' ela= 525 file#=414 block#=3335987 blocks=1 obj#=3475408 tim=97283859585625
WAIT #4574288328: nam='db file sequential read' ela= 968 file#=414 block#=3336003 blocks=1 obj#=3475408 tim=97283859586638
WAIT #4574288328: nam='db file sequential read' ela= 1538 file#=414 block#=3336020 blocks=1 obj#=3475408 tim=97283859588272
FETCH #4574288328:c=8277712,e=285414391,p=100083,cr=100083,cu=0,mis=0,r=1,dep=1,og=1,plh=2795380233,tim=97283859588350 <<<--- 285 s. = 4.75 min.
STAT #4574288328 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=100083 pr=100083 pw=0 time=285414386 us)'
STAT #4574288328 id=2 cnt=0 pid=1 pos=1 obj=3475408 op='INDEX SKIP SCAN IX1_CUSTOMER_PROD_SEGM_FACT (cr=100083 pr=100083 pw=0 time=285414368 us cost=845 size=5700 card=950)'
CLOSE #4574288328:c=3,e=4,dep=1,type=3,tim=97283859588526
sventhanAuthor Commented:
It's strange. We just refreshed TEST from PROD, and this delete ran fine in TEST, in minutes, using the same plan.

But in PROD it hung. After a little tuning/altering I got the delete moving, but it was still slow.

We found recursive SQL from one of the referenced tables mentioned above, which took most of the time. After disabling the FK reference I was able to complete the delete in minutes.
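For the long term, instead of disabling the constraint, an index with the FK column in the leading position should let each parent delete's child check use a range scan rather than the skip scan seen in the trace. A sketch, with a hypothetical index name:

```sql
-- Hypothetical name; puts the FK column first so the constraint
-- check on each parent delete can range-scan instead of skip-scan
CREATE INDEX ix_cust_prod_segm_fact_fk
    ON dm_customer_product_segm_fact (dm_customer_dkey);
```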

Thanks for all your help.
