Solved

Oracle Delete running slow

Posted on 2014-09-05
Medium Priority
1,145 Views
Last Modified: 2014-09-18
Experts -

This simple SQL delete runs forever without completing. It only has about 200K rows to delete. It ran fine in TEST but not in PROD.

delete FROM customer a WHERE exists ( SELECT x FROM purge_table b where a.customer_number = b.customer_number)

The explain plan uses the proper indexes on customer_number. We've already deleted all the child rows.

Thanks,

sve
Question by:sventhan
11 Comments
 
LVL 23

Accepted Solution

by:David
David earned 500 total points
ID: 40306986
1.  Possible row-lock contention (some other process updating the table, for example). Can you quiesce the instance from all other sessions and jobs? One option is to stop and restart the instance in exclusive mode.
1a. Were TEST and PROD run at a similar day and time of day?
2.  Possibly missing or stale CBO statistics.
3.  Take a reality check, just for fun, and run the statement as a SELECT COUNT(1) instead of a DELETE (see the sketch after this list). Obviously the number of rows should be close.
4.  Can you please attach screenshots of the two explain plans, so that we can eyeball them?
5.  Anything relevant in the alert logs from when the jobs were run?
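
The reality check in point 3, reusing the posted predicate, would look something like this (a sketch only, with the same table names as the question):

SELECT COUNT(1)
  FROM customer a
 WHERE EXISTS (SELECT 1
                 FROM purge_table b
                WHERE a.customer_number = b.customer_number);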
 
LVL 16

Assisted Solution

by:Wasim Akram Shaik
Wasim Akram Shaik earned 500 total points
ID: 40309475
Check the explain plan of the query below in both instances.

The stats may differ between PROD and TEST (I believe this is the same possibility dvz suggested in point 2).

If they differ, look into gathering fresh stats; otherwise, check whether rephrasing your query helps (assuming customer_number is an indexed column):

delete from customer where customer_number in (select customer_number from customer a, purge_table b where a.customer_number = b.customer_number)
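
If the stats do differ, gathering them might look like this (a sketch only; the owner and table name are placeholders for your environment, and CASCADE => TRUE also refreshes the index stats):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,       -- placeholder owner
                                tabname => 'CUSTOMER', -- placeholder table
                                cascade => TRUE);      -- include index stats
END;
/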
 
LVL 18

Author Comment

by:sventhan
ID: 40310961
Thanks DVZ. That's a lot of information.

If I run the delete in small batches, it works. This table has multiple constraints and relationships. The stats are up to date.

The SELECT COUNT(*) runs fine. The table has 5 million rows, but I noticed the session reads 500 million rows from the buffer cache when the delete runs.

I'll run this again and get you the details.

Thanks Wasim.
 
LVL 4

Assisted Solution

by:jtrifts
jtrifts earned 500 total points
ID: 40311967
Lots of good info already from the other DBAs... a few comments.

Database transactions generate redo and rollback.
In the case of deletes, the more rows you delete, the more rollback you need.
Sometimes your rollback segments are not large enough to cope with the amount of rollback activity being generated.
So, assuming you have already checked for table/row lock contention and ruled it out, you might consider either:
a) increasing the size of your rollback segments; or
b) chunking up your transaction so that you commit after every X rows (see the sketch below). X might be 100, 1,000, or 10,000 depending on your DB. Too low a number and you spend more time processing commits; too high and you may still see the apparent hang.
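
A minimal sketch of option (b), reusing the original predicate (the 10,000-row batch size is an arbitrary starting point):

BEGIN
  LOOP
    DELETE FROM customer a
     WHERE EXISTS (SELECT 1
                     FROM purge_table b
                    WHERE a.customer_number = b.customer_number)
       AND ROWNUM <= 10000;       -- delete at most one batch per pass
    EXIT WHEN SQL%ROWCOUNT = 0;   -- stop when nothing is left to delete
    COMMIT;                       -- release undo/rollback after each batch
  END LOOP;
  COMMIT;
END;
/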
 
LVL 23

Expert Comment

by:David
ID: 40312172
I concur with 40311967.  For further reading, I chanced upon changing the commit statement from immediate to batch, and will play with it when I can.  http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_4010.htm#i2060233
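
From the linked COMMIT documentation, the batched, non-waiting variant is a one-liner (note that it relaxes the usual durability guarantee, so test before relying on it):

COMMIT WRITE BATCH NOWAIT;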
 
LVL 18

Author Comment

by:sventhan
ID: 40312347
Thanks again.

I did try the controlled delete, and it takes 10 minutes for 500 rows.

It uses both indexes but goes for a fast full scan. Still looking...
 
LVL 18

Author Comment

by:sventhan
ID: 40315120
This issue is not solved yet.

The problem is the explain plan. It does an INDEX (FAST) FULL SCAN on both indexes on the join column, customer_number.

How can I change that to index access by rowid? If I could, the delete would run much faster.

---------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name                        | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT        |                             |       |       |       | 10353 (100)|          |
|   1 |  DELETE                 | DM_CUSTOMER_DIM             |       |       |       |            |          |
|   2 |   MERGE JOIN SEMI       |                             | 41295 |  1532K|       | 10353   (4)| 00:02:05 |
|   3 |    INDEX FULL SCAN      | DM_CUSTOMER_DIM_AK3         |  3812K|   112M|       |  9430   (3)| 00:01:54 |
|   4 |    SORT UNIQUE          |                             |   204K|  1397K|  6424K|   923   (7)| 00:00:12 |
|   5 |     INDEX FAST FULL SCAN| IDX2_CUSTOMER_PURGE         |   204K|  1397K|       |   174   (6)| 00:00:03 |
---------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$3BA1AD7C
   3 - SEL$3BA1AD7C / A@DEL$1
   5 - SEL$3BA1AD7C / B@SEL$1

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('11.2.0.3')
      DB_VERSION('11.2.0.3')
      OPT_PARAM('_replace_virtual_columns' 'false')
      OPT_PARAM('_optimizer_eliminate_filtering_join' 'false')
      OPT_PARAM('star_transformation_enabled' 'true')
      OPT_PARAM('_fix_control' '4728348:0')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$3BA1AD7C")
      UNNEST(@"SEL$1")
      OUTLINE(@"DEL$1")
      OUTLINE(@"SEL$1")
      INDEX(@"SEL$3BA1AD7C" "A"@"DEL$1" ("DM_CUSTOMER_DIM"."CUSTOMER_NUMBER"))
      INDEX_FFS(@"SEL$3BA1AD7C" "B"@"SEL$1" ("DM_CUSTOMER_PURGE"."CUSTOMER_NUMBER"))
      LEADING(@"SEL$3BA1AD7C" "A"@"DEL$1" "B"@"SEL$1")
      USE_MERGE(@"SEL$3BA1AD7C" "B"@"SEL$1")
      END_OUTLINE_DATA
  */
 
LVL 74

Assisted Solution

by:sdstuber
sdstuber earned 500 total points
ID: 40315305
It looks like you are getting rowid lookups, but only after combining the index scans.

That means your indexes, by themselves, are not useful to the optimizer - or at least it doesn't think they are.

What are the indexes on that table?
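
For what it's worth, taking the outline data you posted and flipping the join order and method is one way to ask for rowid access into the customer table. This is only a sketch built from that outline (untested here; the optimizer may still cost it differently or ignore the hints):

DELETE /*+ LEADING(@"SEL$3BA1AD7C" "B"@"SEL$1" "A"@"DEL$1")
           USE_NL(@"SEL$3BA1AD7C" "A"@"DEL$1")
           INDEX(@"SEL$3BA1AD7C" "A"@"DEL$1" ("DM_CUSTOMER_DIM"."CUSTOMER_NUMBER")) */
  FROM customer a
 WHERE EXISTS (SELECT 1
                 FROM purge_table b
                WHERE a.customer_number = b.customer_number);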
 
LVL 18

Author Comment

by:sventhan
ID: 40315798
SD -

Thanks for looking and for your time.

The indexes file is attached.

Looking at the 10046 trace file in PROD, the execution spent almost 5 minutes in a recursive query. This does not happen in TEST. The table DM_CUSTOMER_PRODUCT_SEGM_FACT has a foreign-key constraint to DM_CUSTOMER_DIM through column DM_CUSTOMER_DKEY.
 
select /*+ all_rows */ count(1) from "DM"."DM_CUSTOMER_PRODUCT_SEGM_FACT" where "DM_CUSTOMER_DKEY" = :1
END OF STMT
PARSE #4574288328:c=94681,e=1633996,p=365,cr=1304,cu=14,mis=1,r=0,dep=1,og=1,plh=0,tim=97283574143143
.
.
.
EXEC #4574288328:c=2052,e=30604,p=3,cr=12,cu=0,mis=1,r=0,dep=1,og=1,plh=2795380233,tim=97283574173917
WAIT #4574288328: nam='Disk file operations I/O' ela= 64 FileOperation=2 fileno=588 filetype=2 obj#=3475408 tim=97283574174103
WAIT #4574288328: nam='db file sequential read' ela= 1188 file#=588 block#=543379 blocks=1 obj#=3475408 tim=97283574175314
WAIT #4574288328: nam='Disk file operations I/O' ela= 29 FileOperation=2 fileno=464 filetype=2 obj#=3475408 tim=97283574175426
WAIT #4574288328: nam='db file sequential read' ela= 21325 file#=464 block#=652103 blocks=1 obj#=3475408 tim=97283574196770
.
.
.
WAIT #4574288328: nam='db file sequential read' ela= 525 file#=414 block#=3335987 blocks=1 obj#=3475408 tim=97283859585625
WAIT #4574288328: nam='db file sequential read' ela= 968 file#=414 block#=3336003 blocks=1 obj#=3475408 tim=97283859586638
WAIT #4574288328: nam='db file sequential read' ela= 1538 file#=414 block#=3336020 blocks=1 obj#=3475408 tim=97283859588272
FETCH #4574288328:c=8277712,e=285414391,p=100083,cr=100083,cu=0,mis=0,r=1,dep=1,og=1,plh=2795380233,tim=97283859588350 <<<--- 285 s. = 4.75 min.
STAT #4574288328 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=100083 pr=100083 pw=0 time=285414386 us)'
STAT #4574288328 id=2 cnt=0 pid=1 pos=1 obj=3475408 op='INDEX SKIP SCAN IX1_CUSTOMER_PROD_SEGM_FACT (cr=100083 pr=100083 pw=0 time=285414368 us cost=845 size=5700 card=950)'
CLOSE #4574288328:c=3,e=4,dep=1,type=3,tim=97283859588526
.
indexes.txt
 
LVL 18

Author Comment

by:sventhan
ID: 40316114
It's strange. We just refreshed TEST from PROD, and this delete ran fine in TEST in minutes using the same plan.

In PROD it hung. After a little tuning/altering I got the delete moving, but it was still slow.

We found recursive SQL from one of the referencing tables mentioned above, which took most of the time. After disabling the FK reference, the delete completed in minutes.

Thanks for all your help.
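
For the record, the two standard fixes for a slow recursive FK check like the one in the trace (the index and constraint names below are placeholders, not from the thread):

-- Option 1: index the FK column so the parent-delete check becomes a
-- range scan instead of the INDEX SKIP SCAN seen in the 10046 trace.
CREATE INDEX dm.ix_fk_customer_dkey
  ON dm.dm_customer_product_segm_fact (dm_customer_dkey);

-- Option 2 (what was done here): disable the constraint for the purge
-- window and re-enable it afterwards (constraint name is hypothetical).
ALTER TABLE dm.dm_customer_product_segm_fact DISABLE CONSTRAINT fk_customer_dim;
-- ... run the delete ...
ALTER TABLE dm.dm_customer_product_segm_fact ENABLE CONSTRAINT fk_customer_dim;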
 
LVL 18

Author Closing Comment

by:sventhan
ID: 40331376
Thanks for all your help.
