# Optimizing a DELETE statement for a large number of records

Posted on 2008-10-16
The column OPERAZIONE in the NEREPSTA_STORICO table can assume 2 values: 'IMPORT' or 'EXPORT'.

The column CENTRALE in the NEREPSTA_STORICO table can assume 10500 different values.

The attached DELETE statement keeps at most 50 records in the NEREPSTA_STORICO table for every distinct CENTRALE with OPERAZIONE = 'IMPORT', and at most 50 records for every distinct CENTRALE with OPERAZIONE = 'EXPORT'.

So we can have at most 50 * 2 * 10500 = 1050000 records in the NEREPSTA_STORICO table.

Explanation:

- 50 => at most 50 records for every distinct pair (CENTRALE, OPERAZIONE)
- 2 => the two OPERAZIONE values ('IMPORT' or 'EXPORT')
- 10500 => the number of distinct CENTRALE values

How can I optimize the DELETE statement? Once the transitional period is over, I will always have 1050000 records, and I'm afraid the delete will run into deadlocks.

```sql
DELETE FROM nerepsta_storico@linkmaster
WHERE ROWID IN (
  SELECT rid
  FROM (SELECT ROWID rid,
               ROW_NUMBER () OVER (PARTITION BY centrale, operazione
                                   ORDER BY data_inizio DESC) rn
        FROM nerepsta_storico@linkmaster)
  WHERE rn > 50);
```


Expert Comment

What is the expected number of records that get deleted?

If this number is large, and if you don't have dependent objects on nerepsta_storico, then you could create a table containing only the required records using CTAS, rename the original table as a backup, and rename the new table to nerepsta_storico.


Author Comment

The expected number of deleted records for every DELETE statement is:

one for every pair (CENTRALE, OPERAZIONE), where CENTRALE can assume 10500 different values and OPERAZIONE can assume two values: 'IMPORT' or 'EXPORT'.

The objective is to keep only the 50 most recent records for every CENTRALE with OPERAZIONE = 'IMPORT', and only the 50 most recent records for every CENTRALE with OPERAZIONE = 'EXPORT'.


Expert Comment

So that is about 10500 * 2 = 21000 records per run;
is the delete slow?

Author Comment

In a few months, the system will delete about 20000 records selected from over 1000000 records: is it not possible to optimize the query? Is the "WHERE ROWID IN" clause too heavy? Should I replace it with an "EXISTS" clause?


Accepted Solution

sujith80 earned 500 total points
IN and EXISTS may not make a difference here; EXISTS is used in most cases to push the usage of an index.
In my opinion, you have to think of a way to avoid the delete altogether, say:
- partition the table effectively and truncate the obsolete partition
- or create a new table with the required data. (This approach is worthwhile only if the number of records to keep is much smaller than the number of records to be deleted.)
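A minimal sketch of the CTAS-and-rename approach described above, using the table and column names from the question. This is an assumption-laden outline, not a drop-in script: it has to run on the database that owns the table, it lists only the columns mentioned in the thread, and it presumes no indexes, constraints, triggers, or grants need to be recreated on the new table.

```sql
-- Build a new table containing only the 50 most recent rows
-- per (CENTRALE, OPERAZIONE) pair.
CREATE TABLE nerepsta_storico_new AS
SELECT centrale, operazione, data_inizio   -- list all real columns here
FROM (SELECT t.*,
             ROW_NUMBER() OVER (PARTITION BY centrale, operazione
                                ORDER BY data_inizio DESC) rn
      FROM nerepsta_storico t)
WHERE rn <= 50;

-- Swap the tables, keeping the original as a backup.
ALTER TABLE nerepsta_storico RENAME TO nerepsta_storico_bak;
ALTER TABLE nerepsta_storico_new RENAME TO nerepsta_storico;
```

The swap replaces a large DELETE with a cheap metadata operation, which is why it pays off only when most of the table is being discarded.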

Author Comment

OK.  Any other clues?


Expert Comment

Can you do this delete in the remote database directly, instead of going through the db-link?


Expert Comment

If it is possible, you may try to use parallelism in the delete, by altering the table to give it a degree of parallelism.
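For illustration, a sketch of what enabling parallel DML could look like; the degree of 4 is an arbitrary placeholder, and (as noted later in the thread) whether parallel DML is honored for a delete issued through a database link is not guaranteed and depends on the Oracle version.

```sql
-- Parallel DML must be enabled explicitly for the session.
ALTER SESSION ENABLE PARALLEL DML;

-- Either give the table a default degree of parallelism...
ALTER TABLE nerepsta_storico PARALLEL 4;

-- ...or hint the statement directly.
DELETE /*+ PARALLEL(nerepsta_storico, 4) */
FROM nerepsta_storico
WHERE ROWID IN (/* same ROW_NUMBER subquery as in the question */ SELECT rid FROM dual WHERE 1 = 0);

COMMIT;  -- required before the session can query the table again after parallel DML
```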

Author Comment

@sujith80: No, I can't do this delete in the remote database :-(   What do you mean by using parallelism in the delete?


Expert Comment

But I am not quite sure whether it will work for you, as you are doing the delete over a db-link (never tried it).

Author Comment

I was trying to use the WITH clause, but it doesn't work with the DELETE statement. Why?
```sql
WITH s AS
  (SELECT rid
   FROM (SELECT ROWID rid,
                ROW_NUMBER () OVER (PARTITION BY centrale, operazione
                                    ORDER BY data_inizio DESC) rn
         FROM nerepsta_storico@linkmaster)
   WHERE rn > 5)
SELECT rid FROM s
```
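For what it's worth, Oracle accepts subquery factoring (WITH) only at the start of a SELECT, not at the top level of a DELETE, which would explain the failure. One possible workaround, sketched here on the assumption that your Oracle version allows WITH inside a DML subquery (older releases may reject it), is to move the WITH into the IN subquery:

```sql
DELETE FROM nerepsta_storico@linkmaster
WHERE ROWID IN (
  WITH s AS
    (SELECT ROWID rid,
            ROW_NUMBER() OVER (PARTITION BY centrale, operazione
                               ORDER BY data_inizio DESC) rn
     FROM nerepsta_storico@linkmaster)
  SELECT rid FROM s
  WHERE rn > 50);
```

This is functionally the same as the original statement with the inline view; the WITH clause only names the subquery, so no performance change should be expected from it alone.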

