• Status: Solved
• Priority: Medium
• Security: Public
• Views: 2961

# Optimizing a DELETE statement for large number of records

The column OPERAZIONE in the NEREPSTA_STORICO table can assume 2 values: 'IMPORT' or 'EXPORT'.

The column CENTRALE in the NEREPSTA_STORICO table can assume 10500 different values.

The DELETE statement attached below keeps at most 50 records in the NEREPSTA_STORICO table for every distinct CENTRALE with OPERAZIONE = 'IMPORT', and at most 50 records for every distinct CENTRALE with OPERAZIONE = 'EXPORT'.

So we can have at most 50 * 2 * 10500 = 1,050,000 records in the NEREPSTA_STORICO table.

Explanation:

50 => at most 50 records for every distinct pair (CENTRALE, OPERAZIONE)
2 => two possible OPERAZIONE values ('IMPORT' or 'EXPORT')
10500 => the number of distinct CENTRALE values

How can I optimize the DELETE statement?  At the end of the transition period I will always have 1,050,000 records, and I'm afraid of running into deadlocks when deleting them.

``````
-- Keep only the 50 most recent rows per (centrale, operazione)
DELETE FROM nerepsta_storico@linkmaster
 WHERE ROWID IN (
        SELECT rid
          FROM (SELECT ROWID rid,
                       ROW_NUMBER () OVER (PARTITION BY centrale, operazione
                                           ORDER BY data_inizio DESC) rn
                  FROM nerepsta_storico@linkmaster)
         WHERE rn > 50);
``````
1 Solution

Data ArchitectCommented:
What is the expected number of records that get deleted?

If this number is large, and if you don't have dependent objects on nerepsta_storico, then can you try to create a table with only the required records using CTAS (CREATE TABLE ... AS SELECT), rename the original table as a backup, and rename the new table to nerepsta_storico?


Author Commented:
The expected number of deleted records for each DELETE statement is:

one for every pair (CENTRALE, OPERAZIONE), where CENTRALE can assume 10550 different values, and OPERAZIONE can assume two values: 'IMPORT' or 'EXPORT'.

The objective is to keep only the 50 most recent records for every CENTRALE with OPERAZIONE = 'IMPORT', and only the 50 most recent records for every CENTRALE with OPERAZIONE = 'EXPORT'.

Data ArchitectCommented:
That makes about 10550 * 2 = 21,100 records.
Is the delete slow?

Author Commented:
In a few months, the system will delete about 20,000 records selected from over 1,000,000 records: is it possible to optimize the query?  Is the "WHERE ROWID IN" clause too heavy?  Should I replace it with an EXISTS clause?

Data ArchitectCommented:
IN and EXISTS may not make a difference here, as EXISTS is mostly used to push the use of an index.
In my opinion, you have to think of a way to avoid the delete altogether, for example:
- partition the table effectively and truncate the obsolete partitions
- or create a new table with only the required data (this approach is only worthwhile if the number of records to keep is much smaller than the number of records to be deleted).
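For illustration, the CTAS-and-rename idea could look roughly like the sketch below. The table and column names are taken from the thread; the `_new`/`_bak` names and the index re-creation step are assumptions, and since DDL cannot be executed through a db-link, this would have to run on the remote database itself:

``````
-- Sketch only: assumes no foreign keys, triggers, or grants that
-- would need to be re-created by hand.
CREATE TABLE nerepsta_storico_new AS
  SELECT centrale, operazione, data_inizio   -- plus the table's other columns
    FROM (SELECT s.*,
                 ROW_NUMBER () OVER (PARTITION BY centrale, operazione
                                     ORDER BY data_inizio DESC) rn
            FROM nerepsta_storico s)
   WHERE rn <= 50;

ALTER TABLE nerepsta_storico     RENAME TO nerepsta_storico_bak;
ALTER TABLE nerepsta_storico_new RENAME TO nerepsta_storico;
-- Re-create indexes, constraints and grants on the new table,
-- then drop nerepsta_storico_bak once the result is verified.
``````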

Author Commented:
OK.  Any other clues?

Data ArchitectCommented:
Can you do this delete in the remote database directly, instead of going through the db-link?

Data ArchitectCommented:
If it is possible, you may try to use parallelism in the delete by altering the table to set a degree of parallelism.

Author Commented:
@sujith80: No, I can't run the delete in the remote database directly :-(   What do you mean by using parallelism in the delete?

Data ArchitectCommented:
You can set a degree of parallelism on the table with an ALTER TABLE statement. But I am not quite sure whether it will work for you, as you are doing the delete over a db-link (I have never tried it).
0
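The parallelism suggestion presumably refers to something like the sketch below (degree 4 is an arbitrary example). Note that parallel DML has restrictions and, to my knowledge, does not apply to remote objects accessed over a db-link, so this too may only help when run on the remote database:

``````
-- Illustrative only: enable parallel DML for the session and give the
-- table a default degree of parallelism (4 is an arbitrary choice).
ALTER SESSION ENABLE PARALLEL DML;
ALTER TABLE nerepsta_storico PARALLEL 4;

-- ... run the DELETE here ...
COMMIT;

ALTER TABLE nerepsta_storico NOPARALLEL;  -- restore the default degree
``````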

Author Commented:
I was trying to use the WITH clause, but it doesn't work with the DELETE statement. Why?
``````
WITH s AS
     (SELECT rid
        FROM (SELECT ROWID rid,
                     ROW_NUMBER () OVER (PARTITION BY centrale, operazione
                                         ORDER BY data_inizio DESC) rn
                FROM nerepsta_storico@linkmaster)
       WHERE rn > 5)
SELECT rid FROM s
``````
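As a side note, Oracle's DELETE syntax does not accept a leading WITH clause; subquery factoring can only head a SELECT. The factored subquery can instead be embedded inside the IN subquery, roughly like this (untested, especially over a db-link, and support for WITH inside a subquery depends on the Oracle version):

``````
DELETE FROM nerepsta_storico@linkmaster
 WHERE ROWID IN (
        WITH s AS
             (SELECT ROWID rid,
                     ROW_NUMBER () OVER (PARTITION BY centrale, operazione
                                         ORDER BY data_inizio DESC) rn
                FROM nerepsta_storico@linkmaster)
        SELECT rid FROM s
         WHERE rn > 50);
``````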
