CRISTIANO_CORRADI asked:

Optimizing a DELETE statement for a large number of records

The column OPERAZIONE in the NEREPSTA_STORICO table can assume 2 values: 'IMPORT' or 'EXPORT'.

The column CENTRALE in the NEREPSTA_STORICO table can assume 10500 different values.

The DELETE statement attached below keeps at most 50 records in the NEREPSTA_STORICO table for every distinct CENTRALE with OPERAZIONE = 'IMPORT', and at most 50 records for every distinct CENTRALE with OPERAZIONE = 'EXPORT'.

So the NEREPSTA_STORICO table can hold at most 50 * 2 * 10500 = 1,050,000 records.

Explanation:

50 => max 50 records for each distinct pair (CENTRALE, OPERAZIONE)
2 => two possible values of OPERAZIONE ('IMPORT' or 'EXPORT')
10500 => the number of distinct CENTRALE values

How can I optimize the DELETE statement? Once the transitional period is over, the table will always hold about 1,050,000 records, and I'm afraid the delete could cause deadlocks.

-- Keep only the 50 most recent rows (by DATA_INIZIO) for each
-- (CENTRALE, OPERAZIONE) pair; delete everything ranked beyond 50.
DELETE FROM nerepsta_storico@linkmaster
 WHERE ROWID IN (
          SELECT rid
            FROM (SELECT ROWID rid,
                         ROW_NUMBER () OVER (PARTITION BY centrale, operazione
                                             ORDER BY data_inizio DESC) rn
                    FROM nerepsta_storico@linkmaster)
           WHERE rn > 50);


Sujith replied:

What is the expected number of records that get deleted?

If this number is large, and if you don't have dependent objects on nerepsta_storico, you could create a new table containing only the required records using CTAS (CREATE TABLE ... AS SELECT), rename the original table as a backup, and rename the new table to nerepsta_storico; see the sketch below.
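
A minimal sketch of that CTAS approach, assuming it runs on the database that owns NEREPSTA_STORICO (rather than through the db-link) and that no indexes, constraints, triggers, or grants need to be recreated; the table names other than nerepsta_storico are illustrative:

-- 1. Build a new table holding only the rows to keep.
CREATE TABLE nerepsta_storico_new AS
    SELECT *
      FROM (SELECT s.*,
                   ROW_NUMBER () OVER (PARTITION BY centrale, operazione
                                       ORDER BY data_inizio DESC) rn
              FROM nerepsta_storico s)
     WHERE rn <= 50;

-- 2. Drop the helper ranking column carried over by SELECT *.
ALTER TABLE nerepsta_storico_new DROP COLUMN rn;

-- 3. Swap the tables, keeping the original as a backup.
ALTER TABLE nerepsta_storico RENAME TO nerepsta_storico_bak;
ALTER TABLE nerepsta_storico_new RENAME TO nerepsta_storico;

This trades a large DELETE, with its undo/redo cost, for a bulk read plus a couple of dictionary operations, but the table is effectively offline during the swap.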

CRISTIANO_CORRADI (asker) replied:

The expected number of deleted records for every DELETE statement is one for each pair (CENTRALE, OPERAZIONE), where CENTRALE can assume 10550 different values and OPERAZIONE can assume two values ('IMPORT' or 'EXPORT'): about 10550 * 2 = 21,100 records. Is that slow?

The objective is to keep only the 50 most recent records for every CENTRALE with OPERAZIONE = 'IMPORT', and only the 50 most recent records for every CENTRALE with OPERAZIONE = 'EXPORT'.
In a few months, the system will delete about 20,000 records selected from over 1,000,000: is it possible to optimize the query further? Is the "WHERE ROWID IN" clause too heavy? Should I replace it with an EXISTS clause?
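
For illustration, a sketch of a correlated alternative that avoids "WHERE ROWID IN" (a correlated COUNT rather than a literal EXISTS, since "at least 50 newer rows" needs a count); it assumes DATA_INIZIO has no ties within a (CENTRALE, OPERAZIONE) pair, because ties would change which rows fall outside the top 50:

-- Delete every row that has at least 50 more recent rows in the
-- same (CENTRALE, OPERAZIONE) group.
DELETE FROM nerepsta_storico@linkmaster a
 WHERE 50 <= (SELECT COUNT (*)
                FROM nerepsta_storico@linkmaster b
               WHERE b.centrale = a.centrale
                 AND b.operazione = a.operazione
                 AND b.data_inizio > a.data_inizio);

Note that this correlated form runs the inner count once per row, so without an index on (CENTRALE, OPERAZIONE, DATA_INIZIO) it is unlikely to beat the ROW_NUMBER version.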
ASKER CERTIFIED SOLUTION (by Sujith)
CRISTIANO_CORRADI: OK. Any other clues?
Sujith: Can you do this delete directly in the remote database, instead of going through the db-link?

If that is possible, you may try to use parallelism in the delete, by altering the table to set a degree of parallelism.
CRISTIANO_CORRADI: @sujith80: No, I can't run the delete in the remote database :-(  What do you mean by using parallelism in the delete?
Sujith: Read this link for details on parallel DML in Oracle:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/usingpe.htm#sthref2120

But I am not quite sure whether it will work for you, as you are doing the delete over a db-link (I've never tried that).
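
For reference, a minimal sketch of what enabling parallel DML usually looks like when the delete runs locally on the table's own database; the degree of 4 is an arbitrary example, and Oracle generally does not parallelize DML on remote objects, which is why the db-link case is doubtful:

-- Allow this session to run DML in parallel.
ALTER SESSION ENABLE PARALLEL DML;

-- Give the table a default degree of parallelism.
ALTER TABLE nerepsta_storico PARALLEL 4;

-- The delete can now be spread across parallel slaves.
DELETE FROM nerepsta_storico
 WHERE ROWID IN (SELECT rid
                   FROM (SELECT ROWID rid,
                                ROW_NUMBER () OVER (PARTITION BY centrale, operazione
                                                    ORDER BY data_inizio DESC) rn
                           FROM nerepsta_storico)
                  WHERE rn > 50);
COMMIT;

-- Optionally reset the table afterwards.
ALTER TABLE nerepsta_storico NOPARALLEL;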
CRISTIANO_CORRADI: I was trying to use the WITH clause, but it doesn't work with the DELETE statement. Why?
WITH s AS
     (SELECT rid
        FROM (SELECT ROWID rid,
                     ROW_NUMBER () OVER (PARTITION BY centrale, operazione
                                         ORDER BY data_inizio DESC) rn
                FROM nerepsta_storico@linkmaster)
       WHERE rn > 5)
SELECT rid FROM s

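A probable explanation, with a sketch: Oracle's WITH (subquery factoring) clause introduces a query block, so it can prefix a SELECT but not a DELETE. It can, however, appear inside the subquery of the DELETE, along these lines:

-- The WITH clause moves inside the IN (...) subquery, where a
-- query block is allowed.
DELETE FROM nerepsta_storico@linkmaster
 WHERE ROWID IN (WITH s AS
                      (SELECT rid
                         FROM (SELECT ROWID rid,
                                      ROW_NUMBER () OVER (PARTITION BY centrale, operazione
                                                          ORDER BY data_inizio DESC) rn
                                 FROM nerepsta_storico@linkmaster)
                        WHERE rn > 50)
                 SELECT rid FROM s);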

CRISTIANO_CORRADI: Thanks for your explanation!