Purging old data from an Oracle database

Hi Experts,
We have a huge OLTP database with oracle 10g. It has data from year 2000 and it is causing performance issues so I’m writing a purge process to clean old data from database. We can’t afford outage to this task so I had written PLSQL code to delete the OLD data. There are 13 tables need to be purged and having parent and child relationship. This is how I have written the code.

Step 1: grab one chunk (3,000 profiles) from the parent table and load it into a temp table.
Step 2: delete these profiles from the child tables, and at the end delete them from the parent table.
Step 3: commit the data.
Step 4: repeat steps 1, 2 and 3 until no profiles match the deletion criteria.

Here is the sample code I have:
loop
  insert into temp (tid)
  select p.id from parent p
  where p.lastactivity_date <= cutoff_date and rownum <= 3000;
  exit when sql%rowcount = 0;

  delete child1 c1
  where exists (select 1 from temp t1 where c1.id = t1.tid);
  -- ... same delete for child2 through child12 ...
  delete child13 c13
  where exists (select 1 from temp t1 where c13.id = t1.tid);

  delete parent p
  where exists (select 1 from temp t1 where p.id = t1.tid);

  commit;
end loop;

There are 330M profiles that need to be purged, so it will take a long time (3+ months) to delete them, and the process generates gobs of redo. The business wants this data purged quicker.
In some forums I have seen the approach of just retaining the required data. Basically, you create a temp table that holds the data you need to keep, then truncate the source table and copy the data back from the temp table. The problem with this approach is that it requires an outage, which we can't afford. Also, we would need to do the same thing for all 13 tables, so I presume it would take an extended outage.

I just want to find out if there is any faster way of purging the data. I have been looking into this for a couple of weeks but with no luck. Any help is really appreciated.

paulwquinn Commented:
Tom Kyte (of "Ask Tom" and Oracle fame) has a great discussion of this type of operation at:


Rather than preventing any outage, perhaps the key is really minimizing the outage. Don't copy the data you want to keep to a new table, truncate the old table, and then copy the data back, but rather minimize the outage (and redo) by:
Create a new table containing just the data you want to keep (CREATE TABLE new_table...AS SELECT...PARALLEL 5 NOLOGGING;, for example).
Index the new table (CREATE INDEX...PARALLEL 5 NOLOGGING;, for example), put the relevant constraints on, etc.
RENAME current_table TO old_table;
RENAME new_table TO current_table;
DROP TABLE old_table;

If you have a regular maintenance window, you could do one of the child tables during each window, then do the parent table during the last one.

By using the NOLOGGING option you can minimize the redo created (which will speed things up). By using parallelization you can speed things up further (building the new indexes, for example). You could conceivably partition the new tables (by year, for example) to facilitate future maintenance of this kind.
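For illustration, the copy-and-swap approach above might look like the following. All table, column, and index names here are hypothetical, and the cutoff date is just an example; you would also need to handle grants, triggers, synonyms, and foreign keys referencing the table, since those do not follow a RENAME.

```sql
-- Build the "keeper" table with minimal redo (direct-path load)
CREATE TABLE parent_new NOLOGGING PARALLEL 5 AS
  SELECT * FROM parent
  WHERE lastactivity_date > DATE '2008-01-01';

-- Rebuild indexes and constraints on the new table
CREATE INDEX parent_new_idx ON parent_new (lastactivity_date)
  PARALLEL 5 NOLOGGING;
ALTER TABLE parent_new ADD CONSTRAINT parent_new_pk PRIMARY KEY (id);

-- Brief outage window: swap the tables, then drop the old data
RENAME parent TO parent_old;
RENAME parent_new TO parent;
DROP TABLE parent_old;
```

The same sequence would be repeated for each of the 13 tables, children first if foreign keys are enabled.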

If the business folks want the job done quickly and with minimal impact on the business environment, then "no outage" is not an option. It reminds me of the adage about a sign in a printer's window: "Do you want it GOOD, FAST or CHEAP... pick two out of three and then call us." If it's really a high-availability environment, why don't you have any server redundancy? With redundancy you could make all of the necessary changes on the secondary server, flip the production environment over to that server, then resync. The best you can hope for is to minimize the impact of an outage by limiting its duration and performing the task off-peak.
slightwv (Netminder) Commented:
Without an outage, or without risking a possibly unrecoverable situation, I can't think of a 'magic' way to do this. Maybe another Expert will have an idea later.

Creating new tables with the 'preserved' data, truncating the originals, and re-inserting the preserved data would definitely require an outage.

A definitely faster way is to turn off logging for the tables, do all your magic, then turn it back on.

The danger here is a crash while you aren't logging: you won't be able to recover. This method will also invalidate any previous backups you have.
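One caveat worth noting: NOLOGGING only suppresses redo for direct-path operations (CTAS, INSERT /*+ APPEND */, index builds), not for ordinary DELETE statements, so it pairs with the copy-and-swap approach rather than the row-by-row purge. A minimal sketch, with hypothetical table names:

```sql
ALTER TABLE parent_new NOLOGGING;

-- Direct-path insert: with NOLOGGING it generates minimal redo,
-- but the loaded data is unrecoverable until the next backup
INSERT /*+ APPEND */ INTO parent_new
  SELECT * FROM parent
  WHERE lastactivity_date > DATE '2008-01-01';
COMMIT;

-- Re-enable logging, then take a fresh backup of the affected tablespaces
ALTER TABLE parent_new LOGGING;
```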

Aaron Shilo, Chief Database Architect, Commented:

I agree with slightwv.

You will have to do this slowly and just delete a small amount of data every time.
This will be a long process, but it will work with minimum overhead and risk to your system.

slightwv (Netminder) Commented:
The rename-table approach also means you need to recreate all indexes/constraints.
KuldeepReddy (Author) Commented:
Hi experts,

For now, let's look at the existing code I have developed to delete 'N' rows at a time. I have attached the actual code, which deletes 'N' rows at a time from the child tables and finally deletes the same rows from the parent table. Please suggest how we can update the PL/SQL block to improve the processing rate (number of rows deleted per second). Any help is really appreciated.

Please let me know if you have questions/concerns.



slightwv (Netminder) Commented:
On mobile right now and cannot get a complete look at the code, but I suggest you generate explain plans for the individual pieces.

For example, do you have an index on lastactivity_date?  That might speed up the first query.

Then on the temp tables, what indexes do you have?  You might need to use index hints or regenerate statistics on the temp tables after large inserts to ensure efficiency.
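For example (index and column names here are illustrative, and EXEC is SQL*Plus shorthand), checking the plan of the driving query and refreshing temp-table statistics might look like:

```sql
-- See how the driving query is executed
EXPLAIN PLAN FOR
  SELECT p.id FROM parent p
  WHERE p.lastactivity_date <= SYSDATE - 365 AND ROWNUM <= 3000;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- An index on the purge predicate, if one doesn't already exist
CREATE INDEX parent_lastact_idx ON parent (lastactivity_date);

-- Refresh optimizer statistics on the temp table after a large insert
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TEMP');
```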