

Purging old data from an Oracle database

Posted on 2011-02-18
Medium Priority
Last Modified: 2012-05-11
Hi Experts,
We have a huge OLTP database on Oracle 10g. It holds data going back to the year 2000, and the volume is causing performance issues, so I'm writing a purge process to clean the old data out of the database. We can't afford an outage for this task, so I wrote PL/SQL code to delete the old data. There are 13 tables that need to be purged, with parent/child relationships. This is how I have written the code.

Step 1: grab one chunk (3,000 profiles) from the parent table and load it into a temp table.
Step 2: delete those profiles from the child tables, then finally from the parent table.
Step 3: commit.
Step 4: repeat steps 1-3 until no profiles match the deletion criteria.

Here is the sample code I have (simplified; the cutoff column is the one mentioned later, lastactivity_date):

LOOP
  INSERT INTO temp (tid)
    SELECT p.id FROM parent p
     WHERE p.lastactivity_date <= cutoff_date AND ROWNUM <= 3000;
  EXIT WHEN SQL%ROWCOUNT = 0;

  DELETE FROM child1 c1
   WHERE EXISTS (SELECT 1 FROM temp t1 WHERE c1.id = t1.id);
  -- ... same DELETE for child2 through child12 ...
  DELETE FROM child13 c13
   WHERE EXISTS (SELECT 1 FROM temp t1 WHERE c13.id = t1.id);

  DELETE FROM parent p
   WHERE EXISTS (SELECT 1 FROM temp t1 WHERE p.id = t1.id);

  COMMIT;
  EXECUTE IMMEDIATE 'TRUNCATE TABLE temp';
END LOOP;

There are about 330M profiles to purge, so at this rate it will take a long time (3+ months) and generate gobs of redo. The business wants the data purged quicker.
In some forums I have seen the approach of just retaining the required data: create a temp table holding the rows you need, truncate the source table, then copy the data back from the temp table. The problem with this approach is that it requires an outage, which we can't afford. Also, we would need to do the same thing for all 13 tables, so I presume it would take an extended outage.

I just want to find out whether there is any faster way of purging the data. I have been looking into this for a couple of weeks with no luck. Any help is really appreciated.
Question by:KuldeepReddy
LVL 78

Assisted Solution

by:slightwv (䄆 Netminder)
slightwv (䄆 Netminder) earned 800 total points
ID: 34928988
Without an outage, or risking an unrecoverable situation, I can't think of a 'magic' way to do this. Maybe another Expert will have an idea later.

Creating new tables with the 'preserved' data, truncating the originals, and re-inserting the preserved data would definitely require an outage.

One definitely faster way is to turn off logging for the tables, do all your magic, then turn it back on.

The danger here is a crash while you aren't logging: you won't be able to recover. This method also invalidates any previous backups you have.
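A minimal sketch of that approach (table names are placeholders from the question; note that NOLOGGING only suppresses redo for direct-path operations such as CTAS and INSERT /*+ APPEND */, so conventional DELETEs still generate full redo and undo):

```sql
-- Switch the tables to NOLOGGING before the purge, then back afterwards.
ALTER TABLE parent NOLOGGING;
ALTER TABLE child1 NOLOGGING;
-- ... repeat for the remaining child tables ...

-- purge runs here

ALTER TABLE child1 LOGGING;
ALTER TABLE parent LOGGING;
```

Take a fresh backup as soon as the tables are back in LOGGING mode: any work done while NOLOGGING cannot be recovered from earlier backups.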

LVL 15

Expert Comment

by:Aaron Shilo
ID: 34932715

I agree with slightwv.

You will have to do this slowly and just delete a small amount of data every time.
It will be a long process, but it will work with minimum overhead and risk to your system.

Accepted Solution

paulwquinn earned 1200 total points
ID: 34940195
Tom Kyte (of "Ask Tom" and Oracle fame) has a great discussion of this type of operation at:


Rather than preventing any outage, perhaps the key is really minimizing the outage. Instead of copying the data you want to keep to a new table, truncating the old table, and then copying the data back, minimize the outage (and redo) by:

CREATE TABLE new_table ... NOLOGGING AS SELECT only the rows you want to keep;
Index the new table (CREATE INDEX...PARALLEL 5 NOLOGGING;, for example), put the relevant constraints on, etc.
RENAME current_table TO old_table;
RENAME new_table TO current_table;

Then, once you're happy everything is in order:

DROP TABLE old_table;
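Put together, the sequence for one table might look like this (a sketch only: the table, index, and column names and the keep predicate are placeholders, and constraints, grants, triggers, and synonyms do not follow a RENAME, so they must be recreated on the new table):

```sql
-- 1. Copy only the rows to keep, with minimal redo and parallelism.
CREATE TABLE parent_new NOLOGGING PARALLEL 5 AS
  SELECT * FROM parent
   WHERE lastactivity_date > DATE '2008-01-01';  -- hypothetical cutoff

-- 2. Rebuild indexes and constraints on the new table.
CREATE INDEX parent_new_idx1 ON parent_new (id) PARALLEL 5 NOLOGGING;
ALTER TABLE parent_new ADD CONSTRAINT parent_new_pk PRIMARY KEY (id);

-- 3. Swap the tables inside a short outage window.
RENAME parent TO parent_old;
RENAME parent_new TO parent;

-- 4. When satisfied everything works, drop the old data in one cheap step.
DROP TABLE parent_old;
```

The swap itself (the two RENAMEs) takes only a moment; the expensive CTAS and index builds happen while the original table is still in service.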

If you have a regular maintenance window, you could do one of the child tables during each window, then do the parent table during the last one.

By using the NOLOGGING option you can minimize the redo created (which will speed things up). By using parallelization you can speed up operations like building the new indexes. You could conceivably look at partitioning the new tables (by year, for example) to facilitate future maintenance of this kind.

If the business folks want the job done quickly and with minimal impact on the business environment, then "no outage" is not an option. It reminds me of the adage about a sign in a printer's window: "Do you want it GOOD, FAST or CHEAP? Pick two out of three and then call us."

If it's really a high-availability environment, why don't you have any server redundancy? With a secondary server you could make all of the necessary changes there, flip the production environment over to that server, then resync. The best you can hope for is to minimize the impact of an outage by limiting its duration and performing the task off-peak.

LVL 78

Expert Comment

by:slightwv (䄆 Netminder)
ID: 34946291
The rename table approach means you also need to recreate all indexes/constraints.
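One way to see what would have to be rebuilt after the swap (a sketch; run as the owning schema, with 'PARENT' as a placeholder table name):

```sql
-- Objects that will NOT follow the renamed table and must be recreated:
SELECT index_name      FROM user_indexes     WHERE table_name = 'PARENT';
SELECT constraint_name,
       constraint_type FROM user_constraints WHERE table_name = 'PARENT';
SELECT trigger_name    FROM user_triggers    WHERE table_name = 'PARENT';
SELECT grantee,
       privilege       FROM user_tab_privs   WHERE table_name = 'PARENT';
```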

Author Comment

ID: 34966713
Hi experts,

For now, let's look at the existing code I have developed to delete N rows at a time. I have attached the actual code, which deletes N rows at a time from the child tables and finally deletes the same rows from the parent table. Please suggest how we can update the PL/SQL block to improve the processing rate (number of rows deleted per second). Any help is really appreciated.

Please let me know if you have questions/concerns.



LVL 78

Expert Comment

by:slightwv (䄆 Netminder)
ID: 34966744
I'm on mobile right now and can't get a complete look at the code, but I suggest you generate explain plans for the individual pieces.

For example, do you have an index on lastactivity_date?  That might speed up the first query.

Then on the temp tables, what indexes do you have?  You might need to use index hints or regenerate statistics on the temp tables after large inserts to ensure efficiency.
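For example, the driving query and the temp-table statistics might be addressed like this (a sketch; the index name is hypothetical, and the column and table names are the ones used in this thread):

```sql
-- Support the chunk-selection query on the purge cutoff column.
CREATE INDEX parent_lad_idx ON parent (lastactivity_date);

-- Regather statistics on the temp table after each large insert so the
-- optimizer sees current row counts (cascade also gathers index stats).
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'TEMP',
    cascade => TRUE);
END;
/
```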
