Purging old data from an Oracle database

Posted on 2011-02-18
Last Modified: 2012-05-11
Hi Experts,
We have a huge OLTP database on Oracle 10g. It contains data going back to the year 2000, and this is causing performance issues, so I'm writing a purge process to clean old data out of the database. We can't afford an outage for this task, so I wrote PL/SQL code to delete the old data. There are 13 tables that need to be purged, with parent/child relationships. This is how I wrote the code:

Step 1: grab one chunk (3,000 profiles) from the parent table and load it into a temp table.
Step 2: delete these profiles from the child tables; at the end, delete them from the parent table.
Step 3: commit the data.
Step 4: repeat steps 1, 2 and 3 until no profiles match the deletion criteria.

Here is sample code I have (column names simplified):

LOOP
  INSERT INTO temp (tid)
    SELECT p.tid FROM parent p
     WHERE p.lastactivity_date <= cutoff_date AND ROWNUM <= 3000;
  EXIT WHEN SQL%ROWCOUNT = 0;

  DELETE FROM child1 c1
   WHERE EXISTS (SELECT 1 FROM temp t1 WHERE t1.tid = c1.tid);
  -- ... same DELETE for child2 through child12 ...
  DELETE FROM child13 c13
   WHERE EXISTS (SELECT 1 FROM temp t1 WHERE t1.tid = c13.tid);
  DELETE FROM parent p
   WHERE EXISTS (SELECT 1 FROM temp t1 WHERE t1.tid = p.tid);

  DELETE FROM temp;
  COMMIT;
END LOOP;

There are 330M profiles that need to be purged, so it will take a long time (3+ months) to delete them and will generate gobs of redo. The business wants this data purged more quickly.
In some forums I have seen the approach of simply retaining the required data: create a temp table holding the rows you want to keep, truncate the source table, and copy the data back from the temp table. The problem with this approach is that it requires an outage, which we can't afford. Also, we would need to do the same thing for all 13 tables, so I presume it would require an extended outage.

I just want to find out whether there is any faster way of purging the data. I have been looking into this for a couple of weeks with no luck. Any help is really appreciated.
Question by:KuldeepReddy
LVL 77

Assisted Solution

by:slightwv (䄆 Netminder)
slightwv (䄆 Netminder) earned 200 total points
ID: 34928988
Without an outage, or without risking an unrecoverable situation, I can't think of a 'magic' way to do this.  Maybe another Expert will have an idea later.

Creating new tables with the 'preserved' data, truncating the originals, and re-inserting the preserved data would definitely require an outage.

A definitely faster way is to turn off logging for the tables, do all your magic, then turn it back on.

The danger here is a crash while you aren't logging: you won't be able to recover.  This method will also invalidate any previous backups you have.
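A rough sketch of what "turn off logging" looks like in practice (the table name is just an example). Note that NOLOGGING only suppresses redo for direct-path operations such as CREATE TABLE ... AS SELECT or INSERT /*+ APPEND */; conventional DELETEs still generate redo regardless:

```sql
-- Suppress redo generation for subsequent direct-path operations
ALTER TABLE child1 NOLOGGING;

-- ... do the direct-path bulk work here, e.g.:
-- INSERT /*+ APPEND */ INTO child1_new SELECT ... FROM child1 WHERE ...;

-- Restore normal logging when done
ALTER TABLE child1 LOGGING;

-- Take a fresh backup afterwards: anything written while NOLOGGING
-- was in effect cannot be recovered from the redo/archive logs.
```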

LVL 15

Expert Comment

by:Aaron Shilo
ID: 34932715

I agree with slightwv.

You will have to do this slowly and just delete a small amount of data every time.
It will be a long process, but it will work with minimal overhead and risk to your system.

Accepted Solution

paulwquinn earned 300 total points
ID: 34940195
Tom Kyte (of "Ask Tom" and Oracle fame) has a great discussion of this type of operation at:

Rather than preventing any outage, perhaps the key is really minimizing the outage. Don't copy the data you want to keep to a temp table, truncate the old table, and then copy the data back; instead, minimize the outage (and redo) by:

Step 1: Create a new table containing only the data you want to keep (CREATE TABLE...AS SELECT..., with NOLOGGING and PARALLEL).
Step 2: Index the new table (CREATE INDEX...PARALLEL 5 NOLOGGING;, for example), put the relevant constraints on, etc.
Step 3: RENAME current_table TO old_table;
Step 4: RENAME new_table TO current_table;
Step 5: Once everything is verified, DROP TABLE old_table;

If you have a regular maintenance window, you could do 1 of the child tables during each window, then do the parent table during the last one.

By using the NOLOGGING option you can minimize the redo created (which will speed things up). By using parallelism you can speed up operations like building the new indexes. You could also consider partitioning the new tables (by year, for example) to facilitate future maintenance of this kind.
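A minimal sketch of the steps above, using hypothetical table/column names (parent, lastactivity_date) and an illustrative cutoff date:

```sql
-- Build the replacement table with only the rows to keep,
-- using direct-path load, no redo, and parallelism
CREATE TABLE parent_new NOLOGGING PARALLEL 5 AS
  SELECT * FROM parent
   WHERE lastactivity_date > DATE '2009-01-01';

-- Recreate indexes (and constraints) on the new table
CREATE INDEX parent_new_i1 ON parent_new (lastactivity_date)
  PARALLEL 5 NOLOGGING;

-- Brief outage: swap the tables
RENAME parent TO parent_old;
RENAME parent_new TO parent;

-- After verification (and a fresh backup), reclaim the space
DROP TABLE parent_old;
```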

If the business folks want the job done quickly and with minimal impact on the business environment, then "no outage" is not an option. It reminds me of the adage about a sign in a printer's window: "Do you want it GOOD, FAST or CHEAP... pick two out of three and then call us." If it's really a high-availability environment, why don't you have any server redundancy? If you did, you could make all of the necessary changes on the secondary server, flip the production environment over to that server, then resync. The best you can hope for is to minimize the impact of an outage by limiting its duration and performing the task off-peak.

LVL 77

Expert Comment

by:slightwv (䄆 Netminder)
ID: 34946291
The rename table approach means you also need to recreate all indexes/constraints.

Author Comment

ID: 34966713
Hi experts,

For now, let's look at the existing code I have developed to delete N rows at a time. I have attached the actual code, which deletes N rows at a time from the child tables and finally deletes the same rows from the parent table. Please suggest how we can update the PL/SQL block to improve the processing rate (number of rows deleted per second). Any help is really appreciated.

Please let me know if you have questions/concerns.



LVL 77

Expert Comment

by:slightwv (䄆 Netminder)
ID: 34966744
On mobile right now and cannot get a complete look at the code but I suggest you generate explain plans for the individual pieces.

For example, do you have an index on lastactivity_date?  That might speed up the first query.

Then on the temp tables, what indexes do you have?  You might need index hints, or to regenerate statistics on the temp tables after large inserts, to ensure efficient plans.
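For illustration (the index and column names are assumptions, not taken from the posted code), the kind of supporting work meant above:

```sql
-- Index the column used by the purge criteria so the driving
-- query doesn't full-scan the parent table on every batch
CREATE INDEX parent_lastact_i ON parent (lastactivity_date);

-- After a large insert into the temp table, refresh its optimizer
-- statistics so the EXISTS deletes get sensible plans
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'TEMP');
END;
/
```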
