Solved

Purging OLD data from oracle database

Posted on 2011-02-18
3,571 Views
Last Modified: 2012-05-11
Hi Experts,
We have a huge OLTP database on Oracle 10g. It contains data going back to the year 2000, and it is causing performance issues, so I'm writing a purge process to clean the old data out of the database. We can't afford an outage for this task, so I have written PL/SQL code to delete the old data. There are 13 tables that need to be purged, with parent/child relationships between them. This is how I have written the code.

Step 1: grab one chunk (3,000 profiles) from the parent table and load it into a temp table.
Step 2: delete those profiles from the child tables; at the end, delete them from the parent table.
Step 3: commit the data.
Step 4: repeat steps 1 through 3 until no profiles match the deletion criteria.

Here is the sample code I have:
begin
  loop
    insert into temp (tid)
      select p.id
      from parent p
      where p.lastactivity_date <= cutoff_date
        and rownum <= 3000;
    exit when sql%rowcount = 0;

    delete from child1 c1
    where exists (select 1 from temp t1 where c1.id = t1.tid);

    -- ... same delete for child2 through child12 ...

    delete from child13 c13
    where exists (select 1 from temp t1 where c13.id = t1.tid);

    delete from parent p
    where exists (select 1 from temp t1 where p.id = t1.tid);

    delete from temp;  -- clear the chunk before the next pass
    commit;
  end loop;
end;

There are 330M profiles that need to be purged, so it will take a long time (3+ months) to delete them all, and it generates gobs of redo. The business wants this data purged more quickly.
In some forums I have seen the approach of just retaining the required data. Basically, you create a temp table that holds the data you need, truncate the source table, then copy the data back from the temp table. The problem with this approach is that it requires an outage, which we can't afford. Also, we would need to do the same thing for all 13 tables, so I presume it would take an extended outage.

I just want to find out if there is any faster way to purge the data. I have been looking into this for a couple of weeks but with no luck. Any help is really appreciated.
Question by:KuldeepReddy
6 Comments
 
LVL 76

Assisted Solution

by:slightwv (Netminder)
slightwv (Netminder) earned 200 total points
ID: 34928988
Without an outage or possible unrecoverable situation, I can't think of a 'magic' way to do this.  Maybe another Expert will have some idea later.

Creating new tables with the 'preserved' data, truncating the old tables, and re-inserting the preserved data would definitely require an outage.

A definitely faster way is to turn off logging for the tables, do all your magic, then turn it back on.

The danger here is a crash while you aren't logging.  You won't be able to recover.  This method will also invalidate any previous backups you have.
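A hedged sketch of the logging toggle (table name is a placeholder). One caveat worth knowing: NOLOGGING suppresses redo only for direct-path operations, so it pairs naturally with a CTAS/append-style rebuild rather than with conventional DELETEs.

```sql
-- Sketch only; "parent" stands in for each table involved.
ALTER TABLE parent NOLOGGING;

-- Redo is skipped only for direct-path work, e.g.:
--   CREATE TABLE ... AS SELECT ...
--   INSERT /*+ APPEND */ INTO ...
-- Conventional DELETE statements are still fully logged.

ALTER TABLE parent LOGGING;

-- Take a fresh backup immediately afterwards: changes made while
-- NOLOGGING was in effect cannot be recovered from older backups.
```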



 
LVL 15

Expert Comment

by:Aaron Shilo
ID: 34932715
hi

i agree with slightwv

you will have to do this slowly and just delete a small amount of data every time.
this will be a long process but it will work with minimum overhead and risk to your system.
 
LVL 3

Accepted Solution

by:
paulwquinn earned 300 total points
ID: 34940195
Tom Kyte (of "Ask Tom" and Oracle fame) has a great discussion of this type of operation at:

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2345591157689

Rather than trying to prevent any outage, perhaps the key is really to minimize the outage. Don't copy the data you want to keep to a new table, truncate the old table, and then copy the data back; instead, minimize the outage (and the redo) by:

BEGIN OUTAGE:

CREATE TABLE new_table PARALLEL 5 NOLOGGING AS SELECT * FROM current_table WHERE...;
Index the new table (CREATE INDEX ... PARALLEL 5 NOLOGGING;, for example), put the relevant constraints on, etc.
RENAME current_table TO old_table;
RENAME new_table TO current_table;

END OUTAGE

TRUNCATE TABLE old_table;
DROP TABLE old_table;

If you have a regular maintenance window, you could do one of the child tables during each window, then do the parent table during the last one.

By using the NOLOGGING option you can minimize the redo created (which will speed things up). By using parallelization you can speed things up further (building the new indexes, for example). You could also look at partitioning the new tables (by year, for example) to facilitate future maintenance of this kind.
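A sketch of that last idea (table and column names and the cutoff dates are placeholders): rebuilding the table range-partitioned by year turns the next purge of this kind into a near-instant, metadata-only partition drop.

```sql
CREATE TABLE new_table
NOLOGGING
PARTITION BY RANGE (lastactivity_date) (
  PARTITION p2009 VALUES LESS THAN (TO_DATE('2010-01-01', 'YYYY-MM-DD')),
  PARTITION p2010 VALUES LESS THAN (TO_DATE('2011-01-01', 'YYYY-MM-DD')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
)
AS
SELECT *
FROM current_table
WHERE lastactivity_date >= TO_DATE('2009-01-01', 'YYYY-MM-DD');

-- A future purge then avoids row-by-row deletes entirely:
ALTER TABLE current_table DROP PARTITION p2009;
```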

If the business folks want the job done quickly and with minimal impact on the business environment, then "no outage" is not an option. It reminds me of the adage about the sign in a printer's window: "Do you want it GOOD, FAST, or CHEAP? Pick two out of three and then call us." If it's really a high-availability environment, why don't you have any server redundancy? With redundancy you could make all of the necessary changes on the secondary server, flip the production environment over to that server, and then resync. The best you can hope for is to minimize the impact of an outage by limiting its duration and performing the task off-peak.
 
LVL 76

Expert Comment

by:slightwv (Netminder)
ID: 34946291
The rename table approach means you also need to recreate all indexes/constraints.
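To illustrate (all object names are hypothetical): a CTAS copy brings over only the data and NOT NULL constraints, so after the swap the primary key, the foreign keys from the child tables, and the grants all have to be put back by hand.

```sql
ALTER TABLE current_table ADD CONSTRAINT parent_pk PRIMARY KEY (id);

ALTER TABLE child1 ADD CONSTRAINT child1_parent_fk
  FOREIGN KEY (id) REFERENCES current_table (id);

-- Grants do not follow the newly created table either:
GRANT SELECT, INSERT, UPDATE, DELETE ON current_table TO app_role;
```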
 

Author Comment

by:KuldeepReddy
ID: 34966713
Hi experts,

For now, let's look at the existing code I have developed to delete 'N' rows at a time. I have attached the actual code, which deletes 'N' rows at a time from the child tables and finally deletes the same rows from the parent table. Please suggest how we can update the PL/SQL block to improve the processing rate (number of rows deleted per second). Any help is really appreciated.

Please let me know if you have questions/concerns.

Thanks.


 


code.txt
 
LVL 76

Expert Comment

by:slightwv (Netminder)
ID: 34966744
On mobile right now and cannot get a complete look at the code, but I suggest you generate explain plans for the individual pieces.

For example, do you have an index on lastactivity_date?  That might speed up the first query.

Then on the temp tables, what indexes do you have?  You might need to use index hints or regenerate statistics on the temp tables after large inserts to ensure efficiency.
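For example (assuming the driving query filters on lastactivity_date and the staging table is called TEMP, as in the earlier posts), the checks might look like:

```sql
-- Inspect the plan for the driving query:
EXPLAIN PLAN FOR
SELECT p.id
FROM parent p
WHERE p.lastactivity_date <= TO_DATE('2009-01-01', 'YYYY-MM-DD')
  AND ROWNUM <= 3000;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Refresh optimizer statistics on the staging table after a large insert,
-- so the EXISTS subqueries against it are costed correctly:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'TEMP');
END;
/
```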