Solved

How to purge historical data from current tables.

Posted on 2006-11-08
Last Modified: 2012-05-05
I'm using IBM DB2 version 5.2

(Yeah, I know, it's old; I've finally got the bosses convinced that it's time to move.)

(Please see http://www.experts-exchange.com/Databases/IBM_UDB/Q_21766241.html for a detailed description of the setup and how things are going.)

When I issue a carefully crafted delete statement against a dated dataset to delete data older than a certain date, I get an error indicating that the transaction log is not large enough to accommodate my request.

I end up having to reissue the same SQL statement, incrementing the date by a week or a month at a time, to get a year's worth of data purged.

In my estimation I need to do one of two things:

1) find a way to temporarily (or dynamically) increase the transaction log so that the SQL statement will execute ONCE and be done.

2) find a way to bypass the transaction logging system, rollbacks and all, for a one-time SQL statement.

I'm open to suggestions on approach as well as method.

Thanks for your help.
Question by:Rance_Hall
3 Comments
 
LVL 45

Expert Comment

by:Kdo
ID: 17900779

Hi Rance,

I remember the discussion.    :~}


You've got a couple of choices.

-- You can go into the db2 admin utility and increase the number and/or size of the logs.  This may not be sufficient as the volume of data being deleted may require a huge amount of logging.

-- You can spin through the data deleting the oldest data one day, week, month, etc at a time.


And of course, you can do both.  The loop below will delete one day's worth of data at a time.  That should be relatively easy on the logs.

  DECLARE BaseDate  DATE DEFAULT '2004-01-01';
  DECLARE offset    INTEGER DEFAULT 10000;   -- Assuming nearly 30 years is backing up far enough.    :~}

  WHILE (offset > 0) DO
    DELETE FROM table WHERE posting_date < BaseDate - offset DAYS;
    COMMIT;
    SET offset = offset - 1;                 -- advance the cutoff one day per pass
  END WHILE;
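A variant of the same idea is to commit in fixed-size row batches instead of one day at a time.  This is a hypothetical sketch for later DB2 LUW releases (the deletable-fullselect form with FETCH FIRST may not exist on version 5.2, and the 1000-row batch size is arbitrary):

```sql
-- Delete in 1000-row batches so each unit of work stays small.
-- Repeat this pair of statements until the DELETE affects zero rows.
DELETE FROM (SELECT 1 FROM table
             WHERE posting_date < '2004-01-01'
             FETCH FIRST 1000 ROWS ONLY);
COMMIT;
```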


Good Luck,
Kent
 
LVL 13

Accepted Solution

by:
ghp7000 earned 125 total points
ID: 17900857
Log full error messages simply mean that you don't have enough log file space configured to handle the load.

if you do from command line:
db2 get db cfg for <dbname>
and look at the configuration of the LOGPRIMARY database parameter, you will see that it is set to a certain number of log files.
Increase the number of log files by doing
db2 update db cfg for <dbname> using logprimary <at least twice as many log files as were previously defined>
or
you can increase the size of the log files by doing
db2 update db cfg for <dbname> using LOGFILSIZ 5000 (LOGFILSIZ is in 4 KB pages, so 5000 means each log file is 20MB in size)
or
you can tell db2 to create more log files if it runs out of log file space during a transaction
db2 update db cfg for <dbname> using LOGSECOND 100 (means create up to 100 log files if logprimary space runs out. These log files are destroyed automatically by db2 when the transaction is committed or rolled back)
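Taken together, the total active log space is LOGFILSIZ (in 4 KB pages) times (LOGPRIMARY + LOGSECOND).  A sketch of the whole sequence, assuming a database named SAMPLE (the database name and the specific values are illustrative):

```shell
# Inspect the current log settings, then enlarge them.
db2 get db cfg for SAMPLE | grep -i log
db2 update db cfg for SAMPLE using LOGPRIMARY 20
db2 update db cfg for SAMPLE using LOGFILSIZ 5000   # 5000 x 4 KB = 20 MB per file
db2 update db cfg for SAMPLE using LOGSECOND 100    # up to 100 more files on demand
# With these values: (20 + 100) x 20 MB = 2.4 GB of active log capacity.
```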


 
LVL 8

Author Comment

by:Rance_Hall
ID: 17901516
Thanks for the input. Your answer has raised another question concerning access control, which I will post later, but these notes give me exactly what I want.


Thanks.
