Solved

Automatically back up data

Posted on 2013-01-26
315 Views
Last Modified: 2013-01-28
Hi all,

I need to find the best solution:
automatically back up data, table by table, every hour (or every day) to a backup table in the same instance. It cannot be done with triggers; those would affect performance too much.

Any help is appreciated
Question by:darrgyas
5 Comments
 
LVL 45

Expert Comment

by:Kdo
ID: 38823188
Hi darrgyas,

Depending on your needs and the amount of data involved, you might consider replication.  That's a process where you have another database, usually on another server, and when updates are applied to one database, the changes are automatically applied to the other.

If you really need to copy data from a key table (or tables) on a regular basis, there are a number of considerations.

1.  How much data will you copy (number of rows and total size of the data)?
2.  Do you need to copy the entire table?
3.  Can you copy only the new rows, and can you identify them?
4.  Are you going to delete data behind the copy?
5.  Will the new table be completely reloaded, or is the data added to existing data?

There are more questions, but that should be a great start.


Kent
 
LVL 27

Expert Comment

by:tliotta
ID: 38825334
And what platform is DB2 running on? That goes directly to the question of triggers, which might still be your best choice if performance is an issue.

Tom
 

Author Comment

by:darrgyas
ID: 38826441
The platform is Windows, DB2 Express-C 9.74.

Each table is written to approximately 200 times a second; the average record is about 200 bytes.
I don't need the entire table - just the new rows.
I can identify the new rows by their timestamp.
The data in the original table will not be deleted.
The new data will be added to the backup table.
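[Editor's note: the timestamp-based incremental copy described above is commonly implemented with a "high-water mark": remember the largest timestamp already copied and select only newer rows on the next run. A minimal, generic sketch follows (Python with SQLite rather than DB2, and with hypothetical table/column names `source_table`, `backup_table`, `ts`), just to illustrate the idea:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_table (ts INTEGER, data TEXT)")
conn.execute("CREATE TABLE backup_table (ts INTEGER, data TEXT)")
conn.executemany("INSERT INTO source_table VALUES (?, ?)",
                 [(t, "rec") for t in range(100)])

def incremental_backup(conn, last_ts):
    """Copy rows newer than last_ts; return the new high-water mark."""
    conn.execute(
        "INSERT INTO backup_table "
        "SELECT * FROM source_table WHERE ts > ?", (last_ts,))
    new_mark = conn.execute(
        "SELECT MAX(ts) FROM backup_table").fetchone()[0]
    conn.commit()
    return new_mark if new_mark is not None else last_ts

mark = incremental_backup(conn, -1)   # first run copies everything
conn.executemany("INSERT INTO source_table VALUES (?, ?)",
                 [(t, "rec") for t in range(100, 120)])
mark = incremental_backup(conn, mark) # second run copies only the 20 new rows
print(conn.execute("SELECT COUNT(*) FROM backup_table").fetchone()[0])  # 120
```

One caveat with this pattern: rows that arrive with a timestamp equal to the current high-water mark after a run has committed can be missed, so selecting with a small overlap (and de-duplicating) is safer when inserts are frequent.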
 
LVL 45

Accepted Solution

by: Kdo (earned 500 total points)
ID: 38826916
Hi darrgyas,

If you want to do this with SQL, I suggest a two-step approach.  Step 1 pulls the desired rows into a staging table.  Step 2 writes the rows to the target table.

One of the issues you'll be contending with is logging.  The INSERTs on the second table will use log space.  200 rows/second over an hour is 720,000 rows.  You don't really want to have 3/4 million uncommitted rows logged, and the easiest way to avoid that is to write to a staging table without logging.  You can then move the data from the staging table to the target table in smaller chunks.

You can also move the data directly from the source to the target in smaller chunks, though that will affect buffering for a longer period of time.  Which works best for you is something you may have to learn by trial and error.

Anyway, moving the data via a staging table is easy.

-- One time:
CREATE TABLE stage1 AS (SELECT * FROM source_table) DEFINITION ONLY;

-- Empty the staging table
ALTER TABLE stage1 ACTIVATE NOT LOGGED INITIALLY WITH EMPTY TABLE;
COMMIT;

-- Insert the new rows.  (NOT LOGGED INITIALLY stays in effect only until
-- the end of this unit of work, so the INSERT must run before the COMMIT.)
ALTER TABLE stage1 ACTIVATE NOT LOGGED INITIALLY;
INSERT INTO stage1
SELECT * 
FROM source_table 
WHERE {condition};
COMMIT;

-- Copy the rows to the target table
INSERT INTO target
SELECT * FROM stage1
WHERE {condition};
COMMIT;



That's about it.  

If the insert into the staging table fails, the table will be marked as corrupt.  That's not a big deal as you'll simply drop the table and recreate it.

The last INSERT should move part of the data from the staging table to the target table.  Split the data, probably by the rows' timestamp or identity value.
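[Editor's note: the chunked final copy Kent describes can be sketched generically as follows (Python with SQLite rather than DB2; the `ts` column, window width, and table names are hypothetical). Each batch covers a half-open timestamp window and commits immediately, so only one window's worth of rows is ever uncommitted:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stage1 (id INTEGER, ts INTEGER, payload TEXT)")
conn.execute("CREATE TABLE target (id INTEGER, ts INTEGER, payload TEXT)")
conn.executemany("INSERT INTO stage1 VALUES (?, ?, ?)",
                 [(i, i, "row%d" % i) for i in range(10_000)])

# Width of each timestamp window.  In this toy data ts is dense, so one
# window equals ~2000 rows; tune the width to your log space in practice.
WINDOW = 2_000

def copy_in_chunks(conn):
    lo = conn.execute("SELECT MIN(ts) FROM stage1").fetchone()[0]
    hi = conn.execute("SELECT MAX(ts) FROM stage1").fetchone()[0]
    ts = lo
    while ts <= hi:
        # Copy one half-open window [ts, ts + WINDOW), then commit so the
        # uncommitted-row count stays bounded.
        conn.execute(
            "INSERT INTO target SELECT * FROM stage1 "
            "WHERE ts >= ? AND ts < ?", (ts, ts + WINDOW))
        conn.commit()
        ts += WINDOW

copy_in_chunks(conn)
print(conn.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # 10000
```

Chunking on an identity column works the same way; the only requirement is a monotonically increasing column to define the windows.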


Good Luck,
Kent
 

Author Closing Comment

by:darrgyas
ID: 38829248
Thank you
