Solved

Automatically back up data

Posted on 2013-01-26
318 Views
Last Modified: 2013-01-28
Hi all,

I need to find the best solution:
back up data on a per-table basis every hour (or every day) automatically, to a backup table in the same instance. It can't be done with triggers; that would affect performance too much.

Any help is appreciated
Question by:darrgyas
5 Comments
 
LVL 45

Expert Comment

by:Kdo
ID: 38823188
Hi darrgyas ,

Depending on your needs and the amount of data involved, you might consider replication.  That's a process where you have another database, usually on another server, and when updates are applied to one database, the changes are automatically applied to the other.

If you really need to copy data from a key table (or tables) on a regular basis, there are a number of considerations.

1.  How much data will you copy (number of rows and total size of the data)?
2.  Do you need to copy the entire table?
3.  Can you copy only the new rows and can you identify them?
4.  Are you going to delete the source data after it has been copied?
5.  Will the new table be completely reloaded or is the data added to existing data?

There are more questions, but that should be a great start.


Kent
 
LVL 27

Expert Comment

by:tliotta
ID: 38825334
And what platform is DB2 on? That bears directly on the question of triggers, which might still be your best choice if performance is an issue.

Tom
 

Author Comment

by:darrgyas
ID: 38826441
The platform is Windows, DB2 Express-C 9.7.4

Each table is being written to approximately 200 times a second, and the average record is approximately 200 bytes.
I don't need the entire table - just the new rows
I can identify the new rows by the timestamp
The data from the original table will not be deleted
The new data will be added to the backup table
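(For illustration, the timestamp-based selection described above could look like the following sketch. The TS column and the LAST_COPIED bookmark table are hypothetical names, not from this thread.)

```sql
-- Hypothetical sketch: assumes the source table has a timestamp column
-- named TS and a one-row bookmark table LAST_COPIED holding the upper
-- bound of the previous copy.
SELECT *
FROM source_table s
WHERE s.ts > (SELECT last_ts FROM last_copied)
  AND s.ts <= CURRENT TIMESTAMP - 1 SECOND;  -- skip rows still being written
```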
 
LVL 45

Accepted Solution

by:Kdo (earned 500 total points)
ID: 38826916
Hi darrgyas,

If you want to do this with SQL, I suggest a two-step approach.  Step 1 pulls the desired rows into a staging table.  Step 2 writes the rows to the target table.

One of the issues that you'll be contending with is logging.  The INSERTs into the second table will use log space.  200 rows/second over an hour is 720,000 rows.  You don't really want to have 3/4 of a million uncommitted rows logged, and the easiest way to avoid that is to write to a staging table without logging.  You can then move the data from the staging table to the target table in smaller chunks.

You can also move the data directly from the source to the target in smaller chunks, though that will affect buffering over a longer period of time.  Which approach works best for you is something that you may have to learn by trial and error.

Anyway, moving the data via a staging table is easy.

-- One time:
CREATE TABLE stage1 AS (SELECT * FROM source_table) DEFINITION ONLY;

-- Empty the staging table
ALTER TABLE stage1 ACTIVATE NOT LOGGED INITIALLY WITH EMPTY TABLE;
COMMIT;

-- Insert the new rows
ALTER TABLE stage1 ACTIVATE NOT LOGGED INITIALLY;
INSERT INTO stage1
SELECT * 
FROM source_table 
WHERE {condition};
COMMIT;

-- Copy the rows to the target table
INSERT INTO target
SELECT * FROM stage1
WHERE {condition};
COMMIT;
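For the "every hour, automatically" part of the question, DB2 LUW's administrative task scheduler could run those steps on a schedule. A hedged sketch, assuming the two steps are wrapped in a stored procedure (the schema MYSCHEMA and procedure HOURLY_COPY are hypothetical names) and that the scheduler is enabled (DB2_ATS_ENABLE=YES and the SYSTOOLSPACE table space exists):

```sql
-- Hypothetical sketch: schedule a wrapper procedure at the top of
-- every hour using the administrative task scheduler.
CALL SYSPROC.ADMIN_TASK_ADD(
    'hourly table copy',        -- task name
    NULL, NULL, NULL,           -- begin, end, max invocations: unlimited
    '0 * * * *',                -- cron-style schedule: every hour
    'MYSCHEMA', 'HOURLY_COPY',  -- procedure to run (hypothetical names)
    NULL, NULL, NULL);          -- input, options, remarks
```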



That's about it.  

If the insert into the staging table fails, the table will be marked as corrupt.  That's not a big deal as you'll simply drop the table and recreate it.

The last INSERT should move part of the data from the staging table to the target table.  Split the data, probably by the rows' timestamp or identity value.
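A sketch of that split, assuming a timestamp column in the staging table (TS, a hypothetical name); each chunk gets its own COMMIT so log space is released as you go:

```sql
-- Hypothetical sketch: copy one bounded timestamp range per statement.
-- :lo and :hi are host variables delimiting the current chunk.
INSERT INTO target
SELECT * FROM stage1
WHERE ts >= :lo AND ts < :hi;
COMMIT;
-- Advance :lo/:hi and repeat until the staging table is exhausted.
```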


Good Luck,
Kent
 

Author Closing Comment

by:darrgyas
ID: 38829248
Thank you
