SQL 2008 - Timely archiving of data in Tables

Posted on 2008-11-12
Last Modified: 2012-05-05

I have a database with 100+ tables. I want to create an ARCHIVE DATABASE and periodically move data from the main database into it.

Is there anything available, or does somebody have any code that would help me kick-start this?

Objective is to:
->create an ARCHIVE DATABASE (which should have all the tables from the MAIN DATABASE)
->set an interval and have the archive process run on it (i.e. any data more than one month old would be archived)

Question by:RadheRadhe
    LVL 41

    Expert Comment

    This is something that I think you'll have to do with code (either via a program or via a stored procedure)...
    A typical database doesn't have a universal way to determine the age of a record in a table, so you'll need your own technique for determining the age (most likely an existing "date stamp" column).
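    If a table lacks a usable "date stamp", one could be added; a minimal sketch, where the table and column names are just placeholders:

```sql
--Hypothetical: give a table a "date stamp" column the archive job can key on.
--The default means new rows record their insert time automatically; because the
--column is NOT NULL, existing rows are populated with the default as well.
alter table orders
   add created_on datetime not null
   constraint df_orders_created_on default (getdate())
```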
    LVL 7

    Expert Comment

    --Lots of methods.  See below for one method.
    --Please excuse any syntax errors.
    create table tables_to_archive (
       tablename varchar(500) not null,
       datefield varchar(100) not null,
       archiveage int)
    alter table tables_to_archive
       add constraint pk_tables_to_archive primary key (tablename, datefield)
    insert into tables_to_archive
       select 'orders','orderdate',31
    insert into tables_to_archive
       select 'invoices','invoicedate',31
    go
    --Execute this procedure at the desired frequency.
    create procedure p_archive_tables
    as
    begin
       declare @tablename varchar(500),
               @datefield varchar(100),
               @archiveage int,
               @sqlcmd varchar(2000),
               @archivedate datetime
       declare tbl_cur cursor for
          select tablename, datefield, archiveage
          from tables_to_archive
          order by tablename
       open tbl_cur
       fetch next from tbl_cur
          into @tablename, @datefield, @archiveage
       while @@FETCH_STATUS = 0
       begin
          --Cutoff date: today minus the retention age in days.
          select @archivedate = dateadd(dd, -1 * abs(@archiveage), getdate())
          select @sqlcmd = ' insert into archivedb..' + @tablename +
                           ' select * from ' + @tablename +
                           ' where ' + @datefield + ' <= ''' +
                                 convert(varchar, @archivedate, 101) + ''''
          exec (@sqlcmd)
          select @sqlcmd = ' delete from ' + @tablename +
                           ' where ' + @datefield + ' <= ''' +
                                 convert(varchar, @archivedate, 101) + ''''
          exec (@sqlcmd)
          fetch next from tbl_cur
             into @tablename, @datefield, @archiveage
       end
       close tbl_cur
       deallocate tbl_cur
    end

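    To run the procedure at a desired frequency, a SQL Server Agent job is the usual route. A rough sketch; the job name, schedule name, and target database name are all assumptions:

```sql
use msdb
go
--Create a job that calls the archive procedure.
exec sp_add_job @job_name = N'NightlyArchive'
exec sp_add_jobstep @job_name = N'NightlyArchive',
     @step_name     = N'Run p_archive_tables',
     @subsystem     = N'TSQL',
     @command       = N'exec p_archive_tables',
     @database_name = N'mydb'     --hypothetical database holding the procedure
--freq_type 4 = daily, freq_interval 1 = every day, 20000 = 02:00:00.
exec sp_add_jobschedule @job_name = N'NightlyArchive',
     @name = N'Nightly at 2am',
     @freq_type = 4, @freq_interval = 1,
     @active_start_time = 20000
--Target the local server so the job actually runs.
exec sp_add_jobserver @job_name = N'NightlyArchive'
```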

    LVL 51

    Accepted Solution

    Just did one a month or so ago...

    1) Created the database - just a script of the current "live" database using the wizard
    2) Inspected table relationships and dependencies
    3) Worked out the hierarchy / sequence to populate the archive database
    4) Worked out the hierarchy / sequence to remove live data
    5) Worked out "transaction groups" for the above hierarchies
    6) Worked out logical "events" for archiving (e.g. end-of-month financial reporting)
    7) Created individual scripts / procedures for each transaction group
        Basically amounts to:
              insert into archive select from live where not exists in archive
              delete from live where exists in archive
              There are a few strategies for the above - you can write generic code, use the information_schema views into a temp area, then use that as a transition / staging step for speed
              Log success / progress for dependency checking
    8) Created scheduled jobs of the above scripts
    9) Created a "checker" to see if data structures had altered
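    The per-table scripts in step 7 reduce to an insert-where-not-exists followed by a delete-where-exists; a sketch, where the database, table, key, and date column names are all illustrative:

```sql
begin transaction
--Copy rows older than one month that are not yet in the archive.
insert into archivedb.dbo.orders
select l.*
from livedb.dbo.orders l
where l.orderdate <= dateadd(mm, -1, getdate())
  and not exists (select 1 from archivedb.dbo.orders a
                  where a.orderid = l.orderid)
--Remove live rows only once they are confirmed present in the archive.
delete l
from livedb.dbo.orders l
where l.orderdate <= dateadd(mm, -1, getdate())
  and exists (select 1 from archivedb.dbo.orders a
              where a.orderid = l.orderid)
commit transaction
```

    Wrapping the pair in one transaction keeps a transaction group atomic: either the rows are copied and removed together, or nothing changes.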

    Altered the recovery model and backup maintenance plan for the Archive database to simple recovery, corresponding to the archive frequency of the "events".

    Altered the couple of functions/procedures (and views) where access to archive data was required. Because the archive has the same structure, this was a "no brainer" with a union-type approach (and the major reason why the database structure was duplicated).
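    A union-type view of that sort might look like this; the view, database, and table names are placeholders, assuming the live and archive copies of an orders table share an identical structure:

```sql
--Hypothetical view giving seamless access to live + archived rows.
--SELECT * is only safe here because the "checker" guarantees the two
--tables keep identical structures.
create view dbo.v_orders_all
as
select *, cast(0 as bit) as is_archived from livedb.dbo.orders
union all
select *, cast(1 as bit) as is_archived from archivedb.dbo.orders
```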

    The next step was to automate data structure migration; at the moment it is a fairly small and static site, so that was deemed unnecessary other than point 9 above, and the use of dynamic SQL did cater for various changes anyway.
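    A simple structure "checker" along the lines of point 9 could compare the INFORMATION_SCHEMA views of the two databases; the database names here are assumptions:

```sql
--Flag columns that exist in live but are missing from the archive,
--or whose data type differs, so archiving can halt before failing mid-run.
select l.TABLE_NAME, l.COLUMN_NAME, l.DATA_TYPE
from livedb.INFORMATION_SCHEMA.COLUMNS l
left join archivedb.INFORMATION_SCHEMA.COLUMNS a
  on  a.TABLE_NAME  = l.TABLE_NAME
  and a.COLUMN_NAME = l.COLUMN_NAME
  and a.DATA_TYPE   = l.DATA_TYPE
where a.COLUMN_NAME is null
```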
    LVL 51

    Expert Comment

    by:Mark Wills
    Oh, and step 3 also included the time scales to hold data in live - that did take quite a bit of discussion with the business...
