Solved

Database Archive

Posted on 2003-11-04
437 Views
Last Modified: 2006-11-17
I want to find out the best possible option for archiving a database.

Archiving means appending records to the archive database and deleting the archived records from the main database. I have the main database and the archive database on the same SQL Server. The problem is the database size: the main database could be 40 GB+. I have implemented this using INSERT INTO, selecting from the main database and then deleting the selected records from it. This works, but the transaction log grows substantially, and the INSERT and DELETE need to be part of one transaction.

What would be a better approach to handle this?
 
Question by:Milind00
10 Comments
 
LVL 5

Expert Comment

by:russellshome
ID: 9685339
Change the 5 below to whatever keeps the transaction log from growing too large and is most efficient.

declare @rows int
select @rows = count(au_id) from pubs..authors

while @rows > 0
begin
      begin tran
            insert into tempdb..authors
            select top 5 * from pubs..authors
            order by au_id

            delete
            from pubs..authors
            where au_id in (
                  select top 5 au_id
                  from tempdb..authors
                  order by au_id
            )

      commit tran
      select @rows = count(au_id) from pubs..authors
end
 
LVL 5

Expert Comment

by:russellshome
ID: 9685346
Slight change:

declare @rows int
select @rows = count(au_id) from pubs..authors

while @rows > 0
begin
     begin tran
          insert into tempdb..authors
          select top 5 * from pubs..authors
          order by au_id

          delete
          from pubs..authors
          where au_id in (
               select top 5 au_id
               from pubs..authors
               order by au_id
          )

     commit tran
     select @rows = count(au_id) from pubs..authors
end
 
LVL 4

Author Comment

by:Milind00
ID: 9685574
russellshome... with your query, do I need to use DBCC SHRINKFILE to shrink the database file and the LDF file?
 
LVL 4

Author Comment

by:Milind00
ID: 9685593
russellshome...

Also, how about using BULK INSERT for the above task? Which would be more efficient: BULK INSERT or the way you suggested?
 
LVL 1

Expert Comment

by:RadimHampel
ID: 9686998
When you remove all the records in one huge transaction, the log will contain all the data from your table (if that is the only table in your db, up to 40 GB). Even if you are using the simple recovery model, the log file will grow quickly. The solution presented by russellshome is good, but it isn't done in one transaction. Afterwards, you should shrink the log file. I don't advise you to shrink the data file, because (as I expect) it will grow again.
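For the log shrink mentioned above, a minimal sketch on SQL Server 2000 would look like the following. Note that MainDB and the logical file name MainDB_Log are assumptions; check sysfiles for your actual names.

```sql
-- Under the full recovery model, back up the log first so the
-- inactive portion can be released (WITH TRUNCATE_ONLY discards it;
-- use BACKUP LOG ... TO DISK in production to keep a backup chain):
BACKUP LOG MainDB WITH TRUNCATE_ONLY

-- Then shrink the log file back to a target size in MB.
-- 'MainDB_Log' is an assumed logical file name; find yours with:
--   SELECT name FROM MainDB..sysfiles
USE MainDB
DBCC SHRINKFILE (MainDB_Log, 500)
```

As noted, leave the data file alone: shrinking it just forces costly regrowth later.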
 
LVL 22

Expert Comment

by:CJ_S
ID: 9687094
A TRUNCATE TABLE statement is only minimally logged, might that help? (It removes all rows, though, so it only fits if the whole table is being archived.)
 
LVL 2

Expert Comment

by:sedmans
ID: 9687155
I am not a DTS expert, but this sounds like something suited to it, and you might be able to do bulk inserts this way.
 
LVL 34

Expert Comment

by:arbert
ID: 9689063
Agree with sedmans, look at using the DTS Bulk Insert task. You can also specify the transaction size to keep the number of open transactions to a minimum...

Brett
 
LVL 5

Accepted Solution

by: russellshome (earned 500 total points)
ID: 9699997
While RadimHampel says "it isn't done in one transaction", the point is that my method does each batch of insert/delete pairs in its own transaction.

> do I need to use DBCC shrinkfile to shrink the database file and LDF file?

Depends on the recovery model.

Simple: no

Full: yes

> Also, how about using BULK INSERT for the above task? Which would be more efficient: BULK INSERT or the way you suggested?

BULK insert will be more efficient but less robust.

You can only ensure that a row is deleted for each row inserted in two ways:
your first attempt, which puts all insert/delete pairs in one transaction (and needs lots of log file space),
or my suggestion, which puts a smaller number of insert/delete pairs into each transaction.
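A sketch of the bulk-load route being compared here (server name, database names, table, and file path are all assumptions for illustration): export the candidate rows with bcp, load them into the archive with minimally logged BULK INSERT, then delete from the source only after verifying the load.

```sql
-- Export the rows to archive (run bcp from a command prompt;
-- 'MainDB' / 'ArchiveDB' / 'Orders' and the path are assumptions):
--   bcp "SELECT * FROM MainDB..Orders WHERE OrderDate < '2002-01-01'"
--       queryout C:\archive\orders.bcp -n -S myserver -T

-- Load the native-format file into the archive table. TABLOCK plus the
-- simple or bulk-logged recovery model keeps logging minimal:
BULK INSERT ArchiveDB..Orders
FROM 'C:\archive\orders.bcp'
WITH (DATAFILETYPE = 'native', TABLOCK)

-- Only after verifying the row counts match, remove the source rows:
DELETE FROM MainDB..Orders WHERE OrderDate < '2002-01-01'
```

This illustrates the "less robust" trade-off: the load and the delete are no longer one transaction, so a failure between the two steps must be handled manually.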
 
LVL 4

Author Comment

by:Milind00
ID: 9706192
Thanks a lot experts. I think I got the answer.
