Database Archive

I want to find out the best possible option for archiving a database.

By archiving I mean appending records to an archive database and then deleting the archived records from the main database. The main database and the archive database are on the same SQL Server. The problem is the database size: the main database can be 40 GB or more. I have implemented this with INSERT INTO ... SELECT from the main database, followed by a DELETE of the selected records from the main database. This works, but the transaction log grows substantially, and the INSERT and DELETE need to be part of one transaction.
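For illustration, what I have now looks roughly like this (the table and column names here are just placeholders):

begin tran
      -- copy everything older than the cutoff into the archive
      insert into ArchiveDB.dbo.Orders
      select * from MainDB.dbo.Orders
      where OrderDate < '20030101'

      -- then remove the same rows from the main database
      delete from MainDB.dbo.Orders
      where OrderDate < '20030101'
commit tran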

What would be a better approach to handle this?
 
Milind00Asked:

russellshomeCommented:
Change the 5 below (the batch size) to whatever keeps the transaction log from growing too large while still being efficient.

declare @rows int
select @rows = count(au_id) from pubs..authors

while @rows > 0
begin
      begin tran
            insert into tempdb..authors
            select top 5 * from pubs..authors
            order by au_id

            delete
            from pubs..authors
            where au_id in (
                  select top 5 au_id
                  from tempdb..authors
                  order by au_id
            )

      commit tran
      select @rows = count(au_id) from pubs..authors
end
russellshomeCommented:
Slight change: in the DELETE, the keys should come from pubs..authors rather than tempdb..authors, so that each pass removes the rows it just copied:

declare @rows int
select @rows = count(au_id) from pubs..authors

while @rows > 0
begin
     begin tran
          -- copy the next batch of 5 rows into the archive table
          insert into tempdb..authors
          select top 5 * from pubs..authors
          order by au_id

          -- delete the same 5 rows from the source table
          delete
          from pubs..authors
          where au_id in (
               select top 5 au_id
               from pubs..authors
               order by au_id
          )
     commit tran

     -- loop until the source table is empty
     select @rows = count(au_id) from pubs..authors
end
Milind00Author Commented:
russellshome... with your query, do I need to use DBCC SHRINKFILE to shrink the database file and the log (LDF) file?

Milind00Author Commented:
russellshome...

Also, how about using BULK INSERT for this task? Which would be more efficient: BULK INSERT or the approach you suggested?
RadimHampelCommented:
When you remove all the records in one huge transaction, the log will contain all the data from that table (up to 40 GB, if that table is most of your database). Even under the simple recovery model the log file will grow quickly. The solution russellshome presented is good, but note it is not done in one transaction. Afterwards you should shrink the log file; I don't advise shrinking the data file, because (as I expect) it will just grow again.
CJ_SCommented:
A TRUNCATE TABLE statement is minimally logged; might that help?
sedmansCommented:
I am not a DTS expert, but this sounds like something suited to it, and you might be able to do bulk inserts that way.
arbertCommented:
Agree with sedmans: look at using a DTS Bulk Insert task. You can also specify the batch size to keep the amount of open transaction work to a minimum...
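For illustration, the same idea in plain T-SQL with bcp and BULK INSERT; the server name, file path, table names, and batch size below are all placeholders:

-- first export the rows to be archived, e.g. from the command line:
--   bcp "select * from MainDB.dbo.Orders where OrderDate < '20030101'" queryout D:\archive\orders.dat -n -T -S MYSERVER

bulk insert ArchiveDB.dbo.Orders
from 'D:\archive\orders.dat'
with (
      datafiletype = 'native',  -- matches bcp's -n (native format) switch
      batchsize = 10000,        -- commit every 10,000 rows so no single transaction grows huge
      tablock                   -- table lock allows minimally logged loading
)

Note the archived rows still have to be deleted from the main table in a separate step, so the copy and the delete are no longer tied together in one transaction.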

Brett
russellshomeCommented:
While RadimHampel says "it is not done in one transaction", the point is that my method does each pair of insert/delete in a transaction.

> do I need to use DBCC SHRINKFILE to shrink the database file and the log (LDF) file?

Depends on the recovery model:

Simple: no.
Full: yes.
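For the full model, a minimal sketch, assuming a database named MainDB whose log file is logically named MainDB_Log (both placeholders): back up the log so its space can be reused, then shrink the file.

use MainDB
backup log MainDB to disk = 'D:\Backup\MainDB_Log.bak'
dbcc shrinkfile (MainDB_Log, 1024)  -- target size in MB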

> Also, how about using BULK INSERT for this task? Which would be more efficient: BULK INSERT or the approach you suggested?

BULK INSERT will be more efficient but less robust.

There are only two ways to ensure that a row is deleted for every row inserted:
your first attempt, which puts all the insert/delete pairs in one transaction (and needs lots of log file space),
or my suggestion, which puts a smaller number of insert/delete pairs into each transaction.

Milind00Author Commented:
Thanks a lot, experts. I think I got the answer.