Milind00

asked on

Database Archive

I want to find out the best possible option for archiving a database.

"Archive" here means appending records to an archive database and deleting the archived records from the main database. Both the main database and the archive database are on the same SQL Server. The problem is the database size: the main database could be 40 GB or more. I have implemented this using INSERT INTO with a SELECT from the main database, then deleting the selected records from the main database. This works, but the transaction log grows substantially, and the insert and delete need to be part of one transaction.
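For reference, the approach described above looks roughly like this; the database, table, and column names are placeholders, not the asker's actual schema:

```sql
-- Hypothetical sketch of the current approach: one big transaction,
-- so the log must hold every moved row until the commit.
BEGIN TRAN
    INSERT INTO ArchiveDB.dbo.Orders
    SELECT * FROM MainDB.dbo.Orders
    WHERE OrderDate < '2003-01-01'

    DELETE FROM MainDB.dbo.Orders
    WHERE OrderDate < '2003-01-01'
COMMIT TRAN
```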

What could be better approach to handle this?
 
russellshome

Change the batch size of 5 below to whatever does not make the transaction log too large and is most efficient.

declare @rows int
select @rows = count(au_id) from pubs..authors

while @rows > 0
begin
      begin tran
            insert into tempdb..authors
            select top 5 * from pubs..authors
            order by au_id

            -- note: this subquery reads from the archive copy in tempdb,
            -- so after the first batch it keeps returning au_ids that are
            -- already gone from pubs..authors and the loop never finishes
            delete
            from pubs..authors
            where au_id in (
                  select top 5 au_id
                  from tempdb..authors
                  order by au_id
            )
      commit tran
      select @rows = count(au_id) from pubs..authors
end
Slight change (the delete subquery should select from the source table, pubs..authors, rather than from the copy in tempdb, so that each batch removes exactly the rows just copied):

declare @rows int
select @rows = count(au_id) from pubs..authors

while @rows > 0
begin
     begin tran
          -- copy the next batch into the archive
          insert into tempdb..authors
          select top 5 * from pubs..authors
          order by au_id

          -- delete exactly the rows just copied: the subquery now reads
          -- from the source table, so both TOP 5s pick the same lowest au_ids
          delete
          from pubs..authors
          where au_id in (
               select top 5 au_id
               from pubs..authors
               order by au_id
          )
     commit tran
     select @rows = count(au_id) from pubs..authors
end
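On SQL Server 2005 and later (not available on the 2000-era pubs setup above), the copy and delete of each batch can be collapsed into a single statement with DELETE TOP ... OUTPUT, which guarantees the archived and deleted rows are exactly the same set. A sketch using the same sample tables:

```sql
-- Batched archive using OUTPUT (SQL Server 2005+): each DELETE
-- atomically writes the removed rows into the archive table.
DECLARE @batch INT
SET @batch = 1
WHILE @batch > 0
BEGIN
    DELETE TOP (1000)
    FROM pubs..authors
    OUTPUT DELETED.* INTO tempdb..authors
    SET @batch = @@ROWCOUNT   -- 0 rows deleted ends the loop
END
```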
Milind00 (ASKER)

russellshome... with your query, do I need to use DBCC SHRINKFILE to shrink the database file and the LDF file?
russellshome...

Also, how about using BULK INSERT for the above task? Which would be more efficient: BULK INSERT or the approach you suggested?
When you remove all the records in one huge transaction, the log will contain all the data from your table (if it is the only table in your db, up to 40 GB). Even if you are using the simple recovery model, the log file will grow quickly. The solution presented by russellshome is good, but note it is not done in one transaction. Afterwards you should shrink the log file; I don't advise shrinking the data file, because (as I expect) it will just grow again.
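Shrinking only the log after the batched archive could look like the following; the database name and the logical log file name are placeholders (check the real logical name with sp_helpfile):

```sql
-- Free inactive log space, then shrink only the log file;
-- the data file is left alone since it would likely grow again.
BACKUP LOG MainDB WITH TRUNCATE_ONLY   -- SQL Server 2000 syntax
DBCC SHRINKFILE (MainDB_Log, 100)      -- target size in MB; logical name is hypothetical
```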
A TRUNCATE statement is only minimally logged; might that help?
I am not a DTS expert, but this sounds like something that would be suited to it, and you might be able to do bulk inserts this way.
Agree with sedmans: look at using the DTS Bulk Insert task. You can also specify the transaction (batch) size to keep the size of open transactions to a minimum...
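If the rows are first exported to a file (e.g. with bcp), the BATCHSIZE option of BULK INSERT keeps each committed batch small, so the log never has to hold the whole load. The file path and table name here are placeholders:

```sql
-- Load exported rows into the archive in 10,000-row batches;
-- each batch commits as its own transaction, limiting log growth.
BULK INSERT ArchiveDB.dbo.Orders
FROM 'C:\export\orders.dat'
WITH (BATCHSIZE = 10000, TABLOCK)
```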

Brett
ASKER CERTIFIED SOLUTION
russellshome
Thanks a lot, experts. I think I got the answer.