
Solved

Database Archive

Posted on 2003-11-04
Medium Priority
452 Views
Last Modified: 2006-11-17
I want to find out the best possible option for database archiving.

Archiving means appending the records to the archive database and deleting the archived records from the main database. I have the main database and the archive database on the same SQL Server. The problem is the database size: the main database could be 40 GB+. I have implemented this using INSERT INTO with a SELECT from the main database, and then deleting the selected records from the main database. This works, but the transaction log grows substantially. Also, the INSERT and the DELETE need to be part of one transaction.

What would be a better approach to handle this?
 
Question by:Milind00
10 Comments
 
LVL 5

Expert Comment

by:russellshome
ID: 9685339
Change the 5 below to whatever batch size keeps the transaction log from growing too large while remaining efficient.

declare @rows int
select @rows = count(au_id) from pubs..authors

while @rows > 0
begin
      begin tran
            insert into tempdb..authors
            select top 5 * from pubs..authors
            order by au_id

            delete
            from pubs..authors
            where au_id in (
                  select top 5 au_id
                  from tempdb..authors
                  order by au_id
            )

      commit tran
      select @rows = count(au_id) from pubs..authors
end
 
LVL 5

Expert Comment

by:russellshome
ID: 9685346
Slight change:

declare @rows int
select @rows = count(au_id) from pubs..authors

while @rows > 0
begin
     begin tran
          -- copy the next batch (the lowest 5 au_ids) into the archive table
          insert into tempdb..authors
          select top 5 * from pubs..authors
          order by au_id

          -- delete the same batch from the source table
          delete
          from pubs..authors
          where au_id in (
               select top 5 au_id
               from pubs..authors
               order by au_id
          )

     commit tran
     -- repeat until the source table is empty
     select @rows = count(au_id) from pubs..authors
end
 
LVL 4

Author Comment

by:Milind00
ID: 9685574
russellshome... with your query, do I need to use DBCC SHRINKFILE to shrink the database file and the LDF file?

 
LVL 4

Author Comment

by:Milind00
ID: 9685593
russellshome...

Also, how about using BULK INSERT for this task? Which would be more efficient: BULK INSERT or the approach you suggested?
 
LVL 1

Expert Comment

by:RadimHampel
ID: 9686998
When you remove all the records in one huge transaction, the log will contain all the data from your table (if that is the only table in the database, up to 40 GB). Even under the simple recovery model, the log file will grow quickly. The solution presented by russellshome is good, but it isn't done in one transaction. Afterwards you should shrink the log file. I don't advise shrinking the data file, because (as I expect) it will grow again.
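For example, something along these lines (MainDB and the logical log file name MainDB_Log are placeholders; run sp_helpfile in the database to find the real name):

USE MainDB
GO
-- find the logical name of the log file
EXEC sp_helpfile
GO
-- under the FULL recovery model, clear the log first so the space can be released
-- (not needed under SIMPLE)
BACKUP LOG MainDB WITH TRUNCATE_ONLY
GO
-- shrink only the log file, to a target size in MB
DBCC SHRINKFILE (MainDB_Log, 500)
GO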
 
LVL 22

Expert Comment

by:CJ_S
ID: 9687094
A TRUNCATE statement is only minimally logged (it logs the page deallocations rather than each row), so it might help?
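That only fits when the entire table is being archived, since TRUNCATE removes every row and cannot take a WHERE clause. A rough sketch (MainDB.dbo.Orders and ArchiveDB.dbo.Orders are placeholder names), keeping in mind the INSERT is still fully logged in the archive database:

-- copy everything to the archive first
INSERT INTO ArchiveDB.dbo.Orders
SELECT * FROM MainDB.dbo.Orders

-- then empty the source table; TRUNCATE logs only the page deallocations,
-- so the main database log stays small
TRUNCATE TABLE MainDB.dbo.Orders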
 
LVL 2

Expert Comment

by:sedmans
ID: 9687155
I am not a DTS expert, but this sounds like something DTS would be suited to, and you might be able to do bulk inserts that way.
 
LVL 34

Expert Comment

by:arbert
ID: 9689063
Agree with sedmans; look at using the DTS Bulk Insert task. You can also specify the batch (commit) size to keep the size of each open transaction to a minimum...
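If you go the BULK INSERT route instead, something along these lines could work (the file path and table names are placeholders, and it assumes the rows to archive were exported with bcp first):

-- export the rows in native format first, e.g.:
--   bcp "select * from MainDB.dbo.Orders where OrderDate < '20030101'" queryout C:\archive\orders.dat -n -T
BULK INSERT ArchiveDB.dbo.Orders
FROM 'C:\archive\orders.dat'
WITH (
    DATAFILETYPE = 'native',   -- matches the bcp -n export
    BATCHSIZE = 10000,         -- commit every 10,000 rows so no single transaction grows huge
    TABLOCK                    -- table lock allows a minimally logged load
)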

Brett
 
LVL 5

Accepted Solution

by:russellshome (earned 1500 total points)
ID: 9699997
While RadimHampel says "it isn't done in one transaction", the point is that my method does each pair of insert/delete in a transaction.

> do I need to use DBCC SHRINKFILE to shrink the database file and the LDF file?

It depends on the recovery model:

Simple: no

Full: yes

> Also, how about using BULK INSERT for this task? Which would be more efficient: BULK INSERT or the approach you suggested?

BULK INSERT will be more efficient but less robust.

You can only ensure that a row is deleted for each row inserted in two ways:
your first attempt, which puts all insert/delete pairs in one transaction (and needs lots of log file space),
or my suggestion, which puts a smaller number of insert/delete pairs into each transaction.
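To see which model applies (the database name, backup path, and logical log file name below are placeholders):

-- returns SIMPLE, FULL or BULK_LOGGED
SELECT DATABASEPROPERTYEX('MainDB', 'Recovery')

-- under SIMPLE, the log space is reused automatically once each batch commits;
-- under FULL, back up the log after (or between) the batches, then shrink the LDF if needed
BACKUP LOG MainDB TO DISK = 'C:\backup\MainDB_log.trn'
DBCC SHRINKFILE (MainDB_Log, 500)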
 
LVL 4

Author Comment

by:Milind00
ID: 9706192
Thanks a lot, experts. I think I got the answer.
