• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 209

Mass deletes in SQL2005 Simple Recovery mode

I have a 100-million-record table (60 GB). The database is in simple recovery mode for various reasons. I want to start archiving this table, which would involve mass inserts into another table and mass deletes from the main table. It is my understanding that, since I am in simple mode, this will not affect my transaction logs at all. Is that correct? Is there anything else I should be worried about?

By the way I have many other databases on the same instance of SQL server and all the others are in full recovery mode.
Asked by: soccerman767
1 Solution
 
EvilPostItCommented:
This is not entirely true.

Even in simple recovery, the transaction log has to grow large enough to hold the biggest single transaction.

If you were looking at deleting 60 GB of records and you only had a 10 GB transaction log, I would suggest either running this in multiple batches or running something like the code below; otherwise a single transaction would fill up the tran log.

-- Limit each DELETE to 1,000,000 rows so no single transaction fills the log
SET ROWCOUNT 1000000

DECLARE @RECCOUNT INT
SELECT @RECCOUNT = COUNT(*) FROM RECORD_TABLE WHERE RANGE BETWEEN X AND Y

WHILE @RECCOUNT > 0
BEGIN
    DELETE FROM RECORD_TABLE WHERE RANGE BETWEEN X AND Y
    SELECT @RECCOUNT = COUNT(*) FROM RECORD_TABLE WHERE RANGE BETWEEN X AND Y
END

-- Reset so later statements are not silently limited
SET ROWCOUNT 0


 
Guy Hengel [angelIII / a3]Billing EngineerCommented:
>It is my understanding since I am in simple mode that this will not affect my transaction logs at all.
No. The transactions will still be logged to the t-log, but in simple recovery the space used can be reused immediately at the end of each transaction by other transactions.

You might have other workarounds:
* truncate the table
* drop the table, and recreate it
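If emptying the whole table is acceptable, a minimal sketch of the first option (table name taken from the earlier example):

-- TRUNCATE deallocates whole data pages rather than logging each row,
-- so it empties the table with almost no transaction-log growth.
-- Caveats: it cannot be used if foreign keys reference the table,
-- and it resets any identity column.
TRUNCATE TABLE RECORD_TABLE

The drop-and-recreate option behaves similarly, but you must script out and recreate all indexes, constraints, and permissions afterwards.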

 
EvilPostItCommented:
Oh yeah, if you want to just empty the table, go with angelIII's response. That will be much quicker. My solution is more from the perspective of deleting a set of data and having some left over.
 
Kevin CrossChief Technology OfficerCommented:
Hi.

When you are in simple mode, the transaction log is marked for release at the end of each transaction; therefore, a mass delete/insert will still cause the log to grow -- it will just be marked for reuse shortly after the transaction completes. You can shrink the log after the first massive execution, and after that I would set up an interval that results in smaller batches for archiving. In fact, you could do that now by batching the deletes by year, quarter, or other logical partitioning.
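A sketch of archiving one quarter at a time, assuming a datetime column (the column, table, and log file names here are illustrative, not from the original question):

-- Archive and delete one quarter per transaction so each stays small
DECLARE @Start DATETIME, @End DATETIME
SET @Start = '2005-01-01'
SET @End   = '2005-04-01'

BEGIN TRANSACTION
INSERT INTO ARCHIVE_TABLE
SELECT * FROM RECORD_TABLE WHERE ArchiveDate >= @Start AND ArchiveDate < @End

DELETE FROM RECORD_TABLE WHERE ArchiveDate >= @Start AND ArchiveDate < @End
COMMIT TRANSACTION

-- After the first large run, optionally reclaim file space
-- (logical log file name and target size in MB are assumptions)
DBCC SHRINKFILE (MyDb_Log, 1024)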

Hope that helps!

Kevin
 
Kevin CrossChief Technology OfficerCommented:
Arg, please see http:#a35720544 and http:#a35720559.  I should really know better than to post after my screen has been up for more than a few minutes.
 
Guy Hengel [angelIII / a3]Billing EngineerCommented:
>perspective of deleting a set of data and having some left over.
You could still insert the "small set" of data to be retained into a "staging" table, then truncate/drop/recreate the main table, and insert the small set back.
This can be very effective at avoiding t-log fill-up.
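The staging-table approach above, sketched in T-SQL (the STAGING_TABLE name and the KeepFlag predicate are placeholders for whatever identifies the rows to retain):

-- 1. Copy the rows to keep into a new staging table
SELECT *
INTO STAGING_TABLE
FROM RECORD_TABLE
WHERE KeepFlag = 1

-- 2. Empty the main table (minimally logged)
TRUNCATE TABLE RECORD_TABLE

-- 3. Put the retained rows back
INSERT INTO RECORD_TABLE
SELECT * FROM STAGING_TABLE

-- 4. Clean up
DROP TABLE STAGING_TABLE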
