MS SQL Server 2005 - In recovery

Hi,
Running a large DELETE is putting SQL Server into recovery mode, after which the DB isn't available for a long time (hours).
My understanding is that running large queries clutters the DB buffer space, and when it reaches a certain point the server goes into recovery mode (thus cleaning the buffer) - please correct me if I'm wrong.
The issue is that we are running a web site from this DB, so we can't afford the downtime.
What is an appropriate fix for this?
thanks.
SameerMirza asked:
lcohan (Database Analyst) commented:
Is it your DB t-log where you do the deletes, or the tempdb t-log? I use the command below in all my batch deletes that rely on temp tables, and you could include an explicit CHECKPOINT (and even an "UPDATE STATISTICS table_name") after a larger number of rows has been deleted - like 100,000 or 1,000,000 rows:

EXEC sp_executesql N'USE TEMPDB; CHECKPOINT;'
 
lcohan (Database Analyst) commented:
You must batch up your deletes, and I wouldn't go beyond 10,000 rows per batch delete even if your table is partitioned.
Deletes are affected by:

clustered index/PK on the table
foreign keys - pretty big impact
I/O
recovery model of your DB - if possible, SIMPLE would be preferred, but if you run a 24/7 web site against it this is unlikely to be an option.

I need to do a similar thing to purge expired data, and for some tables (100 million plus clients) the batch is no larger than 20-50 records per delete, but it runs silently and pretty fast. Also, if possible, include hints in your delete - I use ROWLOCK.
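A minimal sketch of the batching approach described above, using hypothetical table and column names (dbo.MyTable, ExpiredDate are placeholders - adjust the batch size and predicate to your schema):

```sql
-- Delete in small batches so each transaction stays short and
-- locks are released between batches (SQL Server 2005 syntax).
DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.MyTable WITH (ROWLOCK)
    WHERE ExpiredDate < '20120101';

    SET @rows = @@ROWCOUNT;
END
```

Each iteration commits on its own, so the transaction log only ever has to hold one small batch at a time instead of one huge delete.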
 
SameerMirza (Author) commented:
The problem is that my log backup disk is running out of space.
I have tens of millions of records to delete. How can you delete them in such small batches?
If that's the best way, could you please post a link on how to? :)
Also, what about the ROWLOCK hint?
Thanks
 
lcohan (Database Analyst) commented:
This looks challenging indeed, but let me ask a question:

Is this a purge process that you need to set up, where the initial volumes are high but the delete volumes get smaller once you catch up, or
is this something you need to do constantly at such large volumes?

For the first option I can provide a batch example including a checkpoint and update stats in it, so your log file may be somewhat under control given the volumes to delete; however, you will need to test that CAUTIOUSLY in your live environment for speed/impact on your web site UI due to locking/blocking. You could start the deletes from a small batch and work up to a larger one in a SQL job.
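As a rough illustration of such a batch job (again with placeholder names - dbo.MyTable and ExpiredDate are assumptions), issuing a CHECKPOINT and UPDATE STATISTICS roughly every 100,000 rows:

```sql
-- Batched purge with a periodic CHECKPOINT / UPDATE STATISTICS.
DECLARE @deleted INT, @total INT;
SET @deleted = 1;
SET @total = 0;

WHILE @deleted > 0
BEGIN
    DELETE TOP (10000) FROM dbo.MyTable WITH (ROWLOCK)
    WHERE ExpiredDate < '20120101';

    SET @deleted = @@ROWCOUNT;
    SET @total = @total + @deleted;

    -- Every ~100,000 rows, flush dirty pages and refresh stats.
    IF @total % 100000 = 0
    BEGIN
        CHECKPOINT;
        UPDATE STATISTICS dbo.MyTable;
    END
END
```

One caveat: under the FULL recovery model a CHECKPOINT does not free log space - only log backups do - so if the t-log disk is the bottleneck you may also want to run BACKUP LOG between batches.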

For the second option, if you need to constantly delete such volumes, I suggest/recommend partitioning; then you could switch out unwanted large volumes faster with less impact - I use this method for archiving obsolete data by date range or inactive clients.
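A sketch of the partition-switch idea, assuming a hypothetical dbo.Sales table partitioned by date and a structurally identical dbo.Sales_Archive staging table on the same filegroup (note that table partitioning in SQL Server 2005 requires Enterprise Edition):

```sql
-- SWITCH is a metadata-only operation, so moving the oldest
-- partition out of the live table is near-instant regardless of size.
ALTER TABLE dbo.Sales
    SWITCH PARTITION 1 TO dbo.Sales_Archive;

-- The rows now live in the staging table and can be archived
-- or discarded without touching the live table or its log volume.
TRUNCATE TABLE dbo.Sales_Archive;
```

Because no rows are physically moved or logged row-by-row, this avoids the transaction-log growth that large deletes cause.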

 
SameerMirza (Author) commented:
Disk space issue - the logs are filling the disk.
 
SameerMirza (Author) commented:
thanks