  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 296

MS SQL Server 2005 - In recovery

Hi,
Running a large DELETE statement is sending SQL Server into auto-recovery mode, after which the DB isn't available for a long time (hours).
My understanding is that running large queries clutters the DB buffer space, and when a certain point is reached the server goes into recovery mode (thus cleaning the buffer) - please correct me if I'm wrong.
The issue is that we run a website off this DB, so we can't afford the downtime.
What is an appropriate fix for this?
Thanks.
SameerMirza asked:
1 Solution
 
lcohan (Database Analyst) commented:
You must batch up your deletes, and I wouldn't go beyond 10,000 rows per batch even if your table is partitioned.
Deletes are affected by:

the clustered index/PK on the table
foreign keys - pretty big impact
I/O
the recovery model of your DB - if possible, SIMPLE would be preferred, but if you run a 24/7 web site against it that is unlikely to be an option.

I need to do a similar thing to purge expired data, and for some tables (100 million plus clients) the batch is no larger than 20-50 records per delete, but it runs quietly and pretty fast. Also, if possible, include hints in your delete - I use ROWLOCK (see the sketch below).
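A minimal sketch of such a batched delete, assuming a hypothetical table dbo.Orders with an indexed ExpiryDate column (DELETE TOP (n) is available on SQL Server 2005):

-- Batched delete: keep looping until no qualifying rows remain.
-- dbo.Orders and ExpiryDate are hypothetical names.
DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.Orders WITH (ROWLOCK)
    WHERE ExpiryDate < '20100101';

    SET @rows = @@ROWCOUNT;
END

Because each batch commits on its own, the log never has to hold one huge open transaction, and it can be truncated between batches (by a checkpoint under the SIMPLE recovery model, or by log backups under FULL).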
 
SameerMirza (Author) commented:
The problem is that the disk holding my log backups is running out of space.
I have tens of millions of records to delete - how can I delete them in such small batches?
If that's the best way, could you please post a link on how to do it? :)
Also, what about the ROWLOCK hint?
Thanks
 
lcohan (Database Analyst) commented:
This looks challenging indeed, but let me ask a question:

Is this something like a purge process that you need to set up, where the initial volumes are high but, once you catch up, the delete volumes get smaller? Or
is this something you need to do constantly at such large volumes?

For the first option I can provide a batch example that includes a checkpoint and a statistics update, so your log file may stay somewhat under control given the volumes to delete; however, you will need to test it CAUTIOUSLY in your live environment for speed and for impact on your web site UI due to locking/blocking. You could start the deletes at a small batch size and work up to a larger one in a SQL job.

For the second option, if you need to constantly delete such volumes, I suggest/recommend partitioning; then you can switch out unwanted large volumes faster and with less impact. I use this method for archiving obsolete data by date range or by inactive clients - see the sketch below.
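A rough sketch of the switch-out approach, assuming a hypothetical table dbo.Sales that is already partitioned by date, plus an empty staging table dbo.Sales_Archive with an identical structure on the same filegroup:

-- Switch the oldest partition out of the live table. This is a
-- metadata-only operation, so it completes almost instantly.
-- dbo.Sales, dbo.Sales_Archive, and the partition number are hypothetical.
ALTER TABLE dbo.Sales
SWITCH PARTITION 1 TO dbo.Sales_Archive;

-- The rows now sit in dbo.Sales_Archive, where they can be archived
-- or dropped without touching the live table.

The trade-off is up-front design work (the table must be partitioned on a suitable column, such as a date), and on SQL Server 2005 table partitioning requires Enterprise Edition.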

 
SameerMirza (Author) commented:
Disk space issue - the logs are filling up the disk.
 
lcohan (Database Analyst) commented:
Is it the t-log of the DB where you do the deletes, or tempdb's t-log? I use the command below in all my batch deletes that rely on temp tables, and you could include an explicit CHECKPOINT (and even an UPDATE STATISTICS table_name) after a larger number of rows has been deleted - say every 100,000 or 1,000,000 rows:

EXEC sp_executesql N'USE TEMPDB; CHECKPOINT;'
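Putting that together with the batching above, a sketch of a batch delete with a periodic CHECKPOINT and statistics update (dbo.Orders and ExpiryDate are again hypothetical names):

-- Batched delete with a CHECKPOINT and UPDATE STATISTICS roughly
-- every 1,000,000 rows, as suggested above.
DECLARE @rows INT, @sinceCheckpoint INT;
SET @rows = 1;
SET @sinceCheckpoint = 0;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.Orders WITH (ROWLOCK)
    WHERE ExpiryDate < '20100101';

    SET @rows = @@ROWCOUNT;
    SET @sinceCheckpoint = @sinceCheckpoint + @rows;

    IF @sinceCheckpoint >= 1000000
    BEGIN
        CHECKPOINT;
        UPDATE STATISTICS dbo.Orders;
        SET @sinceCheckpoint = 0;
    END
END

One caveat: CHECKPOINT only enables log truncation under the SIMPLE recovery model; under FULL recovery it is the log backups that free log space, so schedule them frequently while the purge runs.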
 
SameerMirza (Author) commented:
Thanks.
Question has a verified solution.
