• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 131

Deletes causing chokes

When you delete a huge quantity of data, it sometimes takes a long time, or even makes the system unstable.

What is the term for this problem?
4 Solutions
25112 (Author) commented:
Is it a lack of indexes?
Or too much log writing?
Or shifting of data within the file, etc.?
Probably your transaction log is getting a lot of activity. Each DML statement runs either explicitly (BEGIN TRAN) or implicitly inside a transaction, and every row it changes is written to the transaction log. If the statement aborts, those log records are rolled back; otherwise, they are committed (eventually) to disk.
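To illustrate the point above (the table name dbo.Orders and the date filter are hypothetical, for illustration only): a single DELETE statement is itself an implicit transaction, so both of these forms log every deleted row before the commit.

    -- Implicit transaction: the statement is its own transaction.
    DELETE FROM dbo.Orders WHERE OrderDate < '2010-01-01';

    -- Explicit transaction: identical logging behavior, but nothing
    -- is committed until COMMIT TRAN, and a ROLLBACK must undo all
    -- of it.  One huge DELETE therefore holds locks and grows the
    -- transaction log for its entire duration.
    BEGIN TRAN;
        DELETE FROM dbo.Orders WHERE OrderDate < '2010-01-01';
    COMMIT TRAN;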
Jim Horn (Microsoft SQL Server Developer, Architect, and Author) commented:
Any chance you can DELETE a set number of rows at a time, say 50,000, so that it's not one large transaction (per the experts above)? See the sketch after this comment.

>when you delete a huge quantity of data
Define this.  Is it the entire table, or can it be defined in a WHERE clause?
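A minimal sketch of that batched approach, assuming a hypothetical table dbo.BigTable with a date column to filter on (the names and the 50,000 batch size are illustrative, not from the question):

    -- Delete in batches of 50,000 so each iteration is a small,
    -- quickly committed transaction instead of one huge one.
    DECLARE @rows INT = 1;

    WHILE @rows > 0
    BEGIN
        DELETE TOP (50000)
        FROM dbo.BigTable
        WHERE CreatedDate < '2015-01-01';

        SET @rows = @@ROWCOUNT;  -- 0 once nothing is left to delete
    END;

Between batches the committed log records can be backed up (or reused, under the SIMPLE recovery model), which keeps the transaction log from growing without bound.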
lcohan (Database Analyst) commented:
Do the tables involved in the DELETEs have foreign keys? Or clustered indexes?
If foreign keys exist, make sure you have a nonclustered index covering the FKey columns to help the deletes.
I also suggest adding a WITH (ROWLOCK) hint, and maybe batching the deletes by checking @@ROWCOUNT after each statement, or some other method.
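A sketch of both suggestions, using a hypothetical parent/child schema (dbo.Orders and dbo.OrderLines are illustrative names, not from the question):

    -- When rows are deleted from dbo.Orders, SQL Server must check
    -- dbo.OrderLines (which has a foreign key on OrderID) for
    -- referencing rows; without an index on that column, the check
    -- is a scan of the child table on every delete.
    CREATE NONCLUSTERED INDEX IX_OrderLines_OrderID
        ON dbo.OrderLines (OrderID);

    -- ROWLOCK requests row-level locks rather than page or table
    -- locks; combined with small batches (checking @@ROWCOUNT as in
    -- the loop above) it reduces blocking of concurrent queries.
    DELETE TOP (50000) FROM dbo.OrderLines WITH (ROWLOCK)
    WHERE OrderID IN (SELECT OrderID FROM dbo.Orders
                      WHERE OrderDate < '2010-01-01');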
Anthony Perkins commented:
>what is the term for this problem?
Painful?

But seriously, define "large". If it is more than, say, 500K rows, consider breaking it up into batches.