• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 128

Deletes causing chokes

When you delete a huge quantity of data, it sometimes takes a long time, or even makes the system unstable.

What is the term for this problem?
Asked by: 25112
4 Solutions
 
25112 (Author) commented:
Is it the lack of an index?
Too much log writing?
Or shifting of data within the file, etc.?
 
John_Vidmar commented:
Probably your transaction log is getting a lot of activity. Each DML statement is either explicitly (BEGIN TRAN) or implicitly wrapped in a transaction, and that activity populates the transaction log. If the statement aborts, those log entries are used to roll the work back; otherwise, the changes are committed (eventually) to disk.
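
To make that concrete, here is a minimal T-SQL sketch (the table name dbo.Orders and the date filter are hypothetical). Either way, every deleted row is written to the transaction log first, which is why a huge delete generates so much log activity:

-- Implicit transaction: this single DELETE is its own transaction,
-- and every row it removes is recorded in the transaction log.
DELETE FROM dbo.Orders
WHERE OrderDate < '20100101';

-- Explicit transaction: log entries accumulate until COMMIT;
-- a ROLLBACK uses those same log entries to undo the work.
BEGIN TRAN;
    DELETE FROM dbo.Orders
    WHERE OrderDate < '20100101';
ROLLBACK;  -- or COMMIT to keep the deletions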
 
Jim Horn (Microsoft SQL Server Developer, Architect, and Author) commented:
Any chance you can DELETE a set number of rows at a time, say 50,000, so that it's not one large transaction (per the above experts)? See the sketch after this comment.

>when you delete a huge quantity of data
Define this.  Is it the entire table, or can it be defined in a WHERE clause?
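
A minimal sketch of that batching approach, assuming a hypothetical table dbo.BigTable with a CreatedDate column. Each loop iteration commits as its own small transaction, so the log never has to hold the whole purge at once:

DECLARE @BatchSize int = 50000;

WHILE 1 = 1
BEGIN
    -- Each iteration is a separate small transaction.
    DELETE TOP (@BatchSize)
    FROM dbo.BigTable
    WHERE CreatedDate < '20150101';

    -- Fewer rows than the batch size means that was the last batch.
    IF @@ROWCOUNT < @BatchSize BREAK;
END;

Under the SIMPLE recovery model the log space from each committed batch can be reused as you go; under FULL recovery you would take log backups between batches to keep the log from growing.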
 
lcohan (Database Analyst) commented:
Do the tables involved in the DELETEs have FKeys? Or CLUSTERED indexes?
If FKeys exist, make sure you have a NONCLUSTERED index covering the FKey columns to help the deletes.
I also suggest adding a WITH (ROWLOCK) hint, and maybe batching them using SET ROWCOUNT or some other method (see the sketch below).
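
A sketch of both suggestions, with hypothetical table and column names (dbo.OrderDetail, OrderID, OrderDate). Note the batching statement is SET ROWCOUNT; the @@ROWCOUNT function only reads a row count and cannot be SET:

-- 1) Cover the FKey column so the child-table checks that a DELETE
--    on the parent triggers can seek instead of scan.
CREATE NONCLUSTERED INDEX IX_OrderDetail_OrderID
    ON dbo.OrderDetail (OrderID);

-- 2) Batch the deletes with SET ROWCOUNT and a ROWLOCK hint.
--    (SET ROWCOUNT with DML is deprecated in newer versions,
--    where DELETE TOP (n) is the preferred form.)
SET ROWCOUNT 50000;
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.OrderDetail WITH (ROWLOCK)
    WHERE OrderDate < '20150101';

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to delete
END;
SET ROWCOUNT 0;  -- reset, otherwise later statements stay capped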
 
Anthony Perkins commented:
>what is the term for this problem?
Painful?

But seriously, define large. If it is more than, say, 500K rows, consider breaking it up into batches.
