• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 579

Why does DELETE take a long time (and act so differently) when a table has over 200 fields?

Dear Experts,

I have a perplexing problem that has confounded many at our company.  I was hoping someone
here had the answer.

I have a few tables that are related to a master table.
The master table has a key sequence number (SeqNum) which relates it to a number of different tables.
When I want to delete from the master and the related tables, the delete takes a very long time
for the tables with 200+ fields.
In Query Analyzer, I noticed that when I enter the following command

DELETE FROM DetailTable where SeqNum in (10355)
I get the following response:
(0 row(s) affected)
(0 row(s) affected)
(0 row(s) affected)
...
(0 row(s) affected)
(0 row(s) affected)
(0 row(s) affected)
(0 row(s) affected)
(0 row(s) affected)
(0 row(s) affected)

with (0 row(s) affected) repeated as many times as there are fields in that table,
and this takes about 20 seconds to complete.

If I execute the same command on a much smaller table (about 20 fields)
DELETE FROM DetailTable_2 where SeqNum in (10355)
I simply get the response
(0 row(s) affected)
within a fraction of a second.

Each table has SeqNum indexed, so I don't know if this is a bug in SQL Server or if
there is something else that needs to be done for such a large table.

We don't have a cascade update or delete on the table. I know we could add one, but before doing so
I thought I'd ask, because adding a cascade delete may affect legacy code (which I'm trying to avoid).
I also don't want to redesign the table (into many smaller tables), again because of legacy code.

Thank you all in advance for your help!
Asked by: BrianMc1958
2 Solutions
 
Guy Hengel [angelIII / a3], Billing Engineer, commented:
That "larger" table almost certainly has a trigger on it.
You might want to disable that trigger for your large delete...
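A minimal sketch of what that could look like, assuming the DetailTable name from the question (disabling triggers requires ALTER permission on the table, and any other deletes that run in the meantime will bypass the auditing too):

-- See which triggers exist on the table.
EXEC sp_helptrigger 'dbo.DetailTable'

-- Temporarily disable all triggers, run the delete, then re-enable them.
ALTER TABLE dbo.DetailTable DISABLE TRIGGER ALL

DELETE FROM dbo.DetailTable WHERE SeqNum IN (10355)

ALTER TABLE dbo.DetailTable ENABLE TRIGGER ALL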
 
BrianMc1958 (Author) commented:
Yes, you are right. I removed the trigger and it took a fraction of a second. Of course, we want to track
when people delete records from our database, as a matter of policy. Is there another way to handle
this situation?
 
Anthony Perkins commented:
>>Is there another way to handle this situation?<<
Don't allow it?

Or, so that it does not sound facetious: only allow DELETEs (as well as UPDATEs and INSERTs) through stored procedures, which can only be run by users that have the appropriate EXECUTE permissions.
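As a rough illustration only (the audit table, procedure, and role names below are invented for the example, not taken from the original schema):

-- Hypothetical audit table to record who deleted what, and when.
CREATE TABLE dbo.DeleteAudit
(
    AuditID   int IDENTITY(1,1) PRIMARY KEY,
    TableName sysname  NOT NULL,
    SeqNum    int      NOT NULL,
    DeletedBy sysname  NOT NULL DEFAULT SUSER_SNAME(),
    DeletedAt datetime NOT NULL DEFAULT GETDATE()
)
GO

-- Hypothetical wrapper procedure: logs the delete, then performs it.
CREATE PROCEDURE dbo.usp_DeleteDetail
    @SeqNum int
AS
BEGIN
    SET NOCOUNT ON

    INSERT INTO dbo.DeleteAudit (TableName, SeqNum)
    VALUES ('DetailTable', @SeqNum)

    DELETE FROM dbo.DetailTable WHERE SeqNum = @SeqNum
END
GO

-- Application users get EXECUTE on the procedure, but no direct DELETE.
GRANT EXECUTE ON dbo.usp_DeleteDetail TO AppUsers
DENY DELETE ON dbo.DetailTable TO AppUsers

The GRANT/DENY pair is what blocks ad-hoc deletes while still letting the application delete (and be audited) through the procedure.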
 
chapmandew commented:
One idea is to set up replication and do the auditing on the subscriber...
 
BrianMc1958 (Author) commented:
Thank you all for your comments.
I do have a bit of trouble understanding why a trigger would cause such a huge problem. I've
noticed in other large tables (that we have audit triggers on), the amount of time to execute is
doubled (not 20x as long, as in this instance). I've also noticed that even if no records are deleted by
my query, it still takes an inordinate amount of time.

 
Guy Hengel [angelIII / a3], Billing Engineer, commented:
The trigger is SQL as well, so all the SQL statements inside it also take time.
You have to check whether any of those queries are unoptimized.

chapmandew's suggestion is usually the best method: offloading the auditing work to a dedicated server...
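For what it's worth, a common culprit on very wide tables is an audit trigger that works column by column or row by row; a set-based trigger that logs everything from the deleted pseudo-table in one statement is usually much cheaper, and SET NOCOUNT ON inside it also suppresses the string of (0 row(s) affected) messages you were seeing. A rough sketch, reusing the hypothetical DeleteAudit table from the earlier example:

-- Hypothetical set-based delete-audit trigger: one INSERT for all deleted
-- rows, instead of a separate statement per column or per row.
CREATE TRIGGER trg_DetailTable_Delete
ON dbo.DetailTable
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON

    INSERT INTO dbo.DeleteAudit (TableName, SeqNum)
    SELECT 'DetailTable', d.SeqNum
    FROM deleted AS d
END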
 
BrianMc1958 (Author) commented:
Thank you all for your help.
