SQL Server 2008 - Small table, horrible performance on update queries

Posted on 2012-03-30
Last Modified: 2012-04-04
I have several SQL Server 2008 databases - all are in the 1GB to 3GB range. All run beautifully smooth in performance, except for update queries on two tables (very similar), in two different databases.

Here's an example of the issue.

The table:
CREATE TABLE [dbo].[Integrations_Queue](
	[QueueID] [bigint] IDENTITY(1,1) NOT NULL,
	[IntegrationID] [tinyint] NULL,
	[RecID] [bigint] NULL,
	[QueuedAt] [smalldatetime] NULL,
	[SubmittedAt] [smalldatetime] NULL,
	[IsError] [tinyint] NULL,
	[XMLData] [varchar](max) NULL,
	PRIMARY KEY CLUSTERED ([QueueID] ASC)
)



This has 4,938 rows, consuming 9.461 MB.

Once this table gets anywhere from about 4,800 to 7,500 rows, update queries against the QueueID primary key take around 4 minutes. An example query with horrible performance:

UPDATE Integrations_Queue
	SET SubmittedAt=GETUTCDATE(), IsError=0 
		WHERE QueueID=23923


If I clear out most of the rows, delete the index, and re-add the index, all is well until I get back up to a higher number of rows. If I simply rebuild the index, or even reorganize it (even to the point that it has nearly 0% fragmentation), an update query will still take several minutes. And I don't think the index is really the issue - I just issued a query a while ago after archiving 4,800 records into a different table, and the following query took more than 10 minutes to delete all 4,832 rows:

DELETE FROM Integrations_Queue WHERE SubmittedAt IS NOT NULL AND IsError=0


Again, the rest of these two databases run beautifully - many tables with as many as 1,000,000 rows, stored procedures joining way too many tables. It doesn't appear to be an issue with the databases themselves, the drives they reside on, or a lack of memory - no query in these databases takes more than a second, maybe two at worst, except for this update query on this one table.

I'm at a total loss as to why this one table, in each of two different databases, is having this issue. Any thoughts?

This is SQL Server 2008 Workgroup Edition, running on Windows Server 2008 R2 Standard 64-bit. 12 GB RAM (I know WGE maxes out at 4 GB), dual hex-core processors. It's not like it's a P2! haha

Thanks so much for any advice you may be able to offer!

EDIT: Cleared out all rows except for what I NEEDED in this table - 106 rows, 0.180 MB - and the above update query has now been running for 15 minutes, and still counting. I have no index on this table at this particular moment, but 15 minutes to scan 106 rows?!?!

EDIT 2: A select statement for this same row (referencing the same row as the update query is updating) -
SELECT * FROM Integrations_Queue WHERE QueueID=28723


takes 0 seconds, according to query analyzer.

It's not an issue with select statements, just an issue with update and delete. And inserts work great, as well.
Question by:aaron900

Expert Comment

ID: 37790089
<< but 15 minutes to scan 106 rows?!?!>>
Looks like waiting for locks to me.

Accepted Solution

jogos earned 500 total points
ID: 37790758
The key is to know where the time goes.

Execution plan -> though it's a straightforward command on a small table
SQL Profiler -> see what is high: CPU, reads, writes, elapsed (a big gap between CPU and elapsed could indicate locks)
DMVs -> how to see if there is blocking
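For instance, one quick way to check for blocking is to query the DMVs from a second session while the slow UPDATE is running (a sketch - these DMVs are standard in SQL Server 2005 and later, but adapt the filtering to your environment):

```sql
-- Run from a second session while the slow UPDATE is executing.
-- Shows each blocked request, what it is waiting on, and who is blocking it.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time  AS wait_time_ms,
       r.command,
       t.text       AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

If the UPDATE shows up here with a LCK_M_* wait type, the time is going to lock waits rather than to the work of the update itself.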

But with your update you fill 2 columns with a value that previously was NULL, making the record longer; this can generate 'reorganisations' (page splits) if the pages don't have enough free space. Since you will do this action regularly, it's better to make sure there is enough free space so the update doesn't encounter a full page.
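As a sketch of that advice, rebuilding the indexes with a lower fill factor leaves free space on each page for rows to grow in place (the value 80 here is illustrative, not a recommendation from the thread - tune it to your update pattern):

```sql
-- Rebuild all indexes on the table, leaving ~20% free space per page
-- so in-place updates that widen a row are less likely to split pages.
ALTER INDEX ALL ON dbo.Integrations_Queue
REBUILD WITH (FILLFACTOR = 80);
```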


Author Closing Comment

ID: 37804165
I'm not 100% certain I have it totally resolved, but I did rework the table and the logic a little to ensure there were no NULLable columns - your logic makes a ton of sense. So far it's been running fine for a few days. I thank you for your help and hope it continues to run fine!
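A sketch of what such a rework might look like - the DEFAULT sentinel values below are hypothetical, not from the thread; the idea is that with NOT NULL columns every row is written at its full width on insert, so a later UPDATE never needs to grow the record:

```sql
-- Hypothetical rework: no NULLable columns, so rows are written at
-- final size on INSERT and UPDATEs stay in place.
CREATE TABLE [dbo].[Integrations_Queue](
    [QueueID]       [bigint] IDENTITY(1,1) NOT NULL,
    [IntegrationID] [tinyint]       NOT NULL DEFAULT 0,
    [RecID]         [bigint]        NOT NULL DEFAULT 0,
    [QueuedAt]      [smalldatetime] NOT NULL DEFAULT GETUTCDATE(),
    [SubmittedAt]   [smalldatetime] NOT NULL DEFAULT '1900-01-01', -- sentinel: "not yet submitted"
    [IsError]       [tinyint]       NOT NULL DEFAULT 0,
    [XMLData]       [varchar](max)  NOT NULL DEFAULT '',
    PRIMARY KEY CLUSTERED ([QueueID] ASC)
);
```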

Expert Comment

ID: 37805015
No nullable columns will not solve a problem of growing record size. If an nvarchar(100) whose content is 1 character changes to 80 characters, the 'var' in varchar means the length will grow. With fixed datatypes (char, datetime, int) that does not happen; their size is fixed.

And there I had overlooked the datatypes of your 2 columns - they look like fixed types.

In fact, my advice was:

1) find where the time goes (measure, measure, measure)
2) understand why time is spent there
3) tackle the problem that takes the time, one change at a time
4) measure after each step whether it improved as expected
5) still not satisfied after this change? then go back to 1) for the next change

If you don't follow this, it's possible you make 5 changes where 2 make no difference, 2 are worse for performance, and only 1 improves it. You then judge by the cumulative improvement, still don't know which of the 5 changes did the trick, and miss that the result could have been even better had you left out the 2 bad changes.

And when measuring you must be aware of cached plans, the buffer cache, other activity at that moment, the health of indexes/statistics...
Yep, nobody said performance tuning is easy. But when you start by measuring and evaluating one step at a time, it becomes predictable.
