Solved

SQL Server 2008 - Small table, horrible performance on update queries

Posted on 2012-03-30
Last Modified: 2012-04-04
I have several SQL Server 2008 databases - all are in the 1GB to 3GB range. All run beautifully smooth in performance, except for update queries on two tables (very similar), in two different databases.

Here's an example of the issue.

The table:
CREATE TABLE [dbo].[Integrations_Queue](
	[QueueID] [bigint] IDENTITY(1,1) NOT NULL,
	[IntegrationID] [tinyint] NULL,
	[RecID] [bigint] NULL,
	[QueuedAt] [smalldatetime] NULL,
	[SubmittedAt] [smalldatetime] NULL,
	[IsError] [tinyint] NULL,
	[XMLData] [varchar](max) NULL,
 CONSTRAINT [PK_Integrations_Queue] PRIMARY KEY CLUSTERED 
(
	[QueueID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

GO


This has 4,938 rows, consuming 9.461 MB.

Once this table gets in the realm of anywhere from about 4,800 rows to 7,500 rows, update queries against the QueueID primary key take like 4 minutes. Example query that has horrible performance:


UPDATE Integrations_Queue
	SET SubmittedAt=GETUTCDATE(), IsError=0 
		WHERE QueueID=23923


If I clear out most of the rows, delete the index, and re-add the index, all is well until I get back up to a higher number of rows. If I simply rebuild the index, or even reorganize it (even to the point that it has nearly 0% fragmentation), an update query will still take several minutes. And I don't think the index is even really the issue - I just issued a query a while ago after archiving 4,800 records into a different table, and the following query took > 10 minutes to delete all 4,832 rows:

DELETE FROM Integrations_Queue WHERE SubmittedAt IS NOT NULL AND IsError=0



Again, the rest of these two databases run beautifully - many tables with as many as 1,000,000 rows, stored procedures joining way too many tables. It doesn't appear to be an issue with the databases themselves, the drives they reside on, or a lack of memory - no query in this database takes more than a second, maybe two at the worst, except for this update query on this one table.

I'm at a total loss as to why this one table, in each of two different databases, is having this issue. Any thoughts?

This is SQL Server 2008 Workgroup Edition, running on Windows Server 2008 R2 Standard 64-bit. 12 GB RAM (I know WGE maxes out at 4GB), dual hex-core processors. It's not like it's a P2! haha

Thanks so much for any advice you may be able to offer!

EDIT: I cleared out all rows except what I NEEDED in this table - 106 rows, 0.180 MB - and the above update query has now been running for 15 minutes, and still counting. I have no index on this table at this particular moment, but 15 minutes to scan 106 rows?!?!

EDIT 2: A select statement for this same row (referencing the same row as the update query is updating) -
SELECT * FROM Integrations_Queue WHERE QueueID=28723



takes 0 seconds, according to query analyzer.

It's not an issue with select statements, just an issue with update and delete. And inserts work great, as well.
Question by:aaron900
Expert Comment

by:jogos
<< but 15 minutes to scan 106 rows?!?!>>
Looks like waiting for locks to me.
Accepted Solution

by:
jogos earned 500 total points
The key is to know where the time goes:

Execution plan - though it's a straightforward command on a small table.
SQL Profiler -> then see what is high: CPU, reads, writes, elapsed (a big gap between CPU and elapsed could indicate locks).
DMVs -> http://msdn.microsoft.com/en-us/library/ms188068.aspx

How to see if there is blocking: http://www.mssqltips.com/sqlservertip/2429/how-to-identify-blocking-in-sql-server-2005-and-2008/
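For example, a quick check for blocked requests via the DMVs while the slow update is running (a sketch - which columns to pull is a matter of taste):

```sql
-- List requests that are currently blocked, who is blocking them,
-- what they are waiting on, and the statement text involved
SELECT r.session_id,
       r.blocking_session_id,  -- the session holding the lock
       r.wait_type,
       r.wait_time,            -- milliseconds spent waiting so far
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

If your UPDATE shows up here with a nonzero blocking_session_id and a wait_type like LCK_M_U or LCK_M_X, the time is going to lock waits, not to scanning rows.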

But with your update you fill 2 columns that previously were NULL, making the record longer. This can cause page splits ('reorganisations') if the pages don't have enough free space. Since you will regularly do this action, it's better to make sure there is enough free space so the update doesn't encounter a full page:

WITH (FILLFACTOR = fillfactor)
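For example, rebuilding the clustered PK with free space left on each leaf page (80 here is only an illustrative value - tune it to how much your rows grow):

```sql
-- Leave 20% free space on each leaf page so updates that lengthen
-- rows can happen in place instead of forcing page splits
ALTER INDEX PK_Integrations_Queue
    ON dbo.Integrations_Queue
    REBUILD WITH (FILLFACTOR = 80);
```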
Author Closing Comment

by:aaron900
I'm not 100% certain I have it totally resolved, but I did rework the table and the logic a little bit to ensure there were no NULLable columns - your logic makes a ton of sense. So far it's been running fine for a few days. Thank you for your help, and I hope it continues to run fine!
Expert Comment

by:jogos
Making columns non-nullable will not, by itself, solve a problem of growing record size. If the content of an nvarchar(100) changes from 1 character to 80 characters, the 'var' in varchar means the stored length will grow. With fixed datatypes (char, datetime, int) that does not happen - their size is fixed.

And there I overlooked the datatypes of your 2 columns: they look like fixed types.
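A quick illustration of the difference, runnable in any query window (a sketch; the variable names are just for demonstration):

```sql
-- A varchar's stored length tracks its content...
DECLARE @v varchar(100);
SET @v = 'a';
SELECT DATALENGTH(@v) AS varchar_bytes;        -- 1
SET @v = REPLICATE('a', 80);
SELECT DATALENGTH(@v) AS varchar_bytes_after;  -- 80

-- ...while a fixed-size type always occupies the same space
DECLARE @d smalldatetime;
SET @d = GETUTCDATE();
SELECT DATALENGTH(@d) AS smalldatetime_bytes;  -- always 4
```

This is why updating your smalldatetime and tinyint columns from NULL to a value was unlikely to be growing the rows in a way that forces page splits.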


In fact my advice was:

1) find where the time goes (measure, measure, measure)
2) understand why time is spent there
3) tackle the problem that takes the time, one change at a time
4) measure after each step whether it improved as expected
5) still not satisfied after this change? then go back to 1) for the next change

If you don't follow this, it's possible you make 5 changes where 2 don't make a difference, 2 are worse for performance, and only 1 improves it. You then judge by the cumulative improvement and still don't know which of the 5 changes did the trick - and the result could have been even better had you left out the 2 bad changes.

And when measuring you must be aware of cached plans, the buffer cache, other activity at that moment, the health of indexes/statistics...
Yep, nobody said performance tuning is easy. But when you start by measuring and evaluating one step at a time... it becomes predictable.
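For example, on a test box you can level the playing field between timing runs like this (a sketch - never run the DBCC commands on a production server):

```sql
-- Clear cached plans and data pages so each timing run starts cold
DBCC FREEPROCCACHE;
DBCC DROPCLEANBUFFERS;

-- Then have SQL Server report timings and I/O per statement
SET STATISTICS TIME ON;  -- CPU time vs elapsed time
SET STATISTICS IO ON;    -- logical/physical reads per table
```

With STATISTICS TIME on, a statement whose elapsed time dwarfs its CPU time is waiting on something (locks, I/O) rather than doing work - which matches the lock-wait theory above.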