

Does a clustered index optimize an update process?

Posted on 2006-10-30
Medium Priority
Last Modified: 2012-05-05
I'm trying to understand how SQL Server will actually access the data during an update.  I've read that clustered indexes are fast for reading ranges, but individual reads do not have a performance boost over a non-clustered index.  In an update, does the "read head" (or whatever) move between the source and target for each record regardless of clustering, so that there is no performance gain over a regular index?

Background: I'm developing a system with retiree information.  I get an updated Retiree file each month from HR's mainframe with updated address etc.  HR uses RetireeID as their key field which is a composite of several numbers and letters.  I update my Retiree table which contains the same fields as the HR file.  I'm proposing using the RetireeID as the Primary key on my table as a clustered index rather than an identity field.

Note: my system does not capture data through a GUI.
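For concreteness, here is a sketch of what I'm proposing. All of the column names other than RetireeID are made up for illustration:

```sql
-- Proposed Retiree table with HR's composite key as the clustered primary key
-- (illustrative columns; the real HR file has more fields)
CREATE TABLE dbo.Retiree (
    RetireeID  varchar(20)  NOT NULL,  -- HR's composite of numbers and letters
    LastName   varchar(50)  NOT NULL,
    FirstName  varchar(50)  NOT NULL,
    Address1   varchar(100) NULL,
    City       varchar(50)  NULL,
    State      char(2)      NULL,
    Zip        varchar(10)  NULL,
    CONSTRAINT PK_Retiree PRIMARY KEY CLUSTERED (RetireeID)
);
```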

Question by: FatalErr
LVL 10

Expert Comment

ID: 17838820
How many records are you talking about? In general, it is a good idea to use a clustered index to support the most common queries.

There is an update hit, especially on large updates. With a clustered index, the hit is due to the fact that the data is organized along the lines of the index columns, and that physical ordering has to be maintained. With a nonclustered index, SQL Server inserts into the table and then updates the index - two separate acts, as you surmise.

If you are replacing all of the records in a table, drop all indexes before inserting, then re-create the indexes afterward. That's a general rule.
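That pattern looks something like this for your monthly full refresh. Table, index, and file names here are hypothetical - substitute your own:

```sql
-- Monthly full replace: drop indexes, bulk load, re-create indexes
-- (illustrative names; adjust the file path and terminators to your HR feed)
DROP INDEX Retiree.IX_Retiree_Zip;          -- SQL 2000 syntax: table.index

TRUNCATE TABLE dbo.Retiree;

BULK INSERT dbo.Retiree
FROM 'D:\feeds\retiree_monthly.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

CREATE INDEX IX_Retiree_Zip ON dbo.Retiree (Zip);   -- rebuild afterward
```

Note that the clustered primary key stays in place in this sketch; if the load is large enough, it can be worth dropping and re-creating that as well.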

But how many records are you talking about? And what's the hardware platform? As I often quote: "premature optimization is the root of all evil" - attributed to Donald Knuth (who credited Tony Hoare).

Author Comment

ID: 17839202
I've got about 250K records.
I don't know how much ram.  At least a gig.  It is dual processor.  I don't know more right now.

Inserts are slowed by a clustered index. But I'm focusing just on the update process.

The cluster would really increase performance if multiple records are updated sequentially rather than one at a time. I don't know if that's what happens.
LVL 10

Accepted Solution

AaronAbend earned 2000 total points
ID: 17839449
I have been tuning SQL Server for 20 years and I could not make that statement about update performance. I have always found that my assumptions about "what the database should do fast" or slow are consistently inaccurate to the point where I only rely on carefully run benchmarks.

The impact of the cluster on performance has nothing to do with sequentially updating the records, but rather how much time the optimizer has to spend finding those records prior to the update.  So, as long as the columns in the where clause are indexed, you should get great performance. And the type of index will not make much of a difference. Most of my current understanding of SQL is based on actual benchmarks on a stable SQL 2000 system that had 30 million to 500 million records.  As far as I know, records are updated one at a time regardless of whether there is a cluster or not.  Remember that 99% of the operations you do in SQL are going to happen in memory, not on the disk. The statement written to the disk at the time of commit is the update statement itself, not the data being updated.  
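To make that concrete for your case: the monthly merge keyed on RetireeID might look like the statement below (the staging table name is made up). As long as RetireeID is indexed on both sides - clustered or not - the optimizer can seek to each row instead of scanning:

```sql
-- Apply the monthly HR file (loaded into a staging table) to the live table.
-- dbo.RetireeStaging is a hypothetical name for the imported HR file.
UPDATE r
SET    r.Address1 = s.Address1,
       r.City     = s.City,
       r.State    = s.State,
       r.Zip      = s.Zip
FROM   dbo.Retiree r
JOIN   dbo.RetireeStaging s
  ON   s.RetireeID = r.RetireeID;   -- indexed on both sides: seeks, not scans
```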

A tremendous amount of performance in updates will relate to your log writing. Get an extra controller to write the log file and updates will fly. If you have disk contention between the log writer and the process that is looking for your records, you will probably see slower performance.  

I just did a little benchmark... created 200,000 test records on my P4 2G duo with 2G ram...
an update statement that updated 16,000 of the records (about 8%)

newly created table with no indexes at all
   update column1 from 'A' to 'B'  6 seconds
create clustered index on column1
   update column1 from 'A' to 'B'  71 seconds (wow! didn't expect that! - did the whole thing a second time to make sure - 67 seconds!!)
   update column2 where column1 = 'B' 6 seconds
   Rerun for a different column without clearing the buffer pool - unmeasurable (instantaneous)
create nonclustered index on column1 (had to drop cluster of course)
   update column1 from 'B' to 'A'  8 seconds
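The steps above can be sketched roughly as follows (table and index names are made up; the timings in the comments are the ones reported above, from my machine only):

```sql
-- Heap, no indexes at all:
UPDATE dbo.TestTable SET column1 = 'B' WHERE column1 = 'A';   -- ~6 sec

-- Clustered index on the column being updated forces physical reordering:
CREATE CLUSTERED INDEX CIX_Test ON dbo.TestTable (column1);
UPDATE dbo.TestTable SET column1 = 'A' WHERE column1 = 'B';   -- ~70 sec
UPDATE dbo.TestTable SET column2 = 'X' WHERE column1 = 'A';   -- ~6 sec (seek; key unchanged)

-- Nonclustered index on the same column:
DROP INDEX TestTable.CIX_Test;
CREATE INDEX IX_Test ON dbo.TestTable (column1);
UPDATE dbo.TestTable SET column1 = 'B' WHERE column1 = 'A';   -- ~8 sec
```

The expensive case is updating the clustering key itself; updating other columns through a clustered-index seek was as fast as any other access path.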

So you see - it is really hard to predict, even for an expert. Do the benchmarks, and once you have a performance problem, use Query Analyzer to figure out what might help. A great resource, besides EE, is http://www.sql-server-performance.com

LVL 10

Expert Comment

ID: 17839469
One error in my post - the queries were updating 160,000 records - almost 80% of the database! Not typical, of course. The "shape" of your queries will be a factor in deciding what is important.

LVL 29

Expert Comment

ID: 17839964
1. Indexes will significantly improve seek times when trying to find data.
2. They will slow performance when doing inserts and updates (as SQL has to maintain the index as well as the data).
3. Sequential clustered inserts are OK (e.g. 1,2,3,4,5).
4. Non-sequential inserts on data pages with high fill factors will result in page splits (think of the page as where the data is stored, and an insert in the middle causes SQL to shuffle the data around to fit the new data in) - this can be a significant overhead on busy servers.
5. Non-clustered covering indexes (i.e. indexes that contain all the data that you are trying to retrieve) will improve performance (as SQL only has to go down to the index, or leaf level, to retrieve the data, and does not have to touch the actual data page).
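Points 4 and 5 can be combined in one index definition. The sketch below uses hypothetical names against the Retiree table: a composite nonclustered index that covers a name lookup (on SQL 2000, covering means putting all the queried columns in the index key), built with a lower fill factor to leave free space on each page and reduce splits:

```sql
-- Covering index: the query below is answered entirely at the index leaf level.
-- FILLFACTOR = 80 leaves 20% free space per page to absorb new keys without splits.
CREATE INDEX IX_Retiree_Cover
ON dbo.Retiree (RetireeID, LastName, FirstName)
WITH FILLFACTOR = 80;

-- Covered query: no lookup to the data page is needed.
SELECT LastName, FirstName
FROM   dbo.Retiree
WHERE  RetireeID = 'AB12345';
```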

Author Comment

ID: 17843673
Thanks for the good info.  

Aaron - You said get an extra controller to write the log file.  Will SQL Server automatically do that, or do you have to direct it in some way?  Thanks,
