SQL Server 2000: very long-running DELETE query

I have to move some data from a main reporting table (~100M rows) to an archive server (a few million rows each, going to different target databases).

I wrote a query that, for every 5000 rows matching my criteria, copies the data (selects and inserts into the archive table) and then deletes the data from the source table.
While each 5000-row select into the temporary table takes about 5 minutes, the last step of the transaction, the DELETE, took 8 hours for only 25,000 rows (5 loops).
Since I have to migrate ~20 million rows by the end of the month, I have a serious problem.

First, I create a temp table (on the destination server, as the query runs from the destination server):
	create table #tran_nr(nr int not null)
	create unique clustered index ix_1_temp_nr on #tran_nr(nr)


and insert data into it (~5 minutes for 5k rows).

Then, after copying, I run the following:
set @sql=N'delete from ' + @src_table_trans +
	N' where tr_tran_nr in (select nr from #tran_nr)'
EXEC sp_executesql  @sql


And that delete takes way too long.

On the source table (from which I am deleting) there are a lot of indexes. In particular, the tran_nr column is the primary key and the clustered index.

Are all the indexes on the source table updated when deleting? How can I speed up the delete operation?

I cannot simply copy @src_table_trans and truncate the source, etc., because the table is on a production reporting server and is used every 5 minutes by several jobs.

Source and destination tables are on different servers (it's a distributed transaction via DTC); however, the storage disks are on the same LUN (SAN).

EDIT: while deleting, disk I/O on the machine hosting the source table goes to 100%, but the transaction log isn't growing at all.
DavidMorrison Commented:
Hi Anna,

OK, the first thing that jumps out at me is that an insert of 5k rows takes 5 minutes! Since you're inserting into a temp table, this goes into tempdb. Have you made sure your tempdb is optimized and has enough disk, etc.?

If you can, try inserting 5k rows into a temp table locally (i.e. not between servers), as it could simply be a network issue.

As for the deletes: you are deleting from a fairly large table, and if it has to scan that table for every row it needs to remove (which it sounds like it is doing), it's going to take quite some time.

What you need is an index on tr_tran_nr in the source table, as this will help the process find each record.
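As a sketch (the table name here is hypothetical; substitute whatever @src_table_trans actually holds), such an index could be created like this:

```sql
-- Hypothetical example: replace dbo.my_trans_table with the real
-- table name stored in @src_table_trans.
-- A nonclustered index on the column used in the DELETE's WHERE
-- clause lets SQL Server seek to the target rows instead of scanning.
CREATE NONCLUSTERED INDEX ix_trans_tr_tran_nr
    ON dbo.my_trans_table (tr_tran_nr)
```

Bear in mind the index itself then has to be maintained by the delete, so you'd weigh it against the 11 existing nonclustered indexes.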

Also, thinking about it, the delete will have a fairly large dependency on tempdb as well, due to the way SQL Server deals with the Halloween problem.

So in summary: check your tempdb. Make sure it has enough disk, isn't limited to a small area, isn't fragmented, etc. (there are lots of blogs on tuning tempdb). Also check your network speeds, because if I'm correct, when you run everything from the destination server there will be a fair amount of network I/O between the two servers.

Can you post the .sqlplan file for us to look at? (If you change the extension to .txt it'll let you upload it here.)


Do you have many indexes on the @src_table_trans table?

If your server is not fast and you have many indexes, you may get just 1 row per second when deleting rows.

Is your delete job being blocked by other processes?
Use sp_whoisactive to check if so.
If you haven't got that sproc, you can find it here: http://sqlblog.com/blogs/adam_machanic/archive/2009/03/30/who-is-active-v8-40-now-with-delta-power.aspx
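Note that sp_WhoIsActive targets SQL Server 2005 and later; on a SQL Server 2000 instance like this one, the built-in sp_who2 gives a cruder view of blocking. A minimal check while the slow DELETE is running:

```sql
-- Run while the DELETE is executing. A non-empty BlkBy column
-- for the delete's SPID means another session is blocking it.
EXEC sp_who2

-- To see the last statement a blocking SPID submitted
-- (57 here is just a placeholder SPID from the sp_who2 output):
DBCC INPUTBUFFER (57)
```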

Best regards,
Anna_R (Author) Commented:
I do have a lot of indexes on @src_trans_table (11 nonclustered indexes and the clustered one mentioned above).

The server is a virtual machine on a VMware ESX host with a Xeon E7420 CPU, running Windows 2003 Enterprise Edition, 5GB RAM (PAE), SQL Server 2000 SP4 Enterprise Edition.

The SANs are as fast as possible, and both the source and destination database servers are on the same SAN, on the same LUN.

Are you sure you need all the indexes when you delete the data?

I use http://www.karaszi.com/SQLServer/util_sp_indexinfo.asp to see whether my indexes are used at all.

I do not like virtual servers, but that may just be me.
You should be able to delete 100+ rows/second.

Take a look at the execution plan, to see if there are any surprises.

Best regards,
Also, you may want to take a look at http://www.brentozar.com/archive/2011/09/indexing-for-deletes/ including the notes.
Eugene Z Commented:
...you can try to delete the archived records in smaller batches:
instead of 5000, do 5 to 50.
Make sure there is no blocking, your server is not at 100% CPU, and no other heavy processes (e.g. reindexing) are running at the same time...
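On SQL Server 2000 the usual way to cap a delete at a small batch is SET ROWCOUNT (DELETE TOP (n) arrived in later versions). A sketch of the loop, with a hypothetical table name standing in for @src_table_trans and Eugene's batch size of 50:

```sql
-- Delete in batches of 50 until no matching rows remain.
-- SET ROWCOUNT limits how many rows each DELETE touches.
SET ROWCOUNT 50
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.my_trans_table   -- placeholder for @src_table_trans
    WHERE tr_tran_nr IN (SELECT nr FROM #tran_nr)

    IF @@ROWCOUNT = 0 BREAK          -- nothing left to delete
END
SET ROWCOUNT 0                       -- restore the default (no limit)
```

Small batches keep each transaction short, which reduces lock duration and log pressure on a busy production table.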

It looks like your DB is in full recovery mode; this can slow things down as well. The same goes for referential integrity: you may have FK dependencies that you might want to disable or remove for the duration of this operation.
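You can confirm the recovery model quickly; DATABASEPROPERTYEX is available on SQL Server 2000 (the database name below is a placeholder):

```sql
-- Returns FULL, SIMPLE, or BULK_LOGGED
SELECT DATABASEPROPERTYEX('my_reporting_db', 'Recovery') AS recovery_model
```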

Anna_R (Author) Commented:
I have dropped all the indexes except for the clustered one.
There are no FK on this table.
No other processes are using this table now.

DBCC SHOWCONTIG says the clustered index fragmentation is 0.02%.

The execution plan shows an Index Scan at 65% and a Clustered Index Delete at 29% of the cost.

So, a disk issue?
Eugene Z Commented:
&lt;There are no FK on this table.
That does not mean there are no FKs from other tables referencing this one (e.g. order -> order details).

Before you proceed with the delete, make sure the DB has been reindexed and that update statistics has been run as well.
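On SQL Server 2000 that would look roughly like this (table name is again a placeholder for the real one behind @src_table_trans):

```sql
-- Rebuild all indexes on the table (SQL Server 2000 syntax),
-- then refresh its statistics with a full scan.
DBCC DBREINDEX ('dbo.my_trans_table')
UPDATE STATISTICS dbo.my_trans_table WITH FULLSCAN
```

Note that DBCC DBREINDEX takes an exclusive lock on the table, so on a production reporting server this belongs in a maintenance window.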

Check whether the tr_tran_nr column of the @src_table_trans table below has an index:

set @sql=N'delete from ' + @src_table_trans +
     N' where tr_tran_nr in (select nr from #tran_nr)'
EXEC sp_executesql  @sql

Also, try using a regular permanent table instead of a tempdb #temp table.

Also, try small deletes (50)...

Did you check your server for blocking during the delete?


You can also DTS/bcp the data you need into a new table, rename the old table to ..._old, and rename the new table to the original name.
This can be faster, but it may have to be done off-hours.
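The rename swap can be sketched with sp_rename (all names here are hypothetical; the jobs that hit the table every 5 minutes would need to be paused during the swap):

```sql
-- my_trans_table is the live table; my_trans_table_new holds the
-- DTS/bcp-copied rows you want to keep. Two quick renames swap them.
EXEC sp_rename 'my_trans_table', 'my_trans_table_old'
EXEC sp_rename 'my_trans_table_new', 'my_trans_table'
```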
First of all, replace the clustered index on the temp table #tran_nr with a nonclustered index, and make sure the field tr_tran_nr in the table you delete from is the primary key.

With the temp table indexed on nr, and the table you delete from having tr_tran_nr as its primary key (or at least indexed), try this instead of your query:

set @sql=N'delete from ' + @src_table_trans +
      N' where exists (select 1 from #tran_nr as t where ' + @src_table_trans +
      N'.tr_tran_nr = t.nr)'
EXEC sp_executesql @sql

Hi Anna,

5 minutes for 5000 rows!!!
Then you do not want to hear about our server. It is a big box, but we get 10,000+ rows per second when inserting into a table with 4 billion rows, 1 clustered index, and 4 non-clustered indexes.

When we delete a large number of rows, we insert the rows we want to keep into a new table and do 2 renames when we're ready for the delete to go live. The only downside is that it takes a bit of disk space.

Did you try my other suggestions?
Anna_R (Author) Commented:
I have moved the target table to the same server as the source table. Obviously, the SAN and network were the problem.
Question has a verified solution.