cheluto2

asked on

Optimize large delete and insert operations with large rows

I have a table with around 4.8 million records and about 65 columns per record.  On a daily basis, I need to upload data from another source into this table.  Data in the original source can change up to 13 months back, so I bring only the latest 13 months from the source into a holding table (replacing the data in it every day).  I then need to add/replace data in the "production" table with the new data from the holding table.  Because there are so many columns and any of them can change, there is no easy way to determine which rows have changed and update only those, so I simply delete all rows in the production table that fall within the date range present in the holding table and then insert everything from the holding table into it.  Here's the problem.

Deleting the data from the production table takes a long time (over 20 minutes).  On average, about 760,000 records need to be deleted from this 4.8 million record table, which has several indexes.  Is there a way to delete records faster or without logging?  Once deleted, I don't need to get them back.  A partial truncate would be ideal, but that does not exist.  I tried copying the records to keep into another table, truncating the production table, and then inserting from that other table and from the holding table to rebuild the full set, but that was not any faster.  The delete looks something like this:

DECLARE @firstDate datetime, @lastDate datetime

SET @firstDate = (SELECT min(the_date) FROM Holding_Table)
SET @lastDate = (SELECT max(the_date) FROM Holding_Table)

--delete records in Production table that exist in holding table
DELETE FROM Production_Table
WHERE the_date BETWEEN @firstDate AND @lastDate --takes about 26 mins

--insert records from holding table into production table, which has additional fields
INSERT INTO Production_Table
SELECT *, null, null, null, null, null, null, null, dbo.getQuarter(the_date),null
FROM Holding_Table  --takes about 12 mins
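
For reference, the copy/truncate/reload attempt mentioned above looked roughly like this (Keep_Rows is an illustrative name for the interim table, not necessarily the real one, and it is assumed not to exist already):

--copy the rows that fall outside the refresh window into an interim table
SELECT *
INTO Keep_Rows
FROM Production_Table
WHERE the_date < @firstDate OR the_date > @lastDate

--truncate is minimally logged, but everything has to be reloaded afterwards
TRUNCATE TABLE Production_Table

--put the untouched history back
INSERT INTO Production_Table
SELECT * FROM Keep_Rows

--then load the refreshed 13 months from the holding table as before
INSERT INTO Production_Table
SELECT *, null, null, null, null, null, null, null, dbo.getQuarter(the_date), null
FROM Holding_Table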

Any suggestions?  I have indexes on the date fields for both tables (among others that are used for queries).

After this, I run update queries that fill in the null values inserted above, based on joins with other tables; these are also taking a long time.
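
One of those updates looks roughly like this (Lookup_Table, lookup_value, some_key, and extra_col1 are placeholder names, not the actual tables and columns):

--fill in one of the null columns from a joined table, only for the rows just loaded
UPDATE p
SET p.extra_col1 = l.lookup_value
FROM Production_Table p
JOIN Lookup_Table l ON l.some_key = p.some_key
WHERE p.the_date >= @firstDate AND p.the_date <= @lastDate
  AND p.extra_col1 IS NULL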

Thanks!
SOLUTION
hkamal

This solution is only available to members.
EugeneZ

When did you last rebuild your indexes and update statistics? Fresh indexes can speed up your query.
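
For example, something along these lines against the production table (IX_Production_the_date is an illustrative index name):

--rebuild a specific index, or all indexes on the table
ALTER INDEX IX_Production_the_date ON Production_Table REBUILD
ALTER INDEX ALL ON Production_Table REBUILD

--refresh statistics so the optimizer has current row-distribution data
UPDATE STATISTICS Production_Table WITH FULLSCAN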
ASKER CERTIFIED SOLUTION

This solution is only available to members.
cheluto2

ASKER

Thanks for the responses, and sorry it took me so long to get back to you.

hkamal:  I changed the condition to use >= and <=, but I did not see a noticeable difference.  Thanks for the tip, though.  The table already has a clustered index on the date field, as well as other indexes on other fields.
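
The changed condition now reads like this (same variables as in the original post):

--delete with explicit bounds instead of BETWEEN
DELETE FROM Production_Table
WHERE the_date >= @firstDate
  AND the_date <= @lastDate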

ScottPletcher:  I just added the recovery-mode change you recommended.  I will know by tomorrow (after the process runs tonight) what effect it had, but it sounds like it should be a good improvement, so thanks in advance.  We do a full backup every other day, and the data is not updated except by this one process that runs once a day, so even if we lose a day's worth of data, we can recover it easily.  I will keep you posted tomorrow.
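
Roughly, a recovery-mode switch around the nightly load looks like this (the accepted answer's exact recommendation is not shown here, so SIMPLE is assumed; BULK_LOGGED would be the other common option, and MyDatabase stands in for the real database name):

--reduce logging for the load window (SIMPLE assumed; BULK_LOGGED is the other common choice)
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE

--...run the DELETE / INSERT / UPDATE steps here...

--switch back, then take a full or differential backup to restart the log backup chain
ALTER DATABASE MyDatabase SET RECOVERY FULL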
The problem appears to be the number of indexes on the large table more than anything else.  I did apply Scott's suggestion of changing the recovery mode, and it seemed to speed things up a bit, so I am marking that as the accepted answer; I also gave part of the points to hkamal for his tip.  I will have to keep working on the indexes to get them where they need to be, using the Index Tuning Wizard and common sense.
Thanks!