p_love

asked on

Can I drop indexes during a large (60 million row) SQL 2005 insert?

I am doing an insert from one server to another. The source table has 60 million records, and it is taking forever. I need to drop the indexes (there are 10 of them) on the destination table, but I am only about 19 million records into the insert. Is this possible? That is, can I open a separate session in SQL Server Management Studio and drop the indexes?
ASKER CERTIFIED SOLUTION
SQL_SERVER_DBA
p_love

ASKER

Thanks...

Will it help?!
SOLUTION
Aneesh
p_love

ASKER

Hi,

Having waited 8 hours, I am reluctant to start again. I have a finite window to do this in. I did a test run on Sunday and it took only 4 hours. I have a job starting in 8 hours that is fairly dependent on this data being present.
I don't think it will help.
Is the data coming from an existing table or from a file(s)?

If file(s), you should use BULK INSERT, as that is the fastest method for loading large volumes of data.

Dropping the indexes would certainly have helped before the load started. I'm not sure whether it will work now. You could run a test, and if that doesn't cause problems, try it live.

Also, if there are any triggers on the table, disabling them, if possible, would also speed up the load.
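A minimal sketch of that approach, assuming a hypothetical destination table dbo.DestTable and a pipe-delimited export file; the table name, file path and options are placeholders rather than anything from this thread:

-- Disable any triggers on the destination table for the duration of the load.
ALTER TABLE dbo.DestTable DISABLE TRIGGER ALL;

-- Load the exported file. BATCHSIZE commits in chunks so the transaction log
-- stays manageable; TABLOCK allows minimal logging when the usual preconditions
-- (e.g. simple or bulk-logged recovery model) are met.
BULK INSERT dbo.DestTable
FROM 'D:\exports\SourceTable.dat'
WITH
(
    FIELDTERMINATOR = '|',
    ROWTERMINATOR   = '\n',
    BATCHSIZE       = 100000,
    TABLOCK
);

-- Re-enable the triggers once the load has finished.
ALTER TABLE dbo.DestTable ENABLE TRIGGER ALL;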
Does DTS load data faster?
No; BULK INSERT is much faster than DTS.
Is the server-to-server transfer the problem?
If it is, can't you just back up and restore the database onto the target server and then do the bulk insert on the same server?
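Roughly, that idea looks like the sketch below; the database, table, file and logical file names are invented for illustration and would need to match the real backup:

-- On the source server: take a full backup.
BACKUP DATABASE SourceDB
TO DISK = 'D:\backups\SourceDB.bak'
WITH INIT;

-- Copy the .bak file to the target server, then restore it under a new name.
RESTORE DATABASE SourceDB_Copy
FROM DISK = 'E:\backups\SourceDB.bak'
WITH MOVE 'SourceDB'     TO 'E:\data\SourceDB_Copy.mdf',
     MOVE 'SourceDB_log' TO 'E:\data\SourceDB_Copy_log.ldf';

-- The 60-million-row load then runs entirely on the target server instead of
-- pulling every row across the network link.
INSERT INTO TargetDB.dbo.DestTable WITH (TABLOCK)
SELECT *
FROM SourceDB_Copy.dbo.SourceTable;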
p_love

ASKER

Hi,

One server is in Sweden and the other is in the US.  I am using OpenQuery.  It took 4 hours on a test run on Sunday, but 9 hours today.  I have cancelled and will investigate further.  I may do a RAW export via SSIS, then do a RAW import.

Thanks
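(For context, an OPENQUERY-based cross-server insert presumably looks something like the sketch below; the linked server SWEDEN_SRV and the table and column names are invented for illustration. Every row the remote query returns has to cross the WAN link before it is inserted locally, which is why the network matters so much here.)

-- Hypothetical illustration of the current cross-server load; SWEDEN_SRV,
-- dbo.DestTable and the column names are placeholders.
INSERT INTO dbo.DestTable (Col1, Col2, Col3)
SELECT Col1, Col2, Col3
FROM OPENQUERY(SWEDEN_SRV, 'SELECT Col1, Col2, Col3 FROM SourceDB.dbo.SourceTable');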

<<I am doing an insert from one server to another. The source table has 60 million records, and it is taking forever. I need to drop the indexes (there are 10 of them) on the destination table, but I am only about 19 million records into the insert. Is this possible?>>
Use BULK INSERT...and make sure you have no network bottlenecks...

Hope this helps...
p_love

ASKER

<<Use BULK INSERT...and make sure you have no network bottlenecks...>>

Hi,

I appreciate this. The question, however, wasn't how I could do this better; it was: can I drop the indexes "during" a large insert? My only worry with BULK INSERT is that I would have to do a bulk export first, and I didn't fancy that on 60 million records...
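(For what it's worth, the export side is a single bcp command rather than a large scripting job. A rough sketch, with server, database, path and table names that are placeholders only:)

-- On the source server, from a command prompt (not T-SQL): export the table
-- in native format with bcp.
--   bcp SourceDB.dbo.SourceTable out D:\exports\SourceTable.dat -n -S SWEDEN_SRV -T

-- On the destination server: load the native-format file.
BULK INSERT dbo.DestTable
FROM 'E:\imports\SourceTable.dat'
WITH
(
    DATAFILETYPE = 'native',
    BATCHSIZE    = 100000,
    TABLOCK
);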
<<can I drop the indexes "during" a large insert?>>
Yes. But it is a tedious process to drop them (you must drop the constraints too) and recreate them: you will have to do a lot of scripting for this (see the sketch below). Indexes are *not* the primary reason your inserts take forever... Replacing *inserts* with *bulk inserts* is easy to do, and you don't have to worry about indexes.

Hope this helps...
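A rough outline of what that drop/recreate cycle involves; dbo.DestTable and the index and constraint names below are hypothetical:

-- List the existing indexes so each can be scripted out before it is dropped
-- (in Management Studio: right-click the index > Script Index as > CREATE To).
SELECT i.name, i.type_desc, i.is_unique, i.is_primary_key
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('dbo.DestTable')
  AND i.type > 0;          -- exclude the heap row

-- Primary key and unique indexes are backed by constraints and must be
-- dropped as constraints...
ALTER TABLE dbo.DestTable DROP CONSTRAINT PK_DestTable;

-- ...while plain nonclustered indexes are dropped directly.
DROP INDEX IX_DestTable_Col2 ON dbo.DestTable;

-- After the load finishes, recreate everything from the saved scripts.
ALTER TABLE dbo.DestTable ADD CONSTRAINT PK_DestTable PRIMARY KEY CLUSTERED (Col1);
CREATE NONCLUSTERED INDEX IX_DestTable_Col2 ON dbo.DestTable (Col2);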