Using the bcp utility on huge tables to take monthly backups

I have a very large SQL Server table which stores GPS data for more than 1,000 vehicles (more than 100 million records in three months). Each month, I manually delete data older than three months. Now I want to do this automatically, so I plan to write a SQL Job with the following steps:

1. Use the bcp utility to export a backup file to disk containing only the last three months of data
2. Truncate the table
3. Drop all the indexes on the table
4. Use the bcp shell command to reload the table from the backup file
5. Recreate all the indexes
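
Roughly, the bcp commands for steps 1 and 4 would look like this (the server, database, table, column, and file names below are placeholders for my real ones):

rem Step 1: export only the last three months of data in native format
bcp "SELECT * FROM GpsDb.dbo.VehiclePositions WHERE RecordedAt >= DATEADD(MONTH, -3, GETDATE())" queryout "D:\Backup\VehiclePositions.dat" -n -S MYSERVER -T

rem Step 4: reload the table from the file, committing every 50,000 rows
bcp GpsDb.dbo.VehiclePositions in "D:\Backup\VehiclePositions.dat" -n -S MYSERVER -T -b 50000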

My question is: is this the right way to do this? Is there any other way to optimize this?
mvcdeveloperdxb asked:
Deepak Chauhan (SQL Server DBA) commented:
Your plan is good, but you have to do some tests. What is the recovery model of your database? You can note the time and watch the performance of each test, then choose the plan that is the best option.

First test - with your plan as described above.

Second test -
1. Create a new table with the same structure.
2. Run a SELECT INTO command to copy the rows you want to keep.
3. Create the indexes.
4. Rename the tables.
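
A rough sketch of the second test (table and column names are placeholders; SELECT INTO is minimally logged under the Simple and Bulk-Logged recovery models):

-- Copy only the rows you want to keep into a new table (minimally logged)
SELECT *
INTO dbo.VehiclePositions_new
FROM dbo.VehiclePositions
WHERE RecordedAt >= DATEADD(MONTH, -3, GETDATE());

-- Create the indexes on dbo.VehiclePositions_new here, then swap the names
EXEC sp_rename 'dbo.VehiclePositions', 'VehiclePositions_old';
EXEC sp_rename 'dbo.VehiclePositions_new', 'VehiclePositions';
-- Drop dbo.VehiclePositions_old after verifying the swap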

Third test -
1. Delete data from the original table in chunks.
2. Reorg/rebuild the indexes.
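
And a sketch of the third test; deleting in fixed-size batches keeps each transaction small, so the transaction log stays small too (the batch size and names are placeholders):

-- Delete rows older than three months, 50,000 at a time
WHILE 1 = 1
BEGIN
    DELETE TOP (50000)
    FROM dbo.VehiclePositions
    WHERE RecordedAt < DATEADD(MONTH, -3, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to delete

    CHECKPOINT;  -- under the Simple recovery model, lets log space be reused
END;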

mvcdeveloperdxb (Author) commented:
Thank you Deepak, I will run the tests and let you know.
Vitor Montalvão (MSSQL Senior Engineer) commented:
I personally don't like that method at all.
If you are deleting data based on a date, I would suggest you partition the table; then you can drop only the partitions that hold data older than 3 months. This process can be easily automated and will be simple and fast.
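
A minimal sketch of the monthly purge, assuming a RANGE RIGHT partition function on the date column (the function, table, and boundary value below are placeholders); SWITCH and TRUNCATE are metadata-only operations, so they complete in seconds regardless of how many rows are involved:

-- Switch the oldest partition out to an empty staging table that has the
-- same structure, indexes, and filegroup (metadata-only, near-instant)
ALTER TABLE dbo.VehiclePositions
    SWITCH PARTITION 1 TO dbo.VehiclePositions_stage;

-- Throw the old rows away and remove the now-empty boundary point
TRUNCATE TABLE dbo.VehiclePositions_stage;
ALTER PARTITION FUNCTION pf_VehiclePositionsMonthly()
    MERGE RANGE ('20140101');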

Deepak Chauhan (SQL Server DBA) commented:
Agreed with Vitor. If you are using SQL Server Enterprise edition, you can go with data partitioning.
Scott Pletcher (Senior DBA) commented:
What indexes are on the table?  In particular, how is the table clustered?

Since you're only deleting ~1/4 of the rows in the table, I definitely wouldn't truncate the table. You can use standard DELETEs, but do them in batches to reduce the effect on the transaction log.
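
If you're not sure, a generic catalog query like this (the table name is a placeholder) will list them:

-- List the table's indexes; type_desc shows which one is CLUSTERED
SELECT name, type_desc, is_unique
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.VehiclePositions')
  AND type > 0;  -- exclude the heap entry, if any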
mvcdeveloperdxb (Author) commented:
The method suggested by Vitor is very good, but the database is already partitioned by the third-party GPS device supplier, and I think I will not be able to change the existing partition scheme. Anyway, I will check whether it is possible.

Deepak, you asked about the recovery model of the database. It is "Simple".

ScottPletcher, batch delete is a good suggestion. I will try that out as a test.
Deepak Chauhan (SQL Server DBA) commented:
OK, the recovery model is Simple, so it is already set to minimal logging. You can run the third test, deleting the records in chunks, as Scott also suggested.
mvcdeveloperdxb (Author) commented:
Deepak, do we need to drop the indexes before the batch delete? Because in your comment, it says:

Third test -
1. Delete data from the original table in chunks.
2. Reorg/rebuild the indexes.
Deepak Chauhan (SQL Server DBA) commented:
Index maintenance (reorg and rebuild) will be required after a huge amount of deletion, because the index pages will be affected. You will search for the rows on the basis of the date column, and I guess this might not be an indexed column; but since complete rows will be deleted, the index pages of all indexes will be affected.

You can check by deleting a few records and estimating the time the deletion takes.
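
As a rough guide (the index and table names are placeholders), you can reorganize lightly fragmented indexes and rebuild heavily fragmented ones; sys.dm_db_index_physical_stats will show the fragmentation percentage:

-- Moderate fragmentation (roughly 5-30%): reorganize, which runs online
ALTER INDEX IX_VehiclePositions_VehicleId ON dbo.VehiclePositions REORGANIZE;

-- Heavy fragmentation (roughly over 30%): rebuild all indexes on the table
ALTER INDEX ALL ON dbo.VehiclePositions REBUILD;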
Vitor Montalvão (MSSQL Senior Engineer) commented:
Try to find the partition function and check if the same partition function is applied to the indexes. You may already have it partitioned the way you need, and then it will be a much easier task for you.
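
Something along these lines (a generic catalog query, with a placeholder table name) will show which partition scheme and function each partitioned index of the table uses:

-- Partition scheme and function behind each partitioned index of the table
SELECT i.name AS index_name,
       ps.name AS partition_scheme,
       pf.name AS partition_function
FROM sys.indexes AS i
JOIN sys.partition_schemes AS ps ON ps.data_space_id = i.data_space_id
JOIN sys.partition_functions AS pf ON pf.function_id = ps.function_id
WHERE i.object_id = OBJECT_ID('dbo.VehiclePositions');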
Scott Pletcher (Senior DBA) commented:
Again, what specific indexes are on the table?  In particular, how is the table clustered?  

Some indexes may not be badly affected by the deletes; it just depends.
mvcdeveloperdxb (Author) commented:
Thanks to all the guys who supported me. I submitted all the ideas to my management team and got approval for the process using the bcp utility only. I have successfully completed the task. It took 2 hours to apply the process to the database with 110 million records. Thank you.

Microsoft SQL Server 2008