mvcdeveloperdxb asked:
Using bcp utility on huge tables to take monthly backup

I have a very large SQL Server table that stores GPS data from more than 1,000 vehicles (more than 100 million records in three months). Each month, I manually delete data older than three months. Now I want to do this automatically. I have planned to write a SQL Job with the following steps:

Use the bcp utility to create a backup file on disk containing only the last three months of data
Truncate the table
Drop all the indexes on the table
Use the bcp shell command to reload the table from the backup file
Recreate all the indexes

My question is: is this the right way to do it? Is there another way to optimize this?
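
For reference, the bcp commands I have in mind for steps 1 and 4 would look something like this (server, database, table, column, and path names are placeholders for illustration):

bcp "SELECT * FROM FleetDB.dbo.GpsData WHERE RecordedAt >= DATEADD(MONTH, -3, GETDATE())" queryout "D:\Backup\GpsData_last3m.dat" -n -S MYSERVER -T
bcp FleetDB.dbo.GpsData in "D:\Backup\GpsData_last3m.dat" -n -S MYSERVER -T -b 50000 -h "TABLOCK"

The first command exports only the rows from the last three months in native format (-n) over a trusted connection (-T); the second reloads them in 50,000-row batches with a table lock to keep logging overhead down.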
ASKER CERTIFIED SOLUTION from Deepak Chauhan
mvcdeveloperdxb (ASKER):
Thank you Deepak, I will run the tests and will let you know
SOLUTION
Agreed with Vitor. If you are using SQL Server Enterprise edition, you can go with data partitioning.
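
As a hypothetical illustration of why partitioning helps here: with monthly partitions on the date column, purging the oldest month becomes a metadata operation instead of a massive delete (all names below are placeholders):

-- Switch the oldest partition out to an empty staging table that has an
-- identical structure and lives on the same filegroup, then empty it
ALTER TABLE dbo.GpsData SWITCH PARTITION 1 TO dbo.GpsData_Old;
TRUNCATE TABLE dbo.GpsData_Old;

Both statements complete almost instantly regardless of row count, because only metadata changes hands.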
SOLUTION
The method suggested by Vitor is very good, but the database is already partitioned by the third-party GPS device supplier, and I don't think I will be able to change the existing partition scheme. Anyway, I will check whether it is possible.

Deepak, you asked about the recovery model of the database. It is "Simple".

ScottPletcher, batch delete is a good suggestion. I will try that out as a test.
OK, since the recovery model is Simple, the database is already set to minimal logging. You can run the third test and delete the records in chunks, as Scott also suggested.
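
A minimal sketch of what such a chunked delete could look like (table name, column name, and batch size are illustrative assumptions):

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    -- Delete one batch of rows older than three months
    DELETE TOP (50000) FROM dbo.GpsData
    WHERE RecordedAt < DATEADD(MONTH, -3, GETDATE());
    SET @rows = @@ROWCOUNT;
    CHECKPOINT; -- under SIMPLE recovery, lets log space be reused between batches
END

Keeping each batch small prevents the transaction log from growing and keeps lock escalation short-lived.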
Deepak, do we need to drop the indexes before the batch delete? Because in your comment, it is given:

Third Test:

Delete data from the original table in chunks.
Reorg/rebuild the indexes.
Index maintenance (reorganize or rebuild) will be required after such a huge amount of deletion because the index pages will be affected. You will search for rows on the basis of the date column, which I guess is not an indexed column, but since complete rows are deleted, the pages of every index on the table will be affected.

You can check by deleting a few records first and estimating the time the deletion takes.
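
For instance, after the deletes finish you could check fragmentation and then decide between a reorganize and a rebuild (the table name is a placeholder; the 30% threshold is a common rule of thumb, not anything from this thread):

-- Inspect fragmentation of every index on the table
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.GpsData'), NULL, NULL, 'LIMITED');

ALTER INDEX ALL ON dbo.GpsData REORGANIZE;  -- lighter, online
-- ALTER INDEX ALL ON dbo.GpsData REBUILD;  -- heavier, for fragmentation above roughly 30%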
Try to find the partition function and check whether the same partition function is applied to the indexes. You may already have it partitioned the way you need, in which case this will be a much easier task for you.
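
One way to check that, assuming a placeholder table name, is to query the catalog views:

-- List each index together with its partition scheme and function
SELECT i.name AS index_name, ps.name AS partition_scheme, pf.name AS partition_function
FROM sys.indexes AS i
JOIN sys.partition_schemes AS ps ON i.data_space_id = ps.data_space_id
JOIN sys.partition_functions AS pf ON ps.function_id = pf.function_id
WHERE i.object_id = OBJECT_ID('dbo.GpsData');

Any index that does not appear in the result set is not aligned with the partition scheme.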
Again, what specific indexes are on the table? In particular, how is the table clustered?

Some indexes may not be badly affected by the deletes; it just depends.
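
For reference, one quick way to answer that (table name is a placeholder):

-- Shows every index on the table and whether it is clustered
SELECT name, type_desc FROM sys.indexes WHERE object_id = OBJECT_ID('dbo.GpsData');

The type_desc column reports CLUSTERED, NONCLUSTERED, or HEAP if the table has no clustered index at all.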
Thanks to all the guys who supported me. I submitted all the ideas to my management team and got approval for the process using the bcp utility only. I have successfully completed the task; it took 2 hours to apply the process to the database with 110 million records. Thank you.