Recovery Model for Bulk Operations

We have a few stored procedures that do some heavy SELECT INTO operations (millions of records at a time).

It has been recommended that we switch the database to the Bulk-Logged recovery model during the window in which we perform the bulk operations.

We currently have the database in the Simple recovery model. Based on the SQL Server documentation, both Bulk-Logged and Simple recovery models support high-performance bulk copy operations.

Is there any performance benefit to switching the database from the Simple recovery model to the Bulk-Logged recovery model while we perform the SELECT INTO?

Thanks,
Dean
dthansen asked:

Ryan McCauley (Data and Analytics Manager) commented:
The BULK_LOGGED recovery model is actually much closer to the FULL model than to SIMPLE, which you're currently using. Given that, I'd recommend that you continue to use SIMPLE.

Bulk-Logged is really designed as an alternative to FULL recovery, to be used during large bulk-load operations. If you're already using Simple, you'd actually generate more logging by switching to Bulk-Logged.
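Before changing anything, it's worth confirming which recovery model the database is actually in. A minimal check against the catalog, assuming the database name SOMEDB from the question:

```sql
-- Show the current recovery model (SIMPLE, FULL, or BULK_LOGGED)
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'SOMEDB';
```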

Scott Pletcher (Senior DBA) commented:
I agree.

You should do other tuning as needed to handle the large INSERTs, rather than just the recovery model.

1) Do you have IFI (Instant File Initialization) enabled?

2) Have you preallocated enough data space and, *even more so*, log space to handle the INSERT statement *before* the INSERT starts?

3) Is the autogrow set to a *fixed* (*not* %) amount that is fairly large, so that you are not constantly having to autogrow? (Just in case; you should avoid autogrow if you can.)

4) Are the physical files highly fragmented? That can hurt performance of the db overall.
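Points 2 and 3 above can be sketched as follows. This is only an illustration: the log file's logical name (SOMEDB_log) and the sizes are placeholders you'd replace with values sized for your actual load (check sys.database_files for the real logical name):

```sql
-- Preallocate log space before the big SELECT INTO runs,
-- and set a fixed (not percentage) autogrow increment.
ALTER DATABASE SOMEDB
MODIFY FILE (NAME = SOMEDB_log, SIZE = 20GB, FILEGROWTH = 512MB);
```

Preallocating avoids mid-statement autogrow pauses; the fixed increment keeps any unavoidable growth predictable.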
dthansen (Author) commented:
So if the recovery model is already Simple, you don't see any benefit to doing the following...

1. ALTER DATABASE SOMEDB SET RECOVERY BULK_LOGGED

2. SELECT column1, column2 INTO sometable FROM sometable2 WITH (NOLOCK)

3. ALTER DATABASE SOMEDB SET RECOVERY SIMPLE

Thanks,
Dean

             

Ryan McCauley (Data and Analytics Manager) commented:
None at all - the SIMPLE recovery model already takes all the logging shortcuts that BULK_LOGGED does, plus some more. Changing to BULK_LOGGED sounds like it would actually increase the amount of logging that's done, not decrease it or speed things up.
jogos commented:
Altering the recovery model during a process is something that must be done carefully and documented very well. It's quickly done, but if you change the recovery model of the db, your code may no longer behave as expected... and your backup cycle may not be correct anymore either.
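One concrete example of the backup-cycle concern: once a database leaves SIMPLE, the log no longer truncates on checkpoint, and switching models can leave your restore chain in an unclear state. A hedged sketch, with a placeholder backup path, of re-establishing a known starting point after reverting:

```sql
-- After reverting the recovery model, take a fresh full backup
-- so the restore chain starts from a known point.
BACKUP DATABASE SOMEDB
TO DISK = N'D:\Backups\SOMEDB_full.bak'
WITH INIT;
```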
Topic: Microsoft SQL Server 2008
