SQL Server Transaction Log

Hi,

We are inserting around 7 million (70 lakh) rows into SQL Server from a .NET application using a single transaction. This process runs several times a day, and we cannot use SSIS for it.

The issue is that after this process the database transaction log grows by about 3 GB, so we are constantly running out of disk space.
As a temporary workaround we back up and restore the database to clear the log file, but this is a very unprofessional approach and requires frequent human intervention.

What we are thinking of is the following:
1) Database population starts.
2) The log file grows automatically.
3) The process finishes (the transaction commits).
4) The log file is then reduced to its original size of 100 MB with some SQL command (see the sketch below).
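To be concrete, something along these lines is what we have in mind for step 4 (the database and logical log file names are just placeholders):

-- Hypothetical sketch only: shrink the log file back to roughly 100 MB
USE MyDatabase;
DBCC SHRINKFILE ('MyDatabase_log', 100);  -- target size in MB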

Please suggest some way to solve this problem.

Thanks
Ajay Sharma

Anuj (SQL Server DBA) commented:
If you are on SQL Server 2008, a transaction log backup taken after the transaction will truncate the inactive part of the log, and you then have the option to shrink the log file using DBCC SHRINKFILE('logfile').
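A rough sketch of that sequence (the database name, logical log file name, and backup path below are placeholders):

-- Back up the log so the inactive portion can be truncated
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';
-- Then shrink the physical log file back towards its 100 MB baseline
USE MyDatabase;
DBCC SHRINKFILE ('MyDatabase_log', 100);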

Is there any specific reason for not using an SSIS package? SSIS is the most efficient way to transfer millions of rows, and because it uses a bulk insert method, the load is only minimally logged in the transaction log under the BULK_LOGGED recovery model.
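If you stay with the .NET load instead, a sketch of taking advantage of minimal logging might look like the following (database, table, and file names are placeholders; minimal logging also requires the usual prerequisites such as TABLOCK and a heap or empty target table):

-- Switch to BULK_LOGGED around the load
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;

-- A bulk load (BULK INSERT here, or SqlBulkCopy from .NET) can be minimally logged
BULK INSERT dbo.TargetTable
FROM 'D:\Feeds\rows.csv'
WITH (TABLOCK, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- Switch back and take a log backup afterwards; note that point-in-time restore
-- is not possible within a log backup that contains minimally logged operations
ALTER DATABASE MyDatabase SET RECOVERY FULL;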

SJCFL-Admin commented:
You have two choices: get the extra disk space (DASD), or change your procedure to partition the inserts into smaller transaction batches (see the sketch below). Constantly expanding and shrinking your log file is not recommended; it is costly in terms of execution time and leads to disk fragmentation (also costly in terms of execution time). You will be shooting yourself in the foot if you do not deal with the root cause of the problem.
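A rough sketch of batching on the SQL side (table and column names are placeholders; an assumed batch size of 50,000 rows is committed per loop iteration so the log can be reused between batches, assuming frequent log backups or the SIMPLE recovery model):

-- Sketch: move rows from a staging table to the target in 50,000-row batches
DECLARE @BatchSize int;
DECLARE @Rows int;
SET @BatchSize = 50000;
SET @Rows = 1;

WHILE @Rows > 0
BEGIN
    BEGIN TRANSACTION;

    -- Move one batch; deleted rows are written to the target via OUTPUT
    DELETE TOP (@BatchSize) s
    OUTPUT deleted.Col1, deleted.Col2 INTO dbo.TargetTable (Col1, Col2)
    FROM dbo.StagingTable AS s;

    SET @Rows = @@ROWCOUNT;

    COMMIT TRANSACTION;
END;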

I give you this advice with the best of intentions, so that you do not end up permanently suffering from a quick fix...
Ajay Sharma (Author) commented:
Thanks for your inputs. I solved the issue with the code below:

-- Switch to the SIMPLE recovery model so the inactive log can be truncated
ALTER DATABASE dbName SET RECOVERY SIMPLE
-- Shrink the log file (logical log file name passed via a variable)
DBCC SHRINKFILE(@DatabaseLogFileName)
-- Switch back to the FULL recovery model
ALTER DATABASE dbName SET RECOVERY FULL

This seems like a bit of an unprofessional approach, but it suits my requirements for the time being.

One thing for future readers:
For SQL Server 2005, the DBCC SHRINKFILE statement alone is sufficient.
For SQL Server 2008, use all three statements.
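For anyone who wants to verify the effect, DBCC SQLPERF(LOGSPACE) reports the current size (in MB) and percentage used of every database's log, so it can be run before and after the shrink:

-- Report log size and space used for all databases
DBCC SQLPERF(LOGSPACE)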
SJCFL-Admin commented:
Just one additional note for scheduling purposes.  Once the recovery model is flipped like this, you need to immediately take a full backup before you start your transaction log backups again.  :-)
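A sketch of that sequence (database name and backup paths are placeholders):

-- Re-establish the backup chain after returning to the FULL recovery model
BACKUP DATABASE dbName TO DISK = N'D:\Backups\dbName_full.bak'
-- Scheduled transaction log backups can then resume as usual
BACKUP LOG dbName TO DISK = N'D:\Backups\dbName_log.trn'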