ldf file which is very large.

Posted on 2009-12-18
Medium Priority
Last Modified: 2012-05-08
I have a large .ldf file and would like to delete that file and create a smaller .ldf file.
Question by:vera2006

Accepted Solution

Aneesh Retnakaran earned 336 total points
ID: 26079282
Change the recovery model to Simple, run the following command, and then change it back to the original. Note that DBCC SHRINKFILE takes the logical file name of the log (not the physical .ldf name) and a target size in MB:

DBCC SHRINKFILE (YourDB_log, 10)
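A sketch of the full sequence, assuming a database named YourDB whose logical log file name is YourDB_log (check sys.database_files for the real logical name):

```sql
ALTER DATABASE YourDB SET RECOVERY SIMPLE;       -- allow the log to be truncated
USE YourDB;
DBCC SHRINKFILE (YourDB_log, 10);                -- shrink the log to roughly 10 MB
ALTER DATABASE YourDB SET RECOVERY FULL;         -- restore the original model
BACKUP DATABASE YourDB TO DISK = N'YourDB.bak';  -- re-seed the backup chain
```

Switching to Simple breaks the log backup chain, so take a full (or differential) backup immediately after switching back to Full.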

Assisted Solution

by:Reza Rad
Reza Rad earned 336 total points
ID: 26079285
Don't delete this file.
Just shrink your database: right-click the database in Enterprise Manager and choose Shrink from the Tasks menu.

You can also set the recovery model of your database to Simple; this will keep your log file (.ldf) small.
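As a rough T-SQL equivalent of the GUI steps above (YourDB is a placeholder name):

```sql
DBCC SHRINKDATABASE (YourDB);               -- roughly what the Tasks > Shrink dialog runs
ALTER DATABASE YourDB SET RECOVERY SIMPLE;  -- keeps the log small between full backups
```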


Assisted Solution

tigin44 earned 332 total points
ID: 26079309
What is the server's recovery model setting? It is probably Full recovery; in that case you should run regular log backups, which is what truncates the .ldf file.
If you want to shrink the .ldf file without taking any backup, use these commands:

USE DatabaseName
DBCC SHRINKFILE(<TransactionLogName>, 1)
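To see the current recovery model for each database (a quick check for the setting mentioned above):

```sql
-- recovery_model_desc reports SIMPLE, FULL, or BULK_LOGGED per database
SELECT name, recovery_model_desc
FROM sys.databases;
```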


Assisted Solution

graye earned 332 total points
ID: 26080550
There is an art (and some science) behind managing log file sizes. I recommend that you take a few minutes to look at the following article.
The bottom line is this: you control the growth of the log file with the frequency of your periodic log backups, and you control the size of the log file with a shrink operation.
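As a sketch of those two levers, assuming a database named YourDB with logical log name YourDB_log and a backup path of your choosing:

```sql
-- Frequent log backups control growth (Full recovery model)
BACKUP LOG YourDB TO DISK = N'D:\Backups\YourDB_log.trn';

-- A one-time shrink controls the current size (target in MB)
DBCC SHRINKFILE (YourDB_log, 100);
```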

Assisted Solution

deepajannu earned 332 total points
ID: 26088137
The first thing you need to check is why the transaction log file is so huge. If the file grows that much every day, there is no point in shrinking it, because it will simply grow again.

Before you truncate the log, check the following things:
1) Check whether there are any open transactions in the database associated with that log file:

use <dbname>
dbcc opentran

If it reports no open transactions, the query below shows how much of the log is in use for every database:

dbcc sqlperf(logspace)

The results are percentages.

If only a small part of the log file is actually in use, go straight to the shrink option:
dbcc shrinkfile('logfilename', <size in MB>)

If you want to clear the data held in the log file (for example, it grew once and you now want to shrink it), run:

use master
backup log <databasename> with truncate_only

This clears the log file; after that, run the shrink again. (Note that WITH TRUNCATE_ONLY was removed in SQL Server 2008; on 2008 and later you briefly switch the database to the Simple recovery model instead.)
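On SQL Server 2008 and later, BACKUP LOG ... WITH TRUNCATE_ONLY no longer exists; a sketch of the equivalent, using the same placeholders as the snippets above:

```sql
ALTER DATABASE <databasename> SET RECOVERY SIMPLE;  -- lets SQL truncate the log itself
DBCC SHRINKFILE('logfilename', 1);                  -- shrink the now-truncated log
ALTER DATABASE <databasename> SET RECOVERY FULL;    -- switch back
-- Take a full backup afterwards; the switch breaks the log backup chain.
```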

Assisted Solution

MikeWalsh earned 332 total points
ID: 26125531
The comments above that describe fixing the underlying problem through analysis and thought are dead on; the ones that simply suggest you shrink your log file are not.

The real answer is to right-size your transaction log from the beginning to handle potential future growth, and then decide which recovery model is right for your company. The Full recovery model means you want to restore to a specific point in time and minimize data loss in the event of a failure. You have told SQL Server to allow the log file to grow and grow until you manually back up your log, because you have said you want to handle restores at the log level and use that log file to recover with after you back it up.

In Simple recovery model you are telling SQL that the last point you ever wish to restore to is your last full or differential backup. You are giving SQL permission to truncate the log file all by itself at certain internal intervals in SQL Server which allows the log file to be reused. This keeps the size down to a minimum.

Once you figure out your recovery needs, choose the model that works. If the Simple recovery model is right for your SLAs and business needs, then you should just be able to set it and leave it alone. Perhaps consider shrinking the log file down to its smallest size and then growing it carefully to a size that allows for future growth (look at this blog post about why you should care about regrowing in chunks to get the right number of VLFs: http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx).
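That shrink-then-regrow idea can be sketched like this (YourDB_log and the sizes are illustrative; pick a final size that fits your workload):

```sql
USE YourDB;
DBCC SHRINKFILE (YourDB_log, 1);  -- shrink the log to its minimum first
ALTER DATABASE YourDB
    MODIFY FILE (NAME = YourDB_log, SIZE = 8GB);  -- then grow in large chunks for a sane VLF count
```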

I have a series on my blog about the perils of shrinking and of just truncating the transaction log, with some information on why the log can grow and on management strategies. That series is here: http://www.straightpathsql.com/archives/category/shrinking-transactions/

Question has a verified solution.
