baulrich

asked on

automatically shrink the log file??

hi,

using MS SQL Server 2000

I need a way to automatically shrink the .ldf log file while keeping the DB set to "FULL" recovery mode.  I have an application that uses the DB heavily and quickly fills up the log file.  Within a week it typically grows from 1 MB to anywhere from 8 to 15 GB before the server is brought to a halt; I guess because so many connections to the server are fighting to write/lock such a big file.  I can shrink the log down to 1 MB again by changing the recovery mode to "SIMPLE", shrinking the DB, and changing back to "FULL" mode.  I just need a way to automate this process, since several different sites are experiencing this problem.

For additional info, could someone kindly explain the benefits of using FULL recovery mode?  Are they significant?  Each site uses an app that stores critical medical info, and each night a full backup is made of the DB.  They are insisting on FULL recovery mode in hopes that SQL Server can recover any data entered between the time of the last full backup and a failure.  Thanks much for the help, team.

- baulrich
PaulBarbin

BOL lists the benefits of Full Recovery Mode as:

"No work is lost due to a lost or damaged data file.
Can recover to an arbitrary point in time (for example, prior to application or user error)."

That's pretty significant!

In the full recovery model, you should be performing transaction log backups in between each full database backup.  Backing up the transaction log automatically truncates the log (marks the inactive portion as reusable).  You should set up a job to perform transaction log backups at least once a day.
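As a sketch, a scheduled job step for this on SQL Server 2000 might look like the following; the database name and backup path are placeholders to be replaced with your own:

```sql
-- Back up the transaction log; this truncates the inactive portion
-- of the .ldf so the space inside it can be reused.
-- 'MedicalDB' and the disk path are placeholder names.
BACKUP LOG MedicalDB
TO DISK = 'c:\mssql\backup\MedicalDB_log.trn'
WITH INIT   -- overwrite any previous backup set in this file
```

Note that WITH INIT overwrites the previous contents of the backup file, so use it only if you do not need to keep earlier log backups in that file.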

Paul

ASKER

Ok,

Right direction.  Now I need specifics, please.  The entire DB already gets backed up nightly, so during the day I will need to back up/truncate the log file.  How many times I do this will depend on how small I can get these backups.  If I am right (tell me if not), I can back up the log file with:
  BACKUP LOG DBname TO DISK = 'c:\mssql\backup\dbname1.trn'

If I do this, say, 3 times a day, I'll have 3 files in the backup dir that get overwritten every day.  But at the end of the day, when the DB backup runs, those log backups become obsolete, correct?
Also, how big are the log backups going to be?  The size of the .ldf file before truncation?  And how much is the .ldf file decreased in size after the backup/truncation?  I have seen the option on the Transaction Log tab to clear or set "Automatically Grow File".  What effect does this have?  It seems to me that, since the complete DB backup is done nightly, I should be able to start off right after it with a log file size of 1 MB and let it grow until the first log backup of the day.  If I turn off the autogrow option, the options to select the file size or % go away, so how do I specify a static maximum size for the log file in that case?  Hope this did not get everyone as confused as me; I just need it explained.
Thanks  - baulrich
Guy Hengel [angelIII / a3]
Here is what I wrote in a similar q:

In your database, you have data spread over your tables. Every time you update, insert, or delete data, the change is recorded in the transaction log file (.ldf). If you have several SQL statements grouped together in a transaction, they are committed as a whole. In the transaction log file, this is written in "append" mode inside the file. If the file is full, SQL Server will try to grow it, if it is allowed to do so (Autogrow = Yes and the actual size is still below MaxSize, or growth is unrestricted). Thus the .ldf file will grow until physical disk space runs out, unless you limit the growth.

If the file cannot be grown, the log writer will try to put the pointer of the write operations back to the beginning of the file, but this will only work if the transactions at the beginning of the file have been cleared.

How to get transactions cleared from the .ldf file so the transaction log writer can cycle around in the .ldf file:
* set the option "Truncate log on Checkpoint" (SQL 7): this clears all committed transactions at each checkpoint
* set the database recovery model to "simple" (SQL2k): same procedure as the above option
* backup the transaction log: this will dump all the active transactions to a dump file (.trn), and mark them as cleared in the log file (.ldf)
* backup the full database (same result as above)
* truncate the log explicitly (using the statement graham_charles indicated under 1)

How to keep the .trn files small: run the transaction log backup often, so only "few" transactions are recorded per dump file. An interval between 10 minutes and 1 hour is a compromise between recoverability and load on the SQL Server.
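If you back up the log frequently, you also need each backup to land in its own file so the whole chain survives until the next full backup. A sketch of one way to do that on SQL Server 2000, with a placeholder database name and path:

```sql
-- Build a file name containing the date and time, so each frequent
-- log backup is kept as a separate .trn file (the whole chain is
-- needed for point-in-time recovery).
-- 'MedicalDB' and the path are placeholder names.
DECLARE @file varchar(200)
SET @file = 'c:\mssql\backup\MedicalDB_log_'
          + CONVERT(varchar(8), GETDATE(), 112)                          -- yyyymmdd
          + '_'
          + REPLACE(CONVERT(varchar(8), GETDATE(), 108), ':', '')        -- hhmmss
          + '.trn'

BACKUP LOG MedicalDB TO DISK = @file
```

Old .trn files from before the most recent full backup can then be deleted by a cleanup step, since they are no longer needed for a restore.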

For databases in development:
* use simple recovery model
* limit the transaction log file
* backup the database full 1-2 times per day

For database in production:
* use full recovery model
* limit the transaction log file
* backup the database full 1-2 times per day
* backup the transaction log 10-100 times per day
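"Limit the transaction log file" in the lists above can be done in T-SQL as well as from Enterprise Manager. A sketch, with placeholder names (the logical file name of your log can be found with sp_helpfile):

```sql
-- Cap the physical size of the log file. 'MedicalDB' and the logical
-- file name 'MedicalDB_log' are placeholders. If the cap is reached
-- while no space inside the log can be reused, new transactions will
-- fail with error 9002, so size it generously and keep backing up
-- the log frequently.
ALTER DATABASE MedicalDB
MODIFY FILE (NAME = MedicalDB_log, MAXSIZE = 2000MB)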

Cheers
Ok,

Sorry guys, but I still need more help.  I am now backing up the DB to a .bak file once a night and backing up the log file every hour.  Please answer this one question and it will straighten me out on this issue: if the server crashes at 5 p.m., how many files will I need to do the restore?  The way I understand it now, I'd need the .bak file from last night plus EACH of the hourly .trn log file backups, not just the latest.  Is this correct?
Ok, now for the original question: how can I get the .ldf file itself down in size?  Even though the .trn files are getting created, the .ldf file is still enormous.  On different DBs I have been purging the .ldf file by manually deleting it every 2 weeks because it grows so large; after 2 weeks some .ldf files get as big as 13 GB.  Pretty big, huh?  This is happening at some 20 different sites, with 1 DBA, so the solution has to be automatic, not manual.

Reasoning tells me I need to wait until after a complete DB backup, and then I can safely delete the .ldf file (as a one-time thing).  My question is this: will the hourly .trn backups keep the .ldf file from growing to such proportions?  Since they truncate the unnecessary (cleared) entries, I would assume so, but I need to be sure.  Or is the solution to keeping the .ldf file size down to turn off the autogrow option?  Could someone please explain whether this is needed and, if so, how to set the maximum file size the .ldf can grow to?  I don't know why, but to me it just seems dangerous not to allow it to get bigger if it needs to.  I understand how it works by restarting at the beginning and writing over "cleared" entries, but if the max file size is not large enough, it seems like it could still run out of space.  If the only way to keep the .ldf file size down is to turn off autogrow and set a max size, what size should I set to make sure it is big enough?  Please guide me as best you can.  Thanks

- baulrich
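The restore sequence the questioner describes is correct: the last full backup plus every log backup taken since, applied in order. A hedged sketch with placeholder names and file paths:

```sql
-- Restore the nightly full backup, leaving the database able to
-- accept further log restores ('MedicalDB' and paths are placeholders).
RESTORE DATABASE MedicalDB
FROM DISK = 'c:\mssql\backup\MedicalDB.bak'
WITH NORECOVERY

-- Apply every hourly log backup taken since that full backup, in order.
RESTORE LOG MedicalDB FROM DISK = 'c:\mssql\backup\MedicalDB_log_09.trn' WITH NORECOVERY
RESTORE LOG MedicalDB FROM DISK = 'c:\mssql\backup\MedicalDB_log_10.trn' WITH NORECOVERY
-- ... continue through the chain; if even one .trn in the middle is
-- missing, recovery stops at the last backup before the gap.
RESTORE LOG MedicalDB FROM DISK = 'c:\mssql\backup\MedicalDB_log_16.trn' WITH RECOVERY
```

The final WITH RECOVERY brings the database online; every earlier step must use WITH NORECOVERY so further logs can still be applied.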
Thanks for the help guys, but I found what I was looking for in the online documentation from MS.  I am going to have to make a complete backup and then completely wipe out my .ldf file and re-attach the DB, which is basically what I was trying to find out.  Then I'll have to turn off the autogrow option to keep it from getting big again.  Well, I've got it all worked out now, but thanks for the help.

- baulrich
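For the record, SQL Server 2000 can also shrink the physical .ldf in place, without the detach/delete/re-attach cycle described above. A sketch, with placeholder names (check the logical log file name with sp_helpfile):

```sql
-- First back up the log so the inactive portion is truncated,
-- then release the freed space back to the OS.
-- 'MedicalDB' and 'MedicalDB_log' are placeholder names.
BACKUP LOG MedicalDB TO DISK = 'c:\mssql\backup\MedicalDB_log.trn'
DBCC SHRINKFILE (MedicalDB_log, 100)   -- target size in MB
```

This keeps the database attached and the backup chain intact, which is generally safer than deleting the .ldf file by hand.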
ASKER CERTIFIED SOLUTION
Guy Hengel [angelIII / a3]
Dear expert(s),

A request has been made to close this Q in CS:
https://www.experts-exchange.com/questions/20469580/please-delete-question.html

Without a response in 72 hrs, a moderator will finalize this question by:

 - Saving this Q as a PAQ and refunding the points to the questioner

When you agree or disagree, please add a comment here.

Thank you.

modulo

Community Support Moderator
Experts Exchange
I'm just going ahead and awarding points to angel.  This is what I wanted to know, just after the fact.

Thanks again angel and Paul

- baulrich
thanks for the help

- baulrich