Mystical_Ice


SQL database log file keeps growing

Hey - got a question.
We use Microsoft Dynamics GP as our ERP system, with a SQL 2005 back end.

Ever since we upgraded (a few months ago), the database's log file seems to grow exponentially.  The database is only 13GB in size, but, to give you an example, after 2 days the log file is now 23GB.  Tomorrow it will be almost 40GB, etc.

The only way I know to fix the problem (several times the log file has grown so large that it filled the entire hard drive before I noticed) is to run the script below.

That shrinks the log file, but after a few days it grows again.  

I have maintenance plans in place, and they run successfully every night.  They include:
backing up the database, reorganizing, reindexing, cleaning history, updating statistics, and shrinking the database.  Yet still it keeps growing.

What am I doing wrong?

Thanks in advance =)


USE **database**
BACKUP LOG **database** WITH TRUNCATE_ONLY
DBCC SHRINKDATABASE (**database**, TRUNCATEONLY)


Daniel_PL

It's a really bad idea to shrink the log file routinely; among other things, it causes file fragmentation.
You need to determine the right log size by observation. Is your database in the full recovery model?
If it is, I would suggest adding a job - or a maintenance plan - to back up your log every x minutes. You need to determine how big your log should be - how big does it get during the day?
Then, at a time of no activity in the database, back up your log and shrink it, then back it up again and shrink it again. After that, allocate the new log size in one step - ideally a multiple of 8GB, which matters because of VLFs (virtual log files).
Your reindex task will also grow the log, because reindexing is logged ;)
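For illustration, a minimal sketch of the log backup such a job would run - the database name and backup path are placeholders, not your actual names:

-- Back up the transaction log; schedule this to run every x minutes.
-- In a real job you would generate a unique file name per run (e.g. with a timestamp).
BACKUP LOG [YourGPDatabase]
TO DISK = N'E:\SQLBackups\YourGPDatabase_log.trn';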
ASKER CERTIFIED SOLUTION
Anthony Perkins
Just as an addendum to what was said above: if you ever upgrade to SQL 2008 R2, the command you are using won't work anymore anyway.  I guess Microsoft got tired of people doing it, so they removed it.  "BACKUP LOG **database** WITH TRUNCATE_ONLY" is fully deprecated now.
>>I guess Microsoft got tired of people doing it, so they removed it.  "BACKUP LOG **database** WITH TRUNCATE_ONLY" is fully deprecated now.<<
Very good point.  Too many people actually thought they were taking a BACKUP when they ran:  BACKUP LOG **database** WITH TRUNCATE_ONLY
When you have a stuck transaction, the TRUNCATE_ONLY option isn't going to truncate the active portion of the log held by that open transaction; you may end up having to commit, roll back, or kill it before that space can be freed. Truncating the log this way also makes point-in-time restore impossible, so I think that's a good reason for throwing the option out ;)
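If you want to check whether an open transaction is pinning the log, here's a quick sketch - the database name is a placeholder:

-- Show the oldest active transaction in the database, if any.
DBCC OPENTRAN ('YourGPDatabase');

-- Show why SQL Server cannot yet reuse the log space
-- (e.g. LOG_BACKUP, ACTIVE_TRANSACTION).
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourGPDatabase';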
Mystical_Ice
ASKER
Don't worry, I've never set the command up as a scheduled job - it scares me every time I run it.

See, the way I have it set up at the moment is that we back up the entire database nightly.  In the event the server crashes, all we need to do is restore the previous night's backup (we have no need for 'point in time' recovery - just the last nightly backup, that's all).

So really, I could just switch to the simple recovery model, and that would prevent the t-log from growing the way it is now, right?

The other solution, it seems, would be to make a transaction log backup periodically - say every 2 hours or so.  That makes sense.  Now that I think about it, I remember that after we did the upgrade we had a job set up to back up the t-log.  Another administrator must have removed it, because I just checked and it doesn't exist... the DATABASE is backed up nightly, but never the t-log.  Now it makes sense - creating a periodic backup of the t-log would fix the problem.

I've had users complain that database performance has dropped DRASTICALLY, and it all started around the time I had to run these commands.  Since the database is fragmented, would that be the reason for their performance issues?  How would I fix that?
>>So really, I could just switch to the simple recovery model, and that would prevent the t-log from growing the way it is now, right?<<
Yes; provided there are no queries gone wild, it should not grow.
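The switch itself is a one-liner; a sketch with a placeholder database name:

-- Switch to the simple recovery model.
ALTER DATABASE [YourGPDatabase] SET RECOVERY SIMPLE;

-- Confirm the current model.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'YourGPDatabase';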

>>creating a periodic backup of the t-log would fix the problem<<
That is correct, too.

>>Since the database is fragmented, would that be the reason for their performance issues?  How would I fix that?<<
It could be.  First run the following against the database; each row returned is one VLF, and if you get hundreds of rows you have a problem:
DBCC LOGINFO
Can you schedule a maintenance window?
First of all, if you decide to take t-log backups, you need to create a job that performs a backup every x amount of time, whatever interval you choose.
Next, in the scheduled maintenance window (no activity in the database), you need to shrink the t-log:
back up the log, shrink it, back it up again, shrink it again.
Then you need to size your log again by allocating the space in one step. Ideally it should be a multiple (possibly x1) of 8GB, and please do not forget to set your log growth to a value in MB, not percent.
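A sketch of that sequence - the database name, logical log file name, sizes, and paths are placeholders (look up the log's logical name in sys.database_files first):

-- 1. Back up and shrink, twice, to clear out the inactive VLFs.
BACKUP LOG [YourGPDatabase] TO DISK = N'E:\SQLBackups\log1.trn';
DBCC SHRINKFILE (YourGPDatabase_log, TRUNCATEONLY);
BACKUP LOG [YourGPDatabase] TO DISK = N'E:\SQLBackups\log2.trn';
DBCC SHRINKFILE (YourGPDatabase_log, TRUNCATEONLY);

-- 2. Re-grow the log to its working size in one step,
--    with autogrowth set in MB rather than percent.
ALTER DATABASE [YourGPDatabase]
MODIFY FILE (NAME = YourGPDatabase_log, SIZE = 8192MB, FILEGROWTH = 512MB);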

You can perform the database reindexing (rebuild, reorganize), statistics updates, etc. in the scheduled window with the database set to the simple recovery model, taking a full database backup before and after.
But before taking the 'after' full backup, please change the database's recovery model back to full.
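A sketch of that sequence with placeholder names (your maintenance plan's rebuild step can stand in for the ALTER INDEX line):

-- Full backup before maintenance.
BACKUP DATABASE [YourGPDatabase] TO DISK = N'E:\SQLBackups\before_maint.bak';

-- Maintenance under the simple model, where rebuilds are minimally logged.
ALTER DATABASE [YourGPDatabase] SET RECOVERY SIMPLE;
ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD;  -- repeat per table as needed
EXEC sp_updatestats;

-- Back to full recovery BEFORE the closing backup, so the log chain restarts from it.
ALTER DATABASE [YourGPDatabase] SET RECOVERY FULL;
BACKUP DATABASE [YourGPDatabase] TO DISK = N'E:\SQLBackups\after_maint.bak';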

After that your database should be optimized, and you can schedule your maintenance tasks daily, weekly, etc., depending on how the data changes in the database.
>>Next, in the scheduled maintenance window (no activity in the database), you need to shrink the t-log:<<
Please do not suggest this.  This is a very bad idea.

If (and this needs to be verified first - see my previous comment) your transaction log is fragmented, here are the steps you need to follow:
8 Steps to better Transaction Log throughput
http://www.sqlskills.com/blogs/kimberly/post/8-Steps-to-better-Transaction-Log-throughput.aspx
When I run 'DBCC LOGINFO', it returns 310 rows.
That means your log was grown in small chunks.
You should properly resize your log:
back it up, shrink, back it up again, shrink again.
Then set the new size according to this article:
http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx
Also, schedule log backups during the day.
Also, I've switched the databases back to the simple recovery model and made a backup of them all.
Are you aware that a database in the simple recovery model cannot be restored to a point in time in case of disaster?
Well every night the SQL server takes a 'full' backup of the database... so we would be able to restore to that point, right?  

That would be all we need; as long as we could revert to the previous night's backup, that's all that matters.
>>Are you aware that a database in the simple recovery model cannot be restored to a point in time in case of disaster?<<
I suspect you overlooked the author's comment here http:#a35075612.  Here it is again for clarity:
In the event the server crashes, all we need to do is restore the previous night's backup (we have no need for 'point in time' recovery - just the last nightly backup, that's all).
Sure, mea culpa - but on the other hand, it's good for people to know what they are doing and what consequences it may bring, right?
Yes, in your case you can always go back to the previous full backup. Please do not forget to verify each backup after you take it, just to be sure it's valid.
E.g.:
 
RESTORE VERIFYONLY FROM DISK=N'<path to backup file>'
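If the backup itself was taken WITH CHECKSUM, verification can also validate the page checksums; a sketch with placeholder paths:

BACKUP DATABASE [YourGPDatabase]
TO DISK = N'E:\SQLBackups\YourGPDatabase_full.bak'
WITH CHECKSUM;

RESTORE VERIFYONLY
FROM DISK = N'E:\SQLBackups\YourGPDatabase_full.bak'
WITH CHECKSUM;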
