canuckconsulting (United Kingdom) asked:
SQL 2008 DB Log file huge - how to determine what is causing this?

We have a SQL Server 2008 database which is in full recovery mode.  Regularly (at least once a week) the log grows to several times the size of the database.  We have nightly full backups scheduled that are set to truncate the log file.

Out of necessity we have had to change the DB to simple recovery mode and then shrink the log file.  I believe this leaves us only able to restore to the last full backup.  Is this correct?

I understand the log file must be expanding in response to one or more queries.  How can I determine which queries are causing this, so we can have them optimized and/or removed?
ASKER CERTIFIED SOLUTION
Simon (United Kingdom) — solution available to Experts Exchange members.
We have nightly full backups scheduled set to truncate the log file.
Full backups don't truncate the log; transaction log backups do. Are you running regular transaction log backups on the database (at least once every hour or two)?
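A regular log backup can be sketched as below; the database name and backup path are placeholders, and in practice this would run on a schedule (e.g. hourly) via a SQL Server Agent job:

```sql
-- Hypothetical names/paths; a log backup truncates the inactive
-- portion of the log, keeping it from growing without bound.
BACKUP LOG [YourDatabase]
TO DISK = N'X:\Backups\YourDatabase_log.trn'
WITH NAME = N'YourDatabase log backup';
```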

Out of need we have had to change the DB to simple recovery mode and then shrink the log file.  I believe this makes us vulnerable to only being able to perform a restore to the last full backup.  Is this correct?
From your comments it sounds like you only have full backups configured. If that's true, you don't need the database set to the full recovery model.

I understand the log file must be expanding in response to one or more queries causing it.
Not only queries. Any operation that handles a large amount of data can make the database log grow; operations such as index rebuilds can do it too.
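As a first diagnostic step, two built-in queries show how full each log is and why SQL Server cannot truncate it right now:

```sql
-- Log file size and percent in use, for every database on the instance
DBCC SQLPERF(LOGSPACE);

-- Why log space cannot currently be reused
-- (e.g. LOG_BACKUP = waiting for a log backup, ACTIVE_TRANSACTION = open transaction)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;
```

A `log_reuse_wait_desc` of `LOG_BACKUP` on a full-recovery database is the classic sign that log backups are missing from the maintenance plan.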
canuckconsulting (Asker):
This looks like a great first step for us; thanks.

Is there a way, when a spike is detected, to output a list of active transactions?  It would be ideal if this could be logged along with details on how much resource each is consuming.
There are monitoring tools that can do that. I like SCOM from Microsoft, but there are others.
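Without a monitoring tool, the DMVs can give a rough answer on demand; this sketch lists active transactions in the current database ranked by how much log space each has consumed:

```sql
-- Oldest active transaction in the current database
DBCC OPENTRAN;

-- Active transactions ranked by transaction log bytes used
SELECT st.session_id,
       dt.transaction_id,
       dt.database_transaction_begin_time,
       dt.database_transaction_log_bytes_used
FROM sys.dm_tran_database_transactions AS dt
JOIN sys.dm_tran_session_transactions AS st
     ON st.transaction_id = dt.transaction_id
WHERE dt.database_id = DB_ID()
ORDER BY dt.database_transaction_log_bytes_used DESC;
```

Capturing this output on a schedule (or from an alert) would give the logged history asked about, though a tool like SCOM packages that up for you.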
Ludo Van den Bosch:

With full recovery mode, every change to the data is written to the log, which enables you to fully recover the database from the last backup plus the log.
You can even restore to a point in time, in case you executed a wrong command and changed a lot of data by mistake.
The log grows only with queries that insert or update data, not with SELECTs.

In simple recovery mode the log is still written, but it is truncated automatically at each checkpoint, so it stays small. You have only your backups to recover the data.

IMHO: unless you want to protect yourself against database file corruption or loss (in which case you need to put the log on separate storage) or against a wrongful massive data change, I suggest you set all your databases to simple recovery mode.
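The change the asker describes can be sketched as below (database and log file names are placeholders). Note that switching to simple recovery breaks the log backup chain, so a fresh full backup is needed afterwards if you switch back to full:

```sql
-- Switch to simple recovery and shrink the oversized log
ALTER DATABASE [YourDatabase] SET RECOVERY SIMPLE;
DBCC SHRINKFILE (N'YourDatabase_log', 1024);  -- target size in MB

-- If returning to full recovery, restart the backup chain
ALTER DATABASE [YourDatabase] SET RECOVERY FULL;
BACKUP DATABASE [YourDatabase]
TO DISK = N'X:\Backups\YourDatabase.bak';
```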
SOLUTION — available to Experts Exchange members.
IMHO: unless you want to protect yourself against database file corruption or loss (in which case you need to put the log on separate storage) or against a wrongful massive data change, I suggest you set all your databases to simple recovery mode.
I can't agree with this statement.
What makes you choose between the full and simple recovery models is the business requirement. Is it OK for the business to lose a day of work? An hour? Not even a minute? With the full recovery model and a proper backup plan you can restore a database to any point in time; any means down to an exact millisecond of the day.
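A point-in-time restore under the full recovery model can be sketched as follows (names, paths, and the timestamp are hypothetical):

```sql
-- Restore the last full backup, leaving the database ready for log restores
RESTORE DATABASE [YourDatabase]
FROM DISK = N'X:\Backups\YourDatabase.bak'
WITH NORECOVERY;

-- Roll the log forward, stopping just before the mistake
RESTORE LOG [YourDatabase]
FROM DISK = N'X:\Backups\YourDatabase_log.trn'
WITH STOPAT = '2013-06-01T14:30:00', RECOVERY;
```

This only works if an unbroken chain of log backups exists from the full backup to the target time, which is why the regular log backups discussed above matter.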
Following Vitor's statement that reindexing also uses log space: I see we do a nightly rebuild of all the indexes in all DBs on the server via the following:

ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),90) + ')'

Could this cause the log file to expand dramatically?  If so, is there a way to alter the statement so it doesn't impact the size of the log file?
The REBUILD option is a little drastic. You should use it only when the fragmentation level is greater than 30%; otherwise use the REORGANIZE option. It will still use the transaction log, but with minimal logging. You can add a transaction log backup after a reindex.

Alternatively, you can change the recovery model to bulk-logged.
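The 30% guideline can be checked from `sys.dm_db_index_physical_stats`; this sketch lists each index's fragmentation alongside a suggested action (the 5%/30% thresholds follow common guidance, not a hard rule):

```sql
-- Per-index fragmentation in the current database, with a suggested action
SELECT OBJECT_NAME(ps.object_id)          AS table_name,
       i.name                             AS index_name,
       ps.avg_fragmentation_in_percent,
       CASE WHEN ps.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
            WHEN ps.avg_fragmentation_in_percent >= 5  THEN 'REORGANIZE'
            ELSE 'leave alone' END        AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
     ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE i.name IS NOT NULL
ORDER BY ps.avg_fragmentation_in_percent DESC;
```

A maintenance script could loop over this result and issue `ALTER INDEX ... REBUILD` or `... REORGANIZE` per index, instead of rebuilding everything nightly.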
Thank you, Vitor.

So, just to identify whether this is the problem: would the rebuild significantly impact the log size of a large DB?

Related to your 30% statement, is there a way to test an index to see if it is >= 30% fragmented, and rebuild/reorganize as appropriate?
Also, is it normal practice to reorganize nightly?
SOLUTION — available to Experts Exchange members.
Thanks very much to all of you for the help!