SQL 2008 DB Log file huge - how to determine what is causing this?

canuckconsulting asked:
We have a SQL 2008 database in full recovery mode.  Regularly (at least once a week) the log grows to several times the size of the database.  We have nightly full backups scheduled, set to truncate the log file.

Out of need we have had to change the DB to simple recovery mode and then shrink the log file.  I believe this leaves us able to restore only to the last full backup.  Is this correct?

I understand the log file must be expanding in response to one or more queries causing it.  How can I determine what queries are causing this in an effort to have them optimized and/or removed?
Commented:
When I had issues with this I used scheduled runs of DBCC_LogFileAudit , similar to this tip http://www.mssqltips.com/sqlservertip/1178/monitoring-sql-server-database-transaction-log-space/

In my case, the logs got huge at the weekends when a lot of ETL jobs were being run. Bulk deletes and inserts don't always need FULL logging, (in my environment at least).
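A lightweight version of that kind of audit can be sketched with DBCC SQLPERF(LOGSPACE), which the linked tip builds on. The table name here is a placeholder; schedule the INSERT via a SQL Agent job:

```sql
-- Audit table for periodic log-space snapshots (name is illustrative).
CREATE TABLE dbo.LogSpaceAudit (
    CaptureTime     DATETIME      NOT NULL DEFAULT GETDATE(),
    DatabaseName    SYSNAME       NOT NULL,
    LogSizeMB       DECIMAL(18,2) NOT NULL,
    LogSpaceUsedPct DECIMAL(5,2)  NOT NULL,
    Status          INT           NOT NULL
);

-- DBCC SQLPERF(LOGSPACE) returns one row per database;
-- capture it so you can see exactly when the growth happens.
INSERT INTO dbo.LogSpaceAudit (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
EXEC ('DBCC SQLPERF(LOGSPACE)');
```

Run it every 15 minutes or so for a week and the spikes will line up with whatever jobs are causing them.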
Vitor Montalvão, MSSQL Senior Engineer
Distinguished Expert 2017

Commented:
We have nightly full backups scheduled set to truncate the log file.
A full backup doesn't truncate the log; a transaction log backup does. Are you running regular transaction log backups on the database (at least once every hour or two)?
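For example (database name and path are placeholders), a log backup scheduled hourly keeps the log reusable while preserving point-in-time recovery:

```sql
-- Hourly transaction log backup: truncates the inactive portion of the
-- log so it can be reused.  In practice give each file a unique name
-- (e.g. with a timestamp) rather than overwriting with INIT.
BACKUP LOG [MyDatabase]
TO DISK = N'X:\Backups\MyDatabase_log.trn'
WITH INIT, CHECKSUM;
```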

Out of need we have had to change the DB to simple recovery mode and then shrink the log file.  I believe this makes us vulnerable to only being able to perform a restore to the last full backup.  Is this correct?
From your comments it sounds like you only have full backups set up. If that's true, you don't need to keep the database in the full recovery model.

I understand the log file must be expanding in response to one or more queries causing it.
Not only queries. Any operation that handles a large amount of data can make the database log grow. Operations such as reindexing can do it as well.

Author

Commented:
This looks like a great first step for us; thanks.

Is there a way to output a list of active transactions when a spike is detected?  It would be ideal if this could be logged along with details on how much resource each one is consuming.
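One way to do this without a monitoring tool (a sketch using the transaction DMVs, which are available in SQL 2008) is:

```sql
-- Active transactions in the current database, ordered by how much
-- log space each has generated, with the most recent SQL text for
-- the owning session so you can identify the culprit.
SELECT
    st.session_id,
    dt.database_transaction_begin_time,
    dt.database_transaction_log_bytes_used,
    dt.database_transaction_log_bytes_reserved,
    t.text AS last_sql_text
FROM sys.dm_tran_database_transactions AS dt
JOIN sys.dm_tran_session_transactions AS st
    ON st.transaction_id = dt.transaction_id
JOIN sys.dm_exec_connections AS c
    ON c.session_id = st.session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
WHERE dt.database_id = DB_ID()
ORDER BY dt.database_transaction_log_bytes_used DESC;
```

Log the output of this query from the same job that detects the spike and you get exactly the audit trail you're describing.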
Vitor Montalvão, MSSQL Senior Engineer
Distinguished Expert 2017

Commented:
There are monitoring tools that can do that. I like SCOM from Microsoft, but there are others.
Ludo Van den Bosch, ICT manager

Commented:
With full recovery mode every change to the data is written to the log,
which enables you to fully recover the database from the log and the last backup.
You can even revert to a point in time in case you executed a wrong command and wrongly changed a lot of data.
The log only grows with queries that insert or update, not with SELECTs.

In simple recovery mode changes are still logged, but the log is truncated automatically at each checkpoint, so it normally stays small. You have only your backups to recover the data.

IMHO: unless you want to protect yourself against database file corruption or loss (in which case you need to put the log on separate storage) or against wrongful massive data changes, I suggest you set all your databases to simple recovery mode.
Commented:
This link is an easy-to-implement method (with screenshots): Stack Exchange: How to pinpoint root cause of excessive log file growth
Vitor Montalvão, MSSQL Senior Engineer
Distinguished Expert 2017

Commented:
IMHO: unless you want to protect yourself against database file corruption or loss (in which case you need to put the log on separate storage) or against wrongful massive data changes, I suggest you set all your databases to simple recovery mode.
I can't agree with this statement.
What makes you choose between the full and simple recovery models is the business demand. Is it OK for the business to lose a day of work? An hour? Not even a minute? With the full recovery model and a proper backup plan you can restore a database to any point in time; any means to the exact millisecond of the day.
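A point-in-time restore under the full recovery model looks roughly like this (database name, paths and the STOPAT timestamp are illustrative):

```sql
-- Restore the last full backup without recovering, then roll the log
-- forward and stop just before the bad statement was executed.
RESTORE DATABASE [MyDatabase]
    FROM DISK = N'X:\Backups\MyDatabase_full.bak'
    WITH NORECOVERY, REPLACE;

RESTORE LOG [MyDatabase]
    FROM DISK = N'X:\Backups\MyDatabase_log.trn'
    WITH STOPAT = '2012-06-01T14:35:00', RECOVERY;
```

With multiple log backups you restore each one WITH NORECOVERY and apply STOPAT only on the file that contains the target time.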

Author

Commented:
Following Vitor's statement that reindex also uses log space I see we do a nightly rebuild of all the indices in all DBs on the server via the following:

ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),90) + ')'

Could this cause the log file to expand dramatically?  If so, is there a way to alter the statement so it doesn't impact the size of the log file?
Vitor Montalvão, MSSQL Senior Engineer
Distinguished Expert 2017

Commented:
The REBUILD option is a little drastic. You should use it only when the fragmentation level is greater than 30%; otherwise use the REORGANIZE option. It will still use the transaction log, but much less. You can also add a transaction log backup step after the reindex.

Alternatively, you can change the recovery model to bulk-logged.
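Switching to bulk-logged just for the maintenance window could be sketched as follows (database name and path are placeholders; take a log backup afterwards so the bulk-logged interval stays recoverable):

```sql
-- Index rebuilds are minimally logged under the bulk-logged model.
ALTER DATABASE [MyDatabase] SET RECOVERY BULK_LOGGED;

-- ... run your ALTER INDEX ... REBUILD statements here ...

ALTER DATABASE [MyDatabase] SET RECOVERY FULL;

-- Back up the log immediately: a log backup containing bulk-logged
-- operations cannot be used for point-in-time restore within itself.
BACKUP LOG [MyDatabase] TO DISK = N'X:\Backups\MyDatabase_log.trn';
```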

Author

Commented:
Thank you, Vitor.

So just to identify if this is the problem, would the rebuild significantly impact the log size of a large db?

Related to your 30% statement, is there a way to test an index to see if it is >= 30% fragmented and rebuild/reorganize as appropriate?

Author

Commented:
Also is it normal practice to reorganize nightly?
Vitor Montalvão, MSSQL Senior Engineer
Distinguished Expert 2017
Commented:
Here you can find a script that will do all the work for you (check the fragmentation level and run the proper command). A lot of DBAs around the world use this script from Ola Hallengren.
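If you want to see the logic such a script automates before adopting it, the decision can be sketched with sys.dm_db_index_physical_stats (the 5%/30% thresholds are the common rule of thumb, not hard limits):

```sql
-- Per-index fragmentation in the current database with the usual
-- REBUILD / REORGANIZE recommendation.  LIMITED mode scans only the
-- upper index levels, so it is cheap enough to run nightly.
SELECT
    OBJECT_NAME(ips.object_id) AS table_name,
    i.name                     AS index_name,
    ips.avg_fragmentation_in_percent,
    CASE
        WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
        WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
        ELSE 'OK'
    END AS recommendation
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.index_id > 0   -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;
```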

Author

Commented:
Thanks very much for the help to all you guys!
