Solved

SQL 2008 DB Log file huge - how to determine what is causing this?

Posted on 2014-11-28
13
198 Views
Last Modified: 2014-11-28
We have a SQL 2008 database which is in full recovery mode.  Regularly (at least once a week) the log grows to several times the database size.  We have nightly full backups scheduled that are set to truncate the log file.

Out of need we have had to change the DB to simple recovery mode and then shrink the log file.  I believe this makes us vulnerable to only being able to perform a restore to the last full backup.  Is this correct?

I understand the log file must be expanding in response to one or more queries.  How can I determine which queries are causing this, so we can have them optimized and/or removed?
Question by: canuckconsulting

13 Comments
 
LVL 18

Accepted Solution

by: SimonAdept (earned 333 total points)
When I had issues with this I used scheduled runs of DBCC_LogFileAudit, similar to this tip: http://www.mssqltips.com/sqlservertip/1178/monitoring-sql-server-database-transaction-log-space/

In my case, the logs got huge at the weekends when a lot of ETL jobs were being run. Bulk deletes and inserts don't always need FULL logging (in my environment at least).
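
A minimal sketch of that kind of audit, built on DBCC SQLPERF(LOGSPACE) (the table name LogSpaceHistory and the schedule are illustrative, not taken from the tip):

CREATE TABLE dbo.LogSpaceHistory (
    CaptureTime     DATETIME DEFAULT GETDATE(),  -- when the sample was taken
    DatabaseName    SYSNAME,
    LogSizeMB       DECIMAL(18,2),
    LogSpaceUsedPct DECIMAL(18,2),
    Status          INT
);

-- Run from a SQL Agent job every few minutes to build a growth history:
INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
EXEC ('DBCC SQLPERF(LOGSPACE)');

Spikes then show up as jumps in LogSpaceUsedPct between consecutive samples.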
 
LVL 45

Expert Comment

by:Vitor Montalvão
We have nightly full backups scheduled set to truncate the log file.
A full backup doesn't truncate the log; a transaction log backup does. Are you running regular transaction log backups on the database (at least once every hour or two)?

Out of need we have had to change the DB to simple recovery mode and then shrink the log file.  I believe this makes us vulnerable to only being able to perform a restore to the last full backup.  Is this correct?
From your comments it sounds like you only have full backups set up. If that's true, you don't need to keep the database in the full recovery model.

I understand the log file must be expanding in response to one or more queries causing it.
Not only queries. Any operation that handles a large amount of data can make the database log grow; operations such as reindexing can do it too.
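
For reference, a log backup is a one-liner; a sketch of what an hourly SQL Agent job step might run (the database name and path are placeholders):

DECLARE @file NVARCHAR(260) =
    N'X:\Backups\YourDatabase_log_'
    + CONVERT(CHAR(8), GETDATE(), 112)                           -- yyyymmdd
    + N'_' + REPLACE(CONVERT(CHAR(8), GETDATE(), 108), ':', '')  -- hhmmss
    + N'.trn';

-- Backing up the log truncates its inactive portion, keeping the file from growing:
BACKUP LOG [YourDatabase] TO DISK = @file WITH CHECKSUM;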
 

Author Comment

by:canuckconsulting
This looks like a great first step for us; thanks.

Is there a way, when a spike is detected, to output a list of active transactions?  It would be ideal if this could be logged along with details of how much resource each is consuming.
 
LVL 45

Expert Comment

by:Vitor Montalvão
There are monitoring tools that can do that. I like Microsoft's SCOM, but there are others.
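
If you want to do it without a tool, a sketch of a DMV query (illustrative; YourDatabase is a placeholder) that lists open transactions together with the log space each one holds:

SELECT s.session_id,
       s.login_name,
       dt.database_transaction_begin_time,
       dt.database_transaction_log_bytes_used / 1048576.0 AS log_used_mb
FROM sys.dm_tran_database_transactions AS dt
JOIN sys.dm_tran_session_transactions AS st
    ON st.transaction_id = dt.transaction_id
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = st.session_id
WHERE dt.database_id = DB_ID(N'YourDatabase')   -- placeholder name
ORDER BY log_used_mb DESC;

Run from a SQL Agent job when a spike is detected, the result set could be inserted into a history table for later review.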
 

Expert Comment

by:Ludo Van den Bosch
With the full recovery model every change to the data is written to the log, which enables you to fully recover the database from the log and the last backup.
You can even revert to a point in time in case you executed a wrong command and wrongfully changed a lot of data.
The log only grows with inserts, updates and deletes, not with selects.

In the simple recovery model the log is truncated automatically at each checkpoint, so it won't keep growing. You have only your backups to recover the data.

IMHO: unless you want to protect yourself against database file corruption or loss (in which case you need to put the log on separate storage) or against wrongful massive data changes, I suggest you set all your databases to simple recovery mode.
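
For reference, switching models is a one-line change either way (names and paths are placeholders); note that after switching back to FULL, point-in-time protection only resumes once you take a full or differential backup:

ALTER DATABASE [YourDatabase] SET RECOVERY SIMPLE;  -- log now truncates at each checkpoint
-- ... shrink the log here if needed ...
ALTER DATABASE [YourDatabase] SET RECOVERY FULL;    -- full logging again
-- The log chain restarts only after the next full (or differential) backup:
BACKUP DATABASE [YourDatabase] TO DISK = N'X:\Backups\YourDatabase.bak';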
 
LVL 18

Assisted Solution

by: SimonAdept (earned 333 total points)
This link is to an easy-to-implement method (with screenshots): Stack Exchange: How to pinpoint root cause of excessive log file growth
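
One common technique along those lines (a sketch, not necessarily the exact method in the link) is to read the Log File Auto Grow events (event class 93) that SQL Server records in the default trace; this shows when the log grew and which application/login triggered it:

DECLARE @trace NVARCHAR(260);
SELECT @trace = path FROM sys.traces WHERE is_default = 1;

SELECT t.StartTime,
       t.DatabaseName,
       t.FileName,
       t.ApplicationName,
       t.LoginName,
       t.IntegerData * 8 / 1024 AS growth_mb,   -- IntegerData = growth in 8 KB pages
       t.Duration / 1000 AS duration_ms          -- Duration is in microseconds
FROM sys.fn_trace_gettable(@trace, DEFAULT) AS t
WHERE t.EventClass = 93                          -- 93 = Log File Auto Grow
ORDER BY t.StartTime DESC;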
 
LVL 45

Expert Comment

by:Vitor Montalvão
IMHO: unless you want to protect yourself against database file corruption or loss (in which case you need to put the log on separate storage) or against wrongful massive data changes, I suggest you set all your databases to simple recovery mode.
I can't agree with this statement.
What makes you choose between the full and simple recovery models is the business demand. Is it OK for the business to lose a day of work? An hour? Not even a minute? With the full recovery model and a proper backup plan you can restore a database to any point in time; any means down to an exact millisecond of the day.
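
For illustration, a sketch of such a point-in-time restore (names, paths and the STOPAT time are placeholders): restore the last full backup WITH NORECOVERY, then replay log backups up to the moment just before the mistake:

RESTORE DATABASE [YourDatabase]
    FROM DISK = N'X:\Backups\YourDatabase.bak'
    WITH NORECOVERY;

RESTORE LOG [YourDatabase]
    FROM DISK = N'X:\Backups\YourDatabase_log.trn'
    WITH STOPAT = '2014-11-28 14:35:00', RECOVERY;  -- stop just before the bad command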
 

Author Comment

by:canuckconsulting
Following Vitor's statement that reindexing also uses log space, I see we do a nightly rebuild of all the indexes in all DBs on the server via the following:

EXEC ('ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3), 90) + ')')

Could this cause the log file to expand dramatically?  If so, is there a way to alter the statement so it doesn't impact the size of the log file?
 
LVL 45

Expert Comment

by:Vitor Montalvão
The REBUILD option is a bit drastic. You should use it only when the fragmentation level is greater than 30%; otherwise use the REORGANIZE option. It will still use the transaction log, but much less of it. You can also add a transaction log backup right after the reindex.

Alternatively, you can change the recovery model to bulk-logged.
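
For a quick check of which indexes cross that threshold, a sketch using sys.dm_db_index_physical_stats (the 5% lower bound is the usual companion rule of thumb; LIMITED mode keeps the scan cheap):

SELECT OBJECT_NAME(ips.object_id)             AS table_name,
       i.name                                 AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
            WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
            ELSE 'leave alone' END             AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0          -- skip heaps
  AND ips.page_count > 100      -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;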
 

Author Comment

by:canuckconsulting
Thank you, Vitor.

So, just to identify whether this is the problem: would the rebuild significantly impact the log size on a large DB?

Related to your 30% statement, is there a way to test an index to see if it is >= 30% fragmented and rebuild/reorganize as appropriate?
 

Author Comment

by:canuckconsulting
Also, is it normal practice to reorganize nightly?
 
LVL 45

Assisted Solution

by: Vitor Montalvão (earned 167 total points)
Here (https://ola.hallengren.com) you can find a script that will do all the work for you (it checks the fragmentation level and runs the proper command). A lot of DBAs around the world use this script from Ola Hallengren.
 

Author Closing Comment

by:canuckconsulting
Thanks very much to all of you for the help!