Solved

SQL 2008 DB Log file huge - how to determine what is causing this?

Posted on 2014-11-28
200 Views
Last Modified: 2014-11-28
We have a SQL 2008 database which is in full recovery mode. Regularly (at least once a week) the log grows to several times the database size. We have nightly full backups scheduled which are set to truncate the log file.

Out of necessity we have had to change the DB to simple recovery mode and then shrink the log file. I believe this makes us vulnerable to only being able to restore to the last full backup. Is this correct?

I understand the log file must be expanding in response to one or more queries. How can I determine which queries are causing this, in an effort to have them optimized and/or removed?
Question by:canuckconsulting
13 Comments
 
LVL 18

Accepted Solution

by:Simon
Simon earned 333 total points
ID: 40470275
When I had issues with this I used scheduled runs of a log file audit (capturing DBCC SQLPERF(LOGSPACE) to a table), similar to this tip: http://www.mssqltips.com/sqlservertip/1178/monitoring-sql-server-database-transaction-log-space/

In my case, the logs got huge at the weekends when a lot of ETL jobs were being run. Bulk deletes and inserts don't always need full logging (in my environment at least).
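
As a rough sketch of what such a scheduled audit can look like (the table name below is a placeholder of mine, not something from the linked tip), you can capture DBCC SQLPERF(LOGSPACE) into a history table from a SQL Agent job:

-- Hypothetical history table; schedule the INSERT in a SQL Agent job (e.g. every 15-30 minutes).
CREATE TABLE dbo.LogSpaceHistory
(
    capture_time       DATETIME      NOT NULL DEFAULT GETDATE(),
    database_name      SYSNAME       NOT NULL,
    log_size_mb        DECIMAL(18,2) NOT NULL,
    log_space_used_pct DECIMAL(5,2)  NOT NULL,
    status             INT           NOT NULL
);
GO
INSERT INTO dbo.LogSpaceHistory (database_name, log_size_mb, log_space_used_pct, status)
EXEC ('DBCC SQLPERF(LOGSPACE)');

Spikes in log_space_used_pct show roughly when the growth happens, which you can then line up with your job schedules.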
 
LVL 47

Expert Comment

by:Vitor Montalvão
ID: 40470282
We have nightly full backups scheduled which are set to truncate the log file.
A full backup doesn't truncate the log; a transaction log backup does. Are you running regular transaction log backups on the database (at least once every hour or two)?

Out of necessity we have had to change the DB to simple recovery mode and then shrink the log file. I believe this makes us vulnerable to only being able to restore to the last full backup. Is this correct?
From your comments it sounds like you only have full backups set up. If that's true, you don't need to keep the database in the full recovery model.

I understand the log file must be expanding in response to one or more queries.
Not only queries. Any operation that handles a large amount of data can make the database log grow; operations such as index rebuilds can do it too.
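
If you keep the database in the full recovery model, regular log backups are what keep the log file under control. A minimal sketch (the database name and path are placeholders):

-- Run from a SQL Agent job every hour or two; in practice use unique file names per backup.
BACKUP LOG YourDb
    TO DISK = N'D:\Backups\YourDb_log.trn'
    WITH CHECKSUM;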
 

Author Comment

by:canuckconsulting
ID: 40470284
This looks like a great first step for us; thanks.

Is there a way, when a spike is detected, to output a list of the active transactions? It would be ideal if this could be logged along with details of how much resource each one is consuming.
 
LVL 47

Expert Comment

by:Vitor Montalvão
ID: 40470290
There are some monitoring tools that can do that. I like Microsoft's SCOM, but there are others.
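
If you just want a quick look without a monitoring tool, a sketch against the standard DMVs (run it in the affected database) lists the open transactions and how much log each one is using:

-- Open transactions in the current database, ordered by log usage.
SELECT  st.session_id,
        dt.database_transaction_begin_time,
        dt.database_transaction_log_record_count,
        dt.database_transaction_log_bytes_used
FROM    sys.dm_tran_database_transactions AS dt
        JOIN sys.dm_tran_session_transactions AS st
          ON st.transaction_id = dt.transaction_id
WHERE   dt.database_id = DB_ID()
ORDER BY dt.database_transaction_log_bytes_used DESC;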
 

Expert Comment

by:Ludo Van den Bosch
ID: 40470295
With full recovery mode every change to the data is written to the log, which enables you to fully recover the database from the last backup plus the log. You can even restore to a point in time, in case you executed a wrong command and changed a lot of data by mistake. The log only grows with queries that insert, update or delete data, not with SELECTs.

In simple recovery mode changes are still logged, but the log is truncated automatically at each checkpoint, so it normally stays small. You only have your backups to recover the data.

IMHO: unless you want to protect yourself against database file corruption or loss (in which case you need to put the log on separate storage) or against a wrongful massive data change, I suggest you set all your databases to simple recovery mode.
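
For reference, this is how you check and switch the recovery model (YourDb is a placeholder). Note that after switching back to FULL you need a full or differential backup before log backups can resume the chain:

SELECT name, recovery_model_desc FROM sys.databases WHERE name = N'YourDb';
ALTER DATABASE YourDb SET RECOVERY SIMPLE;
-- ...and back again, followed by a full or differential backup:
ALTER DATABASE YourDb SET RECOVERY FULL;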
 
LVL 18

Assisted Solution

by:Simon
Simon earned 333 total points
ID: 40470298
This link describes an easy-to-implement method (with screenshots): Stack Exchange: How to pinpoint root cause of excessive log file growth
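
One common way to see when the log grows and what triggered it (not necessarily the exact method in the linked answer) is to read the auto-growth events recorded by the default trace:

-- EventClass 92 = Data File Auto Grow, 93 = Log File Auto Grow.
SELECT  te.name AS event_name,
        t.DatabaseName,
        t.FileName,
        t.StartTime,
        t.ApplicationName,
        t.LoginName
FROM    sys.traces AS tr
        CROSS APPLY sys.fn_trace_gettable(tr.path, DEFAULT) AS t
        JOIN sys.trace_events AS te
          ON te.trace_event_id = t.EventClass
WHERE   tr.is_default = 1
  AND   t.EventClass IN (92, 93)
ORDER BY t.StartTime DESC;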
 
LVL 47

Expert Comment

by:Vitor Montalvão
ID: 40470303
IMHO: unless you want to protect yourself against database file corruption or loss (in which case you need to put the log on separate storage) or against a wrongful massive data change, I suggest you set all your databases to simple recovery mode.
I can't agree with this statement.
What makes you choose between the full and simple recovery models is the business requirement. Is it OK for the business to lose a day of work? An hour? Or not even a minute? With the full recovery model and a proper backup plan you can restore a database to any point in time, down to an exact moment of the day.
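
As a rough sketch (names, paths and the timestamp are placeholders, and it assumes an unbroken chain of log backups since the full backup), a point-in-time restore looks like this:

RESTORE DATABASE YourDb
    FROM DISK = N'D:\Backups\YourDb_full.bak'
    WITH NORECOVERY, REPLACE;
-- Restore each log backup in sequence; on the last one, stop at the desired time.
RESTORE LOG YourDb
    FROM DISK = N'D:\Backups\YourDb_log.trn'
    WITH STOPAT = '2014-11-28T14:35:00', RECOVERY;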
 

Author Comment

by:canuckconsulting
ID: 40470312
Following Vitor's statement that reindexing also uses log space, I see we do a nightly rebuild of all the indexes in all DBs on the server via the following:

ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),90) + ')'

Could this cause the log file to expand dramatically? If so, is there a way to alter the statement so it does not impact the size of the log file?
 
LVL 47

Expert Comment

by:Vitor Montalvão
ID: 40470318
The REBUILD option is a little bit drastic. You should use it only if the fragmentation level is greater than 30%. Otherwise use the REORGANIZE option; it still uses the transaction log, but it generates much less log. You can also add a transaction log backup after the reindex job.

Alternatively, you can change the recovery model to bulk-logged.
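
To see which indexes actually need which treatment, a quick check against sys.dm_db_index_physical_stats (the 30%/5% figures are the commonly quoted guideline, not a hard rule) looks like this:

SELECT  OBJECT_NAME(ps.object_id)  AS table_name,
        i.name                     AS index_name,
        ps.avg_fragmentation_in_percent,
        CASE WHEN ps.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
             WHEN ps.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
             ELSE 'leave alone'
        END                        AS suggested_action
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
        JOIN sys.indexes AS i
          ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE   ps.index_id > 0
ORDER BY ps.avg_fragmentation_in_percent DESC;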
 

Author Comment

by:canuckconsulting
ID: 40470319
Thank you, Vitor.

So just to identify if this is the problem, would the rebuild significantly impact the log size of a large db?

Related to your 30% statement, is there a way to test an index to see if it is >= 30% fragmented and rebuild/reorganize as appropriate?
 

Author Comment

by:canuckconsulting
ID: 40470320
Also is it normal practice to reorganize nightly?
 
LVL 47

Assisted Solution

by:Vitor Montalvão
Vitor Montalvão earned 167 total points
ID: 40470323
Here you can find a script that will do all the work for you (it checks the fragmentation level and runs the appropriate command). A lot of DBAs around the world are using this script from Ola Hallengren.
 

Author Closing Comment

by:canuckconsulting
ID: 40470336
Thanks very much to all of you for the help!