Solved

SQL 2008 DB Log file huge - how to determine what is causing this?

Posted on 2014-11-28
203 Views
Last Modified: 2014-11-28
We have a SQL 2008 database which is in the full recovery model.  Regularly (at least once a week) the log grows to several times the database size.  We have nightly full backups scheduled that are set to truncate the log file.

Out of necessity we have had to change the DB to the simple recovery model and then shrink the log file.  I believe this leaves us vulnerable, able to restore only to the last full backup.  Is this correct?

I understand the log file must be expanding in response to one or more queries.  How can I determine which queries are causing this, in an effort to have them optimized and/or removed?
Question by:canuckconsulting
13 Comments
 
LVL 18

Accepted Solution

by:
Simon earned 333 total points
ID: 40470275
When I had issues with this I used scheduled runs of DBCC_LogFileAudit, similar to this tip: http://www.mssqltips.com/sqlservertip/1178/monitoring-sql-server-database-transaction-log-space/

In my case the logs got huge at the weekends, when a lot of ETL jobs were being run. Bulk deletes and inserts don't always need FULL logging (in my environment, at least).
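
The heart of that approach is just capturing DBCC SQLPERF(LOGSPACE) to a table on a schedule, so you can see when the log starts growing and correlate it with running jobs. A minimal sketch (the table and procedure names are my own, not from the tip):

    -- Capture table for periodic log-space snapshots.
    CREATE TABLE dbo.LogSpaceHistory (
        CaptureTime     DATETIME       NOT NULL DEFAULT GETDATE(),
        DatabaseName    SYSNAME        NOT NULL,
        LogSizeMB       DECIMAL(18, 2) NOT NULL,
        LogSpaceUsedPct DECIMAL(5, 2)  NOT NULL,
        [Status]        INT            NOT NULL
    );
    GO
    -- Schedule this from a SQL Agent job, e.g. every 15 minutes.
    CREATE PROCEDURE dbo.usp_CaptureLogSpace
    AS
    BEGIN
        INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, [Status])
        EXEC ('DBCC SQLPERF(LOGSPACE)');
    END;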
 
LVL 50

Expert Comment

by:Vitor Montalvão
ID: 40470282
We have nightly full backups scheduled that are set to truncate the log file.
A full backup doesn't truncate the log; a transaction log backup does. Are you running regular transaction log backups on the database (at least once every hour or two)?

Out of necessity we have had to change the DB to the simple recovery model and then shrink the log file.  I believe this leaves us vulnerable, able to restore only to the last full backup.  Is this correct?
From your comments it sounds like you only have full backups set up. If that's true, there's no benefit to keeping the database in the full recovery model.

I understand the log file must be expanding in response to one or more queries.
Not only queries. Any operation that handles a large amount of data can make the database log grow; operations such as reindexing can do it too.
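
On the log backup point: the statement itself is simple and can run from an hourly SQL Agent job. A sketch (the database name and path are illustrative):

    -- Backing up the log is what allows committed log records to be truncated
    -- and the space inside the file reused. Requires the full (or bulk-logged)
    -- recovery model and an initial full backup.
    BACKUP LOG MyDatabase
    TO DISK = N'D:\Backups\MyDatabase_log.trn'
    WITH INIT;  -- in practice, use a unique file name per backup instead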
 

Author Comment

by:canuckconsulting
ID: 40470284
This looks like a great first step for us; thanks.

Is there a way, when a spike is detected, to output a list of active transactions?  It would be ideal if this could be logged along with details on how much resource each one is consuming.
 
LVL 50

Expert Comment

by:Vitor Montalvão
ID: 40470290
There are monitoring tools that can do that. I like Microsoft's SCOM, but there are others.
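
Short of a full monitoring tool, a DMV query run during a spike will show which open transactions are consuming log space. A sketch (the database name is a placeholder):

    -- Active transactions in one database, ordered by log space consumed.
    SELECT stx.session_id,
           tx.name AS transaction_name,
           tx.transaction_begin_time,
           dtx.database_transaction_log_bytes_used     AS log_bytes_used,
           dtx.database_transaction_log_bytes_reserved AS log_bytes_reserved
    FROM sys.dm_tran_database_transactions AS dtx
    JOIN sys.dm_tran_active_transactions   AS tx
         ON tx.transaction_id = dtx.transaction_id
    LEFT JOIN sys.dm_tran_session_transactions AS stx
         ON stx.transaction_id = dtx.transaction_id
    WHERE dtx.database_id = DB_ID('MyDatabase')
    ORDER BY dtx.database_transaction_log_bytes_used DESC;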
 

Expert Comment

by:Ludo Van den Bosch
ID: 40470295
With the full recovery model every change to the data is written to the log.
This enables you to fully recover the database using the last backup plus the log.
You can even restore to a point in time, in case you executed a wrong command and changed a lot of data by mistake.
The log grows only with statements that insert, update or delete data, not with SELECTs.

In the simple recovery model changes are still logged, but the log is truncated automatically at checkpoints, so it normally stays small. You have only your backups to recover the data.

IMHO: unless you want to protect yourself against database file corruption or loss (in which case you need to put the log on separate storage) or against accidental massive data changes, I suggest you set all your databases to the simple recovery model.
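
For what it's worth, switching is a one-liner either way (the database name is a placeholder):

    -- Illustrative only.
    ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
    -- After switching back to FULL, take a full (or differential) backup;
    -- log backups have no valid base until you do.
    ALTER DATABASE MyDatabase SET RECOVERY FULL;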
 
LVL 18

Assisted Solution

by:Simon
Simon earned 333 total points
ID: 40470298
This link describes an easy-to-implement method (with screenshots): Stack Exchange: How to pinpoint root cause of excessive log file growth
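
If you want something even lighter, the default trace already records every log auto-growth event. This query is my own sketch of that idea, not necessarily the exact method in the link:

    -- Read "Log File Auto Grow" events (EventClass 93) from the default trace
    -- to see when the log grew and which application/login was active.
    DECLARE @TracePath NVARCHAR(260);
    SELECT @TracePath = path FROM sys.traces WHERE is_default = 1;

    SELECT t.StartTime, t.DatabaseName, t.FileName,
           t.ApplicationName, t.LoginName
    FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS t
    WHERE t.EventClass = 93
    ORDER BY t.StartTime DESC;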
 
LVL 50

Expert Comment

by:Vitor Montalvão
ID: 40470303
IMHO: unless you want to protect yourself against database file corruption or loss (in which case you need to put the log on separate storage) or against accidental massive data changes, I suggest you set all your databases to the simple recovery model.
I can't agree with this statement.
What makes you choose between the full and simple recovery models is the business requirement. Is it OK for the business to lose a day of work? An hour? Not even a minute? With the full recovery model and a proper backup plan you can restore a database to any point in time; any means down to an exact millisecond of the day.
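
To make the point-in-time part concrete, a restore sketch (file names and the STOPAT time are illustrative):

    -- Restore the full backup without recovering, then roll the log forward
    -- and stop at the moment just before the mistake happened.
    RESTORE DATABASE MyDatabase
    FROM DISK = N'D:\Backups\MyDatabase_full.bak'
    WITH NORECOVERY;

    RESTORE LOG MyDatabase
    FROM DISK = N'D:\Backups\MyDatabase_log.trn'
    WITH STOPAT = '2014-11-28 10:15:00', RECOVERY;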
 

Author Comment

by:canuckconsulting
ID: 40470312
Following Vitor's statement that reindexing also uses log space, I see we do a nightly rebuild of all the indexes in all DBs on the server via the following:

DECLARE @Sql NVARCHAR(MAX) = N'ALTER INDEX ALL ON ' + @Table + N' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3), 90) + N')';
EXEC (@Sql);

Could this cause the log file to expand dramatically?  If so, is there a way to alter the statement so it doesn't impact the size of the log file?
 
LVL 50

Expert Comment

by:Vitor Montalvão
ID: 40470318
The REBUILD option is a little drastic. You should use it only when the fragmentation level is greater than 30%; otherwise use the REORGANIZE option. REORGANIZE still uses the transaction log, but it generates far less log activity. You can also add a transaction log backup after a reindex.

Alternatively, you can change the recovery model to bulk-logged.
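
To see where each index stands against that 30% threshold, something like this works (a sketch; run it in the database in question):

    -- List fragmentation per index and suggest an action.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           ix.name                    AS index_name,
           ips.avg_fragmentation_in_percent,
           CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
                WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
                ELSE 'leave alone'
           END AS suggested_action
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS ix
         ON ix.object_id = ips.object_id
        AND ix.index_id  = ips.index_id
    WHERE ips.index_id > 0  -- skip heaps
    ORDER BY ips.avg_fragmentation_in_percent DESC;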
 

Author Comment

by:canuckconsulting
ID: 40470319
Thank you, Vitor.

So, just to identify whether this is the problem: would the rebuild significantly impact the log size of a large DB?

Related to your 30% statement, is there a way to test an index to see if it is >= 30% fragmented and rebuild/reorganize as appropriate?
 

Author Comment

by:canuckconsulting
ID: 40470320
Also, is it normal practice to reorganize nightly?
 
LVL 50

Assisted Solution

by:Vitor Montalvão
Vitor Montalvão earned 167 total points
ID: 40470323
Here you can find a script that will do all the work for you (it checks the fragmentation level and runs the proper command). A lot of DBAs around the world are using this script from Ola Hallengren.
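
If you go with Ola Hallengren's MaintenanceSolution, a typical call to its IndexOptimize procedure looks something like this (the parameter values here are illustrative; check his site for the full documentation):

    EXECUTE dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        @FragmentationLow = NULL,
        @FragmentationMedium = 'INDEX_REORGANIZE',
        @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationLevel1 = 5,
        @FragmentationLevel2 = 30;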
 

Author Closing Comment

by:canuckconsulting
ID: 40470336
Thanks very much for the help, everyone!
