Solved

Transaction Log Growth

Posted on 2009-07-08 · 506 Views · Last Modified: 2012-05-07
At present we are experiencing SQL timeouts, especially while the transaction log is being backed up. This backup is scheduled hourly, and the log is truncated after the backup. The size of each of the backups reaches over 5 gigabytes. I'm wondering if work being done on tempdb also gets logged in the transaction log of the main db, and whether there is a good way to check what causes this extreme growth during peak times.
Question by:dianos
17 Comments
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 24804303
>and the log is truncated after the backup.
huh? you mean, shrunk?
if yes, remove the SHRINK step from the hourly job.

the transaction log file will keep its size, reuse space internally, and only grow if even more space is required.
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 24804313
>The size of each of the backups reaches over 5 gigabytes.
time to also increase the backup frequency, from every hour to every 15 minutes, for example.
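A minimal sketch of such a log backup step, assuming a database named YourDb and a local backup path (both placeholders; adjust for your environment):

```sql
-- Back up the transaction log to a timestamped file.
-- YourDb and the D:\Backups path are hypothetical.
DECLARE @file nvarchar(260);
SET @file = N'D:\Backups\YourDb_log_'
    + CONVERT(nvarchar(8), GETDATE(), 112) + N'_'                 -- yyyymmdd
    + REPLACE(CONVERT(nvarchar(8), GETDATE(), 108), N':', N'')    -- hhmmss
    + N'.trn';

BACKUP LOG YourDb
TO DISK = @file
WITH INIT, CHECKSUM;
```

Running this every 15 minutes instead of hourly keeps each backup (and the active portion of the log) roughly a quarter of the size.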
 
LVL 1

Author Comment

by:dianos
ID: 24804382
I meant to say that each transaction log backup only covers the activity of the last hour. The problem is the amount of data being backed up when the server is overloaded. (Not to mention that writing that much data to the transaction log is most likely taxing as well.) We don't have any issues with space available for the log.
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 24804399
anyhow, please answer my questions.
is it a SHRINK you perform or not?
can you increase the frequency?

also: are you sure you don't have any "runaway" updates that update the entire table instead of just single rows?
 
LVL 1

Author Comment

by:dianos
ID: 24804431
I know we can increase the frequency of the backups (or decrease it, for that matter, to avoid peak times),
but that is not what I am looking for.
I need to know what is causing the growth. I know we have a lot of activity in tempdb, for instance.
Is there a good way to check what is causing the growth?
 
LVL 1

Author Comment

by:dianos
ID: 24804501
The space reserved for the transaction log is fairly constant, around 18GB, so no, it is not being shrunk.

This is what I am trying to find out: what could be causing the growth. It might be a runaway update; there is a multitude of applications working with the database.

Perhaps there is some way of looking at what is in the transaction log?
 
LVL 142

Accepted Solution

by:Guy Hengel [angelIII / a3] (earned 250 total points)
ID: 24804503
>I know we have a lot of activity in the tempdb for instance
that will NOT increase the data in your database's .ldf; tempdb has its own transaction log.

you could check the content of the .ldf file with third-party tools (the database must be offline, AFAIK), for example Lumigent Log Explorer (there are others...)
 
LVL 1

Author Comment

by:dianos
ID: 24805184
Looks like someone already attempted to answer this: {http:#22749498}
Log Explorer is not sold anymore; does it even work on SQL 2005?

I was able to get some information from the transaction log using:

SELECT TOP 1000 *
FROM ::fn_dblog(NULL, NULL)
WHERE AllocUnitName IS NOT NULL

but it is very raw data, hard to correlate with the actual work performed by applications on the server.

Is anyone aware of any tool/method that could trace what is causing the log file to grow?
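One way to make that output less raw (a sketch only; fn_dblog is undocumented, and its column names such as [Log Record Length] are as exposed on SQL Server 2005 and may change between versions) is to aggregate the log records instead of browsing them row by row:

```sql
-- Summarize the active log by operation and allocation unit,
-- heaviest contributors first. fn_dblog is undocumented; treat
-- the numbers as approximate.
SELECT  Operation,
        AllocUnitName,
        COUNT(*)                 AS LogRecords,
        SUM([Log Record Length]) AS TotalBytes
FROM    ::fn_dblog(NULL, NULL)
WHERE   AllocUnitName IS NOT NULL
GROUP BY Operation, AllocUnitName
ORDER BY TotalBytes DESC;
```

Run shortly before the hourly backup, this should point at the tables and operation types generating most of the log volume.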
 
LVL 7

Expert Comment

by:wilje
ID: 24805324
If you have allocated 18GB to your transaction log (.ldf file), then what growth are you worried about?
Is it that during peak times your backups are in excess of 5GB every hour? If so, you need to perform a backup more often to reduce the size.
And when you say you truncate the log, verify that you are not performing a BACKUP LOG {your database} WITH TRUNCATE_ONLY. If you have this set up, then your transaction log backups are useless, because you have broken the log chain. This command is deprecated in 2005 and no longer works in 2008.
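For reference, this is the pattern to check the maintenance plan for, versus an ordinary log backup (YourDb and the path are placeholders):

```sql
-- Deprecated in 2005, removed in 2008: truncates the log but
-- BREAKS the log chain, making later log backups unrestorable.
BACKUP LOG YourDb WITH TRUNCATE_ONLY;

-- A regular log backup keeps the chain intact and truncates the
-- inactive portion of the log as a side effect.
BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_log.trn';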
 
LVL 1

Author Comment

by:dianos
ID: 24805880
The backup is performed by a maintenance plan created by the wizard. I do not see the actual code in there, but I assume it does it properly.

I am actually looking for a way to get some meaningful data about what is causing the transaction log to grow. 2-5GB/h is with moderate usage; potentially reaching 18GB/h is abnormal behavior, and I'm trying to trace what is causing it.

I know I can reschedule the backups, but this will not solve the source of the problem, only put it off.
 
LVL 7

Expert Comment

by:wilje
ID: 24806033
Okay, I am not sure what you are having a problem with. To review:
  • Transaction log backups are currently running every hour and average 2-5GB/h for normal usage, as seen by monitoring the size of the transaction log backups.
  • At some point, you are seeing transaction log backups grow to ~18GB/h, again seen by monitoring the backups.
  • You now have a transaction log that is 18GB, and you are not shrinking it.
If the above is correct, then you know the time frames where you are seeing these large transaction log backups, and you should be able to run a server-side trace to capture the activity during that window.
My best guess at this point would be automated reindexing processes. You should be able to correlate the times you are seeing large transaction log backups with the runs of your reindexing process.
Since disk space is not a problem, I would recommend growing the transaction log to 20GB, setting autogrow to 1GB, and leaving it alone, but monitoring the size. When it grows again, you should be able to correlate the growth with the increased size of the tables being reindexed or with additional activity.
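A sketch of that sizing suggestion, assuming the database is named YourDb and the log file's logical name is YourDb_log (both placeholders; check sys.database_files for the real logical name):

```sql
-- Pre-grow the log to 20GB and set a fixed 1GB autogrow increment,
-- so growth events are rare and predictable.
ALTER DATABASE YourDb
MODIFY FILE (NAME = YourDb_log, SIZE = 20480MB, FILEGROWTH = 1024MB);
```

A fixed-size increment is generally preferable to a percentage here, since percentage growth on an 18GB file produces very large, slow growth events.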
 
LVL 1

Author Comment

by:dianos
ID: 24806266
this is a 24h system used from all over the world
- transaction log backups 2-5GB/h during peak hours
- transaction log backups of less than 50MB during other hours
- size of the main db: 15GB
- space currently reserved for the transaction log: 18GB (set to autogrow)

Again, I'm looking for a way to identify what is causing the log file to grow so quickly.
There are thousands of requests per minute during the times the log files grow; it is very difficult to determine what is causing the growth by monitoring activity.

when looking at the transaction log with fn_dblog, I've seen thousands of
LOP_DELETE_ROWS
...
entries for a set of tables;
however, I will need more data than this.

it is likely that one of the applications uses the main db as a temp db.
 
LVL 7

Expert Comment

by:wilje
ID: 24806396
Can you please clarify what you mean by the transaction log growing? What is the current size of the transaction log file (.ldf)? What do you mean by "space currently reserved for the transaction log"? Is this how much space is on the drive, and the log file is growing to fill up the drive? Or is this the size of the file, and your backups are that large?
It is not clear...
 
LVL 142

Assisted Solution

by:Guy Hengel [angelIII / a3] (earned 250 total points)
ID: 24806550
is the application possibly doing DELETE + INSERT instead of UPDATE?
 
LVL 1

Author Comment

by:dianos
ID: 24808673
OK boys, thanks for your efforts; the culprit was found by fn_dblog + manual means.
fn_dblog is able to list the objects affected in the transaction log.

it was one of the tens of scheduled jobs: it was regenerating large tables hourly during peak times, so every transaction log backup during that window would be huge.

I still would like to have something that could list what percentage of the transaction log is associated with which objects. So if anyone knows of anything like that, please message me and I'll get you some points :)
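For what it's worth, a rough sketch of that per-object breakdown using the same undocumented fn_dblog output (column names as exposed on SQL Server 2005; treat the percentages as approximate, since they cover only the currently readable log records):

```sql
-- Approximate share of the active log attributable to each object.
SELECT  AllocUnitName,
        SUM([Log Record Length]) AS TotalBytes,
        100.0 * SUM([Log Record Length])
              / SUM(SUM([Log Record Length])) OVER () AS PctOfLog
FROM    ::fn_dblog(NULL, NULL)
WHERE   AllocUnitName IS NOT NULL
GROUP BY AllocUnitName
ORDER BY TotalBytes DESC;
```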
 
LVL 1

Author Comment

by:dianos
ID: 24808742
I have to stress that the most helpful was fn_dblog, but the system does not let me accept my own answer together with another expert's as the full solution!
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 24808775
>but the system does not let me accept my own answer
yes, that is possible.
you have to start with "accept your own comment as answer", and you can then give part of the points to one or more other comments.

anyhow, glad we could help
