davidrobertbristow

asked on

SQL trans log files not getting truncated during backups

I have the initial size for the database set to 2,141 MB.  I have the maximum growth set to 2,141 MB.  The log file size is currently 2,191 MB.  I used the maintenance wizard to create a trans log backup routine.  When I run the backup, the trans log file size stays the same.  Yesterday I saw that the trans log file had grown to 49 GB and, with the help of an expert, I shrank it to 400 MB.  Help!  I have attached a screenshot of the maintenance plan that I set up.  Thanks in advance.
maintenance-routine-wizard-trans-log-bac
DBAduck - Ben Miller

In order for the transaction log to truncate on a backup, the database recovery model cannot be Simple.  If you drop down the Specific databases box in your maintenance plan, does it list your database?  If the database is not in Full or Bulk-logged recovery, it will not show up in the list.
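As a side note, you can confirm the recovery model with a quick query (the database name here is a placeholder):

-- check the recovery model; replace 'YourDatabase' with your database name
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'YourDatabase';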

The other thing that influences truncation is whether a transaction is still open (uncommitted). Do you get a "No active open transactions" message if you run the following command?

DBCC OPENTRAN

Because the log is circular in nature, it can only truncate up to the point of the oldest uncommitted transaction (it is a little more complicated than that, but that is the general idea).
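On that note, sys.databases also exposes a log_reuse_wait_desc column that reports what, if anything, is currently preventing log truncation (for example LOG_BACKUP or ACTIVE_TRANSACTION). A quick check:

-- shows, per database, why the log cannot truncate right now
SELECT name, log_reuse_wait_desc
FROM sys.databases;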
davidrobertbristow

ASKER

The recovery model is Full.  Running the above command shows the message below.

No active open transactions.
DBCC execution completed. If DBCC printed error messages, contact your system administrator
Great, that is the first step.

Now run the following command while in the context of the database:

DBCC LOGINFO

and post the results in a text file or Excel sheet. This will show the virtual log files (VLFs) and tell you whether they are active (a status of 2).
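(If you are on SQL Server 2016 SP2 or later, the sys.dm_db_log_info DMV returns the same VLF information without DBCC; a minimal sketch:)

-- VLF count and active VLFs for the current database (SQL Server 2016 SP2+)
SELECT COUNT(*) AS vlf_count,
       SUM(CASE WHEN vlf_active = 1 THEN 1 ELSE 0 END) AS active_vlfs
FROM sys.dm_db_log_info(DB_ID());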
Okay, I've attached the results.  Thanks.
dbcc-loginfo-results.txt
I don't know if this matters, but I run a full db backup prior to the log truncation job.  Would this influence anything?  Thanks.
You have about 311 VLFs, which is really high for a log as small as yours.  It looks like it has been autogrowing, and that is not doing it any favors.

A full backup will not change the log backup size.

But in your DBCC LOGINFO it shows that none of the VLFs are active, so if you were to do a log backup right now, you should see a relatively small file come out.

You can also tell how much space is being used in the log file by using

DBCC SQLPERF(logspace)

Look for your database and see what the percentage is.  This will give you an idea of how much space is in use in your log.
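On SQL Server 2012 and later, the same numbers are available for just the current database through a DMV; a minimal sketch:

-- log size and percent used for the current database (SQL Server 2012+)
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS log_used_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;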
It shows that the space in use is 0.4663434%.  The size of the backup log file is 416 KB.  The log data file size hasn't shrunk.
Now I see what you are thinking.  The log file itself will not shrink unless you shrink it.  The internal virtual log files get truncated (which is what DBCC LOGINFO shows), not the external file.

So the way you handle that is to do the log backup more often to clear out the internal log files so that the log file itself (.ldf file) does not grow.

External - shrink manually
Internal - truncates to the point of the oldest uncommitted transaction when you perform a BACKUP LOG.
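
For reference, the log backup itself is just the following (database name and path are placeholders):

-- backs up the log and truncates its inactive portion
BACKUP LOG YourDatabase
TO DISK = N'D:\Backups\YourDatabase_log.trn';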

Does that make sense?
Are you saying that I would occasionally have to shrink the log file, or that if I do log backups more often the file itself would stay at a more consistent size?  Yesterday the file size was 400 KB and overnight it grew to 2.1 GB.  I wonder why it grew, given that there was little activity on the database and the log space in use is less than 1%?
The only way to control the size of the actual log file, or keep it consistent, is to back up the log more often so that the committed transactions that have not yet been backed up get truncated.

How often do you back up the transaction log?  Activity like index rebuilds or statistics updates can make the transaction log accumulate data. The transaction log is used for a lot of things, so it is hard to say; the only way to really know what took place or how it grew is to look at the sqlperf stats more often.

You could find out when the log grew by running the following query.

Reference: https://www.simple-talk.com/sql/database-administration/sql-server-database-growth-and-autogrowth-settings/

DECLARE @filename NVARCHAR(1000);
DECLARE @bc INT;
DECLARE @ec INT;
DECLARE @bfn VARCHAR(1000);
DECLARE @efn VARCHAR(10);

-- Get the name of the current default trace
SELECT @filename = CAST(value AS NVARCHAR(1000))
FROM ::fn_trace_getinfo(DEFAULT)
WHERE traceid = 1 AND property = 2;

-- rip apart file name into pieces
SET @filename = REVERSE(@filename);
SET @bc = CHARINDEX('.',@filename);
SET @ec = CHARINDEX('_',@filename)+1;
SET @efn = REVERSE(SUBSTRING(@filename,1,@bc));
SET @bfn = REVERSE(SUBSTRING(@filename,@ec,LEN(@filename)));

-- set filename without rollover number
SET @filename = @bfn + @efn;

-- process all trace files
SELECT 
  ftg.StartTime
,te.name AS EventName
,DB_NAME(ftg.databaseid) AS DatabaseName  
,ftg.Filename
,(ftg.IntegerData*8)/1024.0 AS GrowthMB 
,(ftg.duration/1000)AS DurMS
FROM ::fn_trace_gettable(@filename, DEFAULT) AS ftg 
INNER JOIN sys.trace_events AS te ON ftg.EventClass = te.trace_event_id  
WHERE (ftg.EventClass = 92  -- Data File Auto-grow
    OR ftg.EventClass = 93) -- Log File Auto-grow
ORDER BY ftg.StartTime


My maintenance jobs, which start at 12:15am, consist of reorganizing indexes, updating statistics, full database backups, then transaction log backups, in that order.  It makes sense now that the file would autogrow as a result of the jobs leading up to the trans log backup.  In fact, at 12:24am the file autogrew by 4,900 MB.  It makes sense that when the trans log backup runs it truncates the log, but that the log file size is still large as a result of the previous jobs.

So, to fix the problem, should I
1.  Set the autogrow limit on the file to, say, 6 GB so that it can accommodate growth from the maintenance jobs.
2.  Run a scheduled shrinkdb right after the trans log backup job that runs at around 2:15am.
3.  Schedule several trans log backup jobs during the day in addition to the nightly one.

Or do you have another suggestion for how I should best handle this?

Thanks again.
I wouldn't shrink the database at all in a maintenance job.

There is nothing wrong with having a 5 GB log file if that is what is needed.  But I would not limit the growth of the file; I would set the auto growth to a reasonable amount, because auto growth is for emergency growth.

What I would do is this.

I would shrink the log file only, via Shrink Files.  Then I would grow it to 5,000 MB or 6,000 MB, set the auto growth to something like 512 MB, and leave the max growth at either unlimited or a number that would keep the disk from filling up.  Then leave it at that.  It is better to manage the transaction log yourself instead of letting auto growth manage the size.  If you need that much log for the jobs, then I would allocate that much and leave it there.

I would set the log backup to run every 15 minutes, starting at the time you kick off the reorganization or rebuild jobs, to keep things down.  I would run DBCC SQLPERF(logspace) and capture the output before each log backup so that you can tell how much log is being used before each 15-minute backup.  Then you will at least know what the file size should be to accommodate the jobs.
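A minimal sketch of capturing that output (the table and column names are just illustrative):

-- holding table matching the four columns DBCC SQLPERF(logspace) returns
CREATE TABLE dbo.LogSpaceHistory (
    capture_time       DATETIME      NOT NULL DEFAULT GETDATE(),
    database_name      SYSNAME       NOT NULL,
    log_size_mb        DECIMAL(18,5) NOT NULL,
    log_space_used_pct DECIMAL(18,5) NOT NULL,
    status             INT           NOT NULL
);

-- run right before each scheduled log backup
INSERT INTO dbo.LogSpaceHistory (database_name, log_size_mb, log_space_used_pct, status)
EXEC ('DBCC SQLPERF(logspace)');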

Does that make sense and help?
I will try what you have suggested.  However, previously I set the autogrowth to 10% and the growth limit to unlimited.  I ran transaction log backups nightly, which were working, but the trans log file kept growing until it reached 49 GB.  I was told by the support technicians at the company we bought our ERP from that setting the growth limit to unlimited would prevent the trans log file from being truncated.  What do you think?
Setting it to unlimited just tells SQL Server that it can autogrow the physical file without limit; truncation is internal and is controlled by the transaction log backup, so that is not true.

But I would not set your auto growth to 10% or any percentage.  Always use a fixed number.
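For example, to switch the log from percentage growth to a fixed increment (the database and logical file names are placeholders; get the real logical name from sp_helpfile or sys.database_files):

-- set a fixed 512 MB growth increment on the log file
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_log, FILEGROWTH = 512MB);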
A few pointers:

1) Never set autogrow to a %, set it to an actual number.
2) Never use maintenance plans to do index reorganizations/rebuilds, because they always rebuild all indexes.  Instead, use a script that rebuilds only those that actually need it; Ola Hallengren has free maintenance scripts you can use.
3) Pre-allocate the necessary log space and leave it alone.  Shrinking and re-growing the log is extremely high overhead.  Once you've got #2 in place, you should let the log grow and see how big it gets.  Then shrink it and grow it in, say, 2GB increments (assuming your disk speed can format that amount of space quickly enough) until you reach the necessary limit.
For example, say you do need 6 GB; run the next two lines:

USE db_name;
EXEC sp_helpfile;

From that output, for file #2 (the log), copy the first column, "name", into the "NAME = " value below.

DBCC SHRINKFILE ( 2 ); -- shrink the log to its minimum size
ALTER DATABASE db_name MODIFY FILE ( NAME = <name from sp_helpfile command>, SIZE = 2GB, FILEGROWTH = 50MB );
ALTER DATABASE db_name MODIFY FILE ( NAME = <name from sp_helpfile command>, SIZE = 4GB );
ALTER DATABASE db_name MODIFY FILE ( NAME = <name from sp_helpfile command>, SIZE = 6GB );
Thanks for your feedback, Ben and Scott.  I have to step away from this for a few hours to work on other priorities.  I will get back to you.  Thanks again.
Okay, I've done the following.
1.  Set the growth factor to 250 MB rather than the previous 10%.
2.  From running the script that Ben posted, I can see that in a 24-hour period the log file grew by 8.9 GB.  This was with only a single nightly log backup.  I have therefore set a growth limit on the log file itself of 10 GB.  This should be plenty, especially with more frequent log backups.
3.  I've scheduled trans log backups hourly, starting after the nightly maintenance and ending just before it.
4.  I will have to monitor disk space on the destination disk where I store backups.
5.  I'm still unsure whether to run the nightly job to reorganize indexes and update statistics.  A consultant set this up and thought it was important at the time.  It takes about 15 minutes to run.  Thoughts?

After receiving your final feedback on this I will award points.  I appreciate the sage advice since I'm a real newbie with SQL.
ASKER CERTIFIED SOLUTION
DBAduck - Ben Miller
Ben, thanks for sticking by me step-by-step to help solve this problem!