SQL trans log files not getting truncated during backups

I have the initial size for a database set to 2,141 MB and the maximum growth set to 2,141 MB.  The log file size is currently 2,191 MB.  I used the maintenance wizard to create a transaction log backup routine, but when I run the backup, the transaction log file size stays the same.  Yesterday I saw that the transaction log file had grown to 49 GB, and with the help of an expert, I shrank it to 400 MB.  Help!  I have attached a screenshot of the maintenance plan that I set up.  Thanks in advance.
maintenance-routine-wizard-trans-log-bac
davidrobertbristow asked:

DBAduck - Ben Miller (Principal Consultant) commented:
In order for the transaction log to truncate with a log backup, the database recovery model cannot be Simple.  If you drop down the "Specific databases" box in your maintenance plan, does it list your database?  If the database is not in Full or Bulk-logged recovery, it will not show up in the list.
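As an aside (not from the original thread): the recovery model can also be checked with a quick query against the sys.databases catalog view, available in SQL Server 2005 and later:

```sql
-- List each database with its current recovery model
SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name;
```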

The other thing that influences truncation is whether a transaction has not been committed. Do you get a "No active open transactions" message if you run the following command?

DBCC OPENTRAN

Because the log is circular in nature, it can only truncate up to the point of the oldest open transaction (it is a little more complicated than that, but that is the general idea).
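A related check, added here as a sketch (the database name is a placeholder): sys.databases exposes a log_reuse_wait_desc column that names the specific reason the log currently cannot truncate, such as LOG_BACKUP or ACTIVE_TRANSACTION:

```sql
-- Show why this database's log is currently prevented from truncating
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDatabase';  -- placeholder database name
```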
davidrobertbristow (Author) commented:
The recovery model is full.  Running the above command shows the message below.

No active open transactions.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
DBAduck - Ben Miller (Principal Consultant) commented:
Great that is the first step.

Now run the command while in the database.

DBCC LOGINFO

and post the results in a text file or Excel sheet. This will check the virtual log files (VLFs) and also tell you whether any are active (a status of 2).
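If the output is long, one common pattern (a sketch, not part of the original advice) is to capture the DBCC LOGINFO result set into a temp table and summarize it by status. The column list below matches SQL Server 2008; SQL Server 2012 and later add a RecoveryUnitId column at the front:

```sql
-- Capture DBCC LOGINFO output and count VLFs by status (2 = active)
CREATE TABLE #loginfo (
    FileId      INT,
    FileSize    BIGINT,
    StartOffset BIGINT,
    FSeqNo      INT,
    [Status]    INT,
    Parity      TINYINT,
    CreateLSN   NUMERIC(38, 0)
);
INSERT INTO #loginfo EXEC ('DBCC LOGINFO');

SELECT [Status], COUNT(*) AS VLFCount
FROM #loginfo
GROUP BY [Status];

DROP TABLE #loginfo;
```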
davidrobertbristow (Author) commented:
Okay, I've attached the results.  Thanks.
dbcc-loginfo-results.txt
davidrobertbristow (Author) commented:
I don't know if this matters, but I run a full db backup prior to the log truncation job.  Would this influence anything?  Thanks.
DBAduck - Ben Miller (Principal Consultant) commented:
You have about 311 VLFs, which is really high for a log as small as yours.  It looks like the log has been autogrowing, and that is not helping.

A full backup will not change the log backup size.

But in your DBCC LOGINFO it shows that none of the VLFs are active, so if you were to do a log backup right now, you should see a relatively small file come out.

You can also tell how much space is being used in the log file by using

DBCC SQLPERF(logspace)

Look for your database and see what the percentage is.  This will give you an idea of how much space is in use in your log.
davidrobertbristow (Author) commented:
It shows that the space in use is 0.4663434%.  The size of the backup log file is 416 KB.  The log data file size hasn't shrunk.
DBAduck - Ben Miller (Principal Consultant) commented:
Now I see what you are thinking.  The log file itself will not shrink unless you shrink it.  The internal virtual log files get truncated (as shown by DBCC LOGINFO), not the external file.

So the way you handle that is to do the log backup more often to clear out the internal log files so that the log file itself (.ldf file) does not grow.

External - shrink manually
Internal - truncates to the point of the last uncommitted transaction when you perform a BACKUP LOG.
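For reference, a minimal BACKUP LOG statement (the database name and backup path are placeholders) that performs the internal truncation described above:

```sql
-- Back up the transaction log, which lets inactive VLFs be truncated
BACKUP LOG [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase_log.trn'
WITH CHECKSUM;
```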

Does that make sense?
davidrobertbristow (Author) commented:
Are you saying that I would occasionally have to shrink the log file, or that if I do log file backups more often, the file itself would stay at a more consistent size?  Yesterday the file size was 400 KB and overnight it grew to 2.1 GB.  I wonder why it grew, given that there was little activity on the database and that the internal transactions are taking up less than 1%?
DBAduck - Ben Miller (Principal Consultant) commented:
The only way to control or keep the size of the actual file for the log consistent is to back up the log more often so that the committed transactions that have not been backed up get truncated.

How often do you back up the transaction log?  Any activity like rebuilding indexes or updating statistics can make the transaction log accumulate data. The transaction log is used for a lot of things, so it is hard to say; the only way to really know what took place or how it grew is to look at the SQLPERF stats more often.

You could find out when the log grew by running the following query.

Reference: https://www.simple-talk.com/sql/database-administration/sql-server-database-growth-and-autogrowth-settings/

DECLARE @filename NVARCHAR(1000);
DECLARE @bc INT;
DECLARE @ec INT;
DECLARE @bfn VARCHAR(1000);
DECLARE @efn VARCHAR(10);

-- Get the name of the current default trace
SELECT @filename = CAST(value AS NVARCHAR(1000))
FROM ::fn_trace_getinfo(DEFAULT)
WHERE traceid = 1 AND property = 2;

-- rip apart file name into pieces
SET @filename = REVERSE(@filename);
SET @bc = CHARINDEX('.',@filename);
SET @ec = CHARINDEX('_',@filename)+1;
SET @efn = REVERSE(SUBSTRING(@filename,1,@bc));
SET @bfn = REVERSE(SUBSTRING(@filename,@ec,LEN(@filename)));

-- set filename without rollover number
SET @filename = @bfn + @efn

-- process all trace files
SELECT
    ftg.StartTime
  , te.name AS EventName
  , DB_NAME(ftg.databaseid) AS DatabaseName
  , ftg.Filename
  , (ftg.IntegerData * 8) / 1024.0 AS GrowthMB
  , (ftg.duration / 1000) AS DurMS
FROM ::fn_trace_gettable(@filename, DEFAULT) AS ftg
INNER JOIN sys.trace_events AS te ON ftg.EventClass = te.trace_event_id
WHERE (ftg.EventClass = 92  -- Data File Auto-grow
    OR ftg.EventClass = 93) -- Log File Auto-grow
ORDER BY ftg.StartTime;


davidrobertbristow (Author) commented:
My maintenance jobs, which start at 12:15 AM, consist of reorganizing indexes, updating statistics, full database backups, then transaction log backups, in that order.  It would make sense now that the file would autogrow as a result of the jobs leading up to the trans log backup.  In fact, at 12:24 AM the file autogrew by 4,900 MB.  It makes sense that when the trans log backup runs it truncates the log, but the log file size is still large as a result of the previous jobs.

So, to fix the problem, should I
1.  Set the autogrow limit on the file to, say, 6 GB so that it can accommodate growth from the maintenance jobs.
2.  Run a scheduled shrinkdb right after the trans log backup job that runs at around 2:15 AM.
3.  Schedule several trans log backup jobs during the day in addition to the nightly one.

Or do you have another suggestion for how I should best handle this?

Thanks again.
DBAduck - Ben Miller (Principal Consultant) commented:
I wouldn't shrink the database at all in a maintenance job.

There is nothing wrong with having a 5 GB log file if that is what is needed. But I would not limit the growth of the file, I would set the Auto Growth to a reasonable amount because Auto Growth is for emergency growth.

What I would do is this.

I would shrink the log file only, using Shrink Files, then grow it to 5,000 MB or 6,000 MB, set the autogrowth to something like 512 MB, and leave the max growth at either unlimited or a number that would keep the disk from filling up.  Then leave it at that.  It is better to manage the transaction log yourself instead of letting autogrowth manage the size.  If you need that much log for the jobs, then I would allocate that much and leave it there.

I would set the log backup to run every 15 minutes, starting at the time you kick off the reorganization or rebuild jobs, to keep things down.  I would run DBCC SQLPERF(logspace) and capture the output before each log backup so that you can tell how much log is being used before each 15-minute backup.  Then you will at least know what the file size should be to accommodate the jobs.
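One way to capture that output automatically (a sketch only; the history table name is made up) is to INSERT...EXEC the DBCC command into a logging table from the same job step that runs the backup:

```sql
-- Hypothetical history table for DBCC SQLPERF(logspace) output
CREATE TABLE dbo.LogSpaceHistory (
    CaptureTime     DATETIME       NOT NULL DEFAULT GETDATE(),
    DatabaseName    SYSNAME        NOT NULL,
    LogSizeMB       DECIMAL(18, 2) NOT NULL,
    LogSpaceUsedPct DECIMAL(18, 2) NOT NULL,
    [Status]        INT            NOT NULL
);

-- Run this right before each scheduled log backup
INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, [Status])
EXEC ('DBCC SQLPERF(logspace)');
```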

Does that make sense and help?
davidrobertbristow (Author) commented:
I will try what you have suggested.  However, previously I set the autogrowth to 10% and the growth limit to unlimited.  I ran the transaction log backup jobs nightly, and they were working, but the trans log file kept growing until it reached 49 GB.  I was told by the support technicians at the company we bought our ERP from that setting the growth limit to unlimited would prevent the trans log file from being truncated.  What do you think?
DBAduck - Ben Miller (Principal Consultant) commented:
Setting it to unlimited just tells SQL that it can autogrow the physical file to unlimited, but the truncation is internal and is controlled by the transaction log backup so that is not true.

But I would not set your autogrowth to 10% or any percentage.  Always use a fixed number.
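A fixed increment can be set with ALTER DATABASE; in this sketch, the database and logical log file names are placeholders:

```sql
-- Replace a percentage autogrowth with a fixed 512 MB increment
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = YourDatabase_log, FILEGROWTH = 512MB);
```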
Scott Pletcher (Senior DBA) commented:
A few pointers:

1) Never set autogrow to a %, set it to an actual number.
2) Never use maintenance plans to do index reorganizations/rebuilds, because they always rebuild all indexes.  Instead, use a script to rebuild only those that actually need it.  Ola Hallengren has free maintenance scripts you can use.
3) Pre-allocate the necessary log space and leave it alone.  Shrinking and re-growing the log is extremely high overhead.  Once you've got #2 in place, you should let the log grow and see how big it gets.  Then shrink it and grow it in, say, 2GB increments (assuming your disk speed can format that amount of space quickly enough) until you reach the necessary limit.
For example, say you do need 6 GB; run the next two lines:
USE db_name
EXEC sp_helpfile
From that output, for file #2 (the log), copy the first column, "name", to the "NAME = " value below

DBCC SHRINKFILE ( 2 ) --shrink the log to its minimum size
ALTER DATABASE db_name MODIFY FILE ( NAME = <name from sp_helpfile command>, SIZE = 2GB, FILEGROWTH = 50MB )
ALTER DATABASE db_name MODIFY FILE ( NAME = <name from sp_helpfile command>, SIZE = 4GB )
ALTER DATABASE db_name MODIFY FILE ( NAME = <name from sp_helpfile command>, SIZE = 6GB )
davidrobertbristow (Author) commented:
Thanks for your feedback, Ben and Scott.  I have to step away from this for a few hours to work on other priorities.  I will get back to you.  Thanks again.
davidrobertbristow (Author) commented:
Okay, I've done the following.
1.  Set the growth factor to 250 MB rather than the previous 10%.
2.  From running the script that Ben posted, I can see that in a 24-hour period the log file grew by 8.9 GB.  This is with only a single nightly log file backup.  I have therefore set a growth limit on the log file itself of 10 GB.  This should be plenty, especially with more frequent log file backups.
3.  I've scheduled trans log backups hourly, starting after the nightly maintenance and ending just before it.
4.  I will have to monitor disk space on the destination disk where I store backups.
5.  I'm still unsure about whether to run the nightly job to reorganize indexes and update statistics.  A consultant set this up and thought that it was important at the time.  It takes about 15 minutes to run.  Thoughts?

After receiving your final feedback on this I will award points.  I appreciate the sage advice since I'm a real newbie with SQL.
DBAduck - Ben Miller (Principal Consultant) commented:
Great on all.  I still would put the max growth for the log at 15 or 20 GB if you are determined to limit it.

Regarding #5: if the maintenance runs in 15 minutes, then it is generating a significant amount of log, and all of it accumulates before the log backup runs.

I would implement a different index solution rather than a maintenance plan.  http://ola.hallengren.com has some easy to configure and run scripts that will reorganize or rebuild only the indexes that need it instead of all the indexes.  This will cut down on the log space needed.

Good list, even though it took a while to get to the final solution for you.

davidrobertbristow (Author) commented:
Ben, thanks for sticking by me step-by-step to help solve this problem!
Microsoft SQL Server 2008
