
Solved

SQL Server 2012 transaction log keeps growing

Posted on 2014-10-01
15
Medium Priority
185 Views
Last Modified: 2014-10-23
I am having an issue with the size of the transaction log on one of my databases. The database is around 90 gigs and the transaction log is about 110 gigs.  I have a maintenance plan that runs a nightly full backup 7 days a week and hourly transaction log backups 24/7.  About two months ago I tried to shrink the log file.  I first did a full backup and then a transaction log backup, and then shrank the log file down to 30 gigs.  After a few minutes I started seeing the log file growing again, and it went back up to 80 gigs.  I am not a DBA, but it was my understanding that if I am doing regular transaction log backups, the log file should stay small.  Does anyone know why the log file keeps growing?

Thanks,

Bill
0
Comment
Question by:bjennings
15 Comments
 
LVL 24

Expert Comment

by:Phillip Burton
ID: 40354395
Are you on the FULL recovery model? If you are, the transaction log should be truncated at each transaction log backup.
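A quick way to check the recovery model (a sketch, not part of the original reply; 'YourDb' is a placeholder name):

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'YourDb';   -- returns FULL, BULK_LOGGED or SIMPLE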
0
 
LVL 143

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 40354431
note: "truncate log" does not mean the file will physically shrink", but just that the space internally is marked to be reused
and I also ask: which recovery model are you on: full or bulk-logged ?
in those 2 cases, the regular t-log backup should solve the issue, BUT if the file continues to grow, you likely have some old open transaction which keeps the file from not growing:

check out the output of  this command:
DBCC OPENTRAN

usually, it's some replication transaction or the like which is the culprit
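Another check worth running (a sketch, not part of the original reply): sys.databases also reports why the log cannot currently be truncated:

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDb';   -- 'YourDb' is a placeholder; typical values are LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION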
0
 
LVL 52

Expert Comment

by:Vitor Montalvão
ID: 40354774
If you set up tlog backups, it's because your database recovery model is either FULL or BULK-LOGGED. Can you check which one it is?
As Guy wrote, you probably have some long-running transaction, since the tlog grew again after you shrank it to 30GB. Check for long-running jobs in SQL Server Agent.
And what kind of database are we talking about? I mean, how is the data updated? By users with an application? By some import method (SSIS, Bulk-Insert, Replication, ...)?
0
 

Author Comment

by:bjennings
ID: 40354783
Hello Guys...Thanks for the quick replies!

The database is set to full recovery.  I ran DBCC OPENTRAN and did not find any old open transactions.  So tonight I am going to do a full backup of the database and then a transaction log backup.  Once this is done I will shrink the log file down to 30 gigs.

I heard that shrinking the log file could fragment the database. Should I worry about shrinking the log file?

Thanks,

Bill
0
 
LVL 52

Accepted Solution

by:
Vitor Montalvão earned 1000 total points
ID: 40354831
Shrinking data files can fragment the database. Shrinking log files does not, but it can affect performance, because you are shrinking a file that will have to grow again later, and the growth process can slow the database down.
And with all the shrinks and grows you should check the number of VLFs. Be careful with a high number of VLFs (thousands aren't good).
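On SQL Server 2012 the VLF count can be read from DBCC LOGINFO (a sketch; run it in the context of the database in question, names are placeholders):

USE [<your_db_name>];
DBCC LOGINFO;   -- one row per VLF, so the row count is the number of VLFs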
0
 

Author Comment

by:bjennings
ID: 40354960
Hello Vitor...Thanks for the reply.

I ran DBCC LOGINFO on my production database and I have over 4 thousand VLFs.  When you say to be careful with a high number of VLFs, how can I reduce the number of VLFs?

Thanks,

Bill
0
 
LVL 70

Assisted Solution

by:Scott Pletcher
Scott Pletcher earned 1000 total points
ID: 40355327
To clear VLFs, you would typically shrink the log down to the smallest size possible, then re-grow it in relatively large chunks to the size you want it to be.  Log space must be pre-formatted, so how much you can allocate at one time will depend on your drive speed; the db will be paused while the new space is being formatted.  You also want to make sure the autogrow is a reasonable, fixed amount (not a percentage).

USE [<your_db_name>];
--first use sp_helpfile to get the logical log file name (it is the first column in the sp_helpfile output)
DBCC SHRINKFILE ( [<logical_log_file_name>], 1 );
ALTER DATABASE [<your_db_name>] MODIFY
    FILE ( NAME = [<logical_log_file_name>], SIZE = 6GB, FILEGROWTH = 50MB ); --or 100MB or 200MB, if you think it necessary and your disk subsystem can handle it
ALTER DATABASE [<your_db_name>] MODIFY
    FILE ( NAME = [<logical_log_file_name>], SIZE = 12GB );
ALTER DATABASE [<your_db_name>] MODIFY
    FILE ( NAME = [<logical_log_file_name>], SIZE = 18GB );
ALTER DATABASE [<your_db_name>] MODIFY
    FILE ( NAME = [<logical_log_file_name>], SIZE = 24GB );
ALTER DATABASE [<your_db_name>] MODIFY
    FILE ( NAME = [<logical_log_file_name>], SIZE = 30GB );


Btw, you do not need to take a full db backup in order to work with the log file.  At most I would do a differential backup.  For log work, just take a log backup and shrink the log.
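For example (a sketch using the same placeholder names as above, plus a placeholder backup path):

BACKUP LOG [<your_db_name>] TO DISK = N'X:\Backups\your_db_log.trn';
USE [<your_db_name>];
DBCC SHRINKFILE ( [<logical_log_file_name>], 1 );   -- then re-grow in chunks as shown above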
0
 
LVL 52

Expert Comment

by:Vitor Montalvão
ID: 40356427
ScottPletcher gave a good explanation of how to do it, but how do you know the best chunk size?
Kimberly Tripp has a very good article about it. You should give it a read so you can better understand VLFs.
0
 

Author Comment

by:bjennings
ID: 40356614
I just tried shrinking my transaction log down to 30 gigs, but it only went down to 80 gigs.  It does say that the log file has 99% unused space.  Does anyone know why that would happen?

Thanks,

Bill
0
 
LVL 143

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 40356684
yes, if the last used VLF is at the end of the file, you get that.
now, just wait a couple of minutes, redo a tlog backup, and retry the shrink log file operation.
at some point it will work
0
 
LVL 52

Expert Comment

by:Vitor Montalvão
ID: 40356762
Or, if your transaction log was set to 30GB as its initial size, it won't shrink below that value.
Can you check the available space in the tlog file?
And also whether the database is part of a replication?
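One quick way to see how full each log file is (a sketch, not part of the original comment):

DBCC SQLPERF(LOGSPACE);   -- reports Log Size (MB) and Log Space Used (%) for every database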
0
 
LVL 70

Expert Comment

by:Scott Pletcher
ID: 40357232
>> Or, if your transaction log was set to 30GB as its initial size, it won't shrink below that value. <<

That's not true.  You can explicitly shrink a file to a size smaller than what it was initially allocated as.


As to the file not shrinking, it is as Guy said: the currently active VLF is near the end of the file.

DBCC LOGINFO shows which VLF is currently active; it's the one with Status = 2.  Once that status moves back to (near) the beginning of the log file, you will be able to shrink the log.
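For example (a sketch; column names as they appear in the SQL Server 2012 DBCC LOGINFO output, database name is a placeholder):

USE [<your_db_name>];
DBCC LOGINFO;
-- Each row is one VLF. Status = 2 marks active VLFs; StartOffset shows where each
-- VLF sits in the physical file. If the Status = 2 row is near the end of the file,
-- take another log backup and retry the shrink once the active VLF cycles back.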
0
 
LVL 52

Expert Comment

by:Vitor Montalvão
ID: 40357275
That's not true.  You can explicitly shrink a file to a size smaller than what it was initially allocated as.
Right. Thanks for the correction.
We can shrink the transaction log to a minimum of 1 VLF, so the smallest it can be is the size of 1 VLF. Only if you had a single VLF of 30GB would you be unable to shrink below that size.
0
 

Author Comment

by:bjennings
ID: 40357713
Thanks everyone for the replies!!  I am going to shrink the log again tonight in a maintenance window.

Thanks,

Bill
0
 

Author Closing Comment

by:bjennings
ID: 40399722
I was able to shrink the log and it has not grown in over a week!  Thank you all for your help!!

Thanks,

Bill
0
