Solved

SQL 2008R2 file (database.ldf) will not shrink

Posted on 2013-12-04
1,288 Views
Last Modified: 2014-01-20
Hello Experts,

I have a database that is about 8 GB with an LDF file that is about 5 GB. I take a transaction log backup every hour and a full backup once a day. On a previous version of SQL Server, the transaction log file would shrink to a very small size once the full backup completed.

I have tried to manually shrink the transaction log file and it does not change.

Should the transaction log file get smaller?

Thanks for your help.
Comment
Question by:tucktech
9 Comments
 
LVL 15

Expert Comment

by:pateljitu
ID: 39696001
Try this article, which covers a similar transaction log shrinking issue:
http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SQL-Server-2005/Q_28287001.html

Are you running any active replication on SQL servers?

Also, I'm not sure if you tried the manual shrink via a T-SQL script or the GUI, so I have provided both options below:

T-SQL:
USE DatabaseName
GO
DBCC SHRINKFILE(<TransactionLogName>, 1)
-- Note: BACKUP LOG ... WITH TRUNCATE_ONLY was discontinued in SQL Server 2008,
-- so on 2008 R2 take a regular log backup instead:
BACKUP LOG <DatabaseName> TO DISK = N'<backup path here>'
DBCC SHRINKFILE(<TransactionLogName>, 1)
GO



GUI to shrink log:
http://www.katieandemil.com/sql-server-shrink-log-file-to-zero-mb-2008-r2
 
LVL 10

Expert Comment

by:PadawanDBA
ID: 39696189
Just for grins, when you check the output of this:

dbcc sqlperf( logspace );



What does it give you for the free space in the log file for this database?  

The log is going to grow until a transaction log backup completes and moves the LSNs around to free up space within the file. Basically, it stores the log records for all transactions that occurred between the previous transaction log backup and the current time (when the database is in the full recovery model). The backup moves the starting LSN so that the transaction log only retains entries from the oldest open transaction onward. This dovetails into what pateljitu mentioned about replication: I have seen scenarios where replication holds an open transaction for the entire time it is active, which prevents its log records from being flushed.
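As a quick check (a sketch; the database name below is a placeholder), the `log_reuse_wait_desc` column in `sys.databases` will tell you what is currently preventing log truncation, e.g. REPLICATION, ACTIVE_TRANSACTION, or LOG_BACKUP:

```sql
-- Shows the recovery model and the reason the log cannot currently be truncated.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabaseName';  -- placeholder: substitute your database name
```

If this reports LOG_BACKUP, a transaction log backup is what frees the space; if it reports REPLICATION or ACTIVE_TRANSACTION, no amount of backups will shrink the file until that condition clears.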

What is the velocity with which data is inserted/updated/deleted in this DB? (5 GB worth of transaction log activity in a single hour is pretty crazy for an 8 GB database, assuming it's not a staging DB where the data is regularly changing.) Shrinking files should be a last resort and never something you do regularly. If you need the space and the transaction log is legitimately that large, then you need more disk.

When a transaction log grows, it freezes all operations on the database until the grow operation is done, so you really do want your transaction log right-sized for the volume your database needs. For data files you can largely avoid that penalty with instant file initialization, but transaction logs cannot be instantly initialized: SQL Server must zero out the new space on disk before it can complete the grow operation. Additionally, repeated autogrowth will fragment the heck out of your transaction log, which is accessed sequentially and is crucial for throughput on the database as a whole.
 

Author Comment

by:tucktech
ID: 39696306
Hello PadawanDBA, thanks for all the details! Here are the results of my query:

databasename   5298.742   0.4465056   0

The first number is the log file size, the next is the log space used %, and the third is the status.

If I read this correctly, 0.44% is not even 1 percent, so I should perform some maintenance.

I am thinking that my FULL backup should reduce the log, but you mention a transaction log backup. Is there a difference? Is this what I should do?
 
LVL 10

Expert Comment

by:PadawanDBA
ID: 39696503
The full backup does not touch the transaction log as far as moving the LSN pointers is concerned. What would be interesting to see is the free space in the transaction log over the course of the day, to check whether there is a point where the log is using any meaningful amount of space. Transaction log backups free up space within the transaction log itself; they do not return that space to the OS. If you never see it come anywhere close to 5 GB, I would say you can definitely shrink it down. The other benefit of gathering data points on how full the transaction log gets is that it gives you a good idea of how to size it.
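One way to gather those data points (a sketch; the table name and schedule are hypothetical) is to capture the `DBCC SQLPERF(LOGSPACE)` output into a table on a schedule, for example from a SQL Agent job:

```sql
-- One-time setup: a table to hold log-space snapshots (name is hypothetical).
CREATE TABLE dbo.LogSpaceHistory (
    CapturedAt      datetime      NOT NULL DEFAULT GETDATE(),
    DatabaseName    sysname       NOT NULL,
    LogSizeMB       decimal(18,4) NOT NULL,
    LogSpaceUsedPct decimal(9,5)  NOT NULL,
    Status          int           NOT NULL
);
GO

-- Scheduled step (e.g. every 30 minutes): DBCC SQLPERF(LOGSPACE) returns one
-- row per database, which INSERT ... EXEC captures into the table.
INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
EXEC ('DBCC SQLPERF(LOGSPACE)');
```

After a day or two, querying `dbo.LogSpaceHistory` for the peak `LogSpaceUsedPct` per database gives you a defensible target size for the log file.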

 

Author Comment

by:tucktech
ID: 39726872
Hello, I performed a backup of the transaction log via SSMS. What next steps should I take to reduce the size of the transaction log file? Based on the information provided earlier, the file does not appear to have much content.

I ran dbcc sqlperf( logspace ) and had the following results:

12/04/13 - 5,298.742, 0.4465056%, 0
12/18/13 - 5,291.867, 0.9825476%, 0

The first number is the log file size, the next is the log space used %, and the third is the status.

Not sure what to do from here. It appears the transaction log is less than 1% used.

So, how would I go about shrinking?   Do I perform a transaction log backup and then shrink?
 

Author Comment

by:tucktech
ID: 39752484
Hello, can anyone respond?  Thanks
 
LVL 10

Accepted Solution

by:
PadawanDBA earned 400 total points
ID: 39752609
I would do a full database backup, then a transaction log backup, and then I would shrink the transaction log file.

There are two methods for shrinking a data file. You can truncate, where, under the hood, the SQL Server database engine finds the last allocated extent in the file and releases everything after it back to the OS (no reordering of pages):

dbcc shrinkfile( N'<logical file name here>', truncateonly );



You can also reorder the pages, where allocated pages at the end of the file are moved to unallocated space at the front of the file before the end is released:

dbcc shrinkfile( N'<logical file name here>', <target size in MB>  );



The process is a little different in the case of a transaction log:

alter database <dbNameHere>
    set recovery simple;
go

dbcc shrinkfile( N'<logical file name of transaction log here>', <target size in MB> );
go

alter database <dbNameHere>
    set recovery full;
go



The reason for this is that Microsoft has made you really think about your transgressions before you can shrink your transaction log (I'm saying it in a lighthearted fashion, but shrinking transaction logs/data files is in general a very bad idea; there are, of course, times when it may be necessary, just don't make it a habit).
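One caveat worth adding, since it follows directly from the recovery-model switch above: flipping to SIMPLE breaks the log backup chain, so after setting the database back to FULL you need a full (or differential) backup before your hourly log backups are useful again. A sketch, with a hypothetical backup path:

```sql
-- Restart the log backup chain after returning to the FULL recovery model;
-- until this completes, subsequent log backups cannot be used for
-- point-in-time restores.
BACKUP DATABASE <dbNameHere>
    TO DISK = N'X:\Backups\dbNameHere_full.bak'  -- hypothetical path
    WITH INIT;
GO
```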
 

Assisted Solution

by:tucktech
tucktech earned 0 total points
ID: 39782328
The recovery model for the database is FULL rather than SIMPLE, so it will not shrink unless I change the model.
 

Author Closing Comment

by:tucktech
ID: 39793739
Valid points made.  I changed a copy of the database to simple and saw that I could reduce the database log size.  It all depends on the environment and what characteristics you want.
