Solved

Why do I have to backup a transaction log twice for it to shrink?

Posted on 2012-03-26
5
534 Views
Last Modified: 2012-06-28
Greetings all,

This is kind of a two-part question.  I have a SQL database that I have to keep running only until the product is replaced.  The transaction log for the database constantly grows to 2 GB or more and I have to shrink it manually.  When I manually shrink, I always have to...

1. back up the DB
2. run the shrink
3. back up the DB again
4. run the shrink

For whatever reason, I have to backup/shrink twice for the shrink to work, and I think I actually found that workaround here on EE.  Why is this the case, and how can I get a DBCC shrink to work on the first shot?

The reason I need to know this is that I need to automate this shrink process, and the maintenance plans never really shrink the DB.  Which leads to my next question: can anyone point me to a good query that will shrink the DB after a backup?
0
Comment
Question by:yccdadmins
5 Comments
 
LVL 25

Accepted Solution

by:
TempDBA earned 250 total points
There is no need to do it twice, and it would help if you posted the link to the solution you are following.  A shrink should succeed on the first attempt.  Before shrinking, check the status of the log file to see why it is not allowing the log to shrink.  The following query will give you some idea:

SELECT d.name, d.log_reuse_wait, d.log_reuse_wait_desc FROM master.sys.databases d

If the reason shown is a pending log backup, then you need to take a log backup first; otherwise no backup is needed.  Check the log file usage for the particular database using:

DBCC SQLPERF(LOGSPACE)

This will show how much of the space is used and what percentage is free.  If most of the space is used and very little is free, check what transaction is eating up the space.  Check for open transactions:

DBCC OPENTRAN     -- note: this has to be run in the context of the database in question

Then shrink the database as per the requirement.  I am sorry, but the fix you are using is not ideal.
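The diagnostic steps above could be combined into a single script along these lines.  This is only a sketch: "YourDb", the backup path, and the logical log file name "YourDb_log" are placeholders (you can find the real logical name in sys.database_files).

```sql
-- Check why the log cannot be reused yet
SELECT d.name, d.log_reuse_wait, d.log_reuse_wait_desc
FROM master.sys.databases d
WHERE d.name = 'YourDb';

-- If log_reuse_wait_desc shows LOG_BACKUP, take a log backup first
BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_log.trn';

-- Then shrink the log file to a target size (in MB)
USE YourDb;
DBCC SHRINKFILE (YourDb_log, 512);
```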
0
 
LVL 11

Assisted Solution

by:b_levitt
b_levitt earned 250 total points
see the first part of the "More Information" section:
http://support.microsoft.com/kb/272318

Shrinking removes inactive parts of the log.  In order to be inactive, the log must be truncated, which occurs as part of a log backup.

However, the transaction log is made up of many virtual log files (VLFs) that are used in a round-robin fashion.  Only virtual logs at the end of the file can be pruned, so if SQL Server is currently using the last virtual log, nothing can be removed.  The old fix in SQL Server 7.0 was to run a script that issued enough dummy transactions to fill up the log and wrap the pointer around to the first virtual log (you can still use this method if you prefer):
http://support.microsoft.com/kb/256650/EN-US


DBCC now does this for you, but it still requires the extra step of truncating the log again so the inactive space can be deleted.

This article visualizes the architecture nicely:
http://msdn.microsoft.com/en-us/library/aa174526%28v=sql.80%29.aspx
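Put together, the backup/shrink/backup/shrink cycle the asker describes could be automated with something like the following.  This is a sketch: "YourDb", the backup paths, and the logical file name "YourDb_log" are placeholders to be replaced with your own values.

```sql
-- First log backup truncates the inactive portion of the log
BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_log1.trn';

-- First shrink wraps the active virtual log back toward the start of the file
USE YourDb;
DBCC SHRINKFILE (YourDb_log, 512);

-- Second log backup marks the now-inactive VLFs at the end as reusable
BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_log2.trn';

-- Second shrink can now release the space at the end of the file
DBCC SHRINKFILE (YourDb_log, 512);
```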
0
 
LVL 11

Expert Comment

by:b_levitt
I should also mention that if you are doing regular log backups, it is generally not necessary, or even recommended, to shrink the transaction log.  2 GB is simply the equilibrium point for the volume of transactions occurring and the frequency of your log backups.  Shrinking the log just puts unneeded stress on SQL Server to grow the log back to that point.  There are of course exceptions, say a one-off load that causes your log to be larger than normal.  But in most cases, I'd say increase the frequency of your transaction log backups before shrinking the log if you believe the log size is unacceptable.
0
 
LVL 75

Expert Comment

by:Anthony Perkins
I have a simple solution for you:  Change your Recovery Model to Simple.

You are obviously not aware of how badly you are fragmenting the database, and any advantage you may gain from using the Full Recovery Model is lost: you will never be able to use point-in-time restore.
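If losing point-in-time restore is acceptable, the switch is a one-liner.  A sketch, with "YourDb" and the logical file name "YourDb_log" as placeholders:

```sql
-- Switch to SIMPLE recovery: log space is reused automatically after checkpoints,
-- so no log backups (and no routine shrinking) are needed
ALTER DATABASE YourDb SET RECOVERY SIMPLE;

-- One final shrink to reclaim the space the log has already allocated
USE YourDb;
DBCC SHRINKFILE (YourDb_log, 512);
```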
0
 

Author Comment

by:yccdadmins
All I had to do was discontinue the automated optimization and integrity checks the last Admin had left on the database maintenance plans.

And I was perfectly aware of what I was doing to the transaction log, but I had disk space issues while trying to port to a virtual server.  Also, point-in-time restore worked fine.
0
