Solved

Transaction Log Backup And Database Mirroring

Posted on 2009-07-06 · 551 Views · Last Modified: 2012-05-07
Hi All,

I have the below scenario:

I have a database db1 which has been configured for database mirroring.

The log file is increasing big time and due to that I have setup transaction log backup job for every 6 hours and it will delete the trn files which are older than 2 days.

After doing this do I still need to shrink the database so that the log file is not increasing or what do I need to do In order to stop the log file from increasing big time.

Thanks in advance.
Question by: matrix_aash

16 Comments
 
LVL 57

Accepted Solution

by:
Raja Jegan R earned 250 total points
ID: 24784410
Taking a log backup will not shrink your log file, for sure.
It backs up the log records and marks the inactive portion of the log as reusable by the system, but it does not release any space back to the OS.

Hence you have to run the command below to shrink the log file and keep its size under control:

DBCC SHRINKFILE ('db1_log', 1000)

where the first argument is the logical name of your log file (not the database name) and 1000 is the target size in MB.

Hope this helps
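Since SHRINKFILE takes the logical file name rather than the database name, it helps to look that name up first. A sketch, assuming SQL Server 2005 and the database db1 from the question (the logical name db1_log is hypothetical; use whatever the query returns):

```sql
-- List the logical names and sizes of db1's files; the log file's
-- row (type_desc = 'LOG') holds the name SHRINKFILE expects.
USE db1;
SELECT name, type_desc, size * 8 / 1024 AS size_mb  -- size is in 8 KB pages
FROM sys.database_files;

-- Shrink the log file to a 1000 MB target using its logical name.
DBCC SHRINKFILE ('db1_log', 1000);
```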
 
LVL 8

Author Comment

by:matrix_aash
ID: 24784925
Thanks rrjegan17 for the reply.

Shall I include the DBCC SHRINKFILE command in my job?

How can I calculate the right size for my log file?

Thanks.

 
LVL 57

Expert Comment

by:Raja Jegan R
ID: 24785122
>> Shall I include the DBCC SHRINKFILE command in my job?

Yes, you need to do that to keep the log file size under control.

>> How can I calculate the right size for my log file?

That depends on the frequency of this particular task. Once your full/differential backups run, the active portions of the log are backed up, leaving only the uncommitted transactions in your LDF file.

That would lead us to a log file size that is sufficient to cover the interval between your full/differential backups.

For example, if you have a full backup at 9:00 am and a differential backup at 12:00 noon, then your LDF file should be at least as large as the log growth that occurs between those times.

Hope this helps
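To see how much of the allocated log is actually in use between those backups, one option on SQL Server 2005 is DBCC SQLPERF (a sketch; it reports on every database on the instance, not just db1):

```sql
-- Reports current log file size and the percentage of the log
-- in use for every database on the instance.
DBCC SQLPERF (LOGSPACE);
```

The "Log Space Used (%)" column gives a rough idea of how much log space the workload actually needs between backups.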
 
LVL 7

Assisted Solution

by:wilje
wilje earned 250 total points
ID: 24786679
No, you should not include a SHRINKFILE command in your job.  It is not something that should be done on a regular basis, as it causes file fragmentation.  For a transaction log, repeated shrinking and growing creates a large number of VLFs (virtual log files), which will cause performance issues.
Backing up the transaction log on a regular basis will control the size of the log file.  It will grow as large as it needs to in order to handle the transactions processed between log backups.
I would recommend backing up the transaction log more frequently.  Every 6 hours is not often enough to recover from a disaster.  Every hour might be enough, depending upon your business requirements.
http://www.karaszi.com/SQLServer/info_dont_shrink.asp  
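A minimal sketch of such a log backup job step, assuming SQL Server 2005 (the D:\Backups path and the file-name pattern are placeholders):

```sql
-- Back up the transaction log of db1 to a time-stamped .trn file.
DECLARE @file nvarchar(260);
SET @file = N'D:\Backups\db1_log_'
    + REPLACE(REPLACE(CONVERT(nvarchar(19), GETDATE(), 120), ' ', '_'), ':', '')
    + N'.trn';
BACKUP LOG db1 TO DISK = @file;
```

Scheduled every 15-60 minutes via a SQL Server Agent job, this both bounds log growth and shortens how much work could be lost in a disaster.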

 
LVL 57

Expert Comment

by:Raja Jegan R
ID: 24787042
wilje,
     The transaction log backup job is already scheduled. To reduce the number of VLFs, I am suggesting setting the log's initial size large enough to avoid repeated autogrowth.
One probable cause of excessive VLFs is having a very low initial file size combined with a low autogrowth percent or file-size increment.
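The VLF count and the file settings under discussion can be checked and adjusted roughly as follows (the logical file name db1_log and the 1000MB/100MB figures are assumptions; DBCC LOGINFO is undocumented but commonly used on SQL 2005):

```sql
-- Each row returned is one virtual log file (VLF); hundreds or
-- thousands of rows indicate the log grew in many small increments.
USE db1;
DBCC LOGINFO;

-- Pre-size the log and use a fixed-MB growth increment rather than
-- a small percentage, to keep the VLF count down.
ALTER DATABASE db1
MODIFY FILE (NAME = db1_log, SIZE = 1000MB, FILEGROWTH = 100MB);
```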
 
LVL 7

Expert Comment

by:wilje
ID: 24787112
rrjegan17:
The question was whether or not the OP should put an option to shrink the transaction log in his job.  You answered yes - that he should schedule a shrink file operation on a regular basis.
Shrinking files on SQL Server should never be done on a regular basis.  There are reasons why you might perform a one-time shrink, but no reasons why it should be scheduled.  For example, when a very large transaction has occurred that you know is never going to happen again, you would shrink the log file back to its normal operating size.
 
 
LVL 8

Author Comment

by:matrix_aash
ID: 24787601
Then what is the best way to stop the transaction log from growing so large?

Please let me know.

Thanks.
 
LVL 7

Expert Comment

by:wilje
ID: 24787673
The best way to stop the transaction log from growing is to back it up more often.  I back up my transaction logs every 15 minutes on some systems; on others I back them up every hour.
Once you have this scheduled and running, you can then review the size of the transaction log backups.  I would wait at least a month to make sure all types of processing are captured (month end, for example).  Once you have seen the largest backup size, perform a one-time shrink to a size a bit larger than that.
I would then set the autogrow increment to a percentage of that size.  For example, if I ended up sizing the log file at 1000MB, I would set the autogrow to 100MB - unless that increment would be smaller than the typical transaction log backup.
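The largest recorded log backup can be read from the msdb backup history; a sketch (the database name comes from the question, and the one-month window is an assumption):

```sql
-- Largest transaction log backup for db1 in the last month; use this
-- as a rough lower bound when choosing the log file's fixed size.
SELECT TOP (1)
       database_name,
       backup_size / 1048576 AS backup_size_mb,  -- backup_size is in bytes
       backup_start_date
FROM msdb.dbo.backupset
WHERE database_name = 'db1'
  AND type = 'L'                                 -- 'L' = transaction log backup
  AND backup_start_date >= DATEADD(month, -1, GETDATE())
ORDER BY backup_size DESC;
```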
 
LVL 57

Expert Comment

by:Raja Jegan R
ID: 24790465
>> Shrinking files on SQL Server should never be done on a regular basis.

It can be done on LDF / transaction log files; it is not recommended for MDF / primary data files.

If you do it on primary files, it will increase the level of fragmentation of your data. For log files, once you take your transaction log backup (or BACKUP LOG WITH TRUNCATE_ONLY), the inactive portions are freed up and available for reuse. Hence, if you shrink those portions to release the space to the OS, it won't create fragmentation.

But this is not the case with MDF files, and hence I never suggested doing this for MDF files; your previous explanation holds good for MDF files.

Kindly revert if I need to explain more on this.
 
LVL 7

Expert Comment

by:wilje
ID: 24790610
Shrinking a file does not release memory to the OS.  I think you meant to say the shrinking frees up space on the drive.
I have to recommend against using BACKUP LOG ... WITH TRUNCATE_ONLY.  The TRUNCATE_ONLY flag has been deprecated and no longer works in 2008, specifically because people would use it without understanding what it does.
Using that command breaks the log chain and will prevent further transaction log backups until you have performed a full backup.  There are occasions where you need to clear a transaction log because it was not managed correctly, or a very unusual event has occurred.  In that case, you should perform the following:
  • ALTER DATABASE ... SET RECOVERY SIMPLE;  -- breaks log chain
  • DBCC SHRINKFILE(logical_file, size);  -- shrink the file back down to normal size
  • ALTER DATABASE ... SET RECOVERY FULL;  -- resets back to full recovery
  • BACKUP DATABASE...; -- start the log chain
  • BACKUP LOG ... -- backup transaction log
Again, shrinking of either data or log files should not be performed on a regular basis.  Backing up the transaction log on a frequent basis is how you manage the size of the transaction log, as well as what is required to meet your disaster recovery requirements.
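Written out as a script, the one-off cleanup sequence above looks roughly like this (database, file, and path names are placeholders; note that it cannot be run while db1 is being mirrored, since database mirroring requires the FULL recovery model - mirroring would have to be removed first and re-established afterwards):

```sql
-- One-time cleanup of an oversized log. Placeholders: db1, db1_log,
-- and the D:\Backups paths.
ALTER DATABASE db1 SET RECOVERY SIMPLE;   -- breaks the log chain
DBCC SHRINKFILE (db1_log, 1000);          -- shrink back down to 1000 MB
ALTER DATABASE db1 SET RECOVERY FULL;     -- return to full recovery
BACKUP DATABASE db1 TO DISK = N'D:\Backups\db1_full.bak';  -- restart the log chain
BACKUP LOG db1 TO DISK = N'D:\Backups\db1_log.trn';        -- resume log backups
```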
 
LVL 57

Expert Comment

by:Raja Jegan R
ID: 24790995
>> I think you meant to say the shrinking frees up space on the drive.

This is what I meant by releasing memory to OS.

>> Using that command breaks the log chain and will prevent further transaction log backups until you have performed a full backup.

FYI, I suggested doing that after a full/differential backup.

>> The TRUNCATE_ONLY flag has been deprecated and no longer works in 2008

Agreed and since this question pertains to 2005, I have provided a workable solution.

>> Backing up the transaction log on a frequent basis is how you manage the size of the transaction log

I repeat: regular transaction log backups alone will not keep your log file size under control; you have to perform SHRINKFILE for that.

FYI, the asker already has periodic transaction log backups defined, and since his log file size is still not under control, he raised this question on how to achieve it.

You can reduce the frequency of the shrink-file job, but not remove that task from your maintenance plan, if you want to keep the log file size under control.

Hope this clarifies.
 
LVL 7

Expert Comment

by:wilje
ID: 24794233
If you perform a truncate only after performing a full backup, you have broken the log chain and will have to perform a full backup again to reinstate it.  That is just the way it works - don't take my word for it.  Test it out yourself.
And I will repeat this again - there are no good reasons for shrinking a log file on a regular basis.  If you have to worry about disk space usage where you are constantly shrinking a log file - you need to invest in more disk space.
 
LVL 57

Expert Comment

by:Raja Jegan R
ID: 24794608
>> If you have to worry about disk space usage where you are constantly shrinking a log file - you need to invest in more disk space.

Then you need to calculate the maximum possible size for the log file and take transaction log backups at shorter intervals.
 
LVL 7

Expert Comment

by:wilje
ID: 24795015
>>Then You need to calculate the maximum possible size for Log file and need to take Transaction logs at shorter intervals.
Which is what I have been saying all along...
 
LVL 57

Expert Comment

by:Raja Jegan R
ID: 24795265
matrix_aash,

>> After doing this do I still need to shrink the database so that the log file is not increasing or what do I need to do In order to stop the log file from increasing big time.

Keep the log file at its maximum expected size, shrink it manually only when required for the reasons wilje suggested, and don't include a shrink task in the maintenance plan as I suggested earlier.

wilje,
    Hope we have reached consensus on this aspect now.
It was a good discussion with you.
 
LVL 7

Expert Comment

by:wilje
ID: 24795471
rrjegan17: yeah, we are in agreement on this aspect now.  Good discussion.
