Solved

SQL 2005 transaction log

Posted on 2011-03-11
256 Views
Last Modified: 2012-05-11
Hi,

I have a SQL 2005 cluster server running 32-bit. I have a database roughly 520 GB in size, and the transaction log is currently sitting at 72 GB. I have a full backup that runs daily, which is supposed to truncate the log file? It appears the transaction log is not shrinking.

What can I do to solve this? It is affecting performance.

Regards
Question by:monarchit
9 Comments
 
 
LVL 30

Expert Comment

by:Rich Weissler
ID: 35108110
Backing up and truncating the log doesn't cause it to shrink; it allows the space inside the log file to be reused.

The best practice recommendation is also to NOT shrink the transaction logs, unless you know something very unusual has occurred to cause them to grow much larger than they need to be.
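
A quick way to confirm what's happening inside the log is to check log space usage and the reuse-wait status. A minimal sketch, with MyDB standing in for your database name:

-- Show the size of every database's log and how much of it is actually in use
DBCC SQLPERF(LOGSPACE);

-- Show the recovery model and why the log can't currently be truncated;
-- LOG_BACKUP means it's waiting on a transaction log backup
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDB';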
 
LVL 7

Expert Comment

by:Gene_Cyp
ID: 35108126
One key way to stop it from growing too much is to change the recovery model to:

"Simple recovery model" in the database options

Just make sure that the recovery model you select meets your recovery needs.
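
If simple recovery does meet those needs, the switch is a one-liner. A sketch, with MyDB as a placeholder:

-- Switch to the simple recovery model; point-in-time restores
-- are no longer possible once this takes effect
ALTER DATABASE MyDB SET RECOVERY SIMPLE;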
 
LVL 6

Expert Comment

by:graf0
ID: 35108240
If you don't need to take transaction log backups, just switch the database to the Simple Recovery Model and shrink the log file manually, using Tasks > Shrink > Files on the database.
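
The T-SQL equivalent of that GUI task is DBCC SHRINKFILE. A sketch, assuming the log's logical file name is MyDB_log (query sys.database_files for the real name):

USE MyDB;
GO
-- Look up the logical name and current size of the log file
SELECT name, type_desc, size FROM sys.database_files;
GO
-- Shrink the log file down to roughly 1024 MB
DBCC SHRINKFILE (MyDB_log, 1024);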
 
LVL 30

Expert Comment

by:Rich Weissler
ID: 35108296
If you don't need point-in-time recovery... I concur with Gene_Cyp and graf0 -- the Simple Recovery Model might be a good fit for you.

I feel I should mention, however -- one of the mistakes I made when starting with SQL was to assume the Transaction Log in the Simple Recovery Model was either minimally used or not used.  Be aware that the Transaction Log still needs to be sized sufficiently to hold entire transactions while they are being committed to the database.  It won't be 'nothing' and probably won't be just the size of a single transaction... but it will probably be smaller than your transaction log will want to be with the Full Recovery Model.

One of the worst things you can do for performance would be to go through repeated cycles of shrinking and growing your transaction log files...
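
One way to avoid those cycles is to pre-size the log once and use a fixed growth increment instead of the default percentage growth. A sketch under assumed names and sizes (MyDB, MyDB_log, an 8 GB log, 512 MB increments):

-- Pre-size the log so it doesn't need to autogrow during normal operation
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 8192MB);
GO
-- Grow in fixed 512 MB steps rather than by a percentage
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 512MB);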
 
LVL 21

Expert Comment

by:Alpesh Patel
ID: 35108321
Hi,

First take a backup of the log file and truncate it.
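
For what it's worth, under the full recovery model a routine log backup is what marks the inactive log space as reusable. A sketch, with a hypothetical database name and backup path:

-- Completing this allows the inactive portion of the log to be
-- reused; it does NOT shrink the physical file
BACKUP LOG MyDB
TO DISK = N'D:\Backups\MyDB_log.trn';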
 
LVL 2

Expert Comment

by:Umesh_Madap
ID: 35115890
Most of the experts have already answered your question.

I would like to know how frequently you are taking the t-log backups. For example, if you are taking a t-log backup every hour, change it to every 30 minutes and see. If you don't need point-in-time recovery, then set the database to the simple recovery model.
 
LVL 1

Accepted Solution

by:
rcharlton earned 500 total points
ID: 35116075
I've worked in high-availability VLDB (very large database) environments with SQL Server 2005. The best way that I've found is to perform the following (a T-SQL sketch of step 1 follows this list):

1. Set the recovery interval to 5 minutes. Too long a recovery interval and SQL Server is doing too many "background things" while your DB users are waiting for it. Too short and you have the same scenario. If a restore is necessary, you're only talking about losing roughly the last 5 minutes of data prior to the failure.
2. Back up the transaction log every 5 minutes; this keeps it small and compact and allows you to shrink the transaction log (yikes! Yours is at 72GB? Do you need that much for the transaction log? Probably not).
3. You only have to worry about space where the transaction log backups are being placed.
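
A sketch of step 1 ('recovery interval (min)' is an advanced sp_configure option; step 2 is just a BACKUP LOG statement like the one shown earlier, driven by a SQL Agent job on a 5-minute schedule):

-- Expose advanced options so 'recovery interval (min)' can be set
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
-- Target roughly a 5-minute checkpoint/recovery interval
EXEC sp_configure 'recovery interval (min)', 5;
RECONFIGURE;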

As a caveat, I would implement the following backup strategy (a T-SQL sketch of steps 1, 2 and 5 follows this list):

1. Perform a full backup once a week, say on Sunday.
2. Perform a differential backup Monday through Saturday.
3. Perform a log backup every 5 minutes.
4. Create stored procedures for the above, including ones that will do the restore.
5. Optionally, your database can be architected to use filegroups and partitioning. Infrequently accessed data, or data that never changes, can be on some filegroups; data that frequently changes can be on other filegroups. You only need to back up data on filegroups that are changing, which SIGNIFICANTLY reduces the amount of time it takes to perform the backup. Although somewhat more complex to implement, once you get the hang of it, it's a breeze, and your backups/restores happen very quickly. In my scenario, for example, I had imports which happened on certain days of the week. Those tables were targeted to certain filegroups, with partitioning, and a backup of those filegroups was performed on those days. The same was true for other filegroups which accepted data on different days; those filegroups were backed up on those days as well. It sounds complicated, but it reduces the amount of backup space required, frees up the server for the production users, and increases performance.
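
A sketch of steps 1, 2 and 5 in T-SQL; the database, filegroup and path names are all placeholders:

-- Step 1: Sunday full backup
BACKUP DATABASE MyDB
TO DISK = N'D:\Backups\MyDB_full.bak';
GO
-- Step 2: Monday-Saturday differential (changes since the last full)
BACKUP DATABASE MyDB
TO DISK = N'D:\Backups\MyDB_diff.bak'
WITH DIFFERENTIAL;
GO
-- Step 5: back up only a filegroup whose data actually changed
BACKUP DATABASE MyDB
FILEGROUP = N'FG_Imports'
TO DISK = N'D:\Backups\MyDB_FG_Imports.bak';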
 

Author Closing Comment

by:monarchit
ID: 36032707
thanks
