Solved

SQL 2005 transaction log

Posted on 2011-03-11
9
258 Views
Last Modified: 2012-05-11
Hi,

I have a SQL 2005 cluster server running 32-bit. I have a database roughly 520 GB in size. The transaction log is currently sitting at 72 GB. I have a full backup that runs daily, which I thought was supposed to truncate the log file, but the transaction log is not shrinking.

What can I do to solve this? It is affecting performance.

Regards
0
Comment
Question by:monarchit
9 Comments
 
LVL 7

Expert Comment

by:Gene_Cyp
ID: 35108099
0
 
LVL 30

Expert Comment

by:Rich Weissler
ID: 35108110
Backing up and truncating the log doesn't cause the file to shrink; it allows the space inside the file to be reused.

The best practice recommendation is also to NOT shrink the transaction logs, unless you know something very unusual has occurred to cause them to grow much larger than they need to be.
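If the log really isn't being reused after the daily full backup, it's worth asking SQL Server what is holding it. A quick sketch (the database name MyDB is a placeholder for your own):

```sql
-- How full is each transaction log?
DBCC SQLPERF(LOGSPACE);

-- What, if anything, is preventing log space from being reused?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDB';
-- LOG_BACKUP here means the log is waiting on a log backup
-- (a full backup alone does not free log space for reuse).
```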
0
 
LVL 7

Expert Comment

by:Gene_Cyp
ID: 35108126
One key way to stop it from growing too much is to change the recovery type to:

"Simple recovery model" in the SQL Settings

Just make sure that the recovery model you select meets your recovery needs.
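As a sketch of that switch (MyDB is a placeholder; only do this once you've confirmed point-in-time recovery is not required):

```sql
-- Switch to the simple recovery model.
ALTER DATABASE MyDB SET RECOVERY SIMPLE;

-- Verify the change.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDB';
```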
0
 
LVL 6

Expert Comment

by:graf0
ID: 35108240
If you don't need to take transaction log backups, just switch the database to the Simple Recovery Model and shrink the log file manually, using Tasks > Shrink > Files on the database.
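For anyone who prefers T-SQL to the GUI, a minimal sketch of the manual shrink (MyDB and MyDB_log are placeholder names; check sys.database_files for your real logical log file name):

```sql
USE MyDB;
GO
-- Find the logical name of the log file (often <dbname>_log).
SELECT name FROM sys.database_files WHERE type_desc = 'LOG';

-- Shrink the log file to a 1024 MB target size.
DBCC SHRINKFILE (MyDB_log, 1024);
```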
0
 
LVL 30

Expert Comment

by:Rich Weissler
ID: 35108296
If you don't need the point in time recovery... I concur with Gene_Cyp and graf0 -- Simple Recovery Model might be a good fit for you.  

I feel I should mention, however, that one of the mistakes I made when starting with SQL was to assume the transaction log in the Simple Recovery Model was either minimally used or not used. Be aware that the transaction log still needs to be sized sufficiently to hold entire transactions while they are being committed to the database. It won't be 'nothing' and probably won't be just the size of a single transaction... but it will probably be smaller than your transaction log wants to be under the Full Recovery Model.

One of the worst things you can do for performance would be to go through repeated cycles of shrinking and growing your transaction log files...
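One way to avoid those shrink/grow cycles is to pre-size the log once to a sensible working size with a fixed growth increment, rather than letting it autogrow in small steps. A sketch, with placeholder names and sizes you would tune to your own workload:

```sql
-- Set the log to an 8 GB working size with 1 GB growth increments
-- (percentage-based autogrowth on a large log leads to huge grow events).
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 8192MB, FILEGROWTH = 1024MB);
```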
0
 
LVL 21

Expert Comment

by:Alpesh Patel
ID: 35108321
Hi,

First take a backup of the log file, then truncate it.
0
 
LVL 2

Expert Comment

by:Umesh_Madap
ID: 35115890
Most of the others have already answered your question.

I would like to know how frequently you are taking the t-log backups. For example, if you are taking a t-log backup every hour, change it to every 30 minutes and see. And if you don't need point-in-time recovery, set the database to the simple recovery model.
0
 
LVL 1

Accepted Solution

by:
rcharlton earned 500 total points
ID: 35116075
I've worked in high-availability VLDB (very large database) environments with SQL Server 2005. The best approach that I've found is to perform the following:

1. Set the recovery interval to 5 minutes. Too long a recovery interval and SQL Server is doing too many "background things" while your DB users are waiting for it; too short and you have the same scenario. If a restore is necessary, you lose at most roughly the last 5 minutes of data prior to the failure.
2. Back up the transaction log every 5 minutes; this keeps it small and compact and allows you to shrink the transaction log (yikes! Yours is at 72GB? Do you need that much for the transaction log? Probably not).
3. You only have to worry about space where the transaction log backups are being placed.
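The steps above can be sketched in T-SQL (MyDB and the backup path are placeholders; in practice you would run the log backup from a SQL Agent job on a 5-minute schedule):

```sql
-- Set the server-wide recovery interval to 5 minutes.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'recovery interval (min)', 5;
RECONFIGURE;

-- Log backup, to be scheduled every 5 minutes.
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn';
```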

As a caveat, I would implement the following backup strategy:

1. Perform a full backup once a week, say on Sunday.
2. Perform a differential backup Monday through Saturday.
3. Perform a log backup every 5 minutes.
4. Create stored procedures for the above, including ones that will do the restore.
5. Optionally, your database can be architected to use filegroups and partitioning. Infrequently accessed data, or data that never changes, can sit on some filegroups; data that changes frequently can sit on others. You only need to back up data on filegroups that are changing, which SIGNIFICANTLY reduces the time needed to perform the backup. Although somewhat more complex to implement, once you get the hang of it, it's a breeze, and your backups/restores happen very quickly. In my scenario, for example, I had imports which happened on certain days of the week. Those tables were targeted to certain filegroups, with partitioning, and a backup of those filegroups was performed on those days. The same was true for other filegroups which accepted data on different days; those were backed up on their days as well. It sounds complicated, but it reduces the amount of backup space required, frees up the server for the production users, and increases performance.
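The backup strategy above might look like this in T-SQL (database name and paths are placeholders; real jobs would generate timestamped file names rather than overwriting one file):

```sql
-- Sunday: full backup.
BACKUP DATABASE MyDB
TO DISK = 'D:\Backups\MyDB_full.bak' WITH INIT;

-- Monday through Saturday: differential backup.
BACKUP DATABASE MyDB
TO DISK = 'D:\Backups\MyDB_diff.bak' WITH DIFFERENTIAL, INIT;

-- Every 5 minutes: transaction log backup.
BACKUP LOG MyDB
TO DISK = 'D:\Backups\MyDB_log.trn';
```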
0
 

Author Closing Comment

by:monarchit
ID: 36032707
thanks
0

Question has a verified solution.