
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 269

SQL 2005 transaction log

Hi,

I have a SQL 2005 cluster server running 32-bit. I have a database roughly 520 GB in size, and the transaction log is currently sitting at 72 GB. I have a full backup that runs daily, which is supposed to truncate the log files? It appears the transaction log is not shrinking.

What can I do to solve this? It is affecting performance.

Regards
Asked by: monarchit
1 Solution
 
Rich Weissler (Professional Troublemaker^h^h^h^h^hshooter) commented:
Backing up and truncating the log doesn't cause it to shrink; it allows the space to be reused.

The best practice recommendation is also to NOT shrink the transaction logs, unless you know something very unusual has occurred to cause them to grow much larger than they need to be.
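
To see whether the log space is actually being reused, here is a quick check (a sketch; 'MyDB' is a placeholder for your database name):

    -- Log file size and percentage actually in use, per database
    DBCC SQLPERF(LOGSPACE);

    -- What, if anything, is preventing log truncation (SQL 2005 and later)
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'MyDB';   -- placeholder database name

If log_reuse_wait_desc comes back as LOG_BACKUP, the log is waiting on a transaction log backup; a daily full backup alone does not truncate the log in the Full recovery model.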
 
Gene_Cyp commented:
One key way to stop it from growing too much is to change the recovery model to Simple in the database settings.

Just make sure that the recovery model you select meets your recovery needs.
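
For reference, the same change can be made from T-SQL (a sketch; 'MyDB' is a placeholder). Be aware that switching to Simple breaks the log backup chain, so point-in-time restore is no longer possible:

    -- Switch the database to the Simple recovery model
    ALTER DATABASE MyDB SET RECOVERY SIMPLE;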
 
graf0 commented:
If you don't need to take transaction log backups, just switch the database to the Simple Recovery Model and shrink the log file manually, using Tasks > Shrink > Files on the database.
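
The same steps from T-SQL, if you prefer it to the GUI (a sketch; the logical file name and the 8 GB target are placeholders):

    USE MyDB;
    GO
    -- Find the logical name of the log file
    SELECT name, type_desc FROM sys.database_files;
    GO
    -- Shrink the log file down to roughly 8 GB (target size is in MB)
    DBCC SHRINKFILE (MyDB_log, 8192);
    GO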
 
Rich Weissler (Professional Troublemaker^h^h^h^h^hshooter) commented:
If you don't need point-in-time recovery... I concur with Gene_Cyp and graf0: the Simple Recovery Model might be a good fit for you.

I feel I should mention however -- one of the mistakes I made when starting with SQL was to assume the Transaction Log in the Simple Recovery Model was either minimally used or not used.  Be aware that the Transaction Log still needs to be sized sufficient to hold entire transactions while they are being committed to the database.  It won't be 'nothing' and probably won't be just the size of a single transaction... but it will probably be smaller than your transaction log will want to be with the Full Recovery Model.

One of the worst things you can do for performance would be to go through repeated cycles of shrinking and growing your transaction log files...
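
One way to avoid that cycle is to size the log once at a sensible value and grow it in fixed increments rather than the default percentage growth. A minimal sketch, assuming a 16 GB log suits your workload (names and sizes are placeholders you'd tune):

    -- Pre-size the log, then grow in fixed 1 GB steps thereafter
    ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_log, SIZE = 16GB, FILEGROWTH = 1GB);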
 
Alpesh Patel (Assistant Consultant) commented:
Hi,

First take a backup of the log file and truncate it.
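
In the Full recovery model it is the log backup itself that marks the inactive portion of the log as reusable (a sketch; the path and names are placeholders):

    -- Back up the transaction log; this lets the inactive portion
    -- of the log be reused (it does not shrink the physical file)
    BACKUP LOG MyDB
    TO DISK = 'D:\Backups\MyDB_log.trn';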
 
Umesh_Madap commented:
Most of the guys have already answered your question.

I would like to know how frequently you are taking the t-log backups. For example, if you are taking a t-log backup every hour, change it to every 30 minutes and see. If you don't need point-in-time recovery, set the database to the Simple recovery model.
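
If you are not sure which model each database is currently using, a quick check (a sketch):

    -- List each database with its current recovery model
    SELECT name, recovery_model_desc
    FROM sys.databases;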
 
rcharlton commented:
I've worked in high-availability VLDB (very large database) environments with SQL Server 2005. The best way that I've found is to do the following:

1. Set the recovery interval to 5 minutes (see the sketch after this list). Too long a recovery interval and SQL Server does too many "background things" while your DB users are waiting for it; too short and you have the same scenario. If a restore is necessary, you only lose, at most, roughly the last 5 minutes of data prior to the failure.
2. Back up the transaction log every 5 minutes; this keeps it small and compact and allows you to shrink the transaction log (yikes! Yours is at 72 GB? Do you need that much for the transaction log? Probably not).
3. You only have to worry about space where the transaction log backups are being placed.
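
For point 1, the recovery interval is a server-wide setting and an advanced option, so it has to be exposed first (a sketch):

    -- Expose advanced options, then set the recovery interval to 5 minutes
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'recovery interval (min)', 5;
    RECONFIGURE;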

As a caveat, I would implement the following backup strategy (a sketch of the commands appears after the list):

1. Perform a full backup once a week, say on Sunday.
2. Perform a differential backup Monday through Saturday.
3. Perform a log backup every 5 minutes.
4. Create stored procedures for the above, including ones that will do the restore.
5. Optionally, your database can be architected to use filegroups and partitioning. Infrequently accessed data, or data that never changes, can live on some filegroups; data that changes frequently can live on others. You only need to back up data on the filegroups that are changing, which SIGNIFICANTLY reduces the time needed to perform the backup. Although somewhat more complex to implement, once you get the hang of it it's a breeze, and your backups / restores happen very quickly. In my scenario, for example, I had imports which happened on certain days of the week. Those tables were targeted to certain filegroups, with partitioning, and a backup of those filegroups was performed on those days. The same was true for other filegroups which accepted data on different days; those were backed up on their days as well. It sounds complicated, but it reduces the amount of backup space required, frees up the server for the production users, and will increase performance.
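
A bare-bones sketch of the commands behind steps 1 through 3 (database name, paths, and WITH options are placeholders; in practice a SQL Agent job would generate timestamped file names rather than overwriting with INIT):

    -- 1. Weekly full backup (e.g. Sunday night)
    BACKUP DATABASE MyDB
    TO DISK = 'D:\Backups\MyDB_full.bak' WITH INIT;

    -- 2. Daily differential backup (Monday through Saturday)
    BACKUP DATABASE MyDB
    TO DISK = 'D:\Backups\MyDB_diff.bak' WITH DIFFERENTIAL, INIT;

    -- 3. Transaction log backup every 5 minutes (scheduled via SQL Agent)
    BACKUP LOG MyDB
    TO DISK = 'D:\Backups\MyDB_log.trn' WITH INIT;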
 
monarchit (Author) commented:
thanks
