Solved

Avg. Disk Queue Length & Transaction Log

Posted on 2009-02-24
Medium Priority
864 Views
Last Modified: 2012-05-06
Hello Experts!

We have a SQL server that handles about 2000 transactions per second from a series of web servers. Each web server delivers content that is read from the database server. Everything works fine most of the time, but about once every couple of minutes the transaction log truncates (which is good, otherwise it would grow forever). However, while it is truncating it effectively pauses all incoming traffic from the web servers. The web servers then queue the requests and don't start processing them until the log is finally truncated. The database is about 50 GB and the transaction log is about 200 MB. I think it is better to have the transaction log small and truncate quickly rather than have it large and take ages to truncate.

We have taken as much precaution as possible when setting up the SQL server. The log files are on a separate RAID controller from the database. The array is RAID 5+1 spread over six 72 GB SATA disks and works as quickly as we think it can. There are no other processes running on this machine as it is a dedicated SQL server.

Considering that we don't really care about ROLLBACK and other transactional features, are there any options or switches we can flick to turn off transaction logging altogether? Alternatively, has anybody else been tuning SQL servers and found a faster way to avoid HDD bottlenecks while transaction logs are being truncated?
Question by:Quai_Gon_Jin
6 Comments
 
LVL 35

Expert Comment

by:David Todd
ID: 23725147
Hi,

See
http://www.sql-server-performance.com/articles/per/performance_audit_p1.aspx
for some good pointers on hardware and settings.

Are you shrinking the transaction log file at the same time you are truncating it? A truncate (BACKUP LOG blah WITH TRUNCATE_ONLY) is fine, but shrinking the file is not.
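For clarity, a sketch of the difference between the two operations (the database name is a placeholder; the TRUNCATE_ONLY syntax is valid through SQL Server 2005 and was removed in 2008):

```sql
-- Truncating: marks inactive log space as reusable. The physical
-- file size does not change, so there is no file-level I/O cost.
BACKUP LOG MyDatabase WITH TRUNCATE_ONLY

-- Shrinking: physically resizes the file, forcing costly regrowth
-- later. This is the operation to avoid on a busy server.
DBCC SHRINKFILE (MyDatabase_log, 100)  -- target size in MB
```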

What recovery model are you using? At your size and volume I'd be looking at the full recovery model, and backing up the transaction log every hour (or more often).
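A quick way to check the current model and take a log backup (database name and backup path here are placeholders, not your actual names):

```sql
-- What recovery model is each database using?
SELECT name, DATABASEPROPERTYEX(name, 'Recovery') AS recovery_model
FROM master.dbo.sysdatabases

-- Switch to full recovery and back up the log (schedule this hourly):
ALTER DATABASE MyDatabase SET RECOVERY FULL
BACKUP LOG MyDatabase TO DISK = 'E:\Backups\MyDatabase_log.trn'
```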

With a 50GB data file, I'd be happier with 5 - 10GB of transaction log than 200MB. I think that you are overdoing the smallness of it.

Some first thoughts
  David
 

Author Comment

by:Quai_Gon_Jin
ID: 23726253
Hi David,

Thanks for your prompt response. Great first thoughts. The article you posted regarding the tuning checklist is great and it will take me a while to digest all of it.

- Regarding the transaction log: it is being truncated only and not shrunk :)
- The recovery model is simple, as we perform a full backup each night. We are more worried about performance than about losing data during the day.
- I see your point regarding the size of the transaction log. However, is it true to say that if I made the log 10x the size it is now, then when it "eventually" does truncate it would require 10x the time to complete the truncation process?

Lastly, does changing the transaction log file size require a SQL restart?

Cheers! :)
 
LVL 35

Expert Comment

by:David Todd
ID: 23731616
Hi,

Changing the log size doesn't require a SQL restart.

I would start with truncating the log every hour. In fact, if using the simple recovery model you should not need to be dumping transactions unless something bad is happening. Just the daily full database backup. But you would have to have a bigger log file.
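Pre-sizing the log larger is a single online statement (names and sizes below are illustrative; the logical file name can be checked with sp_helpfile):

```sql
-- Grow the transaction log to a fixed 5GB with a sensible growth step,
-- so it never needs to autogrow during busy periods:
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 5GB, FILEGROWTH = 512MB)
```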

How much memory does your server have? What edition of SQL? I'd start by giving the server at least 4GB of RAM. While not necessarily the whole answer, a) it won't hurt, b) it's cheap, c) it doesn't require too much rejigging like adding disks will.
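If memory is added, the ceiling SQL Server may use is set with sp_configure (value in MB; the figure here is only an example, leaving headroom for the OS):

```sql
-- 'max server memory' is an advanced option, so expose those first:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 3584
RECONFIGURE
```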

Cheers
  David

 

Author Comment

by:Quai_Gon_Jin
ID: 23732209
Hi David,

Thanks again. I feel that this is really helping and I've collected almost enough information to soon close off this ticket.

I've checked the transaction log some more. It is 175 MB. When it gets to 75 MB used (100 MB free) it automatically truncates. This happens about once per 2-3 minutes. When it truncates, the used space goes down to about 40 MB and the free space up to about 135 MB. Then the used space starts to grow again, and when it gets to about 75 MB the cycle starts over. Each time the transaction log is truncated the Avg. Disk Queue Length goes to about 40, and as there are six disks in the array, the length per disk goes over the magic number 2.
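For reference, this fill/free cycle can also be watched from SQL Server itself rather than PerfMon:

```sql
-- Reports log size (MB) and percentage of the log in use
-- for every database on the instance:
DBCC SQLPERF(LOGSPACE)
```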

The server has 4 GB of RAM, of which SQL Server can use 100%. It has 1 GB dedicated for query management. The CPU utilisation on the database server is only about 20% and the RAM looks like it has about 1 GB free.

I think that the answer is to tweak the size of the transaction log and turn off items such as Auto_Create_Stats and Auto_Update_Statistics.
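If we do go that route, both are database-level settings (database name is a placeholder; most tuning guides recommend leaving these ON unless statistics maintenance is proven to be the bottleneck):

```sql
ALTER DATABASE MyDatabase SET AUTO_CREATE_STATISTICS OFF
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS OFF
```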

Thanks again for your help. If you have any final words then please let me know, and then I will sign off this ticket.
 
LVL 35

Accepted Solution

by:
David Todd earned 2000 total points
ID: 23736649
Hi,

My first read of your last post suggested that the file was physically changing size. Re-reading suggests I'm wrong.

If the FILE SIZE reduces, then you have auto-shrink turned on. Turn it OFF. This is a priority. The physical shrinking and growing of the file would not be good for performance.
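Checking and disabling it is quick (hypothetical database name):

```sql
-- Is auto-shrink on? Returns 1 if so:
SELECT DATABASEPROPERTYEX('MyDatabase', 'IsAutoShrink')

ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF
```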

One suggestion is to change the recovery model to full, and do a transaction log backup (or truncation) outside of busy times. Of course you are still likely to get some of the above behaviour, but it would put it on your schedule, and not the system's.

Other thoughts are that since you are using SATA disks, this isn't really a server solution. I would expect SCSI disks of some flavour. I'm wondering if the controller is as efficient as it should be, whether the controller is possibly a bottleneck. I don't suppose there is much you can do about that at this point. Do be aware that, compared to a mirror, the more complex RAID levels really do load the controller.

Cheers
  David

 

Author Closing Comment

by:Quai_Gon_Jin
ID: 31551224
Thanks for the final answer. The file size of the database is not shrinking. However, the amount of "used" data decreases inside the transaction log and therefore the amount of "free" space increases each time the log is flushed. The overall file size remains the same. Therefore I will try your options above.

1. After reviewing the tuning guide we have deduced that Auto_Create and Auto_Update statistics can be turned off

2. We will try to tweak the file size of the transaction log to see if a transaction log which is twice as big will truncate half as often without doubling the disk queue.

3. We have now purchased a second database server and will share the load between the servers.

Well done and thanks for the assistance. Full points awarded.
