

Defragmenting the Hard drive to improve SQL DB performance

Posted on 2010-11-15
Medium Priority
Last Modified: 2012-05-10
Hi All,

Recently my SQL DB has really slowed down, and while looking for things to optimise to improve the situation we found the following:

1. McAfee antivirus scanning

2. The DB was sitting on a RAID 5 Volume.

3. File fragmentation is at 86%

We are doing the following to correct the situation:
1. For the antivirus, we have added the DB files to the exclusions list.
2. We have created a RAID 10 volume, and that is where we will move the DB data files.
3. We would like to leave the log file on the current volume. But because defragmentation is resource intensive, we wanted to move the log files to another volume, defrag the current volume, and then copy them back. My question is: if you move the busy DB files to another volume, defrag the former volume, and then copy the files back, is there a benefit? When you copy the files to a freshly defragmented volume, do they copy contiguously, or do they end up fragmented the same way? I need your advice on this one.
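For step 2, moving the data files to the new RAID 10 volume can be done with ALTER DATABASE rather than detach/attach. A minimal sketch, assuming a database named MyDB and a logical file name MyDB_Data (both hypothetical; check yours with sp_helpfile):

```sql
USE master;

-- Take the database offline so the physical file can be moved
ALTER DATABASE MyDB SET OFFLINE;

-- Point SQL Server at the new location on the RAID 10 volume
-- (the path is an assumption; use your actual target volume)
ALTER DATABASE MyDB MODIFY FILE
    (NAME = MyDB_Data, FILENAME = 'E:\SQLData\MyDB.mdf');

-- Now copy the .mdf to the new path at the OS level, then:
ALTER DATABASE MyDB SET ONLINE;
```

The same MODIFY FILE step works for the log file if you later decide to relocate it as well.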

Question by:ackimc
LVL 10

Expert Comment

ID: 34135830
I guess it would depend on the defragmentation software. Diskeeper is an excellent product that supports defragmenting SQL servers and may be a better defragmentation solution.
LVL 16

Expert Comment

ID: 34136153
Also check your partition alignment. This can give you 10%-20% more disk I/O.


Is the disk subsystem SAN or direct-attached? SANs (depending on the manufacturer) aren't necessarily affected by file fragmentation.
LVL 21

Expert Comment

ID: 34136244
Do you have auto statistics on and/or a regular index maintenance plan?
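A quick way to answer that question is to query sys.databases and, if needed, refresh statistics. A sketch, assuming a database named MyDB (hypothetical):

```sql
-- Check whether auto-create/auto-update statistics are enabled
SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on
FROM sys.databases
WHERE name = 'MyDB';

-- If statistics look stale, refresh them database-wide
USE MyDB;
EXEC sp_updatestats;
```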

LVL 16

Accepted Solution

EvilPostIt earned 1200 total points
ID: 34136278
You may want to do a bit of monitoring around the disk itself to ensure it's not actually the database. I would suggest looking at the disk queue lengths.
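Besides watching the Avg. Disk Queue Length counter in Performance Monitor, SQL Server 2005's own DMVs can show per-file I/O stall times, which separates "slow disk" from "slow database". A sketch:

```sql
-- Average read/write latency per database file (ms per operation)
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id     = mf.file_id;
```

Sustained averages well above 20 ms on the data or log files would point at the disk subsystem rather than the database itself.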
LVL 20

Assisted Solution

by:Iain MacMillan
Iain MacMillan earned 800 total points
ID: 34136782
The latest version of Diskeeper Server is very good at defragging when the server is not at peak usage, and has new IntelliWrite prevention routines which can reduce the need for defragging in the first place.

I agree with EvilPostIt's reply: you need to check that it's not your disks' I/O or RAID controller performance that's letting you down. I have run several large SQL systems on RAID 5 over the years on HP ProLiant direct-attached storage and never encountered many issues (keeping spare disks is a must though, even with hot-spares enabled).

If you do decide to move the DB, as long as the destination has been defragged beforehand, it should copy contiguously, which will then allow you to quickly defrag the rest of the log partition. I usually set my Diskeeper to allow manual defragging at high priority, while the automated jobs run at low priority (InvisiTasking) in the background; you can schedule jobs outside peak times and backup windows (like weekends).

There is a PDF at the bottom of the page regarding Exchange and SQL DBs (third link from the bottom), and you can also get a 30-day trial if you want to test it out: http://www.diskeeper.com/business/diskeeper/server/default.aspx

Author Comment

ID: 34142805
Hi all,

I'm now inclined to look at other issues as well. After reading your comments, here is what we found:

1. Using Performance Monitor, we noticed that during peak hours we are seeing disk queues building up.

2. We restored the DB from backup and ran DBCC SHOWCONTIG; we noticed that commonly used tables had extent fragmentation up to 99.99%, which I suspect could be a big problem. Our application developers had given us the impression that there is automatic index maintenance within the application. My question is: can I rebuild indexes while we are running live? I'm worried about the server becoming too slow. Any ideas on how best we can do this? And which should we do first: rebuild the indexes or defragment the drive?
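For anyone following along, the fragmentation check described above looks roughly like this on SQL Server 2005 (the table name is hypothetical):

```sql
-- Report logical and extent fragmentation for one heavily-used table,
-- including all of its indexes, in table form
DBCC SHOWCONTIG ('dbo.Orders') WITH TABLERESULTS, ALL_INDEXES;
```

The "Extent Scan Fragmentation" and "Logical Scan Fragmentation" columns are the ones to watch; values near 99% mean range scans are doing far more random I/O than they should.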

LVL 16

Assisted Solution

EvilPostIt earned 1200 total points
ID: 34143064
Hi ackimc,

Yes, you can re-index online, though the options available depend on your SQL Server edition. If you have Enterprise edition, you can use an online index rebuild (the ALTER INDEX ... REBUILD statement with ONLINE = ON).

You can still use the REBUILD statement in non-Enterprise editions; it just means that during the operation a table lock will be placed on the base table.

If you do not want to use the REBUILD statement, then you still have another option (although it will take a lot longer at 99% fragmentation): the REORGANIZE statement, which would also defragment your indexes.

Just as a note though, you mentioned that you are using DBCC SHOWCONTIG and that you are running 2005. Microsoft added some useful DMVs in SQL Server 2005. Have a look at running this:

SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('[DBNAME]'),NULL,NULL,NULL,NULL)
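As a sketch of the two statements mentioned above (table and database names are hypothetical; substitute your own):

```sql
-- Enterprise edition: rebuild all indexes on a table while it stays online
ALTER INDEX ALL ON dbo.Orders REBUILD WITH (ONLINE = ON);

-- Any edition: reorganize instead; always online, but slower
-- when fragmentation is as heavy as 99%
ALTER INDEX ALL ON dbo.Orders REORGANIZE;

-- Follow either operation with a statistics update, since
-- REORGANIZE does not update statistics for you
UPDATE STATISTICS dbo.Orders;
```

Note that without the ONLINE = ON option, REBUILD takes a table lock for the duration, so on Standard edition it is best run out of hours.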

LVL 16

Expert Comment

ID: 34143074
With regards to the order in your final line: I would focus on the indexes first, as to defragment the drive you will need to take your SQL instance offline; otherwise the defrag could corrupt your database.

The best way to defragment the disk would be to copy the files off, format the drive, and copy them back. I have done this before and it actually saves a lot of time (depending on the size of the database).
LVL 16

Expert Comment

ID: 34143968
I was just thinking about other reasons you may have 99.9% fragmentation. If you are running any shrink operations, you should turn them off unless they are absolutely needed. You may as well rename any shrink database command to fragment_all_data.....

Question has a verified solution.
