Solved

Does defragging a RAID 5 array help performance?

Posted on 2008-11-01
1,296 Views
Last Modified: 2012-06-22
Our office is hotly debating defragging a RAID 5 array on our Exchange Server 2003.  Some admins say to do it, while others say it is pointless.  The disk analysis in Windows Server 2003 shows drive D: *entirely* in red and states it should be defragmented.  Can anyone clear this debate up?
Question by:thenightlife
4 Comments
 

Expert Comment

by:DMTechGrooup
ID: 22859226
This was answered here:

http://www.experts-exchange.com/Storage/Misc/Q_21576901.html

http://forums.storagereview.net/index.php?showtopic=19099

IMO, make sure you have a good known backup, because I did see a thread or two where the RAID got corrupted.

Also, it would probably be wise to do an eseutil offline defrag of the Exchange DBs to reclaim disk space.

 

Expert Comment

by:MuddyBulldog
ID: 22990355
No disrespect intended, but it's NOT answered in those links. I research this topic a couple of times a year and have yet to find a good answer that really stands up to analysis. In defense of those who support defragmentation on RAID, I have yet to see anyone who opposes it effectively explain why they see it as fruitless, so there's been little to discuss or refute. Most times this question is asked, a couple of people come back with unsupported opinions that it's good, one other person tries to provide some technical foundation that generally falls short, and the topic ends up closed.

The concept of defragging a disk came about because it was well known that fragmented files caused a performance hit. This is due to the added time required for the hard disk heads to physically move across the platter(s) as they seek out each fragment of a file. With a standard disk controller the operating system has (mostly) direct hardware access and therefore has firsthand knowledge of the physical layout of the sectors on the disk. This is what you see represented when you call up a disk map such as the one you see in the Windows defrag tool.
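
Just to put rough numbers on that seek penalty, here's a quick back-of-the-envelope sketch in Python. The seek, rotational, and transfer figures are made-up illustrative assumptions, not measurements from any particular drive:

# Rough read-time estimate for a contiguous vs. fragmented file on a
# single physical disk. All of these numbers are illustrative assumptions.
AVG_SEEK_MS = 8.5         # assumed average seek time
AVG_ROTATIONAL_MS = 4.2   # assumed half-rotation latency (7200 RPM drive)
TRANSFER_MB_PER_S = 60.0  # assumed sustained transfer rate

def read_time_ms(file_mb, fragments):
    # one head seek plus rotational delay per fragment, plus raw transfer time
    positioning = fragments * (AVG_SEEK_MS + AVG_ROTATIONAL_MS)
    transfer = file_mb / TRANSFER_MB_PER_S * 1000.0
    return positioning + transfer

print(read_time_ms(10, 1))    # 10MB file in one piece:     ~179 ms
print(read_time_ms(10, 200))  # 10MB file in 200 fragments: ~2707 ms

The exact figures don't matter; the point is that every extra fragment costs a mechanical seek, which dwarfs everything else the drive does.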

When you add an array controller, the operating system loses this direct knowledge of the physical layout of the disk. The array controller becomes a point of abstraction between the operating system and the actual physical hardware. Here is where the confusion starts.

Let's take a simple RAID 5 array of three 40GB disks. Combined, the physical capacity is 120GB, but due to the RAID 5 parity only 80GB is usable. The array does some sleight of hand and presents the 80GB of usable space to the operating system as a single 80GB physical device. It is essentially emulating a single 80GB physical disk even though there are actually three 40GB physical disks providing the foundation. At no time is the operating system ever aware of the true physical nature of the media.
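
The capacity arithmetic is simply (number of disks - 1) times the disk size, since one disk's worth of space is consumed by parity spread across all the members. A trivial sketch, just to make the example concrete:

def raid5_usable_gb(disk_count, disk_size_gb):
    # RAID 5 gives up one disk's worth of capacity to parity,
    # no matter how many members are in the array.
    return (disk_count - 1) * disk_size_gb

print(raid5_usable_gb(3, 40))  # 80 (GB usable out of 120GB raw)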

The operating system has probably been written to deal with hard disks by identifying tracks, sectors, and/or blocks. This is why the array controller presents itself the way it does: by emulating a single disk, it doesn't require any changes to already existing filesystems (e.g. NTFS, FAT, EXT2). The array hardware presents an emulated physical disk layout which can be used by virtually any existing operating system using already existing storage routines.

When using an array, what you see in the Windows defrag layout ISN'T REAL. It's an abstraction that the array controller presents to the OS because it knows that's what the OS needs. It translates all the writes to this "logical" or "virtual" layout into actual physical writes to the true physical disks on the fly. While the abstracted disk layout may appear to be fragmented, that's only because the array controller is presenting the map back to the OS the same way the OS requested it be written. There is no direct correlation between where the OS thinks the data is and where it really is on the physical disks. Most importantly, there is no longer a one-to-one correlation between the sectors or blocks that the operating system "sees" and the actual physical sectors on the disks. In fact, the block size on the actual physical disks may be wildly different from the block size that is being presented to the OS.
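
To make that "no one-to-one correlation" point concrete, here's a toy translation function for a 3-disk RAID 5 with parity rotating one disk per stripe. The stripe size and rotation pattern are assumptions picked for illustration; real controllers use their own layouts and never expose this mapping to the OS:

def raid5_locate(logical_block, disks=3, blocks_per_chunk=16):
    # Map a logical block (what the OS sees) to (disk, physical block)
    # on a toy 3-disk RAID 5 where the parity member rotates each stripe.
    chunk = logical_block // blocks_per_chunk      # which stripe unit
    offset = logical_block % blocks_per_chunk      # offset within the unit
    stripe = chunk // (disks - 1)                  # row across all members
    col = chunk % (disks - 1)                      # data column within the row
    parity_disk = stripe % disks                   # parity rotates per stripe
    disk = col if col < parity_disk else col + 1   # skip over the parity member
    return disk, stripe * blocks_per_chunk + offset

# Blocks the OS sees as perfectly contiguous scatter across the members:
for lb in (0, 16, 32, 48):
    print(lb, raid5_locate(lb))
# 0 -> (1, 0)   16 -> (2, 0)   32 -> (0, 16)   48 -> (2, 16)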

Fact is, if you defrag an array the actual physical layout on the array will change, but that's because of how defrag works. It's a series of file copies and subsequent file deletions. Since the OS is requesting a write as part of the copy process, the array controller is obliged to write the new data at a new physical location (because the current physical location is already occupied). The subsequent delete causes the array to then free up the physical space that corresponds to the locations on the abstracted map. The data will definitely have moved, but there is no evidence that the new physical layout is more efficient because, once again, there is no direct correlation between where the OS thinks the data is and where it actually is on the physical disks.
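
Here's a deliberately over-simplified sketch of that copy-then-delete behaviour against a controller that allocates physical space wherever it happens to have a free slot. The allocation policy is invented for illustration (real firmware is far more involved); the point is only that making the logical map contiguous tells you nothing about where the data physically lands:

import random

class ToyController:
    # Presents a contiguous logical drive; stores blocks wherever it likes.
    def __init__(self, physical_slots=64):
        self.free = list(range(physical_slots))   # unoccupied physical slots
        random.shuffle(self.free)                 # placement policy: arbitrary
        self.mapping = {}                         # logical block -> physical slot

    def write(self, logical_block):
        self.mapping[logical_block] = self.free.pop()   # new data, new slot

    def delete(self, logical_block):
        self.free.append(self.mapping.pop(logical_block))

ctrl = ToyController()
for lb in (5, 20, 33):          # a "fragmented" file on logical blocks 5, 20, 33
    ctrl.write(lb)

# Defrag = copy to contiguous logical blocks 0, 1, 2, then delete the originals
for old, new in zip((5, 20, 33), (0, 1, 2)):
    ctrl.write(new)
    ctrl.delete(old)

print(sorted(ctrl.mapping))                      # [0, 1, 2] - logical map looks clean
print([ctrl.mapping[b] for b in (0, 1, 2)])      # physical slots: still arbitrary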

I'm sure it has become apparent that I have my doubts about defragging an array. That's because I cannot see how it is possible for the operating system to make decisions regarding the best way to physically lay out files on a disk (which is what defrag does) when it has no true knowledge of the physical nature of the disk.

The only potential benefit I can see would be in command queuing. If the OS believes that a file is fragmented, it will send more commands (read 4 sectors beginning at sector 1000, then read 10 sectors beginning at sector 2000, then read 20 sectors beginning at sector 3000) than if it believes the file is in a single chunk (read 34 sectors beginning at sector 1500 [yes, this is a very small file]). This benefit is theoretical and likely intangible, as the additional delay to send three commands, or even hundreds, into the queue as opposed to a single command is irrelevant compared to the delay associated with a true physical head seek.
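
Put another way, the only thing defragging changes for certain is how many read requests the filesystem has to issue. A trivial illustration (the sector numbers are the same invented ones as above):

def read_commands(extents):
    # One read command per extent the filesystem believes exists.
    # Each extent is (starting_sector, sector_count).
    return ["READ %d sectors @ %d" % (count, start) for start, count in extents]

fragmented = [(1000, 4), (2000, 10), (3000, 20)]   # three extents -> three commands
contiguous = [(1500, 34)]                          # one extent    -> one command
print(len(read_commands(fragmented)), len(read_commands(contiguous)))  # 3 1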

If anyone has any information that can truly counter (or factually support) these arguments, PLEASE list it here, I beg you. If it truly stands up, I will sing loud praises to the author who can save me from ever having to research this again.

 

Author Comment

by:thenightlife
ID: 22991314
Very well written.  I too am confused by this subject, and the debate within our office continues.  We suffer from very poor Exchange performance on a very high-horsepower server.  My question now is, knowing that most systems running Windows Server 2003 will be using an array of some sort, if the defrag tool is ineffective...why include it in the OS?  Diskeeper Corp sent me some documentation supporting defragging a RAID 5 volume...but I would assume there are marketing considerations for them.

So red is not really red in the defrag tool that ships with the server, as you state in not so many words above.  How would you optimize the drives?  Is there another tool or procedure you would recommend?
 

Accepted Solution

by:
MuddyBulldog earned 125 total points
ID: 22992330
The Windows defrag tool is a carryover from Windows 95/98. Fact is, Microsoft didn't include one in Windows NT 4.0, in spite of the fact that NT 4.0 was released after Windows 95. This led to a lot of complaints from purchasers and opened the door for companies like Diskeeper (Executive Software) to fill the void. The defrag tool that got added back in Windows 2000 was in fact a Microsoft-branded version of Diskeeper Lite (the Help > About box would show the appropriate copyrights attributed to Executive Software). Why do they continue to include it in the server OS? Can't say for sure. Windows 2003 is built from the same codebase as Windows XP, and XP has the defrag tool, so Server gets it as well? Who knows. Why do they include Media Player and Sound Recorder?

I've seen Diskeeper's most recent release "The Benefits of Automatic Defrag on RAID" dated 10/30/2008. The closest they come to a technical explanation is "It defragments the logical drive, improving the speed and performance of a RAID environment by eliminating wasteful and unnecessary I/Os from being issued by the file system. This occurs because the file system sees the files and free space as being more contiguous. The file system will spend less time checking file attributes, meaning more processor time can be dedicated to doing real useful work for the user and application."

This aligns perfectly with what I said previously: fewer commands sent to the queue by the file system driver because it is under the (false) impression that the blocks are contiguous, based on the information being presented by the logical (fake) allocation map. They make no mention whatsoever of actually improving physical disk I/O.

The release follows up in typical marketing style with no actual statistics or benchmarks. It does provide a customer testimonial, which they expect us to take to heart even though it's from somebody we don't know at a company we've never heard of. He claims that prior to using Diskeeper their backups were timing out because "The servers could not put the file fragments together fast enough for the backup software to maintain its throughput." While I can't say this is untrue, I find it hard to swallow. My opinion is that if defragmenting the "fake drive" actually caused a noticeable improvement, the servers in question really need to be upgraded.

As for "red is not red", that's a really tough one, which is why I keep returning to this debate over and over throughout my career. In theory the array controller should be optimizing the layout of the disks behind the scenes, but this isn't optimal because the controller can only base its decisions on raw disk access. Because the array controller (just like a single hard drive) doesn't have any awareness of the filesystem sitting on top of it, it can't make decisions that benefit the filesystem (or the OS), only ones that benefit itself in the most generic sense.

To be truly effective, an optimizer has to have firsthand knowledge of both the physical disk structure and the filesystem on top of it. If you can add to that an understanding of the data on top of the filesystem, even better. In the old days of single drives this was easy because what the OS saw was a literal map of the physical drive structure. Even as abstractions started to get introduced, such as LBA, it was feasible because there remained a one-to-one correlation between the logical sectors and the physical sectors. These days I know of no such animal that meets these criteria.

Now, fact is, Exchange is murder on disk I/O. It uses JET database technology, so it's essentially a cousin of Microsoft Access and MSDE, not the more robust Microsoft SQL Server. Add to that the fact that every bit of data going in essentially gets written twice (once to the transaction logs, and then later to the actual database) and you've got a performance nightmare.
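
That write-twice behaviour is classic write-ahead logging: every change is appended to the sequential transaction log first and only later flushed into the database file. A stripped-down sketch of the I/O pattern (generic log-then-flush behaviour, not Exchange's actual internals):

class ToyJetStore:
    # Illustrates the double write: transaction log first, database later.
    def __init__(self):
        self.log = []        # sequential transaction log (fast appends)
        self.database = {}   # the database file (random-access updates)

    def commit(self, key, value):
        self.log.append((key, value))    # write #1: transaction log

    def flush_logs(self):
        for key, value in self.log:
            self.database[key] = value   # write #2: database file
        self.log.clear()

store = ToyJetStore()
store.commit("mailbox/alice/msg1", "hello")   # one logical change...
store.flush_logs()                            # ...costs two physical writes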

Who knows, maybe defragging will reduce command queue clutter enough that you do see a difference. There are probably other things that will produce much better results, say placing your transaction logs and your database on two different arrays (not two different logical drive letters on the same logical array; no benefit there, IMO). Without knowing the specifics of your setup it's impossible to say what might help. You say your server is robust, but an eight-way server with 32GB of RAM is going to fall flat if it has a few hundred users and only three physical disks in a RAID 5 array. You'll have virtually no CPU load but will be waiting forever because you'll always be waiting on disk I/O, fragmentation or not.

In reality, we just don't know. If you have faith in your backups, set some time aside and defrag the array (not just enough time to do the defrag, but enough time to recover the server should something go awry). Maybe you'll see a boost in performance, maybe not. At least you'll know for sure.
