RAID 10 vs. RAID 5EE - Better Performance

Hey!  Not a question, actually, but I'm posting it for (hopefully) someone's benefit. There are many questions on this topic up here, and a lot of conflicting info.

When looking to replace our DB server with a new one, I wanted to find the "best" RAID solution. I initially thought I'd go with RAID 10 (mirrored pairs, striped). I purchased the Adaptec 5805 RAID controller, bought 5 Seagate 72GB Cheetah SAS drives, and put them all in a fast new server.  The Adaptec documentation mentioned RAID 5EE (RAID 5 striping with parity, plus an integrated hot spare, so if any one disk fails you're OK).  The documentation indicated that 5EE had the best read times but the slowest write times of the various RAID options. Since most of our bottlenecks seem to be reading from the DB, I decided to go that route instead.

Bad decision!  I have done a lot of testing, and here is what I have come up with.  All these times are for copying a 2.2GB file from one location to another locally.

Copy from RAID 5EE to single SATA-II drive:   27 seconds
Copy from single SATA-II drive to RAID 5EE array:  260 seconds
That's not a typo... it took 9 1/2 times as long to write to the RAID 5EE!

So I reconfigured the system to RAID 10 (4 disks) plus a hot spare. New times:
Copy from RAID 10 to single SATA-II drive:   29 seconds
Copy from single SATA-II drive to RAID 10 array:   34 seconds.
(BTW... I repeated these tests with larger files and got comparable results)
I am **more** than happy to lose a minimal amount of read performance to be able to write over 7 times faster!
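For anyone comparing, the throughput implied by those copy times is easy to work out. A quick sketch (the 2.2GB file size and the four timings are taken from the tests above):

```python
# Throughput implied by the copy times reported above (2.2 GB file).
FILE_MB = 2.2 * 1024  # ~2252.8 MB

def mb_per_s(seconds):
    return FILE_MB / seconds

raid5ee_read  = mb_per_s(27)   # RAID 5EE -> single SATA drive
raid5ee_write = mb_per_s(260)  # single SATA drive -> RAID 5EE
raid10_read   = mb_per_s(29)   # RAID 10 -> single SATA drive
raid10_write  = mb_per_s(34)   # single SATA drive -> RAID 10

print(f"RAID 5EE: {raid5ee_read:.0f} MB/s read, {raid5ee_write:.0f} MB/s write")
print(f"RAID 10 : {raid10_read:.0f} MB/s read, {raid10_write:.0f} MB/s write")
print(f"Write speedup moving to RAID 10: {raid10_write / raid5ee_write:.1f}x")
```

So the RAID 5EE array was writing at under 10 MB/s while reading at over 80 MB/s, which matches the "over 7 times faster" figure for RAID 10 writes.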


aleghart commented: What is so surprising about the RAID5 write penalty?  This is not news.

Some holes in your testing:

Database writes do not occur 2GB at a time.  Try less than a KB at a time.  You are performing a single large file transfer.

The manufacturer (and everyone else for the last 10+ years) warns that writes on a RAID5 array are slow due to the nature of parity calculation and distribution.

The controller has a lot to do with the write penalty.  Spend some decent money on a 3Ware controller, and you'll see 100MB/s writes.

Granted, it's 25% the write speed of the same drives in RAID0, but that's not apples-to-apples!

RAID5 or RAID6 arrays are not made for write speed.  They're not designed for extremely large single-file transactions.  Cache works great with multiple small transactions.  Increasing the number of spindles to 5 or more will give you better read performance, as well as reduce the rebuild time versus 3 drives with higher capacity.
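The small-write point is worth quantifying. On RAID 5, each small random write typically costs four physical I/Os (read old data, read old parity, write new data, write new parity); on RAID 10 it costs two (one write to each side of a mirror pair). A back-of-the-envelope model in Python - the ~180 IOPS per 15k RPM spindle figure is my assumption, not a number from this thread:

```python
# Back-of-the-envelope small-random-write IOPS model.
# Assumption: ~180 IOPS per 15k RPM spindle (illustrative, not measured here).
SPINDLE_IOPS = 180

def raid5_write_iops(n_disks):
    # Each logical write = read old data + read old parity
    #                    + write new data + write new parity (4 physical I/Os).
    return n_disks * SPINDLE_IOPS / 4

def raid10_write_iops(n_disks):
    # Each logical write lands on both halves of one mirror pair (2 physical writes).
    return n_disks * SPINDLE_IOPS / 2

print(f"5-disk RAID 5 : ~{raid5_write_iops(5):.0f} small-write IOPS")
print(f"4-disk RAID 10: ~{raid10_write_iops(4):.0f} small-write IOPS")
```

Even with one more spindle, the RAID 5 array comes out well behind on small random writes, which is exactly the OLTP pattern aleghart describes.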

In our database environment, there are far more lookups than writes.

Same goes for general file access.  How many times do you look at a JPEG file, versus edit it?
When we originally set up our SQL server we used RAID 5, mostly because that's what we'd always done.  Over time we kept running into access-time issues, and someone suggested we move to RAID 10.  We had to redo the server anyway, so we beefed everything up and transitioned from RAID 5 to RAID 10, and we've noticed a huge increase in our ability to get things done on the server.  RAID 10 was definitely a good choice for our SQL boxes.

I moved ours to RAID 10, mostly because we do very frequent backup/restores of a live production database for testing purposes.  The write penalty becomes an issue because we are running a low-end RAID controller that came with the server (HP 6i).

aleghart: I don't know where you got that picture, but I have a farm of servers with 3ware 9650SE-8LPML controllers and 8 HDDs attached. The performance is terrible if all drives are utilized. I'm in the process of moving to the Adaptec 5805 - it performs much better.

Here are some test results on the 3ware with 6x 750GB Seagate drives:
HW raid10:
1 stream burst read ~ 300MB/s
3 streams burst read ~ 45MB/s total
1 stream burst write ~ 81MB/s
3 streams burst write ~ 22MB/s total

HW raid6:
1 stream burst read ~ 170MB/s
3 streams burst read ~ 50MB/s total
1 stream burst write ~ 7MB/s
3 streams burst write ~ 6MB/s total
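Single- vs. multi-stream numbers like those above can be reproduced with a small script. A minimal sketch in Python (everything here is a placeholder assumption: point TARGET_DIR at the array's mount point and use much larger files for a meaningful result, since small buffered writes mostly measure the page cache):

```python
# Hypothetical multi-stream sequential-write benchmark sketch.
# TARGET_DIR, stream count, and sizes are placeholders, not values from this thread.
import os
import tempfile
import threading
import time

TARGET_DIR = tempfile.gettempdir()   # assumption: replace with the array's mount point
STREAMS = 3
CHUNK = 1024 * 1024                  # 1 MiB writes
CHUNKS_PER_STREAM = 16               # 16 MiB per stream; increase for a real test

def writer(i):
    path = os.path.join(TARGET_DIR, f"bench_{i}.tmp")
    buf = b"\0" * CHUNK
    with open(path, "wb") as f:
        for _ in range(CHUNKS_PER_STREAM):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())         # force data to disk so the timing is honest
    os.remove(path)

start = time.time()
threads = [threading.Thread(target=writer, args=(i,)) for i in range(STREAMS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
total_mb = STREAMS * CHUNKS_PER_STREAM
print(f"{STREAMS} streams: {total_mb / elapsed:.1f} MB/s aggregate")
```

Running the same script with STREAMS = 1 versus 3 shows the kind of aggregate drop ravenpl reports when multiple sequential streams compete on one array.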
andyalder commented:
aleghart, if you've got £600 to spare you can upgrade the 6i to a 6502/512 :)
(you do have a battery connected to the 6i, don't you?)
@ravenpl: straight from 3Ware
For your 9650:
"...the 3ware 9650SE outshines the competition as the new standard bearer for RAID 6 performance, delivering over 800MB/s RAID 6 reads and 600MB/s RAID 6 writes."

There seems to be a severe disconnect between your setup and 3Ware's advertising.  I will automatically cut claims in half, figuring best-case setup with fastest and largest # of spindles, men in bunny suits taking the measurements and rounding up.

But, you're saying that you can only get 6MBps write time on their RAID6?

@andyalder: Yes.  BBU installed from the get-go.
> But, you're saying that you can only get 6MBps write time on their RAID6?
Correct. But I have no BBU, and therefore the cache is disabled.
With cache on I'm getting 85 MB/s on writes - not a speed monster anyway.
I also tried configuring 6 single drives on the 3ware (no cache) and building a Linux software RAID6 - 56 MB/s...
I see.  Two things that make servers faster:

1. cache
2. cash

To my CFO, I am a black hole of money.
More about the 3ware: I was testing it on Linux only; maybe Windows performs better. The cause(?) of the 3ware slowdown was observed with blktrace under Linux.
If SCSI commands are arriving at the controller quickly, the controller starts replying to them in batches at ~10Hz (as if it's choking?). That results in a filled queue and the speed drop.
DBrecht... did anything suggested here solve your issue?

If you do not know how to close the question, the options are here:
jhyiesla: good that you recalled this thread.
Recently I wanted to go from the 3ware to the 5805 controller. What a disaster! It turned out that the 5805 is incompatible with most Seagate drives.
DBrecht, I hope the Cheetahs are compatible...
I've had ST3750630AS & ST31000340AS drives. The performance is terrible (it chokes under heavy writes). Drives were failing (things look stable now, after a firmware update on the HDDs).