Is RAID 10 enough?

I'm beginning a technology startup company and need to keep hardware to a minimum initially. Can I get high reliability out of non-clustered RAID 10 storage (plus offsite backups)? I'd like to offer a very reliable service but not commit to dual-server storage clusters from the start. Is this a reasonable approach?

Well many people in your shoes go to a storage appliance, as they can provision devices as necessary, perform quick and easy migrations/backup, and clustering is easy because multiple host systems can attach.

A used NetApp appliance with onsite support may be the way to go. Certainly users will have few or no concerns with a NetApp-based storage farm, especially when it comes to reliability.
Phil5780Author Commented:
NetApp looks good for step #2 as we scale out, but currently that's above budget for us. We're going to admin our own servers via co-location and were thinking of just using server RAID 10 arrays to offer independent I/O file access. With backups in hand, is that too risky?
Having a hot spare is a good idea if you're concerned about reliability. I find the following site quite useful for determining reliability levels:

If you're really concerned about data availability then you might want to look at RAID 6 (NetApp calls it RAID-DP). RAID 6 is, however, slower than RAID 5 for writes, which is in turn slower than RAID 10.
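To put rough numbers on those write penalties, here's a quick sketch. The penalty factors are the usual rules of thumb (RAID 10 writes two copies; RAID 5 reads and rewrites data plus parity; RAID 6 does the same for two parities), and the per-drive IOPS figure is illustrative, not a benchmark:

```python
# Back-of-envelope write-penalty arithmetic (illustrative numbers, not benchmarks).
# Rule-of-thumb penalties per random write:
#   RAID 10 = 2 (write both mirrors)
#   RAID 5  = 4 (read data + parity, write data + parity)
#   RAID 6  = 6 (read data + P + Q, write data + P + Q)
def usable_write_iops(drives: int, iops_per_drive: int, penalty: int) -> int:
    """Raw array IOPS divided by the RAID write penalty."""
    return drives * iops_per_drive // penalty

drives, iops = 8, 150   # e.g. eight 7.2k SATA drives at ~150 random IOPS each
for name, penalty in [("RAID 10", 2), ("RAID 5", 4), ("RAID 6", 6)]:
    print(f"{name}: ~{usable_write_iops(drives, iops, penalty)} random-write IOPS")
```

Same spindles, very different random-write budgets; sequential throughput is affected far less.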

If you go down the clustered road, DRBD is an open-source block replication/mirroring project that you'll want to look at.

Finally, a note about software vs. hardware RAID. I've used Linux software RAID in many systems for years and it's always worked very well. You will get better performance from a hardware RAID card with battery-backed cache, but RAID is not something you want to cut costs on: I've been burnt by a cheap hardware RAID card in the past.

I would never consider RAID10 or RAID5.  Go RAID6 because it can tolerate 2 drive failures.  It can take days to rebuild a RAID5 set, especially with multi-TB SATA drives.  If you have a double failure, then not only do you lose a customer forever, but word-of-mouth will severely impact your business.  

If performance hits concern you, then make sure you get SAS or even FC drives.  Also, make darned sure if you go down the SATA path that you buy enterprise-class drives rated for 24x7, 365-days-a-year use.  Many SATA drives are consumer class, designed for 8 hours a day, 300 days a year.
As long as you have dual controllers it shouldn't be necessary to have two SAN storage systems; the HP EVA4400, for example, has only one set of disks and gives five-nines reliability. RAID 10 is fine, since rebuild is very quick compared to RAID 5 or 6 and it gives much better performance. I'd have to know a rough idea of the budget to be more specific: you say you don't want to mirror two storage systems, but presumably you are clustering servers.
Determine what MTBF is acceptable to you, and configure storage to achieve it.
And get a suitable backup plan that permits you to recover from failure.
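Before buying hardware it's worth putting crude numbers on "acceptable". A minimal sketch, assuming independent drive failures and an illustrative 3% annual failure rate per drive (this binomial model ignores rebuild windows and correlated failures from shared batches, so real-world risk is higher):

```python
from math import comb

def p_at_least(n: int, k: int, p: float) -> float:
    """Probability of >= k failures among n independent drives in a year,
    given per-drive annual failure probability p (binomial model)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

afr = 0.03   # assumed 3% annual failure rate per drive (illustrative)
n = 8        # an 8-drive array
# Any single failure degrades the array; RAID 6 survives any two failures,
# so >= 3 failures in the window is a crude proxy for RAID 6 data loss.
print(f"P(>=1 of {n} drives fails in a year): {p_at_least(n, 1, afr):.4f}")
print(f"P(>=3 of {n} drives fail in a year): {p_at_least(n, 3, afr):.6f}")
```

The point of the exercise is the orders-of-magnitude gap between "a drive will fail" (likely) and "enough drives fail at once to lose data" (rare, but never zero), which is why the backup plan is non-negotiable.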

Generally RAID10 is used when you need high-performance storage, in particular without the RAID5 write penalty. The cost of RAID10 vs. RAID5 is disk space.

It is in common use and you can certainly use it, but be very careful about your hardware choices.  Try to get the drives in the same mirrored set from different batches, and burn in your drives thoroughly (a few reads and writes to the entire disk surface) before placing them into an array and making them live.

Always use brand-new enterprise drives that have the 5-year warranty, and a good hardware RAID controller fully patched for any firmware errata -- LSI MegaRAID or HP SmartArray are great.  For RAID on a critical server, you should use a controller with battery-backed NVRAM to protect the write cache, and be sure you can get a spare easily.  No desktop-grade drives.  The rest of the components in your server can be aftermarket if you really want -- just buy another one if it breaks -- but data is irreplaceable.

If the server stays in commission for more than 5 years (or approaches your computed MTBF),  plan to swap those drives  and  move them to something less critical.

RAID6 provides better fault tolerance at the cost of performance.

RAID10 can withstand two drive failures, as long as the two failed drives are not in the same mirrored set.  However, a failure while rebuilding can be fatal.

If you do use RAID10, stick with disk drives that are 200 GB or less.
Do not use RAID10 with 500 GB or 1 TB desktop drives; the chance of a failure while rebuilding, or of a second failure, is much higher for large disks.


I also agree with Linux software RAID; I have used it for many, many years.  It is also the foundation of a great number of SAN and NAS appliances.  Just one important bit of advice on Linux software RAID: make sure you are patched up and using a current kernel.  Many vendors I know also use a small industrial flash drive to boot their OS, and use a soft link to a ramdisk for /tmp and log files that nobody will ever want to keep.
Phil5780Author Commented:
Thanks for the information.

RAID6 piques my interest.  What is the performance loss?  How does the rebuild time compare to RAID 10?
RAID6 is similar to RAID5, but uses two parities ("double parity").
It means that for every stripe of data blocks, there will be two parity blocks.
In RAID5 there is just one parity block per stripe of data blocks.

e.g. in RAID5 you could have
ParityBlock1(disk3, sector6) = DataBlock1(disk1, sector6) XOR DataBlock2(disk2, sector6)

So if the drive with DataBlock1 failed, you can still determine the value DataBlock1 had before the failure, by computing DataBlock2(disk2, sector6) XOR ParityBlock1(disk3, sector6), which is equal to the missing value.

By  XOR'ing  the bits in the two blocks of remaining disks you have, you get the missing bits.
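A minimal sketch of that XOR recovery, using two 4-byte blocks in place of real sectors:

```python
d1 = bytes([0x41, 0x42, 0x43, 0x44])   # DataBlock1 (on disk 1)
d2 = bytes([0x10, 0x20, 0x30, 0x40])   # DataBlock2 (on disk 2)

# What the controller stores on the parity disk for this stripe:
parity = bytes(a ^ b for a, b in zip(d1, d2))

# Simulate losing the drive holding d1, then rebuild it from the survivors:
rebuilt = bytes(a ^ b for a, b in zip(d2, parity))
assert rebuilt == d1
```

The same XOR works for any single missing member of the stripe, data or parity, because XOR is its own inverse.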

Keep in mind... this does take longer than reading DataBlock1 would take.  If you lose 1 drive in a RAID5 array, your array is in a degraded state: it means that you lose all data on the array if another drive fails, but it ALSO means that READ performance is degraded, since your hardware now needs to perform this XOR computation for reads that touch the failed disk; the same is true of RAID6 if you lose a drive.

In RAID6, a second, more CPU-intensive computation is performed to
compute a second parity, in addition to the first parity, when writing any block.

The main performance penalty in RAID6 arises when writing, due to the computational expense of calculating a second value  which is  independent of the first Parity block.

There is some flexibility in the RAID controller manufacturer's choice here.
The important property is that an additional block is written, and that it's sufficient to recover the value of a given data block having only:

*    1 data block and 1 parity block   (as with RAID5, where 2 data blocks on different disks were used to compute the corresponding parity block)
*    2 parity blocks   (where 2 data blocks on different disks were used to calculate both the first and the second corresponding parity blocks)
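A toy sketch of that second, independent parity over two data bytes, using GF(2^8) arithmetic with coefficients 1 and 2. (This is a simplification: real RAID6 implementations assign each data disk a power of a generator, but the recover-from-two-losses algebra is the same.)

```python
def gfmul(a: int, b: int) -> int:
    """Multiply in GF(2^8) using polynomial 0x11d, as RAID6 Q parity does."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
    return r

d1, d2 = 0x37, 0xA9                   # one byte from each of two data disks
p = d1 ^ d2                           # P parity: plain XOR (the RAID5 parity)
q = gfmul(1, d1) ^ gfmul(2, d2)       # Q parity: independent weighted sum

# Both data disks fail; solve the 2x2 system from the surviving P and Q:
#   d1 XOR d2     = p
#   d1 XOR 2*d2   = q   =>  (1 XOR 2)*d2 = 3*d2 = p XOR q
d2_rec = next(x for x in range(256) if gfmul(3, x) == p ^ q)  # brute-force divide
d1_rec = p ^ d2_rec
assert (d1_rec, d2_rec) == (d1, d2)
```

The brute-force division is just for clarity; hardware uses log/antilog tables. The extra multiply per data block on every write is the "more CPU-intensive" cost being described above.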

RAID6 and 5  are not like RAID4  that had a dedicated parity disk and dedicated data disks.   Instead, both RAID 6 and 5 are distributed parity.  If you have N disks...   there is a sequence  over  'which disk' the parity is written to  that alternates for each stripe.

However, when the OS wants to read a block of data off a degraded array, the RAID controller has to read parts of the stripe from multiple drives before it can calculate the value to pass to the OS.  This is what allows RAID10 to achieve (slightly) faster reads.

Also, since RAID10 is simply mirroring, if a drive does fail there won't be a performance degradation on reads.

Normally, the only way you will detect a failure of a single drive in a RAID10 array, is by watching your  array monitoring.

Much different from RAID5 or RAID6, where (in some cases) a server with a failed drive loses much read performance, which can bring the server to its knees and result in thrashing, depending on how important read performance is to the application.
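With Linux software RAID, that monitoring can be as simple as watching /proc/mdstat for a "_" in the member-status string. A sketch (the sample text below is made up to resemble a degraded 4-disk RAID10; field details vary by kernel):

```python
import re

def degraded_arrays(mdstat_text: str) -> list[str]:
    """Return names of md arrays whose member-status string (e.g. [UU_U])
    contains '_', meaning a failed or missing member."""
    bad, current = [], None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)           # remember which array we're in
        elif current and (s := re.search(r"\[([U_]+)\]", line)) and "_" in s.group(1):
            bad.append(current)            # status line shows a dead member
    return bad

sample = """\
md0 : active raid10 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      976510976 blocks 64K chunks 2 near-copies [4/3] [UU_U]
"""
print(degraded_arrays(sample))
```

In practice you'd run something like this from cron (or just use mdadm's own --monitor mode) and alert on any non-empty result, since a silently degraded RAID10 is one failure away from data loss.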
