Steve Hood

asked on

RAID 6 or RAID 10?

I'm setting up a new file server/domain controller for our company of 15 staff.

The server is this model: HP ProLiant DL380 Gen9
The RAID controller is this: Smart Array P440ar with 2GB FBWC
The server can only accept 2.5" disks.
 
- I figure about 5TB will be enough space.
- The majority of files on this server will be large AutoCAD and Photoshop documents.
- The server is unlikely to be write-intensive, as no databases or applications will run on it.

I'm thinking RAID 6 (5 x 1.8TB 10K RPM SAS drives).

Anyone want to weigh in?
Lance Hietpas

From: https://community.spiceworks.com/topic/1155094-raid-10-and-raid-6-is-either-one-really-better-than-the-other
RAID 10 is better in nearly every way for resilience.

Let's say we have twelve 3TB drives, and two such sets for test arrays.

Array 1 is RAID 6. That gives you 30TB of storage, with 2 drives' worth of capacity lost to parity. You can lose 2 drives and still function; a third drive failing will kill the array.

Array 2 is RAID 10. That gives you 18TB of storage, because you lose half to the mirror set.
At worst, a 2-drive failure could kill this array IF both drives from the same mirror set failed. At best, you could lose 6 drives and the array would still function.

Two weeks ago I saw a RAID 6 array lose 2 drives at once. Rebuild time was about 5 days with the rebuild rate set to 50% (which is high, and cripples disk performance while rebuilding). During that time, a third drive died and the array was toast. Looking at the specific drives that failed, that array would have survived as RAID 10, because none of the failed drives would have been mirror partners.

In RAID 6, a rebuild has to read every surviving drive and recalculate the missing data. That means reading roughly 30TB of data to rebuild, while hoping a URE (unrecoverable read error) doesn't occur.

With RAID 10, the controller simply copies the data from the good member of the pair, so it only has to read 3TB of data. The rebuild is orders of magnitude faster. This problem scales up and gets worse for RAID 6 with each drive you add.
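A minimal sketch of the arithmetic above (the function names are mine, not from any RAID tool): with 12 x 3TB drives, RAID 6 yields 30TB usable against RAID 10's 18TB, but a RAID 6 rebuild must read every surviving drive (11 x 3TB = 33TB, the ballpark of the ~30TB quoted), while RAID 10 reads only the failed drive's 3TB mirror partner.

```python
# Capacity and rebuild-read arithmetic for the 12 x 3TB example above.

def raid6_usable(n_drives, drive_tb):
    # RAID 6 loses two drives' worth of capacity to parity.
    return (n_drives - 2) * drive_tb

def raid10_usable(n_drives, drive_tb):
    # RAID 10 mirrors every drive, so half the raw capacity is usable.
    return n_drives // 2 * drive_tb

def rebuild_read_tb(raid_level, n_drives, drive_tb):
    # RAID 6: every surviving drive must be read to recompute missing data.
    # RAID 10: only the failed drive's mirror partner is read.
    if raid_level == 6:
        return (n_drives - 1) * drive_tb
    return drive_tb

print(raid6_usable(12, 3))         # 30 TB usable
print(raid10_usable(12, 3))        # 18 TB usable
print(rebuild_read_tb(6, 12, 3))   # 33 TB read for a RAID 6 rebuild
print(rebuild_read_tb(10, 12, 3))  # 3 TB read for a RAID 10 rebuild
```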
Paul MacDonald
With ANY RAID option, you should, of course, replace a failed disk when it happens -- not wait for a 2nd failure just because you have a RAID type that supports 2 failures.   The whole idea of dual fault tolerance in RAID-6 is so your rebuild will still succeed if a 2nd drive happens to fail during that process.

In the example given above in favor of RAID-10, consider the following:

=>  First, the rebuild times are not realistic -- at least not in any array I've seen.   To rebuild a 3TB drive in a RAID-6 array generally takes well under a day.   Yes, it's reading a lot of data; but all of those reads are happening concurrently, so it's not as bad as it sounds.

=>  Second, the implication is that you have better protection against multiple failures with RAID-10, but that's not at all true.   With RAID-6 you can ALWAYS sustain two disk failures without loss of data.   With RAID-10 you MIGHT be able to sustain two failures (in the example above, there's a 5/6 chance you could -- but in the other case you'd lose all data on the failed pair) ... but you also might not.
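The two-failure odds in that 12-drive example can be checked by enumeration (a sketch; note the 5/6 figure above counts mirror pairs, while counting individual drive combinations uniformly gives 10/11):

```python
from itertools import combinations

# 12-drive RAID 10 = 6 mirror pairs; pair i holds drives 2i and 2i+1.
pair_of = {d: d // 2 for d in range(12)}

# Enumerate every way two drives can fail; the array dies only when
# both failures hit the same mirror pair.
outcomes = list(combinations(range(12), 2))
fatal = sum(1 for a, b in outcomes if pair_of[a] == pair_of[b])

print(fatal, len(outcomes))        # 6 fatal out of 66 possible failure pairs
print(1 - fatal / len(outcomes))   # ~0.909 chance of surviving two failures
```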

=>  Assuming you're always going to replace a failed disk when it happens, I think RAID-6 is a decidedly better choice for a capacity/fault-tolerance tradeoff.

=>  One final thought:   No matter what RAID level you use, RAID is NOT a substitute for backups => and assuming you have current backups, I'd much rather have the higher storage capacity and guaranteed dual fault-tolerance of a RAID-6 setup.
Vladislav Karaiev

As an option, I would try to price the same capacity with 2.5" SSDs in RAID 5 instead.

Totally agree. Nowadays, SSDs are more cost efficient compared to 15k SAS drives. With SSD RAID5, you also achieve much faster rebuild times in comparison with HDD arrays.
£8,200 to use read-intensive SSDs in RAID5, £2,100 to use 1.2TB in RAID6 (I used 1.2TB as they're a lot cheaper than 1.8TB and more spindles make it faster)

SSDs have superior random access performance but your big files are sequential so SSDs may not even be any faster.

Sequential write performance of RAID 6 is the same as that of RAID 10, assuming the controller can do a full-width write, because one stripe is a single write for each disk in the array (there will be more disks in the RAID 10 array for the same amount of data).

Sequential read performance of RAID 6 is better than that of RAID 10, because Smart Arrays can't predict which disk in a RAID 10 mirror will reach the data quickest, so the controller sends both disks after the same data. As a result, 8 disks in RAID 10 give only about 5 times the read speed of 1 disk, while 7 disks in RAID 6 give 7 times the read speed of 1 disk.
£8,200 to use read-intensive SSDs in RAID5, £2,100 to use 1.2TB in RAID6

First of all, sorry for the wall of text.

I initially compared SSD pricing vs 15K SAS spindles (they are significantly more expensive than 10K ones). But from what you've said, I assume you are comparing SSDs vs 10K SAS drives, which is still OK.

To fulfill the OP's requirements, we need 8x 800GB SATA SSDs in RAID 5 to achieve 5.6TB usable capacity, or we can use 7x 1.2TB 10K SAS drives in RAID 6 and achieve 6TB usable.
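Those usable-capacity figures follow from the standard RAID formulas (helper names here are illustrative):

```python
# Usable-capacity check for the two proposed layouts.

def raid5_usable(n, size_tb):
    return (n - 1) * size_tb   # one drive's capacity goes to parity

def raid6_usable(n, size_tb):
    return (n - 2) * size_tb   # two drives' capacity goes to parity

print(raid5_usable(8, 0.8))    # 8 x 800GB SSDs in RAID 5 -> ~5.6 TB usable
print(raid6_usable(7, 1.2))    # 7 x 1.2TB SAS in RAID 6  -> 6.0 TB usable
```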

Price for HP DL380 G9 compatible SSD: https://www.amazon.com/HP-804599-B21-Intensive-2-SmartDrive-carrier/dp/B013Y4XV86

$416*8= $3328

Price for HP DL380 G9 compatible SAS drive: https://www.amazon.com/HP-1-2TB-12G-2-5in-781518-B21/dp/B00TOCV1SU

$319*7=$2233

Total drives cost would be: $3328 for SSD setup vs $2233 for HDD setup. SAS array would be $1095 cheaper. This is, obviously, not even close to 6100 pounds difference as you have mentioned :)
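The cost comparison is just this arithmetic, using the Amazon prices quoted above:

```python
# Drive-cost totals from the quoted per-unit prices.
ssd_total = 416 * 8   # 8 x 800GB SATA SSDs at $416 each
sas_total = 319 * 7   # 7 x 1.2TB 10K SAS drives at $319 each

print(ssd_total, sas_total, ssd_total - sas_total)   # 3328 2233 1095
```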

Now, let's talk about performance. You are correct about sequential write performance: sometimes 15K or even 10K SAS spindles can beat read-intensive SSDs on 64K 100% sequential patterns. But this is not always true for sequential read patterns.

I work at a "storage-oriented" company, so I've managed to gather some performance metrics from our previous lab tests. Allowing for hardware differences, they can still be applied in this case with a slight margin of error. All results below were obtained with Microsoft's DiskSpd tool.

Sequential access performance results:

1st setup: Dell R730xd: 7x 10k SAS drives in RAID5

  • 64K 100% Sequential Read = 16150 IOPS
  • 64K 100% Sequential Write = 15917 IOPS

2nd setup: Dell R730xd: 8x SSD drives in RAID5

  • 64K 100% Sequential Read = 55200 IOPS
  • 64K 100% Sequential Write = 12704 IOPS

As we can see, the SAS drives are indeed about 20% faster under the 64K 100% sequential-write pattern. But looking at the 100% sequential-read pattern, it becomes clear that the SSD array performs about 70% better than the 10K spindles.
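For intuition, those 64K IOPS numbers translate into raw throughput roughly as follows (a sketch; this assumes exactly 64 KiB per I/O and ignores controller overhead):

```python
# Convert the 64K-block IOPS figures above into approximate throughput.
# MiB/s = IOPS * 64 KiB / 1024.

def mib_per_s(iops, block_kib=64):
    return iops * block_kib / 1024

results = {
    "SAS RAID5 seq read":  16150,
    "SAS RAID5 seq write": 15917,
    "SSD RAID5 seq read":  55200,
    "SSD RAID5 seq write": 12704,
}
for label, iops in results.items():
    print(f"{label}: {mib_per_s(iops):.0f} MiB/s")
```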

From my personal experience and metrics gathered from our Customers, I can tell that a typical file-server workload is usually about 30% Writes and 70% Reads. This is why I think that, in our particular case, SSD array will perform significantly better than SAS array.

Moreover, all my statements above assume that we have a single Photoshop/AutoCAD user who works with the file-server generating sequential I/O. Now let's imagine we have 15 users as stated by OP. This is where our RAID array starts to receive a partially randomized I/O workload.

In such case, we need to take into account the random I/O metrics.

Random access performance results:

1st setup: Dell R730xd: 7x 10k SAS drives in RAID5

  • 64K 100% Random Read = 1770 IOPS
  • 64K 100% Random Write = 748 IOPS

2nd setup: Dell R730xd: 8x SSD drives in RAID5

  • 64K 100% Random Read = 51522 IOPS
  • 64K 100% Random Write = 6540 IOPS

This is where our SSD RAID5 array shines much brighter.

Considering all of the above, I believe the $1,095 disk price difference is justified: an SSD-based setup will perform much better and leaves significantly more headroom for the future.
Steve Hood (ASKER)

I'm confused now. When selecting a RAID level with 10K SAS drives, everyone (and I mean everyone) advises against RAID 5 and recommends either RAID 6 or RAID 10 (mostly 10).
..then..
some folks suggest using SSD drives, but they recommend RAID 5 for the SSD configuration, not RAID 6 or 10.
That's the opposite of the above. I don't understand why RAID 5 is better for SSDs while non-SSDs use RAID 6 or 10. Am I missing something?

RAID-5 is NOT better for any type of drive.   I suspect the reason it was mentioned is that an SSD is generally more reliable than a traditional drive; and the rebuild time for an SSD-based array is MUCH faster, so there's not as much "at risk" time.   Clearly a RAID-6 or -10 array would be even better -- even with SSDs.
I agree with Gary. RAID-5 is now not recommended at all for spinning rust; the jury is still out on RAID-5 with SSDs.
Those 800GB SSDs are certainly much cheaper per GB than the 1.6TB ones I based my pricing on.
Ok, it's down to either:

A) 6 of these SSD's in a RAID 10 (gives me 6TB) Link to these drives from Vendor I always buy from

or

B) 8 of these genuine HP drives with a 3-year warranty

Which would you choose?
Assuming 6TB is plenty of storage, I'd go with the SSDs.

... and buy 7, so you have a spare => then if/(when) one fails, you can do a rebuild immediately and order another spare at that time.   Remember that RAID-10 effectively only guarantees single fault tolerance.
You've swapped the supported SSDs for even cheaper ones that aren't supported in the server?
I agree with Andy, the SSD's you have referenced may be cheap but they will give you support problems with HP and there may be compatibility issues  as well!

HP has a whole load of SSD's that are compatible with this server and controller!
Thanks, friends. I've ordered the HP-certified 1.2TB 10K SAS mechanical drives, 10 of them for a RAID 10. Given the type of file storage (AutoCAD, big files people open, not applications or databases), I think these drives will be OK. SSDs are still just too expensive for an SMB server.
Do you need any further assistance?

If not, please close the question and award the points.
10 of them? The default configuration of the DL380 Gen9 only takes 8 disks unless you add the second backplane kit and a SAS expander card.
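A quick sanity check of the ordered configuration against the chassis (a sketch; the 8-bay default comes from the comment above):

```python
# 10 x 1.2TB in RAID 10 vs the DL380 Gen9's default 8-bay SFF cage.
n_drives, size_tb = 10, 1.2
default_bays = 8

usable_tb = n_drives // 2 * size_tb   # RAID 10: half the drives are mirrors
needs_expansion = n_drives > default_bays

print(usable_tb)          # 6.0 TB usable
print(needs_expansion)    # True: second backplane + expander required
```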