erteam asked:

What is the maximum number of disks recommended in a RAID 5 array?

I have 14 × 1 TB HDDs.

I would like to build a RAID 5 array using those 14 disks.

What is the recommended number of disks for a single array?

For instance, can we use all 14 disks in a single array, or should we use fewer disks and build multiple arrays?

Which one is recommended?

Which will give better performance?

Please help me.
Randy_Bojangles (United Kingdom)

Generally speaking, the more spindles (disks) you have in a RAID 5 array, the better your performance.

The number of disks you can put into a single array depends on the specific hardware (controller and enclosure) you are using.
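
To put rough numbers on the spindle-count point, here is a back-of-the-envelope model in Python. The 75 IOPS per spindle and the 70% read mix are assumed illustrative values, not measured figures:

    # Rough RAID 5 random-IOPS model illustrating the "more spindles" point.
    PER_DISK_IOPS = 75   # assumed figure for one 7.2K SATA spindle
    WRITE_PENALTY = 4    # RAID 5 write: read data, read parity, write data, write parity

    def raid5_iops(n_disks, read_fraction):
        """Approximate front-end random IOPS for an n-disk RAID 5 set."""
        backend = n_disks * PER_DISK_IOPS
        # Each front-end read costs 1 back-end I/O; each write costs 4.
        cost = read_fraction + (1 - read_fraction) * WRITE_PENALTY
        return backend / cost

    for n in (7, 14):
        print(f"{n:2d} disks, 70% reads: ~{raid5_iops(n, 0.70):.0f} IOPS")
    # 7 disks -> ~276 IOPS, 14 disks -> ~553 IOPS

Doubling the spindle count roughly doubles the random IOPS, which is the point being made here.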
Member_2_231077

HP recommend not having more than 7 or 8, the reason being that if a disk fails, performance goes through the floor and the chance of a second disk failing before the array is rebuilt gets too high. For more than 8 they would suggest RAID 10 or RAID 6, depending on whether you want capacity (RAID 6) or performance (RAID 10). Another option would be RAID 50.
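
For context, the usable capacity of 14 × 1 TB drives under each of those layouts is simple arithmetic; a quick sketch in Python (the RAID 50 line assumes two 7-disk RAID 5 legs):

    # Usable capacity of 14 x 1 TB drives under the layouts mentioned above.
    n, size_tb = 14, 1
    print("RAID 5 :", (n - 1) * size_tb, "TB")   # one disk's worth of parity
    print("RAID 6 :", (n - 2) * size_tb, "TB")   # two disks' worth of parity
    print("RAID 10:", (n // 2) * size_tb, "TB")  # half lost to mirroring
    print("RAID 50:", (n - 2) * size_tb, "TB")   # assumed two 7-disk RAID 5 legs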
Gary Case
As noted above, the number of disks you can use in a RAID-5 array is limited by your controller. If your controller supports it, you can easily use 14 drives in the array.

However, with 14 disks the concern Andy noted is very real: the likelihood of a 2nd drive failing during a rebuild. I'd recommend you get a controller that supports RAID-6 and use these drives in a RAID-6 array => you'll get the same level of performance, and your array will tolerate two drive failures with no data loss. This provides superb fault tolerance and makes it very unlikely you'll ever lose data during a rebuild.
erteam could do an experiment for us: build a 14-disk RAID 5 with 1 TB SATA drives (HP were talking about 10K SCSI when they said a maximum of 8 for safety), then yank one of them out - I presume they are hot-plug. Shove it back in again and record how long it takes to rebuild.

What's the duty cycle of the disks? HP's 8-disk maximum was for 100% duty-cycle disks, but SATA drives (ignoring 10K models with SCSI/SAS capacities, like Raptors) are rated at only a 40% duty cycle, and rebuilding is a full-time job. I expect them to go into a slow read-after-write mode to protect themselves from overheating, but as yet I have no proof. So I'd really like erteam to do the experiment.

There's a good rebuild/performance whitepaper from HP (you can guess I work for an HP house), but it only covers RAID 5; RAID 6 rebuild/performance times would be interesting to see.
The rebuild time is very much a function of the controller. The newer controllers with the 1680 chips can rebuild a drive in an array like that in 2-3 hours; the older 12xx-series chips would take ~25 hours.

These numbers are based on a similar system, detailed on the AVS forum, with 20 × 1 TB drives.
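
Those figures are consistent with a simple capacity-over-throughput estimate. A minimal sketch in Python; the sustained rates are assumptions reverse-engineered from the hours quoted above, not controller specifications:

    # Rebuild time ~= drive capacity / sustained rebuild rate.
    TB = 10**12  # decimal terabyte, as drive vendors count capacity

    def rebuild_hours(capacity_bytes, rate_mb_per_s):
        """Hours to rewrite one drive's worth of data at a sustained rate."""
        return capacity_bytes / (rate_mb_per_s * 10**6) / 3600

    print(f"1 TB at 100 MB/s: {rebuild_hours(TB, 100):.1f} h")  # ~2.8 h (1680-class)
    print(f"1 TB at  11 MB/s: {rebuild_hours(TB, 11):.1f} h")   # ~25.3 h (12xx-class)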
As Randy said, the basic premise is that the more disks in a RAID stripe (RAID-5 is striping with redundancy), the better the performance, certainly where IOPS are concerned. That is where HP's EVA scores, with its ability to have over 150 disks in a single VRAID5 RAIDset; it's really a RAID-50 implementation (striped RAID-5 sets, but with lots of extra smarts, what HP's StorageWorks division calls Secret Sauce). But conventional RAID controllers usually have a limit of 15 to 30-odd spindles per RAIDset, and as Andy and Gary have already said, 7-8 spindles is the usual recommendation due to rebuild times (although whatever controller Gary is talking about that can rebuild a 7-spindle RAIDset in a couple of hours, now that 1 TB spindles are becoming the standard, sounds a bit optimistic, especially if it's supposed to be doing real work at the same time [1 TB in 3 hours works out to about 100 MB/s]).
Another reason that 7-8 spindles is often considered a maximum comes down to your personal paranoia about the reliability of the shelves/trays/etc. that your disks are plugged into. You have two choices: only have one spindle from your RAIDset in each shelf, and therefore survive a shelf failure; or allow your RAIDset to wrap around a shelf, in which case, unless you are using RAID-6, you won't survive one. Now I realise it's an unlikely scenario, as shelves don't fail very often, but it's a risk to consider.

I would also concur with Andy and Gary and recommend that you use RAID-6, although if your write-to-read ratio is high you will need to consider the penalty that running RAID-6 brings, as very, very few RAID controllers have 0% RAID-6 overhead (I only know of one, and it's very expensive).

Hopefully you have already taken into account the IOPS, MB/sec and SATA duty-cycle constraints that come with using these big disks, and are not expecting this setup to be high performance.
None of us remembered to mention the possibility of unrecoverable read errors either: if a spindle fails, what's the chance of one of the remaining disks also having a URE on one sector? In that case backup/restore is normally the only way to regain parity, since the array can't be rebuilt. I've had that happen even on fairly good SCSI disks.

(To be honest, I think they dropped the server because there was a whole bunch of UREs at about the same sector number, making it look like a partial head crash. And that was with a controller that does background parity scrubbing.)
To better understand this very dangerous scenario for large RAID 5/6 arrays, I used the spec sheet below for the WD RE3 Enterprise SATA 1 TB drive:
   http://www.wdc.com/en/library/spec/2879-701281.pdf

Those are the numbers:
- Unrecoverable read errors: 1 per 10^15 bits read
- One RE3 1 TB drive: 1,953,525,168 sectors × 512 bytes × 8 bits ≈ 8.0 × 10^12 bits
- At rebuild time, you have (N-1) drives to read in full to rebuild onto the spare
- That gives a (N-1) × 8.0 × 10^12 / 10^15 ≈ (N-1) × 0.8% probability of another unrecoverable read error
==> A RAID 5 array using 14 × 1 TB drives has roughly a 10.4% probability of hitting another unrecoverable read error at rebuild time

Using the table below, you can choose your risk level... or decide to go with a RAID 10 array (which sits at the 0.8% level for 1 TB drives, since a mirror rebuild reads only one drive).
 Drives (N)   URE risk at rebuild
 1-2            0.8%
 3              1.6%
 4              2.4%
 5              3.2%
 6              4.0%
 7              4.8%
 8              5.6%
 9              6.4%
 10             7.2%
 11             8.0%
 12             8.8%
 13             9.6%
 14            10.4%
 15            11.2%
 16            12.0%

NB: Some large SATA drives claim 1 unrecoverable read error per 10^14 bits read... that would be 10 times WORSE in a RAID array.
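
For anyone who wants to check or extend the table, here is the same arithmetic in a few lines of Python; the only inputs are the RE3's LBA count and its 1-in-10^15 URE spec:

    # Expected unrecoverable read errors (UREs) while reading the N-1
    # surviving drives during a RAID 5 rebuild, per the RE3 spec above.
    DRIVE_BITS = 1_953_525_168 * 512 * 8  # LBAs * bytes/sector * bits/byte
    URE_RATE = 1 / 10**15                 # 1 URE per 10^15 bits read

    def rebuild_ure_risk(n_drives):
        # For small values this approximates the probability of hitting
        # at least one URE during the rebuild.
        return (n_drives - 1) * DRIVE_BITS * URE_RATE

    for n in range(2, 17):
        print(f"{n:2d} drives: {rebuild_ure_risk(n):5.1%}")
    # 14 drives -> ~10.4%, matching the table above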
The possibility of an unrecoverable bit error during a rebuild is indeed fairly high with large arrays; that's why I noted that with RAID-5 arrays "... the likelihood of a 2nd drive failing during a rebuild is a real concern." But RAID-6 dramatically mitigates this, since it will recover from a single bit error even after one drive has failed. This makes it very unlikely you'd have an issue during a rebuild unless a 2nd drive physically failed -- random unrecoverable bit errors would simply be corrected.
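
To put a rough number on how much RAID-6 helps, here is a back-of-the-envelope Poisson sketch reusing the RE3 figures above (my own modelling assumption, not from the spec sheet): RAID 5 fails on the first URE, while RAID 6 with one drive already failed only loses data if two UREs coincide on the same stripe, so the probability of two UREs anywhere is a pessimistic upper bound:

    import math

    # Expected UREs while reading the 13 surviving 1 TB drives (RE3 spec).
    lam = 13 * (1_953_525_168 * 512 * 8) / 1e15   # ~0.104

    p_raid5 = 1 - math.exp(-lam)                  # any single URE fails the rebuild
    p_raid6 = 1 - math.exp(-lam) * (1 + lam)      # >=2 UREs anywhere (upper bound)

    print(f"RAID 5 rebuild failure:       ~{p_raid5:.1%}")   # ~9.9%
    print(f"RAID 6 rebuild failure bound: <{p_raid6:.2%}")   # ~0.50%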
Gimme a Savvio 15K.2 any day: 1 URE in 10^16. (Mind you, they do cost just a tad more.)
Your definition of "... a tad ..." is different than mine !! :-)
(But Seagate's Savvio series are VERY nice drives)
ASKER CERTIFIED SOLUTION from BigSchmuh (France)