exp_exch1 asked:

RAID 5 - 4 drives vs 5 drives.

I am currently rebuilding a RAID 5 array.

I would like to use one of the available drives as a hot spare, but not if there will be a performance impact.

Is there a performance difference in a RAID 5 array operating with 4 drives and a hot spare vs 5 drives without a hot spare?
ASKER CERTIFIED SOLUTION
jramsier (United States): solution text available to Experts Exchange members only.
SOLUTION
David (United States): solution text available to Experts Exchange members only.
flakier:

Yes, there will be a performance impact.  How much impact depends on several variables.  The only way to know for sure is to benchmark each configuration.  Even then, raw performance statistics may not matter; it would also be useful to gather application-level performance statistics (see the rough sketch after this post).

I would guess that the difference will not be too great.  Maybe a cold spare on the shelf is good enough if you want to use all drives in the array?
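For illustration, a rough sequential throughput check along those lines might look like the sketch below (Python; the test path and file size are placeholder assumptions, and a real benchmark would use a dedicated tool such as fio and also exercise random I/O):

import os
import time

TEST_PATH = "/mnt/raid_test/bench.dat"   # hypothetical path on the array under test
FILE_SIZE_MB = 1024                      # 1 GiB test file; make it larger than the controller cache
BLOCK = b"\0" * (1024 * 1024)            # write in 1 MiB chunks

def bench_write():
    start = time.time()
    with open(TEST_PATH, "wb") as f:
        for _ in range(FILE_SIZE_MB):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())             # push the data to the array, not just the OS page cache
    return FILE_SIZE_MB / (time.time() - start)

def bench_read():
    mb = 0
    start = time.time()
    with open(TEST_PATH, "rb") as f:
        while f.read(1024 * 1024):
            mb += 1
    return mb / (time.time() - start)

if __name__ == "__main__":
    print(f"sequential write: {bench_write():.1f} MB/s")
    # Note: the read pass may be partly served from the page cache right after the write.
    print(f"sequential read:  {bench_read():.1f} MB/s")
    os.remove(TEST_PATH)

Run it once against the 4-drive + spare layout and once against the 5-drive layout, and compare the numbers alongside your application's own response times.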
Hi,

This is my point of view only, but it may help you.
I have never used a spare drive because, for a server, drives are quite expensive and I don't see why I shouldn't use one right away.
RAID 5 gives me enough security as long as only one drive fails.
In any case, your spare drive could only ever cover one failing drive.

For RAID 5, the more drives you add, the more secure you are, because data is split across more drives.
But then again, the more you split, there comes a point where performance declines.
It also depends on your RAID hardware and the cache size.

Using 4+1 offers a different kind of security in case one of the 4 fails... which can happen tomorrow or in 3 years, depending on the age and usage of your disks.
But will that spare drive still be operational after sitting there in the dust of the array for a long time? And will it keep up, or also fail after a few days?

If you have at least 128 MB of cache on the array, I would advise you to use your 5-disk array for security and performance instead of a 4+1.
The performance difference between the two configurations will be minimal.

For more than 5 disks, I use RAID1+0 (RAID10).

Keep it simple, you have 5 disks, use them all ... that's what I would do ;-)

Cheers,
Andy
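To put rough numbers behind the 4+spare vs 5-drive comparison above, here is a back-of-the-envelope sketch; the drive size, per-spindle IOPS, and the textbook 4-I/O RAID 5 small-write penalty are all assumptions and ignore controller write-back cache:

# Rough comparison of a 4-drive RAID 5 + hot spare vs a 5-drive RAID 5.
# The per-drive numbers are made up; substitute your own drive size and
# measured per-spindle random-read IOPS.
DRIVE_SIZE_GB = 300        # assumed drive size
DRIVE_READ_IOPS = 150      # assumed random-read IOPS per spindle

def raid5(n_drives):
    usable_gb = (n_drives - 1) * DRIVE_SIZE_GB        # one drive's worth of capacity goes to parity
    read_iops = n_drives * DRIVE_READ_IOPS            # reads spread across all members
    # Classic RAID 5 small-write penalty: read old data + old parity,
    # write new data + new parity = 4 back-end I/Os per host write.
    write_iops = (n_drives * DRIVE_READ_IOPS) / 4
    return usable_gb, read_iops, write_iops

for n, label in [(4, "4 drives + hot spare"), (5, "5 drives, no spare")]:
    usable, r, w = raid5(n)
    print(f"{label}: usable {usable} GB, ~{r:.0f} read IOPS, ~{w:.0f} small-write IOPS")

On these assumptions the 5-drive set gains roughly 25% in theoretical IOPS and one extra drive of usable capacity; whether that matters for your workload is exactly what a benchmark would show.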
Actually, since you have a hot spare: if you want the best possible data integrity and reliability, just incorporate the hot spare and resilver the RAID 5 + hot spare into a RAID 6 with no hot spare.  The added parity information is then available 24x7, and in the event of a drive failure the array is essentially already rebuilt for you... instantly.

RAID 6 does have a slight performance disadvantage compared to RAID 5 on writes, but this varies greatly depending on make/model.

However, if this were my data, I would not hesitate to migrate from a RAID 5 + spare to a RAID 6.
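For a rough sense of that write penalty, here is a minimal sketch using textbook back-end I/O counts (4 per small random write for RAID 5, 6 for RAID 6, since two parity blocks must be read and rewritten); actual behavior varies widely with the controller and its cache:

DRIVE_IOPS = 150   # assumed random IOPS per spindle

def small_write_iops(n_drives, backend_ios_per_write):
    # Total spindle IOPS divided by the back-end I/Os each host write costs.
    return n_drives * DRIVE_IOPS / backend_ios_per_write

raid5_5 = small_write_iops(5, 4)   # 5-drive RAID 5
raid6_5 = small_write_iops(5, 6)   # same 5 drives as RAID 6

print(f"5-drive RAID 5: ~{raid5_5:.0f} small-write IOPS")
print(f"5-drive RAID 6: ~{raid6_5:.0f} small-write IOPS")
print(f"RAID 6 penalty: ~{100 * (1 - raid6_5 / raid5_5):.0f}% fewer small-write IOPS")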
SOLUTION: text available to Experts Exchange members only.
@dlethe > IOPs and throughput are mutually exclusive

You keep making that statement and as far as I'm concerned it's total tosh. If I've got twice as many disks in one array as in another, then, assuming everything else is equal, I'd expect more IOPS and greater throughput from the array that has more disks in it.

Can you please either stop saying it, or post something to justify it, bearing in mind that the question is about the number of disks and not about the size of individual I/Os.
You do not magically increase IOPS, and you do not increase throughput for all (controller stripe) sizes and types of I/O for all host-generated I/O requests, simply by adding disks.  Some I/O operations will end up with fewer IOPS and/or less total throughput.

In a perfect world, where there is no saturation, all I/Os use different parity drives, those I/Os are balanced equally so queue depth is equal among all disks, the physical I/O size is exactly the same as the stripe size of the RAID set in the controller, and so on... then there will be such balance.

Real world, it doesn't work that way.
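As a toy illustration of that point, the sketch below models how the back-end cost of a RAID 5 write depends on how the host I/O size lines up with the stripe; the stripe geometry is an assumption, and the partial-stripe case uses a rough worst-case read-modify-write count:

# Toy model: back-end disk I/Os per host write on a 5-drive RAID 5.
STRIPE_UNIT_KB = 64       # assumed per-disk chunk size
DATA_DISKS = 4            # 5-drive RAID 5 -> 4 data chunks + 1 parity chunk per stripe
FULL_STRIPE_KB = STRIPE_UNIT_KB * DATA_DISKS

def backend_ios(write_kb):
    if write_kb % FULL_STRIPE_KB == 0:
        # Full-stripe write: write all data chunks plus parity, no reads needed.
        stripes = write_kb // FULL_STRIPE_KB
        return stripes * (DATA_DISKS + 1)
    # Partial-stripe write: read-modify-write -- read old data + old parity,
    # write new data + new parity, counted per touched chunk (rough worst case).
    chunks = -(-write_kb // STRIPE_UNIT_KB)   # ceiling division
    return chunks * 4

for size in (4, 64, 256, 1024):
    print(f"{size:5d} KB host write -> {backend_ios(size)} back-end I/Os")

A 256 KB write that fills the stripe costs 5 back-end I/Os, while a misaligned 4 KB write costs 4; which case dominates your workload decides whether adding a disk actually buys you throughput.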
SOLUTION: text available to Experts Exchange members only.
SOLUTION: text available to Experts Exchange members only.
This question has been classified as abandoned and is being closed as part of the Cleanup Program. See my comment at the end of the question for more details.