RenovoIT (United Kingdom) asked:
Really simple RAID question

A really simple question. I have four disks in a server and would like reasonable performance and redundancy, but I'm not too worried about space.

Is RAID5 still considered slow? I figure RAID5 + spare would let me survive two disk failures in sequence (one live disk, then the spare taking over during rebuild) and I still get 50% of the space. But is RAID1 + spare or RAID1+0 a faster/better solution?
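For four disks of equal size, the capacity and redundancy trade-offs mentioned above can be sketched roughly as follows (a toy comparison I'm adding for illustration; it assumes identical disks and ignores controller overhead — the disk size is an assumption, not from the thread):

```python
# Rough capacity/redundancy comparison for 4 identical disks.
# Assumption (hypothetical): each disk is 1 TB.
DISKS = 4
DISK_TB = 1.0

# name: (disks' worth of usable capacity, simultaneous failures always survived)
layouts = {
    "RAID5 (4 disks)":       (DISKS - 1, 1),
    "RAID5 (3) + hot spare": (DISKS - 2, 1),  # spare only helps a rebuild, not a double failure
    "RAID6 (4 disks)":       (DISKS - 2, 2),
    "RAID1+0 (2x2)":         (DISKS // 2, 1),  # survives 2 failures only in different mirrors
    "RAID1 (2) + 2 spares":  (1, 1),
}

for name, (usable, tolerance) in layouts.items():
    print(f"{name:24} usable: {usable * DISK_TB:.1f} TB  "
          f"guaranteed failures survived: {tolerance}")
```

Note that RAID5 + spare and RAID6 give the same 50% usable space on four disks; the difference is only in which second failure they survive.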
RenovoIT (Asker):
This all seems to match what I assumed. Perhaps it would also help if I explained what I'm doing :)

I'm building a few MS Hyper-V boxes that will use a SAN to store the VMs. The local disks will therefore really only hold the MS Hyper-V OS and perhaps a few test VMs that won't run on the SAN. So I want stability for the OS, but I suspect it doesn't need fast write access, as once it's loaded it shouldn't do much writing. But if I'm running a few test-environment VMs off the local disks, a bit of write performance would be nice.

I'm currently thinking RAID6 would probably be the safest route to go down. It's probably the slowest for both reads and writes, but I like the idea of being able to lose two active disks (not just one active disk and then the spare during rebuild).
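The "two active disks" point can be checked by brute force: enumerate all six ways two of four disks could fail at once and count which layouts survive (a toy model I'm adding for illustration; the mirror pairing and spare assignment are assumptions):

```python
from itertools import combinations

disks = range(4)
pairs = list(combinations(disks, 2))  # all 6 ways two disks can fail at once

# RAID6 over 4 disks: any two simultaneous failures are survivable.
raid6_survived = pairs

# RAID1+0 as two mirrors (assumed pairing {0,1} and {2,3}): a double
# failure is fatal only when both disks of the same mirror die.
mirrors = [{0, 1}, {2, 3}]
raid10_survived = [p for p in pairs if not any(set(p) == m for m in mirrors)]

# RAID5 over 3 live disks (0-2) + hot spare (3): two simultaneous
# live-disk failures lose the array; the spare only covers a second
# failure after the rebuild has completed.
raid5_spare_survived = [p for p in pairs if len(set(p) & {0, 1, 2}) < 2]

print(f"RAID6       survives {len(raid6_survived)}/6 double failures")
print(f"RAID1+0     survives {len(raid10_survived)}/6 double failures")
print(f"RAID5+spare survives {len(raid5_spare_survived)}/6 double failures")
```

This matches the reasoning above: only RAID6 survives every simultaneous two-disk failure; RAID1+0 survives most of them, and RAID5 + spare survives none that hit two live disks at once.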
The SAN itself is two arrays set up as RAID5+1 (a couple of HP LeftHand SAN boxes).

Previous ESX boxes I've set up that use the SAN for VM storage only had two local disks, so I used RAID1 and purchased a spare disk that works in all the servers to sit in a cupboard.

Having four disks in an existing server just got me thinking about whether I could get a good setup that also offers stability and the ability to run a couple of test VMs from the local disks if required.
Thanks for the responses, all similarly valid answers I guess. As mentioned above, there's "no free lunch". For the servers that support it I've used RAID6 over the four disks, and for the server that doesn't support RAID6 I've used RAID1+0. Given that they're Hyper-V hosts, I'm hoping there won't be much disk access.