
Server 2012 R2: Hardware RAID or Storage Spaces?

We are migrating a SBS 2011 system to Server 2012 R2; we have about 800 GB of data; 15 users with 5 needing remote access.  We decided on a Dell T430 with 4 x 1TB SATA drives configured in RAID 1.

Having got to the nitty-gritty of building the new server (2012 R2 with Hyper-V and 2 VMs – 1 DC and 1 Exchange), I have got myself into a debate on whether to use Storage Spaces or not.

So my question is: do I

1) break the hardware RAID and set up a mirrored Storage Space?

2) do a partial break – keep 2 drives in hardware RAID 1 to hold the Hyper-V OS (plus the OSes for the DC and Exchange VMs?) and configure the other 2 drives as a Storage Space for data etc.?

The Hyper-V install uses a partition for the OS: my concern is that this partition would not be included in the Storage Space, so there would be issues if that drive fails.

3) stick with the hardware RAID (and don't use Storage Spaces)?


I would stick with hardware RAID and not use Storage Spaces. It will not benefit you with the small number of drives that you have. Storage Spaces is best used in environments where you have a lot of hard drives, and better still, spanned across multiple hosts.

I would also like to share a video showing the improvements to Storage Spaces in Windows Server 2016 – just in case you are planning for the future.


Windows Server 2016 will be released at the end of this month.
Distinguished Expert 2018

Storage Spaces *cannot* use a system drive, so for a Hyper-V setup, you'll want to either RAID-1 that or keep it simple with a backup. Restoring a backup of just system data to a new drive can sometimes be faster than rebuilding a RAID array, so there is merit to that approach.

For the rest, I definitely would go Storage Spaces. Place your VM VHD(X) files on the Storage Spaces and they get whatever protection you configure. Even with a few drives, you get much faster rebuilds than RAID, and you get the benefit of portability. Should a whole server go kaput, you can pull the drives and pop them in a new server; Windows will recognize and use the pool immediately. Much tougher to do with RAID, where getting the exact same controller and firmware can be tricky for older systems.
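The approach above can be sketched with the built-in storage cmdlets on Server 2012 R2. This is a hedged sketch only: it assumes two drives have already been freed from the hardware RAID and show up as poolable, and the pool, disk, volume names and drive letter are illustrative, not anything from the thread.

```powershell
# Gather the unpooled disks (e.g. the two drives freed from the hardware RAID)
$disks = Get-PhysicalDisk -CanPool $true

# Create a pool on the default Storage Spaces subsystem
New-StoragePool -FriendlyName "VMPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Carve a two-way mirrored virtual disk out of the pool
New-VirtualDisk -StoragePoolFriendlyName "VMPool" -FriendlyName "VMData" `
    -ResiliencySettingName Mirror -ProvisioningType Fixed -UseMaximumSize

# Bring the new disk online and format it to hold the VHDX files
Get-VirtualDisk -FriendlyName "VMData" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter V -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "VMs"
```

The portability point follows from the pool metadata living on the disks themselves: Import-StoragePool on a replacement server is enough to bring the pool back, with no dependency on a matching controller.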

I don't even deal with the cost of RAID controllers anymore, but if you *really* need five nines of uptime on a single host (which is odd, as other hardware can fail and a cluster would solve both issues) then RAID on the system drive may be worth considering, as said above.
Philip Elder, Technical Architect - HA/Compute/Storage
In a standalone setting we deploy 8 to 24 10K SAS spindles in RAID 6 behind a hardware RAID controller with 1GB of flash-backed cache.

Storage Spaces is great in a cluster setting, providing storage for SMB-based VHDX access.


Sorry for not getting back sooner.

Having researched this a little more, and based on Cliff Galiher's outline, that is probably what I would implement. Apart from the quicker/easier recovery from a failed disk, it also appears that write speeds are faster in a mirrored Storage Space than on hardware RAID.

However, my research suggests that the RAID controller on the Dell (a PERC H730 with 1GB cache) does not support JBOD, so individual drives can only be presented as single-disk RAID 0 arrays. Apparently Storage Spaces cannot fully handle disks in that state.

If that is true then I'm obliged to stay with hardware RAID 1.
Distinguished Expert 2018

Or buy an HBA.
Philip Elder, Technical Architect - HA/Compute/Storage

Dell has a 12Gbps SAS HBA: (405-AAFB) SAS 12Gbps HBA External Controller, Low Profile

NOTE: Storage Spaces on a single server will _not_ perform anywhere near as well as a dedicated hardware RAID setup would. BTDT (been there, done that).

Drive failure: Please research this. It is not a simple disk swap with Storage Spaces.
Drive failure: Note that one needs to keep at least ###GB + 50GB (Where ###GB = size of one drive in the JBOD) of space free in the Pool to allow Storage Spaces to rebuild a failed disk into that free space. We try to keep about 75% of a Pool's available storage free to allow for multiple disk failures. Once a failed disk is rebuilt into free Pool space the two disk resilience in a 3-Way mirror is back.

NOTE: 3-Way Mirror is the _only_ resilience model to use. That's 33% of total storage available for the VHDX files.
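The two sizing rules above can be checked with quick arithmetic. A sketch only, assuming four 1TB drives like the T430 in question (PowerShell understands the GB/TB size suffixes natively):

```powershell
$driveSize = 1TB          # size of one drive in the JBOD
$driveCount = 4
$totalRaw = $driveSize * $driveCount

# Free space to reserve in the pool so a failed drive
# can rebuild into it: one drive's capacity plus 50GB
$rebuildReserve = $driveSize + 50GB
"Rebuild reserve: {0:N0} GB" -f ($rebuildReserve / 1GB)

# Usable capacity of a 3-way mirror: one third of raw storage,
# since every slab is written to three separate drives
$usable = $totalRaw / 3
"3-way mirror usable: {0:N0} GB of {1:N0} GB raw" -f ($usable / 1GB), ($totalRaw / 1GB)
```

With only four 1TB drives, reserving a full drive plus 50GB for rebuilds on top of the 3-way mirror overhead leaves comparatively little usable capacity, which illustrates why Storage Spaces pays off best at larger drive counts.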

We build Scale-Out File Server (SOFS) clusters that provide a storage backend for Hyper-V clusters via SMB (two 2-node clusters at the minimum). SOFS is based on Storage Spaces. We also build SMB/SME clusters on clustered Storage Spaces.


Sorry for not replying sooner.

I decided to go with option 3 – use the hardware RAID and no Storage Spaces.

The installation of Server 2012 R2 went well, as did the install of the DC and Exchange – except the migration from Exchange 2010 was a disaster!

So I did a fresh install of 2012 R2 and Exchange 2013 and manually migrated the Exchange data.

All seems OK now!
Philip Elder, Technical Architect - HA/Compute/Storage
When doing a PST migration method, make sure to update the "new" user's X500 address with the one from the previous Exchange setup so Free/Busy and other items work.
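One hedged way to do that from the Exchange Management Shell, assuming the legacyExchangeDN has already been copied out of the old 2010 environment – the mailbox name and DN string below are purely illustrative:

```powershell
# legacyExchangeDN captured from the old Exchange 2010 mailbox (illustrative value)
$oldDn = "/o=First Organization/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=jsmith"

# Add it to the new mailbox as an X500 proxy address so replies to old
# messages and cached Free/Busy lookups still resolve to this mailbox
Set-Mailbox -Identity "jsmith" -EmailAddresses @{Add = "X500:$oldDn"}
```

The @{Add = ...} syntax appends the address without disturbing the existing SMTP addresses on the mailbox.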


Noted and thanks