pettma1

asked on

VM host RAID

I have a Dell server with a PERC H730P RAID controller and sixteen 300GB 10K SAS drives in it.

This will be my primary virtual machine host, running Hyper-V Server or Server 2012 R2 with the Hyper-V role.

How would you configure the disks for it:

1. RAID 10 with 16 disks; install the OS and keep a folder for the VMs
2. RAID 1 with 2x 300GB for the OS and RAID 10 with 14 disks for the VMs
3. Something else. What?
Dan McFadden

I would take 2 HDDs and create a hardware RAID 1 set (mirror) for the OS install.  Then manage the balance of the HDDs with Windows Storage Spaces.

Link:  https://technet.microsoft.com/en-us/library/hh831739(v=ws.11).aspx

Microsoft's take on designing for Hyper-V using Storage Spaces.

Link: https://technet.microsoft.com/en-us/library/dn554251(v=ws.11).aspx

An article that discusses a similar setup and walks through the configuration.

Link:  http://muegge.com/blog/hyper-v-cluster-with-storage-spaces/

Dan
Well, it mostly depends on how much space you want to give the virtual machines.
Since the H730P supports RAID 6, I'd create two arrays:
2 disks as RAID 1, which gives you 300GB for the OS,
and the rest as RAID 6, which gives you 3,600GB of storage (about 3.5TB) with two-disk failure tolerance. That is preferred in a high-availability infrastructure, ESPECIALLY if this is going to be a production environment.
IMO, RAID 6 is a waste of hardware.  The probability of losing 2 HDDs in a single RAID container is quite low.

In 24 years in IT, I've never had 2 HDDs in the same RAID container fail at the same time. Either way, the container would still be in operation, just without parity protection. You would need 3 HDDs to fail at the same time, in the same RAID 6 container, to lose the volume. The probability of that is even lower.
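The probability argument can be sketched with a simple binomial model. This is a rough illustration, not from the thread: the 3% annual failure rate and 24-hour rebuild window are assumptions, and real-world failures are often correlated (same manufacturing batch, rebuild stress), so true odds run higher than independence suggests.

```python
from math import comb

def p_at_least_k_failures(n_disks, k, afr=0.03, window_hours=24.0):
    """Probability that at least k of n_disks fail within the same
    window, assuming independent failures and a constant annual
    failure rate (afr). Illustrative numbers only."""
    p = afr * window_hours / (365 * 24)  # per-disk failure probability in the window
    return sum(comb(n_disks, i) * p**i * (1 - p)**(n_disks - i)
               for i in range(k, n_disks + 1))

# 14-disk RAID 6 survives 2 concurrent failures; a 3rd loses the volume.
p2 = p_at_least_k_failures(14, 2)  # array degraded, still online
p3 = p_at_least_k_failures(14, 3)  # volume lost
print(f"P(>=2 fail): {p2:.2e}, P(>=3 fail): {p3:.2e}")
```

Under these assumptions the three-disk-loss probability comes out several orders of magnitude below the two-disk case, which is the point being made above.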

A single server cannot be considered an HA setup.  The server is a single-point-of-failure.

But let's review:

 16x 300GB HDDs

1.  RAID10 container for everything
1a. RAID10 container with 16x 300GB HDDs results in 2400GB usable space
--- you lose 50% of your space (2400GB)
--- that's 8 mirrors configured as a stripe.

2. RAID1 for OS, RAID10 for data storage
2a. RAID1 (300GB) for OS
2b. RAID10 with 14x 300GB HDDs results in 2100GB usable space
--- this would be the fastest setup for the data storage volume.

3. RAID1 for OS, RAID6 for data
3a. RAID1 (300GB) for OS
3b. RAID 6 with 14x 300GB HDDs results in 3600GB usable space
--- you lose 2 disks' worth of space to parity (600GB)

4. RAID1 for OS, Windows Storage Spaces for data
4a. RAID1 (300GB) for OS
4b. Manage 14x 300GB HDDs as JBOD via Storage Spaces; results in roughly 3600 to 3900GB usable space
--- create a parity pool (similar to R5/6)
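The usable-space figures in the four options follow from standard RAID capacity arithmetic; a quick sketch to verify them:

```python
def raid10_usable(n_disks, size_gb):
    # mirror pairs striped together: half the raw capacity is usable
    return (n_disks // 2) * size_gb

def raid6_usable(n_disks, size_gb):
    # two disks' worth of capacity go to distributed parity
    return (n_disks - 2) * size_gb

SIZE = 300  # GB per drive
print(raid10_usable(16, SIZE))  # option 1: 2400
print(raid10_usable(14, SIZE))  # option 2 data volume: 2100
print(raid6_usable(14, SIZE))   # option 3 data volume: 3600
```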

Storage Spaces info:
1. https://technet.microsoft.com/en-us/library/hh831739(v=ws.11).aspx
2.  https://technet.microsoft.com/en-us/library/jj822938(v=ws.11).aspx

The advantages of the Storage Spaces setup: with mixed disk types you can create tiered storage volumes, mixing SSDs and HDDs for better performance. Expansion of existing volumes is easier and quicker, and HDD failure recovery and rebuilds are faster than with HW RAID because parity is distributed across all the disks rather than stored on a single parity drive.

Dan
I have an EE article: Some Hyper-V Hardware and Software Best Practices.

We would set up either one RAID 6 array across all disks, partitioning 75GB for the OS and the balance ahead of time (via DiskPart in CMD after booting the OS installer flash drive --> Repair), or two logical disks with the same layout.
pettma1

ASKER

I'm leaning towards performance, and therefore I will use RAID 10. What would be the cons of having one big disk, installing the OS on it, and keeping a folder for the VHDXs? This way I'd get the write performance of 8 disks and the read performance of 16 disks.
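The spindle arithmetic in the question can be made explicit. In an n-disk RAID 10, every write must land on both members of a mirror pair, so only n/2 spindles do independent write work, while reads can be serviced by either member of each pair, so all n spindles contribute. A minimal sketch:

```python
def raid10_effective_spindles(n_disks):
    """Effective spindle counts for reads vs. writes in an n-disk RAID 10."""
    assert n_disks % 2 == 0, "RAID 10 needs whole mirror pairs"
    return {"write": n_disks // 2, "read": n_disks}

print(raid10_effective_spindles(16))  # option 1: {'write': 8, 'read': 16}
print(raid10_effective_spindles(14))  # option 2 data volume: {'write': 7, 'read': 14}
```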
ASKER CERTIFIED SOLUTION
Dan McFadden
pettma1

ASKER

Thanks. This is what I shall do.
With 8 or more spindles in RAID 6 behind 1GB of flash-backed cache, the performance difference versus RAID 10 is very small.

We do a lot of disk subsystem work for clusters. Our expectation is that 8x 10K SAS spindles in the above setup will yield at least 250 IOPS per disk at 128KB/256KB slice/block sizes and 450 IOPS per disk at a 64KB slice/block size. Throughput for the former averages around 800MB/s sustained write speed.

Testing conditions:
RAID 6 array across all spindles with RAID on Chip, 1GB cache, and flash/non-volatile memory backing. Write-back is enabled.

+ Logical Disk 0: 75GB for host OS
+ Logical Disk 1: Balance GB/TB for Data and Testing
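The per-disk expectations above imply aggregate figures that are easy to sanity-check. A sketch using the thread's own numbers (8 spindles, 250 IOPS/disk at a 128KB block, 450 IOPS/disk at 64KB); note that raw spindle math lands well below the quoted 800MB/s sustained figure, which presumably reflects the flash-backed write-back cache coalescing writes:

```python
def aggregate(spindles, iops_per_disk, block_kb):
    """Aggregate IOPS and throughput implied by per-disk IOPS at a given block size."""
    iops = spindles * iops_per_disk
    mb_per_s = iops * block_kb / 1024  # MB/s = IOPS * block size
    return iops, mb_per_s

print(aggregate(8, 250, 128))  # (2000, 250.0) at 128KB blocks
print(aggregate(8, 450, 64))   # (3600, 225.0) at 64KB blocks
```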