RAID configuration advice for a new server

OGDITAdmin

Asked:
Hi, we have a new Dell PowerEdge R540 and I'm looking for opinions on how best to configure the RAID(s). The system has 12 × 8TB SATA drives, a PERC controller, and 512GB of RAM. It will be a Hyper-V host running many (probably a dozen or more) virtual machines: some database servers like Oracle and SQL Server, others application servers and file servers. The host OS will be Windows Server 2016, and the VMs will be mostly 2016/2012 servers with a couple of Windows 10 workstations sprinkled in.

Interested in knowing how you would configure those drives. I usually like to use the hot spare option, especially if the RAID only allows one drive failure. Years ago we had a RAID (5, I believe) with no hot spare; we lost two drives at the same time and all of our VMs with them. Definitely would not want to go through that again. Thanks for your input.
Use RAID 6, which gives you 80TB of usable space, or RAID 10 for 48TB.

RAID 5 has been obsolete since drives reached the 1TB-2TB range. It's still usable with SSDs of that size, but with large spinning disks the rebuild times are too long and tax the system far too much; the odds are that a second disk fails before the rebuild completes.
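To put rough numbers on that (a quick sketch; the ~100 MB/s sustained rebuild rate is an assumption, and real rates under live VM load are often lower):

    # Quick numbers for 12 x 8TB drives (illustrative assumptions, not measurements).
    $drives  = 12
    $driveTB = 8

    # Usable capacity: RAID 6 loses two drives to parity; RAID 10 loses half.
    $raid6TB  = ($drives - 2) * $driveTB    # 80 TB
    $raid10TB = ($drives / 2) * $driveTB    # 48 TB

    # Rebuild time for one failed 8TB drive at an assumed ~100 MB/s sustained
    # rebuild rate. 8 TB is roughly 8,000,000 MB.
    $rebuildHours = (8e6 / 100) / 3600
    "RAID 6: $raid6TB TB, RAID 10: $raid10TB TB, rebuild ~{0:N0} hours" -f $rebuildHours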
kevinhsieh, Network Engineer

Commented:
RAID 6 is too slow. I am concerned that even RAID 10 will be too slow.

Why is there no SSD?

Those are slow, capacity-oriented drives, not meant for running many modern VMs.

RAID 10 and 6 are the only safe options for those drives.

Commented:
I would consider two arrays: one for the OS and one for data storage, with a global hot spare.
The OS array would be RAID 1 (two drives, 8TB usable) and the storage array RAID 5 (nine drives, 64TB usable), leaving the twelfth drive as the global hot spare.
Philip Elder, Technical Architect - HA/Compute/Storage

Commented:
The only place we are deploying large form factor (LFF) disks is in warmish to cold data applications like backup repositories.

A set of eight 2.5" 10K drives of 1.2TB or larger (thus high areal density) in a RAID 6 configuration, with a controller that has 1GB of non-volatile cache RAM, can push 800MB/second and 250 to 450 IOPS per drive depending on the storage stack format. That's reasonable performance for a virtualization platform with four to six moderate- to low-load VMs on it.
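As a rough illustration of where host-level numbers like that come from (the per-drive IOPS figure and the 70/30 read/write mix below are assumptions, not measurements):

    # Rough random-IOPS model for an 8-drive RAID 6 set (assumed figures only).
    $drives       = 8
    $iopsPerDrive = 150    # assumed raw random IOPS for a 10K SFF drive
    $readPct      = 0.7    # assumed 70/30 read/write mix
    $writePenalty = 6      # RAID 6: one host write costs ~6 back-end IOs

    $backend  = $drives * $iopsPerDrive
    $hostIops = $backend / ($readPct + (1 - $readPct) * $writePenalty)
    "{0} back-end IOPS => ~{1:N0} host IOPS at a 70/30 mix" -f $backend, $hostIops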

Today, we deploy Intel D3-S4610 series or Micron SATA series SSDs in RAID 6, which virtually eliminates the number one bottleneck in most virtualization platforms: the storage subsystem.

I have two very thorough EE articles on all things Hyper-V:

Some Hyper-V Hardware and Software Best Practices
Practical Hyper-V Performance Expectations

Some PowerShell Guides (a minimal New-VM sketch follows the list):
PowerShell Guide - Standalone Hyper-V Server
PowerShell Guide - New VM PowerShell
PowerShell Guide - New-VM Template: Single VHDX File
PowerShell Guide - New-VM Template: Dual VHDX Files
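Along the lines of those guides, here's a minimal New-VM sketch (the VM name, paths, sizes, and switch name are placeholders to adapt):

    # Minimal Gen-2 VM creation (placeholder names, paths, and sizes).
    $vmName = "SQL01"
    $vmPath = "D:\Hyper-V"

    New-VM -Name $vmName -Generation 2 -MemoryStartupBytes 16GB `
        -NewVHDPath "$vmPath\$vmName\$vmName-OS.vhdx" -NewVHDSizeBytes 80GB `
        -Path $vmPath -SwitchName "vSwitch-LAN"

    # Database VMs generally want static memory; dynamic memory and
    # SQL/Oracle caching behaviour don't mix well.
    Set-VM -Name $vmName -StaticMemory -ProcessorCount 4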
Top Expert 2014

Commented:
I'm with kevinhsieh; even RAID 10 is likely to be too slow for a dozen VMs unless they aren't doing anything. You could improve performance by adding SSDs as cache via CacheCade on the PERC.
Brian B, EE Topic Advisor, Independent Technology Professional

Commented:
The issue with RAID 5 is going to be rebuild time. If a drive fails, it takes a long time to rebuild onto the hot spare, and the stress of the rebuild can even trigger a second drive failure, like you said.

I also like the idea above of separating OS and data drives. Maybe pick up a pair of SSDs to use as cache (check that your system supports it first, of course).
kevinhsieh, Network Engineer

Commented:
No way would I ever put 8TB HDDs in RAID 5. Too likely to have a URE (unrecoverable read error) causing total array failure during a rebuild.
You might want to look at RAID 1 for your boot drives and use Microsoft ReFS on your VM drives.
https://docs.microsoft.com/en-us/windows-server/storage/refs/refs-overview
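If you go that route, formatting an existing data volume as ReFS is a one-liner (the drive letter and label below are placeholders):

    # Format an already-partitioned data volume as ReFS (placeholder letter/label).
    Format-Volume -DriveLetter D -FileSystem ReFS -NewFileSystemLabel "VMStore"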
kevinhsieh, Network Engineer

Commented:
In general, if you take a bunch of disks and take some for one RAID set, and other disks and put them into another RAID set, your overall performance will be worse than if you put all disks into a single RAID set. This is because the IO for any given workload is limited to a smaller number of spindles serving the IO.
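A quick illustration of the spindle math (the per-drive IOPS figure is an assumption; the ratio is the point):

    # One busy VM confined to a 6-drive set vs. spread across all 12
    # (assumed ~75 random IOPS per 7.2K SATA drive; illustrative only).
    $iopsPerDrive = 75
    $combined = 12 * $iopsPerDrive   # 900 IOPS available to any single workload
    $split    = 6  * $iopsPerDrive   # 450 IOPS if that workload sits on half the disks
    "Combined: $combined IOPS vs. split: $split IOPS"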

The sad fact is that the server seems to be terribly out of balance. There is lots of DRAM, but storage will be very slow. Far better to have 128 GB RAM and faster storage.
Those Dell R540s come with 12 front bays and 2 rear bays. The 12 drives in front should all go into a single RAID 6 or RAID 10 set. The 2 bays in back are what you would use for boot disks in RAID 1.

RAID 5 is obsolete for any spinning disks over 1 TB.  If you're still using RAID 5, then you don't understand RAID.
When you create your VMs, turn off pagefiles and swap. Force them to use only RAM as much as possible to minimize unnecessary disk access, and your VMs will be fast. Make sure you allocate enough RAM so there is never a good reason to swap to disk at all.

Creating swap or pagefiles is really stupid with a VM.  With Linux systems, you can recompile the kernel to fit in RAM and run as much of it as possible without disk access.  Certain Linux servers, depending on the services you run, require absolutely no disk access except during bootup and user login.

If you're creating swap, you're wasting disk I/O cycles and slowing down all your VMs. Anyone still creating swap does not really understand RAM needs on server systems and VMs.
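If you do decide to turn off the pagefile inside a Windows guest, one way is via CIM (takes effect after a reboot; test on one VM before rolling it out):

    # Disable the automatically managed pagefile (effective after reboot).
    # Run in an elevated session.
    Get-CimInstance Win32_ComputerSystem |
        Set-CimInstance -Property @{ AutomaticManagedPagefile = $false }

    # Remove any explicitly configured pagefiles.
    Get-CimInstance Win32_PageFileSetting -ErrorAction SilentlyContinue |
        Remove-CimInstance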
Top Expert 2014

Commented:
>2 rear bays.

A damn sight better than the two hidden internal bays on a previous generation, which led to a right mess I had to clear up after they caused someone to miscount and remove a healthy disk.
Hi,

12 × 8TB disks in RAID 10 should give you up to a 12x read and 6x write speed gain, with 48TB of usable space. Otherwise stick with RAID 6, which is a solid design because it survives two simultaneous disk failures.
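Spelled out (theoretical best case; real controllers rarely hit the full read multiplier):

    # Theoretical RAID 10 scaling for 12 x 8TB (best case; controllers vary).
    $drives = 12; $driveTB = 8
    $usableTB  = ($drives / 2) * $driveTB   # 48 TB: half lost to mirroring
    $readGain  = $drives                    # every spindle can serve reads
    $writeGain = $drives / 2                # each write lands on both mirror halves
    "$usableTB TB usable, ~${readGain}x read, ~${writeGain}x write"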

Cheers
