syssolut asked:
Do I need to have a RAID 5 to have two Virtual Machines on my 2019 Server?

I have a server running Windows Server 2019. The drives are broken up into a RAID 1 with the 2019 OS on two 1.2 TB drives, and a RAID 5 with three 1.2 TB drives. I was looking at installing 2 VMs after setting it up this way. One of the VMs was going to run software specific to the business. The other VM was going to run the Domain Controller and also act as a file server. Can this be done with the present setup on the server? Does the server have to be set up differently? If so, how would you do it?
CompProbSolv:

Yes, this can be done with the present setup.

My issue would be the use of RAID 5. Its use is discouraged these days, especially with large spinning drives: there is great concern about having a single drive fail and then hitting a bad spot on one of the other drives while restriping onto the replacement. RAID 10 or RAID 6 is usually preferred. If you put all 5 drives in RAID 6 you'd have the capacity of three drives (3.6 TB), which is what you already have with your current configuration. In addition, you'll have much more flexibility in arranging space when it is one big array.
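To make the capacity arithmetic concrete, here is a quick sketch using the standard RAID usable-space formulas (the 1.2 TB drive size comes from the question; everything else is just the math):

```powershell
# Usable capacity for the 1.2 TB drives under the layouts discussed in this thread
$size = 1.2   # TB per drive, from the question
"RAID 6, 5 drives      : {0} TB" -f ((5 - 2) * $size)            # 3.6 TB, survives any 2 drive failures
"RAID 1 + RAID 5 (now) : {0} TB" -f ((1 * $size) + (2 * $size))  # 1.2 + 2.4 = 3.6 TB
"RAID 10, 4 drives     : {0} TB" -f ((4 / 2) * $size)            # 2.4 TB, suggested later in the thread
```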

In general, it's preferred to have the DC just do AD, DNS, and DHCP.  Would it be impractical to put the special software on the same VM as the file server?  If not, that would be my preference.
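If you do dedicate a guest to the DC role, a minimal sketch of scoping it to just those three services might look like the following, run inside the DC VM (the feature names are the standard Windows Server ones; promoting the server to a domain controller is a separate step):

```powershell
# Run inside the DC guest: install only the AD DS, DNS, and DHCP roles
Install-WindowsFeature -Name AD-Domain-Services, DNS, DHCP -IncludeManagementTools

# Promotion to a domain controller is a separate step, e.g. Install-ADDSForest
# for a brand-new domain (parameters omitted here).
```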

ASKER CERTIFIED SOLUTION
kevinhsieh
(Solution text is available to Experts Exchange members only.)
Agree with Kevin - no problem for a small environment to share file services with AD services. In a large data center, no; in a small business, just fine.

To an extent, the RAID doesn't matter for a couple of VMs. It depends on how heavily used the system is, but odds are, in a small environment, regardless of RAID config, no one is going to be complaining that the server is slow or experiencing issues. IF YOU DID, these are VMs! Create a temporary server, move them off, wipe and reload with a new RAID config, and put them back. Yes, it's a bit of a hassle, but it can actually be done transparently to the users if you live migrate the VMs to the temporary host and back.
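For reference, that "move them off and back" step is roughly a one-liner per VM once live migration is enabled on both hosts. A minimal sketch, with hypothetical host names and paths (Move-VM with -IncludeStorage performs a shared-nothing live migration on current Hyper-V versions):

```powershell
# Shared-nothing live migration to a temporary host (names and paths are examples)
Enable-VMMigration   # run on both hosts first; authentication/network settings may also need configuring

Move-VM -Name "FILESRV" -DestinationHost "TEMPHOST" `
        -IncludeStorage -DestinationStoragePath "D:\Hyper-V\FILESRV"

# ...rebuild the array on the original host, then migrate back the same way
```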
syssolut (ASKER):
What if I just keep it as is, create the 2 VMs, and use the RAID 1 space for the domain controller VM and the RAID 5 for the specialty software and file server VM? There are only 3 users at most that will be using the server. Or, since there are very few users, just keep it as is with no VMs?
Not virtualizing is silly regardless of the size of the company (unless you have a VERY specific requirement not to).

Again, to an extent the RAID doesn't matter for a couple of VMs. It depends on how heavily used the system is, but odds are, in a small environment, regardless of RAID config, no one is going to be complaining that the server is slow or experiencing issues.
If all of the drives are identical, then set up one RAID 6 array with two logical disks:
 95 GB for the host OS
 x GB/TB for the virtual machines

You are then able to work with the host OS if something chokes, say after a patch, without impacting the guests.
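As a rough illustration of that layout, assuming the controller presents the RAID 6 array as one disk and Windows Setup already created the ~95 GB OS partition, carving out the guest volume and pointing Hyper-V at it could look like this (the disk number and drive letter below are assumptions for illustration):

```powershell
# Identify the array (disk number 0 and letter V: are just examples)
Get-Disk | Format-Table Number, FriendlyName, Size

# Use the remaining space on the array for the guest volume
New-Partition -DiskNumber 0 -UseMaximumSize -DriveLetter V |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "VMs"

# Keep the guests off the host OS partition by default
Set-VMHost -VirtualHardDiskPath "V:\Hyper-V\Virtual Hard Disks" -VirtualMachinePath "V:\Hyper-V"
```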

I have two very thorough EE articles on all things Hyper-V:
Some Hyper-V Hardware and Software Best Practices
Practical Hyper-V Performance Expectations

Some PowerShell Guides:
PowerShell Paradise: Installing & Configuring Visual Studio Code (VS Code) & Git
PowerShell Guide - Standalone Hyper-V Server
PowerShell Guide - New VM PowerShell
PowerShell Guide - New-VM Template: Single VHDX File
PowerShell Guide - New-VM Template: Dual VHDX Files

Here are some focused articles:
Set up PDCe NTP Domain Time in a Virtualized Setting
Slipstream Updates Using DISM and OSCDImg
Protecting a Backup Repository from Malware and Ransomware
Disaster Preparedness: KVM/IP + USB Flash = Recovery. Here’s a Guide
You don't need a RAID for only 2 systems; however, without one it's going to be slower. RAID 5 is still OK with smaller-capacity drives. It's not OK when you have larger-capacity drives, because they will take too long to rebuild and may suffer a 2nd disk failure before the rebuild completes. My ballpark is that 1 TB disks or lower can still be done as RAID 5, but above that, you should not do RAID 5.
@Philip:
Looking for my own education here:
"set up one RAID 6 array with two logical disks "
What is the advantage of two logical disks?  I'm missing it.
This is the primary reason why: Disaster Preparedness: KVM/IP + USB Flash = Recovery. Here’s a Guide

Having two distinct partitions on the host, one for the host OS and the other for the guests, allows for host recovery within 15-30 minutes if its OS goes blotto. BTDT many times.
I would not use RAID 6 unless you had 7 or more disks, to maintain enough speed to compensate for the parity hits and make it worthwhile. If you have 2, 4, or 6 disks, you really should use RAID 1 to mirror them instead. You could use RAID 5 if your disks were 1 TB or preferably less, or if you use much faster SSDs where the rebuild times are short enough.

https://www.actualtechmedia.com/io/raid-disk-rebuild-times/

Time is of the Essence
The fundamental problem is that it takes too long to fill the large disks we are now using just to regain redundancy. All of these RAID levels will use a single disk to replace a failed disk, and all will need to fill that one disk with data. The time it takes to recover from the failed disk cannot be less than the size of the disk divided by its sequential write speed. For a 72 GB disk with an 80 MBps write rate we get 72,000 MB / 80 MBps = 900 seconds, about 15 minutes. This is an acceptable rebuild time, and ten years ago when we used 72 GB disks RAID was good. Today we use at least 1 TB disks and their sequential write rate has only gone up to around 115 MBps; the same math is 1,000,000 MB / 115 MBps ≈ 8,700 seconds, which is nearly two and a half hours. If you are using 4 TB disks then your rebuild time will be at least ten hours.
1 TB rebuilds are at the limit of what could be safe. SSDs should rebuild much faster, so you can still use SSDs in RAID 5 if they aren't too large. If you are using 4 TB or larger, they should be RAID 1, RAID 10, or RAID 6, never RAID 5.
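Applying the article's rebuild-time floor (capacity divided by sequential write speed) to the drive sizes in this thread, as a rough sketch (the write speeds are the article's figures, not measurements):

```powershell
# Rebuild-time floor = capacity / sequential write speed (the article's math)
function Get-RebuildHours([double]$CapacityTB, [double]$WriteMBps) {
    [math]::Round(($CapacityTB * 1e6) / $WriteMBps / 3600, 2)
}
Get-RebuildHours -CapacityTB 0.072 -WriteMBps 80   # 0.25 h (~15 min, the 72 GB era)
Get-RebuildHours -CapacityTB 1.2   -WriteMBps 115  # ~2.9 h (the asker's 1.2 TB drives)
Get-RebuildHours -CapacityTB 4.0   -WriteMBps 115  # ~9.66 h (4 TB: too long for RAID 5)
```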
@Philip:
Thank you for the insight.
Just to go back, the drives are all 1.2 TB.
Yes, so don't use RAID 5.  You should break the boot disk RAID and make it a single disk.  Convert the remaining 4 disks to RAID 10.

Otherwise, if you have another slot, get a 6th disk to keep the boot RAID mirror, and create a RAID 10 of the remaining 4. RAID 6 with only 4 disks is stupid, because it will be much slower than RAID 10 with the same 4 disks and you still lose two drives' worth of capacity either way.
@serialband
"RAID 6 with only 4 disks is stupid "
Aren't we dealing with 5 disks here?

"two 1,2 TB drives.   Then I have a Raid 5 with 3 - 1.2 TB drives. "
Read the rest of the comment.