VM 101 Question!

I will be replacing a physical file storage server (currently holding 650 GB of data) and also a virtual Exchange 2010 server (25 mailboxes).
I want to acquire a new Hyper-V host and upgrade and virtualize both servers.  We do not have money for a SAN or anything like that, so all VMs will reside on local disks.  The file storage VM's guest OS drive will be configured as a 120 GB fixed disk, plus a 1.2 TB fixed disk dedicated to data.  I am just wondering whether adding a 1.2 TB virtual hard disk works well, etc.  I have not worked with VMs whose disks total more than 500 GB.

I do not believe it's a good idea to have the user data reside on the guest OS drive either.  It's nice just dealing with one disk per VM, but on a physical server that's a no-no, so I would think the same applies in a virtual environment.

The new Exchange 2016 or 2019 server will most likely reside entirely on the guest OS drive.  I've been running Exchange 2010 in this configuration on a 2012 Hyper-V host for many years now with minimal issues.

I just want to make sure that adding a virtual hard disk of 1.2 TB or larger to a VM is reasonable and works without issue.
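For what it's worth, 1.2 TB is nowhere near the virtual disk format limits: a legacy VHD tops out at 2040 GB, while the VHDX format supports up to 64 TB. A quick sanity check (a sketch in Python; the two limits are the documented Hyper-V format maximums):

```python
# Sanity-check a proposed virtual disk size against the Hyper-V
# format ceilings (VHD: 2040 GB, VHDX: 64 TB -- documented maximums).
VHD_MAX_BYTES = 2040 * 1024**3   # legacy VHD format limit
VHDX_MAX_BYTES = 64 * 1024**4    # VHDX format limit

def fits(size_bytes, fmt="vhdx"):
    """Return True if a disk of size_bytes is within the format limit."""
    limit = VHDX_MAX_BYTES if fmt == "vhdx" else VHD_MAX_BYTES
    return size_bytes <= limit

proposed = int(1.2e12)            # the 1.2 TB data disk in question
print(fits(proposed, "vhdx"))     # True: far under the 64 TB VHDX cap
print(fits(proposed, "vhd"))      # True: even under the old 2040 GB VHD cap
```

So the 1.2 TB disk is comfortably inside the limits of either format, though VHDX is the sensible choice on a 2016 host.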

Any suggestions/recommendations?
cmp119, IT Manager, asked:
 
Philip Elder, Technical Architect - HA/Compute/Storage, commented:
We always set up two partitions: one for the OS and one for the data and LoB/server app installs.

If something blows up on the Hyper-V host or one of the guests, it's a lot simpler to flatten the OS partition and reinstall and reconfigure things. Or to restore the backup to just the system partition, avoiding overwriting the data on the other one.
 
John, Business Consultant (Owner), commented:
Yes, a VM can use more than 1.2 TB.  Why is everything so small?  A virtual or physical server OS drive should be 300 GB for inevitable updates and changes; 100 GB has always proven too small.

Get your new server with 4 TB of disk or more.
 
cmp119, IT Manager (Author), commented:
The host will come with around 3 TB or more of storage dedicated to the VM files, depending on whether I go with RAID 10 or RAID 50 with (6) 900 GB drives.  I'm leaning more toward RAID 50 if the server can handle it.  The Dell R440 can only accommodate a total of (8) drives, so it's maxed out, but that's fine since we are purchasing it with plenty of disk space to plan for growth.
 
So I take it you agree that setting up each VM with a guest OS disk of, let's say, 200 GB fixed, plus a virtual hard disk larger than 1.2 TB, ought to work fine.  I hate asking questions redundantly, but I want to make sure I get it right.  Thanks.
 
John, Business Consultant (Owner), commented:
200 GB or more should work fine, and adding as much disk as you wish should also work fine: 2 TB or more.
 
Philip Elder, Technical Architect - HA/Compute/Storage, commented:
RAID 6 is just fine for a virtualization stack. Many questions are answered here: Some Hyper-V Hardware & Software Best Practices.

We carve up the host's storage into two logical disks on the RAID controller to allow for recovery of the host without killing the guests and requiring a full restore.

Some additional info here: Disaster Preparedness: KVM/IP + USB Flash = Recovery. Here’s a Guide
 
Lee W, MVP, Technology and Business Process Advisor, commented:
I want to acquire a new Hyper-V host and upgrade and virtualize both servers... all VMs will reside on local disks.  The file storage VM's guest OS drive will be configured as a 120 GB fixed disk, plus a 1.2 TB fixed disk dedicated to data.  I am just wondering whether adding a 1.2 TB virtual hard disk works well, etc.  I have not worked with VMs whose disks total more than 500 GB.

You do not need more than 120 GB for a C: drive, ESPECIALLY in a VM.  I very much disagree with John on this.  I NEVER set up a VM with a C: drive larger than 127 GB (the default VHD size) EXCEPT for RDS servers; that's the only time it makes sense.  If you understand Windows and keep your C: drive for the OS only, 127 GB should be more than enough.  I have 6 servers running right now and NONE has used more than 42 GB.  My DC has used 12 GB out of 60, my Exchange server 42 GB out of 60, my SQL server 29 GB out of 60, my web server 15 GB out of 60, and my "services" server 19 GB out of 60.  (The VHD is 127 GB; the PARTITION is 60 GB, with 67 GB of unallocated space JUST IN CASE.)  All systems have been up over a year and run 2016.

I do not believe it's a good idea to have the user data reside on the guest OS drive either.  It's nice just dealing with one disk per VM, but on a physical server that's a no-no, so I would think the same applies in a virtual environment.
Can you clarify what you're talking about here?  What do you mean by "do not believe it's a good idea to have the user data reside on the guest OS"?  Are you talking about the PHYSICAL C: drive/partition?

The new Exchange 2016 or 2019 server will most likely reside entirely on the guest OS drive.  I've been running Exchange 2010 in this configuration on a 2012 Hyper-V host for many years now with minimal issues.
Again, I'm a little confused by how you're using the terminology and what exactly you're planning.

I just want to make sure that adding a virtual hard disk of 1.2 TB or larger to a VM is reasonable and works without issue.

The VHDX format can go well over 2 TB. I wouldn't expect any problems, and while the largest VHDX I have is about 700 GB, I have ZERO concern with a file that large, other than the amount of time that might be required if I ever want to move it from one drive or computer to another without some kind of pre-staging.

Philip's information is excellent... I link to some of it myself... but for another perspective/phrasing, you might want to read over my article (part 1 is why to go virtual, part 2 is tips for optimizing).
https://www.experts-exchange.com/articles/27799/Virtual-or-Physical.html
 
cmp119, IT Manager (Author), commented:
I am speaking of the actual VM guest hard drives, not the hypervisor (host) hard drive configuration.  The guest OS hard drive should be large enough for all the VM's OS files, and that drive should not contain user data, databases, etc.  A separate virtual hard disk (VHDX) should be added to host the user data, databases, etc.  Meaning everything should not reside on the VM guest OS drive (C:\).  So in the end, when you look at the drives within the VM, you'll see a C:\ for the OS and another drive (D:\ or E:\) where the user data and databases are actually located, the same way it should be set up on a physical server.

I have several 2012 and 2016 virtual servers.  I have a VM DC with a 100 GB C: drive that has been in operation for about 4 years now, and it still has 12 GB of available space.  On the same 2012 hypervisor I have a 2010 Exchange server, set up so that everything resides on the VM OS drive (C:\); it was initially configured with 150 GB.  I wound up adding more disk space, so it now has 264 GB, with 152 GB of available space.  Looking back, I think I should not have done that and should have left the guest OS drive for Windows only, adding another VHDX drive for the Exchange databases, etc., keeping the OS and Exchange data on separate drives rather than partitions.
 
cmp119, IT Manager (Author), commented:
I would also like your opinions on the preferred RAID level to house the VHDX files.  I will have (6) 900 GB conventional disks.  Please remember, this Hyper-V 2016 host will start with one VM (.NET/file storage server, 700 GB of data).  Next year I will deploy an Exchange 2016 server, which will become the second VM.  Depending on how well things go, I may add one small VM to serve as a second DC.

I was originally entertaining a RAID 6 array (6 x 900 GB = 3352 GB usable).  This is more than enough space for the storage VM, the Exchange VM, and possibly a third DC VM.

I also really like RAID 10 arrays; however, half the disk space is consumed, providing only 2514 GB of usable space.  Which is still adequate.

So I started looking at RAID 50 as an alternative.  I've never configured RAID 50 before, but it looks promising: with 6 disks it provides 3274 GB of usable space.  The Dell R440 will be purchased with a PERC H730P, so I presume it is capable of offering a RAID 50 array.
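The capacity figures in this thread can be roughly reproduced by converting the marketed decimal "900 GB" into the binary gigabytes (GiB) that RAID controllers typically report. A sketch of that arithmetic (it ignores the controller's small metadata overhead, which is why the reported RAID 50 figure of 3274 comes out a little lower than the raw number):

```python
# Rough usable capacity for six "900 GB" drives at various RAID
# levels, in binary GiB as a RAID BIOS would report it.
# RAID 10 mirrors (half the disks hold data), RAID 6 spends two
# disks on parity, RAID 50 here is two 3-disk RAID 5 spans (one
# parity disk per span), RAID 5 spends one disk on parity.
DISK_BYTES = 900 * 10**9   # "900 GB" as marketed (decimal bytes)
GIB = 2**30

def usable_gib(n_disks, level):
    data_disks = {
        "raid10": n_disks // 2,
        "raid6":  n_disks - 2,
        "raid50": n_disks - 2,   # two spans x one parity disk each
        "raid5":  n_disks - 1,
    }[level]
    return int(data_disks * DISK_BYTES / GIB)

for level in ("raid10", "raid50", "raid6"):
    print(level, usable_gib(6, level))
```

Run over six disks, this prints 2514 GiB for RAID 10 and 3352 GiB for RAID 6 and RAID 50, matching the 2514 and 3352 figures quoted in the discussion.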
 
cmp119, IT Manager (Author), commented:
I just got off the phone with a Dell tech who deals with OS support.  He suggested staying with RAID 5 to house the VHDX files, simply because RAID 10, while great (faster), loses half the disk space, and none of the three proposed VMs really needs the higher disk writes that databases require.  He mentioned that having (5) 900 GB drives in RAID 5 will provide 3350 GB of usable space and leave the sixth drive as a hot spare.
 
Lee W, MVP, Technology and Business Process Advisor, commented:
I think you'll be hard-pressed to find an experienced admin who will advocate for RAID 5.  RAID 6 is generally the standard now.
 
Philip Elder, Technical Architect - HA/Compute/Storage, commented:
Our testing of RAID 6 arrays with various disk setups yields, on average, 800 MB/second of sustained writes to eight (8) 2.5" 10K SAS disks on a RAID controller with 1 GB of cache, write-back mode, and a flash-based cache or a backup battery for the cache.

IOPS can be 250 to 450 per disk, depending on the format of the storage stack.
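Scaled to the arrays being discussed, that per-disk range gives a ballpark aggregate (a back-of-the-envelope sketch only; it ignores the RAID 6 write penalty and controller cache effects, so real write IOPS will be considerably lower):

```python
# Ballpark aggregate IOPS from the 250-450 per-disk estimate above.
# This is a raw read-side ceiling: each RAID 6 logical write costs
# several back-end I/Os, so write throughput scales much worse.
def aggregate_iops(n_disks, per_disk_low=250, per_disk_high=450):
    return n_disks * per_disk_low, n_disks * per_disk_high

print(aggregate_iops(6))   # six-disk array in this thread -> (1500, 2700)
print(aggregate_iops(8))   # eight-disk test array above -> (2000, 3600)
```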

One should never deploy RAID 5. It's not worth it.
 
cmp119, IT Manager (Author), commented:
I hear you both.  I will take your advice and employ a RAID 6 array.
 
cmp119, IT Manager (Author), commented:
Philip can you clarify the following statement:

We carve up the host's storage into two logical disks on the RAID controller to allow for recovery of the host without killing the guests and requiring a full restore.

Not sure what you mean exactly.  Do you mean, for instance, that you have 2 TB of storage and you create two separate RAID 6 arrays?  If so, how does this make a difference for recovery purposes?
 
Philip Elder, Technical Architect - HA/Compute/Storage, commented:
Yes, we create two logical RAID 6 disks on the RAID controller.

How does that make recovery simpler? If we need to flatten and reinstall the host, we can do so without touching the VM configuration and VHDX files on the second partition. Once the host OS is set up and updated (~45 to 90 minutes), we import the VMs and we're back where we were.
 
cmp119, IT Manager (Author), commented:
With only (6) 900 GB SAS disks I won't be able to do that.  I believe there is a four-disk minimum to configure a RAID 6 array.  Thanks for sharing.
 
Philip Elder, Technical Architect - HA/Compute/Storage, commented:
One can configure a RAID 6 array with six physical disks. Why wouldn't that be possible in this case?
 
cmp119, IT Manager (Author), commented:
Sorry for the confusion; I believe you are correct.  I can have (6) disks in a RAID 6 array and then create two separate logical RAID 6 virtual disks.  So let's say I have two (2) 2 TB RAID 6 logical disks configured and want to deploy (2) VMs.  How would you configure the VM files on each logical disk?  Just trying to get a visual of the preferred configuration.
 
Philip Elder, Technical Architect - HA/Compute/Storage, commented:
RAID 6 is just fine for a virtualization stack. Many questions are answered here: Some Hyper-V Hardware & Software Best Practices.

Two RAID 6 logical disks: 75 GB for the host OS and the balance for guest data (RAID 6, 6 x 900 GB = ~3.5 TB).

Once the host OS is in, set up the second partition; and once the Hyper-V role is set up, make sure to point the VM configuration and VHDX file settings to that partition.
 
cmp119, IT Manager (Author), commented:
I just returned from vacation and have now reviewed Philip Elder's replies about carving two logical disks out of a RAID 6 array.  I do not believe this matters with our server's hard disk configuration.  When I purchased the server, I bought 2 disks (RAID 1) for the host OS and (6) 900 GB disks in RAID 6 for all the VHDX files, so carving up the RAID 6 is neither necessary nor beneficial for this configuration.  Otherwise, everything you mentioned was beneficial and very useful.
 
Philip Elder, Technical Architect - HA/Compute/Storage, commented:
That works. The reason we run with all disks in one array is to pull out the extra 500-800 IOPS and/or 200 MB/second of throughput that the extra two disks add to the array's performance.
 
cmp119, IT Manager (Author), commented:
I see.  I have one final question: why set up a VM C: drive separate from the data (D: drive VHDX file)?  Why not just have one larger VHDX file that houses both the OS and client data?  I've seen some administrators create a VM with the OS and client data on one VHDX file, while the recommendation is to set up a VM with a guest OS disk and a separate disk for the data.  If the guest OS and the attached data drive reside on the same RAID array, what advantages/disadvantages are there?  This approach makes sense for a physical server, but I don't see any major advantage in applying it to a virtual server.
 
Lee W, MVP, Technology and Business Process Advisor, commented:
As I read your question, you're asking why you shouldn't just partition a SINGLE VHDX, or even use no partitions at all and just put the file shares and data on the same logical drive.

1. By separating the OS from the data, neither can run amok, fill the drive, and bring the other down.  No user can "accidentally" put his entire DVD collection on the file server, filling it and crashing the system so that no one can access anything.  That doesn't mean there won't be problems, but they won't be as severe.

2. How easy is it to detach a VHDX and attach it to a new host?  Want to upgrade from 2012 to 2016?  2016 to 2018 (or whatever is next)?  Just detach the VHDX and attach it to the new VM.  EASY.  Quick.  You COULD do that with a single drive, but then you'd have to reconfigure the shares (instead of exporting them), and it's MESSY.  Be professional and keep things clean.
 
cmp119, IT Manager (Author), commented:
Thank you both for your comments and suggestions.  Your clarification of key points has made a vague picture much clearer.  Thank you kindly for your patience and recommendations.
Question has a verified solution.
