E C (United States)

asked on

Best practice for Hyper-V VMs - NTFS or ReFS?

So I have a brand new server and a Server 2019 Datacenter license. Two small-ish drives working together in a RAID 1 will be for the OS. That will be formatted as NTFS and 2019 DC will be installed here with the Hyper-V role (and that's it!). Then ... I have 6 SSD drives working together in a RAID 5 and the only things that will live on this drive are the actual Hyper-V VM files.

I've read about how ReFS has come a long way since it was released with 2012, yet when you format a data drive in Windows, NTFS is still the default. So I wanted to ask - who out there has embraced ReFS for Hyper-V? Should I play it safe and stick with NTFS? Or should I use ReFS?

This drive will never be used for anything other than storing Hyper-V files.

Also: if I go with ReFS, would there be any potential compatibility issues if, say, I had to migrate a VM from an older server (with only NTFS) to this new server (with only ReFS), or vice versa?
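For concreteness, here is a rough PowerShell sketch of what I would be doing either way. The drive letters, labels, and VM names below are placeholders, not my real setup:

# Assuming D: is the dedicated RAID 5 volume for the VM files.
# Swap -FileSystem ReFS for NTFS depending on the advice here; 64 KB is
# the allocation unit size commonly recommended for Hyper-V volumes.
Format-Volume -DriveLetter D -FileSystem ReFS -NewFileSystemLabel "VMStore" -AllocationUnitSize 65536

# And this is the kind of cross-server move I'm asking about in the last
# paragraph - export on the old (NTFS-only) host, then import here:
Export-VM -Name "TestVM" -Path "E:\Exports"
Import-VM -Path "D:\Imports\TestVM\Virtual Machines\<GUID>.vmcx" -Copy -GenerateNewId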
Shaun Vermaak (Australia)

Before getting to your question, did you consider S2D (Storage Spaces Direct)?

2019 DC will be installed here with the Hyper-V role
Build the DC as a VM, not on the Hyper-V host.
Member_2_231077

They mean 2019 Datacenter, not domain controller. The second 'DC' tripped me up.
ASKER CERTIFIED SOLUTION
Philip Elder (Canada)

Personally, I would stick all those drives in an S2D pool with ReFS.
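Something like the following is the shape of it on a standalone host with plain Storage Spaces (true S2D needs a failover cluster; more on that below). This is only a sketch: the pool and volume names are made up, and it assumes the six SSDs are presented to Windows as individual disks (HBA/pass-through mode) rather than as a hardware RAID 5 set:

# Pool every local disk that is eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "VMPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# ReFS can only auto-repair corruption via integrity streams when it sits
# on resilient Storage Spaces, e.g. a two-way mirror.
New-Volume -StoragePoolFriendlyName "VMPool" -FriendlyName "VMStore" `
    -FileSystem ReFS -ResiliencySettingName Mirror `
    -DriveLetter V -Size 2TB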
E C

ASKER

Shaun,
Sorry, DC = Datacenter, not Domain Controller. I should probably stop using 'DC' for Datacenter since it's so commonly read as 'Domain Controller'.

Regarding S2D ... I would love to implement this. But... doesn't this implementation require 2 (ideally 3) or more near-identical servers? And all have to be 'certified' by Microsoft? Last time I ventured into S2D territory I got a quote that was over $70,000. Seems awesome but it's not in my budget. I have 1 server and one Datacenter license.

Philip,
Thanks for the great articles on Hyper-V. Also, I did not realize ReFS was really only for Storage Spaces or S2D.

Philip Elder
A small S2D setup can be configured starting at $20K. It's all in the setup and the testing done by the company that put the solution set together.
E C

ASKER

Thanks everyone for your wisdom. Given my scenario I am going to stick with NTFS. And I hope to explore S2D in the near future.
Shaun Vermaak
Regarding S2D ... I would love to implement this. But... doesn't this implementation require 2 (ideally 3) or more near-identical servers? And all have to be 'certified' by Microsoft? Last time I ventured into S2D territory I got a quote that was over $70,000. Seems awesome but it's not in my budget. I have 1 server and one Datacenter license.
Only if you build a cluster. S2D is part of the OS; I don't know why numbers such as $20K and $70K are thrown around. I can build an S2D cluster in a few hours.
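The core of it is only a handful of commands. A minimal two-node sketch, with placeholder node and volume names, assuming validated hardware and Datacenter licensing on each node:

# Validate the nodes for S2D, then build a cluster with no shared storage.
Test-Cluster -Node "Node1","Node2" `
    -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name "S2DCluster" -Node "Node1","Node2" -NoStorage

# Claim the eligible local drives on every node into a single pool.
Enable-ClusterStorageSpacesDirect

# Carve out a cluster-shared ReFS volume for the VMs.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMs" `
    -FileSystem CSVFS_ReFS -Size 2TB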
Philip Elder
@Shaun Vermaak The Storage Spaces Direct (S2D) feature is indeed built into the operating system. However, getting S2D set up and tuned for a particular set of workloads is another thing altogether.

$20K is an entry-level S2D system with flash-and-rust (SSD plus HDD) hybrid storage set up to provide ~45K IOPS @ 70/30 read/write. The correct network setup would be in place for both East-West (node-to-node) and workload/production traffic.

$70K would be a mid-grade S2D system that could be all-flash, or NVMe cache with SSD capacity. Depending on the solution setup it could run 100K to 250K IOPS @ 70/30. The same goes for the East-West and production/workload network fabric(s).

Azure Stack HCI certification requires a solution set that runs through a rigorous test and approval process to make sure the solution will stand the test of time. Any solution on that list carries a significant investment in research and development, so expect a premium cost.
andyalder
So it's a lot cheaper, for the same performance, to stick with traditional dual-controller SANs that have write cache?
Philip Elder
@andyalder Not in my experience. Unless we are talking about 32Gbps or 128Gbps Fibre Channel, performance from S2D will always outpace a traditional SAN. FC has its cost in dollars and knowledge overhead to deal with, though.