K B (United States of America) asked:

Recommendations for Hyper-V cluster with SMB3

I currently have:

**4 servers (2012 R2 Datacenter, dual Xeon procs, min. 96 GB RAM), each with:

1. 2 x 1 Gb Ethernet ports
2. 2 x 10 Gb Ethernet ports

1 switch (D-Link DGS-1510-28X), which has:

1. 24 x 1 Gb Ethernet ports
2. 4 x 10 Gb Ethernet ports

** I planned on using 3 servers for Hyper-V and 1 server as a 2012 R2 SAN. One server has 2 RAID controller cards (16 x 1 TB drives plus 1 SSD for the OS); the other 3 servers have an OS drive and a smattering of 500 GB HDDs. **

How would you utilize the network ports to lay out your Hyper-V lab with the equipment I have? How would you cable it, and why? What would you VLAN?

Side note: how can I maximize my learning on Hyper-V? I recognize that not everyone will have SMB3, but I don't think you can test SMB3 and Cluster Shared Volumes at the same time, correct?


Thank you for your time in advance!
K.B.
Tags: Hyper-V, VMware, Windows Server 2012, Network Architecture, Windows Networking

ASKER CERTIFIED SOLUTION
Philip Elder

ASKER
K B

Philip,

Thank you for that information!!
I especially enjoyed your article!  

Two questions if I may...

I read through everywhere VLAN was mentioned in the Word document you linked to in your article. Still, I wonder: how do you typically VLAN your Hyper-V networks?

Would this sound reasonable, or would you change something? (Edit: I recognize that I should VLAN at the team level but loosely used "ports"; see the sketch after this list.)

1. Hypervisor management ports (VLAN 1)
2. Hypervisor/SAN storage ports (VLAN 2)
3. Guest OS vNIC ports (VLAN 3)
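
Something like this is what I have in mind on 2012 R2, using the in-box LBFO teaming cmdlets (physical NIC names, team names, and VLAN IDs are placeholders for the layout above):

# Team the two 1 GbE ports (NIC names are hypothetical).
New-NetLbfoTeam -Name "Team-1GbE" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Tagged team interface for hypervisor management (VLAN 1).
Add-NetLbfoTeamNic -Team "Team-1GbE" -Name "Mgmt" -VlanID 1

# External vSwitch on the same team for guest traffic; guest VLAN 3
# would then be tagged per vNIC rather than on a team interface.
New-VMSwitch -Name "GuestSwitch" -NetAdapterName "Team-1GbE" -AllowManagementOS $false

# Leave the two 10 GbE ports unteamed on VLAN 2 (tagged at the switch
# ports) so SMB Multichannel can drive both for storage traffic.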

Also, the SMB3 method you mentioned includes SOFS. Does that require SAS (as I am running all SATA, centralized on one 2012 R2 box)? That being said...

Which method would you recommend I use?
Lastly, what interface would Live Migration live on?

Thank you again,
K.B.
Philip Elder

VLAN: we use them in VoIP, multi-tenant cluster scenarios, or labs where we want segmentation between the various "tenants".

A VLAN is set via the vNIC properties, either in PoSh or in the VM's settings in Hyper-V Manager.
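
For reference, a minimal PoSh sketch of that (the VM name and VLAN ID are placeholders):

# Tag a guest's vNIC into access-mode VLAN 3.
Set-VMNetworkAdapterVlan -VMName "TestVM01" -Access -VlanId 3

# Verify the assignment.
Get-VMNetworkAdapterVlan -VMName "TestVM01"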

A typical SOFS setup would be two or three storage nodes connected to one or more JBODs via SAS HBAs. An LSI 9300-8e, with two ports per HBA, is where we start.

LSI/Avago makes a four-port 12 Gbps SAS HBA. Each SOFS node would get a pair of these, which would allow each node to be connected to four Quanta or DataON 4U JBODs for enclosure resilience. Total storage yield with 6 TB NearLine SAS would be about 480 TB-560 TB across the four enclosures.

That being said, SOFS is a SAS-only solution set due to the need for disk arbitration (switching multi-node access to the same disk). SATA is single-channel, with single-source access only. NVMe is as well, hence its appearance in hyper-converged solution sets like Storage Spaces Direct.

You can pick up an Intel JBOD2224S2DP for a song out there; that would provide a great platform for testing SOFS. Or, any dual-SAS-port JBOD with 8 or more disks can be had for a good cost too.

For your setup? Get a copy of 2016 TP4 and deploy Storage Spaces Direct (S2D). :)
 + TechNet: Storage Spaces Direct

Or deploy S2D in a VM:
 + Testing Storage Spaces Direct using 2016 VMs
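
As a rough sketch of that S2D path (cluster and node names are placeholders; S2D was still in preview in TP4, so cmdlet behavior may differ from what shipped in 2016):

# Build a cluster from all four nodes with no shared storage.
New-Cluster -Name "S2D-Lab" -Node "HV1","HV2","HV3","HV4" -NoStorage

# Pool each node's local disks and enable Storage Spaces Direct.
Enable-ClusterStorageSpacesDirect

# Carve a cluster-shared volume from the auto-created pool (the
# "S2D on <cluster>" pool name follows the 2016 convention).
New-Volume -StoragePoolFriendlyName "S2D on S2D-Lab" -FriendlyName "CSV01" `
    -FileSystem CSVFS_ReFS -Size 500GB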

Or, set up one of the servers as your SOFS single-node cluster and use two others for Hyper-V. Then insert the fourth into the existing Hyper-V cluster. You could then test Rolling Cluster Upgrade to TP4 using the three nodes.
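
A minimal sketch of standing up that SOFS role and a continuously available SMB3 share once the single-node cluster exists (role name, path, and accounts are placeholders):

# Add the Scale-Out File Server role to the cluster.
Add-ClusterScaleOutFileServerRole -Name "SOFS01"

# Create a continuously available share on a CSV path for the Hyper-V
# hosts' computer accounts.
New-Item -ItemType Directory -Path "C:\ClusterStorage\Volume1\Shares\VMStore"
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\Shares\VMStore" `
    -FullAccess "LAB\HV1$","LAB\HV2$" -ContinuouslyAvailable $true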

Our preference for Live Migration (LM) is at least two 10 GbE ports per node.
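
A hedged sketch of pointing LM at those ports (the subnet value is a placeholder):

# Enable Live Migration on each node and pin it to the 10 GbE subnet.
Enable-VMMigration
Add-VMMigrationNetwork "10.10.2.0/24"

# Use SMB as the transport so LM rides both 10 GbE ports via SMB Multichannel.
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB -MaximumVirtualMachineMigrations 2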