diesel1218

asked on

HyperV Cluster Networking Question

I am seeing some errors in my Hyper-V failover cluster log. Most have to do with networking. When I run the validation test everything passes, but I get a lot of warnings that my adapters have IP addresses on the same subnet, which I do not fully understand being an issue.

One of the errors I am seeing is Event ID 1129

I have attached a picture of the cluster validation test on the network. I am sure it is something simple I am doing wrong.
Capture.JPG
Cliff Galiher

Yep, those are concerns. That is a *lot* of adapters and that alone is concerning.

But you usually want your cluster/heartbeat traffic on its own network so that network congestion won't cause false positives and trigger failovers. You'd isolate it by subnet *and* a separate switch or VLAN at the very least.
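A quick way to see how the cluster is actually using each network is to query it with PowerShell. This is a minimal sketch; the network name in the last line is a placeholder, so substitute the names your own cluster reports:

# Requires the FailoverClusters module (installed with the Failover Clustering feature)
Import-Module FailoverClusters

# List every cluster network with its subnet and role
# Role 0 = not used by the cluster, 1 = cluster/heartbeat traffic only, 3 = cluster and client traffic
Get-ClusterNetwork | Format-Table Name, Address, AddressMask, Role

# Example: dedicate one network to cluster/heartbeat traffic only
# "Cluster Network 2" is a placeholder -- use a name reported by the command above
(Get-ClusterNetwork -Name "Cluster Network 2").Role = 1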

And all those other adapters? Not sure why you have so many active on the host. In a Hyper-V situation, you'd usually bind adapter(s) to virtual switches so they wouldn't have IP addresses at all in the host. Or you'd disable them. Either way, having a lot of adapters in the host like that usually just muddies up troubleshooting and offers little to no benefit as all traffic will route out the primary interface based on bind order.
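For reference, binding an adapter to a virtual switch without leaving the host an IP on it looks roughly like the sketch below. The adapter names are assumptions for illustration; check Get-NetAdapter on your hosts for the real ones:

# Bind a physical adapter to a virtual switch; the host keeps no IP/vNIC on it
New-VMSwitch -Name "VM Switch 1" -NetAdapterName "Ethernet 3" -AllowManagementOS $false

# Adapters the host doesn't actually need can simply be disabled
Disable-NetAdapter -Name "Ethernet 7" -Confirm:$false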

Managing a healthy cluster is not a trivial endeavor and requires a fair amount of network knowledge. You may want to bring in some assistance if you are unfamiliar with the core concepts. Trying to summarize an entire book into one EE response is not practical. But hopefully this can get you started.
diesel1218

ASKER

Okay, so my thought process was to give each guest its own virtual network, thinking that would help. So basically I have 4 adapters on each host set up as virtual networks, then three adapters on each host connected to the iSCSI network. I am not sure how to set this up correctly.
You can give each guest its own virtual network. However, that network should then be dedicated to the guest. The adapters shouldn't have IP addresses in the host; THAT won't help anything.

You can also connect multiple adapters to the iSCSI network as long as the iSCSI target supports it *and* you set up MPIO. Otherwise that'll also have issues. And those warnings would indicate that you haven't set up MPIO properly on the iSCSI side of things either.
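For what it's worth, enabling MPIO on a 2012/2012 R2 host is roughly the sketch below (a reboot is usually required after the automatic claim, and your SAN vendor may supply its own DSM instead of the Microsoft one):

# Install the Multipath I/O feature and let the Microsoft DSM claim iSCSI devices
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# After reconnecting the iSCSI sessions, verify that each disk shows multiple paths
mpclaim -s -d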
Okay, so basically it looks like I need to bring someone in who knows what they are doing on the networking side of things. I am totally lost here.
How many physical NIC ports are on each node?

To keep things simple, team at least two ports on separate adapters for your virtual switches. Leave those ports exclusive to the vSwitch (not shared with the host OS).

For your iSCSI connections, have each port on a separate subnet with MPIO enabled:
192.168.15.0/24, 192.168.16.0/24, etc.
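Purely as an illustration of that layout (the target and initiator addresses below are made-up examples on those subnets, not your actual SAN), each session is built from an initiator port on its own subnet with multipath enabled:

# One target portal per initiator port/subnet
New-IscsiTargetPortal -TargetPortalAddress "192.168.15.10" -InitiatorPortalAddress "192.168.15.21"
New-IscsiTargetPortal -TargetPortalAddress "192.168.16.10" -InitiatorPortalAddress "192.168.16.21"

# One session per path so MPIO has something to aggregate (assumes a single target)
$target = Get-IscsiTarget
Connect-IscsiTarget -NodeAddress $target.NodeAddress -TargetPortalAddress "192.168.15.10" -InitiatorPortalAddress "192.168.15.21" -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress $target.NodeAddress -TargetPortalAddress "192.168.16.10" -InitiatorPortalAddress "192.168.16.21" -IsMultipathEnabled $true -IsPersistent $true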

Keep one pair of NIC ports for your management access, preferably on two different adapters.
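A management team across two adapters might look like this minimal sketch; the port names and the 10.0.0.0/24 management addressing are assumptions for illustration only:

# Team two ports from different physical NICs for host management
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "Intel Port 4","Intel Port 8" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Put the host's management IP on the team interface
New-NetIPAddress -InterfaceAlias "MgmtTeam" -IPAddress 10.0.0.21 -PrefixLength 24 -DefaultGateway 10.0.0.1
Set-DnsClientServerAddress -InterfaceAlias "MgmtTeam" -ServerAddresses 10.0.0.5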

Finally, if any of the physical Gigabit ports are Broadcom make sure to turn off VMQ (Virtual Machine Queues) for each physical port so as to not experience VM network performance stalls.
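Checking and disabling VMQ is done per physical port; a sketch, assuming the Broadcom Gigabit ports show up as "NIC 9" through "NIC 12" (use Get-NetAdapter to confirm the real names on your hosts):

# Show the current VMQ state of every adapter
Get-NetAdapterVmq | Format-Table Name, Enabled

# Turn VMQ off on the Broadcom Gigabit ports only
"NIC 9","NIC 10","NIC 11","NIC 12" | ForEach-Object { Disable-NetAdapterVmq -Name $_ }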

EDIT: Our cluster nodes have at least eight Gigabit ports and four 10GbE ports.
I believe I have 12 on each server and 4 on each are Broadcom.

4 on each are virtual switches, but I left the check under "allow management on host" for all of them. (Are you saying to uncheck the allow host management option?)

My iSCSI network is on the 192.168.100.0/24 subnet, so I am not quite sure what I would do there. Wouldn't there be more configuring at the switch level to see the additional subnets?
So, eight are Intel-based?

Depending on your requirements you could team two Broadcom for management and leave two for iSCSI.

10GbE involved or not?

If the eight are Intel, then we would team based on our VM needs. If things are relatively simple, where all nodes are connected to a production network, we would create two teams of four ports (two per NIC). Once the teams were created, we would then create our vSwitches in Hyper-V Management, and yes, that "Allow Host Management" option would not be ticked.
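In PowerShell terms, that would look something like the sketch below. The port names are placeholders (arranged so each team spans both Intel adapters), and the Dynamic load-balancing mode assumes 2012 R2; on 2012 you would use HyperVPort instead:

# Team 1: two ports from each Intel adapter
New-NetLbfoTeam -Name "VMTeam1" -TeamMembers "Intel Port 1","Intel Port 2","Intel Port 5","Intel Port 6" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Team 2: the remaining four Intel ports
New-NetLbfoTeam -Name "VMTeam2" -TeamMembers "Intel Port 3","Intel Port 4","Intel Port 7","Intel Port 8" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Bind a vSwitch to each team interface; "Allow Host Management" stays unticked
New-VMSwitch -Name "vSwitch1" -NetAdapterName "VMTeam1" -AllowManagementOS $false
New-VMSwitch -Name "vSwitch2" -NetAdapterName "VMTeam2" -AllowManagementOS $false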
Yes, I have 8 Intel-based NICs, 2 of which I have selected for management (I believe this is the case: one for the physical IP of the host and a second), but I haven't involved teaming because I am unfamiliar with it.

Then I have 4 that I have created virtual switches with (Virtual Network, Virtual Network 2, Virtual Network 3, and Virtual Network 4). These all have the Allow Host Management option ticked. I have four guests, each of which is assigned to its own Virtual Network.

iSCSI is on the Broadcom already, with one subnet (192.168.100.0/24); however, I have three of the four ports in use and one disabled. 10GbE is not involved.
ASKER CERTIFIED SOLUTION
Philip Elder
Okay, thanks for the help. I am obviously going to have to get someone in here who knows what they are doing because I am lost. However, the way I have the virtual switches set up now, they should not have the host management option ticked, correct?

I have one dedicated switch for my iSCSI. I am running Dell PS4000s for my SANs and they have two controllers in each of them.
No, the vSwitch should not have host management ticked. It should be dedicated.

Need two switches for iSCSI to eliminate the single point of failure; otherwise the cluster is moot. Same with a single NIC port.
For your iSCSI NICs, I would recommend teaming the NICs if you are running 2012/2012 R2. You don't have to team them; however, it makes everything cleaner, and if you are using MPIO it doesn't really matter if they are teamed or separate.

For everything else, you should only need one cluster management IP on the same subnet.  If you are using a Microsoft logical switch, you should be using NIC teaming to prevent you from requiring a million vSwitches.  This also allows for redundant paths if one NIC goes down.

Honestly, if you have the option, I would recommend investigating the Cisco Nexus 1000V Hyper-V switch. It is an add-in for System Center VMM which permits you to run a Cisco-based switch within Hyper-V. It performs all functions such as NIC bonding, etc., and allows for advanced ACLs and the like.