Hyper-V Cluster Network configuration

Posted on 2011-04-26
Last Modified: 2013-11-06
I have a 3-node Hyper-V cluster running on W2K8R2. My NIC configuration is as follows:
1 Team for Host Management - 10.11.x.x
1 Team for VM Access - DHCP
1 Team for Live Migration/CSV - 172.16.x.x
Each node has two QLogic iSCSI adapters to access storage on a NetApp.

My questions are:
1 - Can I use the Host Management NICs for VM Access instead of having two separate networks? What are the pros and cons of that?

2 - Keeping the current configuration, should my network for VM Access stay on DHCP, or should it get a static IP?

3 - With the current configuration, I set the Live Migration/CSV network to "Do not allow cluster network communication on this network". With that setting, if I pull both Host Management NICs (testing), my VMs fail over to their failover node. However, if I unplug the Live Migration/CSV NICs (testing), my VMs do not fail over; they remain up, but you can't ping or access them. How should I configure my Live Migration/CSV network if I want to use one network for both?

4 - If I set my Live Migration/CSV network to "Do not allow cluster network communication on this network", I don't believe the cluster is doing redirected access through my CSV network (Live Migration/CSV). If I set this network to "Allow cluster network communication on this network" instead, the VMs won't fail over when I pull the Host Management NICs.
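(For reference: the "allow cluster network communication" radio buttons in Failover Cluster Manager map to the Role property on each cluster network, which can also be inspected and set from PowerShell on W2K8R2. A sketch only; the network name "LM-CSV" is a placeholder for whatever name Get-ClusterNetwork shows on your cluster.)

```powershell
# Load the failover clustering cmdlets (installed with the
# W2K8R2 Failover Clustering feature)
Import-Module FailoverClusters

# List each cluster network and its role:
#   0 = do not allow cluster network communication
#   1 = allow cluster network communication only (no client access)
#   3 = allow cluster network communication and client access
Get-ClusterNetwork | Format-Table Name, Role, State

# Example: mark the Live Migration/CSV network as cluster-only.
# "LM-CSV" is a placeholder name -- substitute your own.
(Get-ClusterNetwork "LM-CSV").Role = 1
```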

Question by:smurteira

    Accepted Solution

    I feel that a lot of the documentation and "best practices" go a bit overboard. I am using Hyper-V Server 2008 R2 in a 5-node cluster. Each node has 6 NICs, but only because I have VMs on 5 different physical networks. I have two NICs for my iSCSI network; every other network has just a single NIC per host. I share the NIC between host and VMs if the host needs access; otherwise I only give the VMs access. I don't do teaming, and I don't dedicate networks for cluster heartbeat or Live Migration.

    If you share a NIC between the host and VMs, there is a risk of losing contact with the host while the virtual network on that NIC reconfigures as you make changes to it. In day-to-day operation there is no real difference unless you have a TON of network traffic from your VMs (I usually see my NICs at single-digit % utilization), and sharing the NICs saves you switch ports and cabling.

    Your VMs can use DHCP or static IPs. Whatever works best for you. I use a combination of both.

    If you don't micromanage the network connections in terms of live migration and cluster communication, I think that you'll find that things run better.

    Author Comment

    Thanks kevinhsieh. OK, I tried your route and made the change to share the NICs between the host and VMs. I have one network for CSV + Live Migration, and this network is supposed to be set to "Allow cluster network communication on this network". If I leave that setting, failover does not happen when I test by disabling the production NICs (not good). If I change it to "Do not allow cluster network communication on this network", then I do get failover, but as I understand it, the setting is supposed to stay at "Allow cluster network communication on this network". How would I go about setting up the CSV + Live Migration network?
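(One approach on 2008 R2, rather than toggling the communication setting: leave the CSV + Live Migration network at "Allow cluster network communication" and steer CSV traffic with the cluster network Metric instead, since cluster/CSV traffic prefers the enabled network with the lowest metric. A hedged sketch; the network name and metric value below are placeholders.)

```powershell
Import-Module FailoverClusters

# Cluster/CSV traffic flows over the cluster-enabled network with the
# lowest Metric. AutoMetric shows whether the cluster assigned the
# value automatically.
Get-ClusterNetwork | Format-Table Name, Role, Metric, AutoMetric

# Make the CSV network the preferred one by giving it the lowest metric.
# "LM-CSV" and 500 are example values -- adjust to your environment.
(Get-ClusterNetwork "LM-CSV").Metric = 500
```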

    Assisted Solution

    by:Philip Elder
    We have four NICs in our Intel Modular Server-based clusters.

    Each pair is connected to a dedicated Gigabit switch module that has a 10 Gb connection between the switch modules.

    We team in Switch Failover mode with one NIC on each switch, giving us two paired teams.

    We set up VLANs on the teams for CSV and Heartbeat and trunk/route ports accordingly on both switches.

    We then leave management and one vSwitch on the management network untagged.

    For VMs that require exclusive access we VLAN, trunk, and route to the required networks and switches.

    Things perform quite well in this configuration.

    We are in testing mode for a 2 node cluster based on 1U/2U servers connected by SAS to a VTrak RAID subsystem. We will be using a similar network setup to the above.

