Windows Server 2008R2 Hyper-V Failover Cluster Networking Setup

Posted on 2011-10-10
Last Modified: 2012-05-12
Hi All,

I am hoping someone could help me understand how I should configure my NICs in this situation.

- Windows Server 2008R2 Failover Clustering
- 2 Node Cluster
- 4 Physical NICs on each Node

NIC #1
Role: Management
Network: (Main LAN)

NIC #2
Role: Cluster Communications/Heartbeat
Network: 192.168.50.x (Private)
Cluster Network Settings: Allow Cluster Communications, Do not allow clients to connect through this network

NIC #3
Role: Live Migration
Network: 192.168.51.x (Private)
Cluster Network Settings: Allow Cluster Communications, Do not allow clients to connect through this network

NIC #4
Role: VM Network
Network: (Main LAN)
Hyper-V Network Settings: External, do not allow management OS to use this connection
Cluster Network Settings: Allow both Cluster and Client communications
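For reference, the cluster network settings above map onto the `Role` property in the FailoverClusters PowerShell module that ships with 2008 R2. A rough sketch (the network names here are placeholders; use whatever names your networks show in Failover Cluster Manager):

```powershell
Import-Module FailoverClusters

# Role values: 0 = do not allow cluster communication on this network,
#              1 = cluster communication only (no client access),
#              3 = allow both cluster and client communication
(Get-ClusterNetwork "Heartbeat").Role      = 1   # NIC #2, 192.168.50.x
(Get-ClusterNetwork "Live Migration").Role = 1   # NIC #3, 192.168.51.x
(Get-ClusterNetwork "VM Network").Role     = 3   # NIC #4, main LAN
```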

Now my question...

Should I leave NIC#1 as a physical adapter only for Management purposes? Or should I create another External Virtual Switch from it, but with allow management OS to use this connection?

Searching around (various blogs, TechNet, etc.), I keep getting conflicting answers. They all agree you should have a separate physical NIC for Hyper-V management, but some say to leave it ONLY to the parent partition, while others say to make it an External Virtual Switch.

It seems to me I should make it an External Virtual Switch, as that would add another route for client communications for the cluster, but then I lose a "dedicated" management NIC. Any advice would be appreciated.

Question by:nick-pecoraro
    LVL 41

    Accepted Solution

    IMHO, dedicated NICs for cluster communications are overrated. You shouldn't need dedicated NICs for live migration either, unless your NICs are already very busy (they probably aren't). You don't need NICs 2 or 3. NIC 4 is properly designed. Sharing NIC 1 with the VMs would not provide an alternate VM path unless you also add a second vNIC to your guests, because you can't attach multiple physical NICs to a single virtual switch. If you want multiple physical paths, you need to team the NICs on the host.
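If you do drop the dedicated networks, it's worth verifying what the cluster actually sees afterwards. A quick check with the FailoverClusters module (a sketch; your network names and addresses will differ):

```powershell
Import-Module FailoverClusters

# List every network the cluster has detected, its subnet, its role
# (1 = cluster only, 3 = cluster and client), and whether it is Up.
Get-ClusterNetwork | Format-Table Name, Address, Role, State -AutoSize
```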

    Is your storage FC or SAS? I see no mention of iSCSI.
    LVL 2

    Author Comment

    Storage is an iSCSI SAN. Both nodes have a separate iSCSI HBA connecting them to the SAN, which is why I didn't mention them in the NIC configuration.

    So it sounds like I should leave NIC #1 dedicated to the parent partition? And if dedicated NICs aren't needed for cluster communication, I could team NICs 3 and 4 together to create a redundant physical path.

    The cluster is hosting 5 VMs split across the nodes: 2 domain controllers, Exchange, RDS, and a print server. So you are right, it's doubtful the NICs are extremely busy.

    The consultants who set up the server teamed all 4 NICs into one group and attached it to a single virtual switch. This is the only path for all cluster traffic and host management. Whenever we do large file transfers on one of the systems (like a backup), a lot of our users experience poor network performance and strange delays.

    So I was hoping that splitting the NICs into dedicated roles would help alleviate bottlenecks when backups or other large file transfers are happening. If that is unrelated, I can open a separate question for that issue.

    LVL 41

    Assisted Solution

    I would take the host management out of the NIC teaming. Your backup-related problems could be caused by the NIC teaming, or by a poor storage system.
