
Windows Server 2008 R2 Hyper-V Failover Cluster Networking Setup

Hi All,

I am hoping someone could help me understand how I should configure my NICs in this situation.

- Windows Server 2008 R2 Failover Clustering
- 2-node cluster
- 4 physical NICs on each node

NIC #1
Role: Management
Network: 10.0.0.0/24 (Main LAN)

NIC #2
Role: Cluster Communications/Heartbeat
Network: 192.168.50.x (Private)
Cluster Network Settings: Allow Cluster Communications, Do not allow clients to connect through this network

NIC #3
Role: Live Migration
Network: 192.168.51.x (Private)
Cluster Network Settings: Allow Cluster Communications, Do not allow clients to connect through this network

NIC #4
Role: VM Network
Network: 10.0.0.0/24 (Main LAN)
Hyper-V Network Settings: External, do not allow management OS to use this connection
Cluster Network Settings: Allow both Cluster and Client communications
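
For reference, here is how I understand these roles map to PowerShell on 2008 R2 (just a sketch assuming the FailoverClusters module; the network names are placeholders for whatever Failover Cluster Manager auto-assigned, so run Get-ClusterNetwork first to see the real names):

Import-Module FailoverClusters

# Role values: 0 = cluster does not use this network
#              1 = cluster communications only, no client access
#              3 = both cluster and client communications
(Get-ClusterNetwork "Heartbeat").Role     = 1   # NIC #2 (192.168.50.x)
(Get-ClusterNetwork "LiveMigration").Role = 1   # NIC #3 (192.168.51.x)
(Get-ClusterNetwork "Main LAN").Role      = 3   # 10.0.0.0/24 network

# Verify the result
Get-ClusterNetwork | Format-Table Name, Address, Role -AutoSize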

Now my question...

Should I leave NIC #1 as a physical adapter used only for management? Or should I create another External Virtual Switch from it, but with "allow management OS to use this connection" enabled?

Searching around, reading various blogs, TechNet articles, etc., I keep getting conflicting answers. They all agree you should have a separate physical NIC for Hyper-V management, but some say to leave it ONLY to the parent partition, while others say to make it an External Virtual Switch.

It seems to me I should make it an External Virtual Switch, since that would add another route for client communications to the cluster, but then I lose a "dedicated" management NIC. Any advice would be appreciated.

Thanks!
Asked by nick-pecoraro
2 Solutions
 
kevinhsieh commented:
IMHO, dedicated NICs for cluster communications are overrated. You also shouldn't need dedicated NICs for live migration unless your NICs are already very busy (they probably aren't). You don't need NICs 2 or 3. NIC 4 is properly designed. Sharing NIC 1 with the VMs doesn't provide an alternate VM path unless you add a second vNIC to your guests, because you can't attach multiple physical NICs to a single virtual switch. If you want multiple physical paths, you need to team the NICs on the host.
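
If you want to sanity-check what the cluster is already doing, the FailoverClusters module shows the metrics it uses to pick a network for internal traffic (lowest Metric wins). A quick sketch, where the network name and metric value are just illustrative assumptions:

Import-Module FailoverClusters

# Lowest Metric is preferred for cluster-internal traffic (heartbeat/CSV),
# which is part of why a dedicated heartbeat NIC is usually unnecessary.
Get-ClusterNetwork | Format-Table Name, Role, Metric, AutoMetric

# To pin a private network as the preferred cluster path, give it a
# metric lower than the rest (100 is an arbitrary example value):
(Get-ClusterNetwork "Cluster Network 2").Metric = 100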

Is your storage FC or SAS? I see no mention of iSCSI.
 
nick-pecoraro (Author) commented:
Storage is an iSCSI SAN. Both nodes have a separate iSCSI HBA connecting them to the SAN, which is why I didn't mention them in the NIC configuration.

So it sounds like I should leave NIC #1 dedicated to the parent partition? And if dedicated NICs aren't needed for cluster communication, I could team NICs 3 and 4 together to create a redundant physical path.

The cluster is hosting 5 VMs split across the nodes: 2 domain controllers, Exchange, RDS, and a print server. So you are right, it's doubtful the NICs are extremely busy.

The consultants that set up the server created one teamed group of all 4 NICs and attached it to a virtual switch. This is the only communication path for the entire cluster and for host management. Whenever we do large file transfers on one of the systems (like a backup), a lot of our users experience poor network performance and strange delays.

So I was hoping that splitting the NICs into dedicated roles would help alleviate bottlenecks when backups or other large file transfers are happening. If that is unrelated, I can open a separate question for that issue.

 
kevinhsieh commented:
I would take the host management out of the NIC teaming. Your backup-related problems could be caused by the NIC teaming or by a poorly performing storage system.
