nick-pecoraro (United States of America) asked:

Windows Server 2008R2 Hyper-V Failover Cluster Networking Setup

Hi All,

I am hoping someone can help me understand how I should configure my NICs in this situation.

- Windows Server 2008R2 Failover Clustering
- 2 Node Cluster
- 4 Physical NICs on each Node

NIC #1
Role: Management
Network: 10.0.0.0/24 (Main LAN)

NIC #2
Role: Cluster Communications/Heartbeat
Network: 192.168.50.x (Private)
Cluster Network Settings: Allow Cluster Communications, Do not allow clients to connect through this network

NIC #3
Role: Live Migration
Network: 192.168.51.x (Private)
Cluster Network Settings: Allow Cluster Communications, Do not allow clients to connect through this network

NIC #4
Role: VM Network
Network: 10.0.0.0/24 (Main LAN)
Hyper-V Network Settings: External, do not allow management OS to use this connection
Cluster Network Settings: Allow both Cluster and Client communications
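For reference, the cluster network settings listed above can also be applied from PowerShell with the FailoverClusters module that ships with Server 2008 R2. This is a sketch, not a definitive script: the network names ("Heartbeat", "Live Migration", "Main LAN") are placeholders and should be replaced with whatever `Get-ClusterNetwork` reports in your cluster.

```powershell
# Assumes the cluster network names below match your environment;
# list the real names first with: Get-ClusterNetwork
Import-Module FailoverClusters

# Role values: 0 = not used by the cluster,
#              1 = cluster communications only (no client access),
#              3 = cluster and client communications
(Get-ClusterNetwork "Heartbeat").Role      = 1   # NIC #2 network (192.168.50.x)
(Get-ClusterNetwork "Live Migration").Role = 1   # NIC #3 network (192.168.51.x)
(Get-ClusterNetwork "Main LAN").Role       = 3   # 10.0.0.0/24 networks
```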

Now my question...

Should I leave NIC#1 as a physical adapter only for Management purposes? Or should I create another External Virtual Switch from it, but with allow management OS to use this connection?

Searching around various blogs, TechNet articles, etc., I keep getting conflicting answers. They all agree you should have a separate physical NIC for Hyper-V management, but some say to leave it ONLY to the parent partition, while others say to make it an External Virtual Switch.

It seems to me I should make it an External Virtual Switch, as that would add another route for client communications to the cluster, but then I lose a "dedicated" management NIC. Any advice would be appreciated.
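For what it's worth, the "allow management OS to use this connection" choice maps to a single parameter in later versions of Hyper-V PowerShell. On 2008 R2 itself this is the checkbox in Hyper-V Manager's Virtual Network Manager, but on Server 2012 and later the equivalent of the NIC #4 setup above would be (adapter name "NIC4" is an assumption, check yours first):

```powershell
# Server 2012+ only; 2008 R2 uses the Virtual Network Manager GUI instead.
# "NIC4" is a placeholder adapter name - list yours with Get-NetAdapter.
New-VMSwitch -Name "VM Network" -NetAdapterName "NIC4" -AllowManagementOS $false
```

Passing `-AllowManagementOS $true` instead would create the shared management/vSwitch arrangement you are asking about.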

Thanks!
ASKER CERTIFIED SOLUTION
by kevinhsieh (United States of America). This solution is only available to Experts Exchange members.
nick-pecoraro (ASKER) replied:
Storage is an iSCSI SAN. Both nodes have a separate iSCSI HBA connecting them to the SAN, which is why I didn't mention them in the NIC configuration.

So it sounds like I should leave NIC #1 dedicated to the parent partition? And if dedicated NICs aren't needed for cluster communication, I could team NICs 3 and 4 together to create a redundant physical path.

The cluster hosts 5 VMs split across the nodes: 2 domain controllers, Exchange, RDS, and a print server. So you are right, it's doubtful the NICs are extremely busy.

The consultants that set up the server created one team of all 4 NICs and attached it to a single virtual switch. This is the only communication path for the entire cluster and host management. Whenever we do large file transfers on one of the systems (like a backup), many of our users experience poor network performance and strange delays.

So I was hoping that splitting the NICs into dedicated roles would help alleviate bottlenecks when backups or other large file transfers are happening. If that turns out to be unrelated, I can open a separate question for that issue.
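One way to confirm whether the shared team is actually saturating during backups, before re-cabling anything, is to sample the NIC counters with the built-in `typeperf` tool while a backup runs. This is just a quick diagnostic sketch; the counter path is standard but the interval and sample count are arbitrary choices:

```powershell
# Sample total bytes/sec on every NIC, every 5 seconds, 12 samples (one minute).
# Sustained values near the adapter's line rate indicate the team is the bottleneck.
typeperf "\Network Interface(*)\Bytes Total/sec" -si 5 -sc 12
```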

SOLUTION
(Additional solution available to Experts Exchange members only.)