ptsmartin asked:
Ideal NIC config Hyper-V R2

I have a 3-node cluster running Server 2008 R2 Hyper-V with Failover Clustering, Live Migration, and Cluster Shared Volumes.
I'm looking for some guidance on how to improve my current NIC setup. Each server has 6 NICs (cluster-use setting in parentheses):
2 are for iSCSI traffic (disabled for cluster use)
1 for private/heartbeat traffic (internal/cluster only)
2 are for LAN access, teamed 1 Gb NICs that also carry remote management (enabled for cluster use)
1 for server/VM backup traffic to an external server (disabled for cluster use)

How could I improve this? Perhaps by adding another NIC for remote management and disabling that role on the LAN team?

Thank you



(Accepted solution by msmamji; full text available to members only.)
ptsmartin (asker):
I have 2 Hyper-V guests that were converted from VMware using the 'Convert virtual machine' action in SCVMM Workgroup Edition.
I am able to connect from a guest to my network when the NIC uses a DHCP address. When I change the NIC to a static address, my network access is sporadic.
Would moving management off of my shared LAN team correct that?
I'll try to make that question clearer:
Would clearing 'Allow management operating system to share this network adapter' in the virtual network properties in VNM remedy that issue?
When you select a physical NIC as a virtual network in Hyper-V's Virtual Network Manager, it creates a VM switch and a virtual NIC.
Clearing "Allow management operating system to share this network adapter" will remove the virtual NIC, thus making the network exclusive to VMs.
If I'm following correctly: I uncheck 'Allow management...', and of course I can then no longer RDP to the Hyper-V host using the IP of the adapter used by the virtual network.
And that adapter is my team of two 1 Gb NICs (1.x network), the host network.
My two 1 Gb iSCSI NICs (45.x network)
My one 1 Gb NIC for heartbeat (42.x network), Hyper-V servers only, which will also carry CSV/Live Migration
My one 1 Gb NIC for backup (41.x network)

I've just added a dual-port 1 Gb NIC to each server, and I expect to put 6-8 VMs on each host. The 41.x, 42.x, and 45.x networks are non-routing.

To restore remote management of the servers using RDP, should I assign a new NIC an IP on my 1.x network and call it a day?
Yes, assign another NIC (or the new physical NIC) the same IP on your LAN and call it a day:

http://virtualizer.wordpress.com/2010/01/11/configuration-challenge-part-1-networking-in-hyper-v-2008-r2/
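For reference, assigning the management address from an elevated command prompt on 2008 R2 might look like the sketch below. The connection name "Management" and all addresses are hypothetical placeholders, not values from this thread:

```shell
:: Hypothetical values: replace the connection name and addresses with your own.
:: Assign a static 1.x address (with gateway) to the new management NIC.
netsh interface ipv4 set address name="Management" static 192.168.1.50 255.255.255.0 192.168.1.1
:: Point it at your DNS server so RDP by hostname keeps working.
netsh interface ipv4 set dnsservers name="Management" static 192.168.1.10 primary
```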
"If I'm following correctly, I uncheck the 'allow management..." and of course I can no longer RDP to the Hyper-V host using the IP of adapter used by the Virtual Network."
Well, not exactly. Doing the above will get rid of the V-NIC altogether, and all you will be left with is a V-Switch.
BTW, on which network are you considering applying this?

"the 41,42 & 45 networks are non-routing"
I am not sure about 45, as it should be routable inside the iSCSI SAN fabric. For 42, by non-routing I am assuming you will limit the IP subnet to the nodes only (routable only between nodes). For 41, routing should be available from the nodes to the backup machines (just a suggestion: remove the gateways and add persistent routes).
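A sketch of the gateway-less config with a persistent route, using hypothetical connection names and addresses (your 41.x subnet and backup subnet will differ):

```shell
:: Hypothetical values. Give the backup NIC a static address with NO default
:: gateway, so general traffic never leaves on this interface.
netsh interface ipv4 set address name="Backup" static 10.41.0.11 255.255.255.0
:: Add a persistent route so only the backup subnet is reachable through it.
route -p add 10.41.50.0 mask 255.255.255.0 10.41.0.1
```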

If you have NICs to spare, why not follow the suggestion by PowerToTheUsers (ID:30792531) of separating CSV and Live Migration onto different NICs, and use the backup NIC for management.
I removed 'allow management' on the 1.x network with the NIC team, removed its IP on the 1.x network, and re-attached it to my virtual network.
I added a NIC for remote management on the 1.x network.

You are correct: routing is limited to the nodes.

I will take the suggestion and use the last NIC to separate CSV and Live Migration traffic.

Thanks
I am confused by this statement; can you elaborate?
"I removed "allow management" on the 1.x network with the NIC Team.  Removed it's IP on the 1.x network and re-attached it to my virtual network."
So you removed 'allow management' from the host network NICs. What you did next, I didn't follow.

So this is what it might look like:

NICs   N/W BW   Purpose          Network   Options
2      1 Gbps   Host N/W         1.x
2      1 Gbps   iSCSI            45.x      Routable onto the iSCSI SAN fabric
1      1 Gbps   Heartbeat/CSV    42.x      Routing limited to nodes
1      1 Gbps   Backup           41.x      Routing limited to the backup VLAN
1      1 Gbps   Management       1.x
1      1 Gbps   Live Migration   N/A       Routing limited to nodes
Sorry, I was short in my answer.
I went to Virtual Network Manager, unchecked 'allow management', changed the adapter the virtual network connects to from my team to an unassigned NIC, and pressed Apply. Then I went to Network Connections, opened the team, and changed its static IP to DHCP.
I returned to VNM, changed the adapter back to the team, and pressed Apply again. Back in Network Connections, I assigned an IP on the 1.x network to a NIC to use for management.
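The Network Connections half of those steps can also be done from the command line; a sketch, assuming the team's connection is named "Team" and the management NIC "Management" (both names and the addresses are placeholders):

```shell
:: Flip the team to DHCP so its old static address is freed up.
netsh interface ipv4 set address name="Team" source=dhcp
:: Re-assign the freed 1.x address to the dedicated management NIC.
netsh interface ipv4 set address name="Management" static 192.168.1.50 255.255.255.0 192.168.1.1
```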

Your chart above looks great and how I intend to use this network.
Ahh, you wanted to reuse the IP since you shifted the management to a different NIC.  And you also managed to make your teamed NIC exclusive to VM use.

Filling in a few blanks, this might be the final picture, if I am not mistaken:
NICs   N/W B/W   Purpose          Network   Options
2      1 Gbps    Host N/W         DHCP      Only working as a VM switch for exclusive VM use
2      1 Gbps    iSCSI            45.x      Routable onto the iSCSI SAN fabric
1      1 Gbps    Heartbeat/CSV    42.x      Routing limited to nodes
1      1 Gbps    Backup           41.x      Routing limited to the backup VLAN
1      1 Gbps    Management       1.x
1      1 Gbps    Live Migration   N/A       Routing limited to nodes
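The per-network cluster roles in the table can be verified and set with cluster.exe on 2008 R2; a sketch with hypothetical cluster network names (Role 0 = not used by the cluster, 1 = cluster traffic only, 3 = cluster and client traffic):

```shell
:: List all cluster networks and their current properties, including Role.
cluster network /prop
:: Hypothetical names: restrict the heartbeat/CSV network to cluster traffic only.
cluster network "Cluster Network 42" /prop Role=1
:: Keep the iSCSI network out of cluster use entirely.
cluster network "Cluster Network 45" /prop Role=0
```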

It really was a learning experience, following the thread.

Best of luck.
Regards,
Shahid

Suggestion: ptsmartin, you could consider making a blog post or article out of your deployment.
Msmamji :

NICs   N/W B/W   Purpose          Network   Options
2      1 Gbps    Host N/W         DHCP      Only working as a VM switch for exclusive VM use
2      1 Gbps    iSCSI            45.x      Routable onto the iSCSI SAN fabric
1      1 Gbps    Heartbeat/CSV    42.x      Routing limited to nodes
1      1 Gbps    Backup           41.x      Routing limited to the backup VLAN
1      1 Gbps    Management       1.x
1      1 Gbps    Live Migration   N/A       Routing limited to nodes

What does "N/W" mean in that table? I am getting to understand this topology too :-)

so in this topology :

One is a management NIC (v-switch - vsnp)
One is for iSCSI
One is for heartbeat/CSV
One is for the backup VLAN
One is for Live Migration

ptsmartin - you should create a diagram and write up an article / blog post / post an article on EE, and that will give you enough points here.
Hi mutahir,
N/W is network (N/W BW is network bandwidth).

The first entry in the table refers to a teamed network which has "allow management operating system to share this network adapter" unchecked in Hyper-V's Virtual Network Manager, thus making the NIC a V-Switch (there will be no V-NIC associated with it).

The second entry refers to two separate iSCSI NICs (unteamed; there are known issues with iSCSI over teamed NICs). Routing can be limited to the iSCSI SAN fabric by configuring the NICs without gateways or by using a separate non-routable VLAN. Use MPIO if both NICs are to be used on the host machine. Suggestion: here you can use one NIC to give iSCSI access to the host and use the other for iSCSI access to the VMs.
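If both iSCSI NICs end up bound to the host, MPIO on 2008 R2 can claim the iSCSI LUNs with mpclaim; a sketch (assumes the Multipath I/O feature is already installed; a reboot is required):

```shell
:: Claim all iSCSI-attached devices for MPIO and reboot (-r).
:: "MSFT2005iSCSIBusType_0x9" is the standard iSCSI bus-type string.
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"
```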

The third entry refers to one NIC for heartbeat/CSV; the network is exclusive to the cluster nodes.

The fourth entry refers to one NIC for backup; again, routing can be limited if you assign a separate non-routable VLAN used only by the nodes and the backup devices/machines.

The fifth entry refers to one NIC for management, which in this case is on a fully routable VLAN. Since this is for management only, it should not be bound in any way to a Hyper-V managed network (no V-Switch here).

The sixth entry refers to one NIC for Live Migration; the network is exclusive to the cluster nodes.

Hope I haven't missed anything.
Cheers.
Shahid
No, you haven't. I was just wondering because "N/W" and "B/W" were appearing too close together and the table formatting was getting messed up; I just wanted to get it clear in my head.

So :
The first entry in the table refers to a teamed N/W which has "allow management operating system to share this network adapter" unchecked in Hyper-V Network Manager thus making the NIC a V-Switch (There will be no V-NIC associated with it).

This is the normal NIC (teamed at the host level), which is then used with a virtual network and doesn't share traffic with the management box (the Hyper-V host).

Yes, that's what I understood so far, and it was right.
Thanks for clearing it up though :-)
Thanks, everyone. I'll write something up after the implementation is complete; I hope it's helpful to those going through the process in the future.