• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1748

Ideal NIC config for Hyper-V R2

I have a 3-node cluster running Server 2008 R2 Hyper-V with Failover Clustering, Live Migration, and Cluster Shared Volumes.
I'm looking for some guidance on how to improve my current NIC setup.
Each server has 6 NICs (cluster network setting in parentheses):
2 are for iSCSI traffic (Disabled)
1 is for Private/Heartbeat traffic (Internal)
2 are for LAN access - teamed 1 GB NICs that also carry remote management (Enabled)
1 is for Server/VM backup traffic to an external server (Disabled)

How could I improve this? For example, by adding another NIC for remote management and disabling that role on the LAN team?

Thank you



Asked by: ptsmartin
2 Solutions
 
msmamji Commented:
Teaming is providing you resiliency on the LAN access. If you are using it only for remote management, then you might opt for disabling teaming and using the extra NIC for other things like CSV/Live Migration traffic, considering that you have the option to manage locally (interactively).

But if you have several clients accessing the VMs, or services on the VMs, then I would suggest keeping the teaming.

Are you facing issues with network performance, or just assessing?

Regards,
Shahid
 
PowerToTheUsers Commented:
Which network are you using for Live Migration and CSV now?

2 NICs for iSCSI is good; keep those (with MPIO).
2 NICs for the VMs is good, especially when there's a lot of traffic to those VMs. Teaming will work fine, especially if the virtual servers are in the same VLAN. I'd suggest you bind those NICs to the Hyper-V virtual network switch without access for the parent, so these NICs are used for the VMs ONLY, not for management of the parent.

"1 for Private/Heartbeat traffic (Internal) and 1 for Server/VM backup traffic to external server (Disabled)" isn't bad, but adding NICs gives extra flexibility: separating management from VM traffic, a separate Live Migration network, and so on.
My suggestion when adding NICs:
1 NIC for Live Migration (dedicated, as that link will be saturated during live migrations)
2 NICs + teaming for CSV; allow cluster heartbeat on this network too, as a failover in case the primary heartbeat network below fails
1 NIC (or 2 NICs teamed) for management of the Hyper-V servers (management, backup, etc.) and the primary cluster heartbeat network.

 
ptsmartinAuthor Commented:
I have 2 Hyper-V guests that were converted from VMware using the "convert virtual machine" action in SCVMM Workgroup Edition.
I am able to connect from the guest to my network when using DHCP for the NIC address. When I change the NIC to a manual address, my network access is sporadic.
Would moving management off of my shared LAN team correct that?
 
ptsmartinAuthor Commented:
I'll try to make that question clearer...
Would removing "Allow management operating system to share this network adapter" in the virtual network properties in Virtual Network Manager (VNM) remedy that issue?
 
msmamji Commented:
When you select a physical NIC as a virtual network in Hyper-V Virtual Network Manager, it creates a VM switch and a virtual NIC.
Clearing "Allow management operating system to share this network adapter" will remove the virtual NIC, thus making the network exclusive to VMs.
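As a quick sanity check from the parent partition, you can list the interfaces before and after clearing that checkbox. This is only a verification sketch; adapter names vary per host:

```shell
:: From an elevated command prompt on the Hyper-V host, list the
:: parent partition's network interfaces. After clearing
:: "Allow management operating system to share this network adapter",
:: the virtual NIC for that virtual network should disappear from
:: this list, leaving only the physical NICs bound to the V-Switch.
netsh interface show interface
```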
 
ptsmartinAuthor Commented:
If I'm following correctly: I uncheck "Allow management..." and of course I can then no longer RDP to the Hyper-V host using the IP of the adapter used by the virtual network.
And that adapter is my team of two 1 GB NICs (1.x network) - host network.
My two 1 GB iSCSI NICs (45.x network).
My one 1 GB NIC for heartbeat (42.x network) - Hyper-V servers only; it will also carry CSV/Live Migration.
My one 1 GB NIC for backup (41.x network).

I've just added a dual-port 1 GB NIC to each server, and I expect to put 6-8 VMs on each host. The 41, 42 & 45 networks are non-routing.

To restore remote management of the servers using RDP, should I assign a new NIC an IP on my 1.x network and call it a day?
 
Syed Mutahir Ali (Technology Consultant) Commented:
Yes, assign another NIC (or the new physical NIC) an IP on your LAN and call it a day:

http://virtualizer.wordpress.com/2010/01/11/configuration-challenge-part-1-networking-in-hyper-v-2008-r2/
 
msmamji Commented:
"If I'm following correctly, I uncheck the 'allow management..." and of course I can no longer RDP to the Hyper-V host using the IP of adapter used by the Virtual Network."
Well, not exactly: doing the above will get rid of the V-NIC altogether, and all you will be left with is a V-Switch.
BTW, on which network are you considering applying this?

"the 41,42 & 45 networks are non-routing"
I am not sure about 45, as it should be routable inside the iSCSI SAN fabric. For 42, by non-routing I assume you mean you will limit the IP subnet to the nodes (routable only between nodes). For 41, routing should be available from the nodes to the backup machines (just a suggestion: remove the gateways and add persistent routes).

If you have NICs to spare, why not follow the suggestion by PowerToTheUsers (ID:30792531) of separating CSV and Live Migration onto different NICs, and use the backup NIC for management?
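One possible sketch of the no-gateway-plus-persistent-route idea for the 41.x backup network, run from an elevated command prompt on each node. The adapter name and all addresses below are placeholders, not values from this thread:

```shell
:: Give the backup NIC a static address with NO default gateway,
:: so the 41.x network cannot route anywhere by default
netsh interface ipv4 set address name="Backup" static 10.0.41.10 255.255.255.0

:: If the backup server sits on another subnet reachable through a
:: router on the backup VLAN, add a persistent (-p) route for just
:: that subnet instead of a default gateway (example addresses)
route -p add 10.0.43.0 mask 255.255.255.0 10.0.41.1
```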
 
ptsmartinAuthor Commented:
I removed "allow management" on the 1.x network with the NIC team, removed its IP on the 1.x network, and re-attached it to my virtual network.
I added a NIC for remote management on the 1.x network.

You are correct: routing is limited to between the nodes.

I will take the suggestion and use the last NIC to separate CSV and Live Migration traffic.

Thanks
 
msmamji Commented:
I am confused by this statement; can you elaborate?
"I removed "allow management" on the 1.x network with the NIC Team.  Removed it's IP on the 1.x network and re-attached it to my virtual network."
So you removed "allow management" from the host network NICs. What you did next, I didn't follow.

So this is what it might look like.
NICs   N/W BW   Purpose          Network   Options
2      1 Gbps   Host N/W         1.x
2      1 Gbps   iSCSI            45.x      Routable onto the iSCSI SAN fabric
1      1 Gbps   Heartbeat/CSV    42.x      Routing limited to nodes
1      1 Gbps   Backup           41.x      Routing limited to backup VLAN
1      1 Gbps   Management       1.x
1      1 Gbps   Live Migration   N/A       Routing limited to nodes
 
ptsmartinAuthor Commented:
Sorry, I was short in my answer.
I went to Virtual Network Manager and unchecked "allow management". Then I changed the connected adapter from my team to an unassigned NIC and pressed Apply. I went to Network Connections, opened the team, and changed the static IP to DHCP.
I returned to VNM, changed the adapter back to the team, and pressed Apply again. Then I went back to Network Connections and assigned an IP on the 1.x network to a NIC to use for management.

Your chart above looks great and how I intend to use this network.
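For reference, the IP reshuffle described above could also be scripted from an elevated command prompt. This is only a sketch; the adapter names and addresses are placeholders, not the exact values used here:

```shell
:: Switch the teamed adapter to DHCP, freeing its static 1.x address
netsh interface ipv4 set address name="LAN Team" source=dhcp

:: Assign the freed 1.x address (placeholder values) to the new
:: dedicated management NIC, with subnet mask and default gateway
netsh interface ipv4 set address name="Management" static 10.0.1.50 255.255.255.0 10.0.1.1
```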
 
msmamji Commented:
Ah, you wanted to reuse the IP since you shifted management to a different NIC. And you also managed to make your teamed NIC exclusive to VM use.

Filling in a few blanks, this might be the final picture, if I am not mistaken.
NICs   N/W B/W   Purpose          Network   Options
2      1 Gbps    Host N/W         DHCP      Only working as a VM switch for exclusive VM use
2      1 Gbps    iSCSI            45.x      Routable onto the iSCSI SAN fabric
1      1 Gbps    Heartbeat/CSV    42.x      Routing limited to nodes
1      1 Gbps    Backup           41.x      Routing limited to backup VLAN
1      1 Gbps    Management       1.x
1      1 Gbps    Live Migration   N/A       Routing limited to nodes

It really was a learning experience following the thread.

Best of luck.
Regards,
Shahid

Suggestion: ptsmartin, you could consider making a blog post or article out of your deployment.
 
Syed Mutahir Ali (Technology Consultant) Commented:
Msmamji :

NICs   N/W B/W   Purpose          Network   Options
2      1 Gbps    Host N/W         DHCP      Only working as a VM switch for exclusive VM use
2      1 Gbps    iSCSI            45.x      Routable onto the iSCSI SAN fabric
1      1 Gbps    Heartbeat/CSV    42.x      Routing limited to nodes
1      1 Gbps    Backup           41.x      Routing limited to backup VLAN
1      1 Gbps    Management       1.x
1      1 Gbps    Live Migration   N/A       Routing limited to nodes

What does "N/W" mean in that table? I am getting to understand this topology too :-)

So in this topology:

One is a management NIC (v-switch - vsnp)
One is for iSCSI
One is for heartbeat/CSV
One is for the backup VLAN
One is for Live Migration

ptsmartin - you should create a diagram and write up an article / blog post / post an article on EE, and that will give you enough points here.
 
msmamji Commented:
Hi mutahir,
N/W is network (N/W BW is network bandwidth).

The first entry in the table refers to a teamed network which has "Allow management operating system to share this network adapter" unchecked in Hyper-V Virtual Network Manager, thus making the NIC a V-Switch (there will be no V-NIC associated with it).

The second entry refers to two separate iSCSI NICs (unteamed - there are known issues with iSCSI over teamed networks). Routing can be limited to the iSCSI SAN fabric only, by using a network config without gateways or a separate non-routable VLAN. You should use MPIO if both NICs are bound to be used on the host machine. Suggestion: here you can use one NIC to give iSCSI access to the host and the other for iSCSI access to the VMs.

The third entry refers to one NIC for heartbeat/CSV, on a network exclusive to the cluster nodes.

The fourth entry refers to one NIC for backup; again, routing can be limited if you assign a separate non-routable VLAN used only by the nodes and the backup devices/machines.

The fifth entry refers to one NIC for management, which in this case is on a fully routable VLAN. Since this is for management only, it should not be bound in any way to a Hyper-V managed network (no V-Switch here).

The sixth entry refers to one NIC for live migration, on a network exclusive to the cluster nodes.

Hope I haven't missed anything.
Cheers.
Shahid
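On the MPIO point for the second entry, Server 2008 R2 ships the mpclaim utility. A hedged sketch of claiming the Microsoft iSCSI bus for MPIO follows; verify the device string and switches against your environment before running, since this triggers a reboot:

```shell
:: Claim all iSCSI-attached devices for Microsoft MPIO:
:: -r allows the required reboot, -i installs/claims,
:: -d takes the hardware id of the Microsoft iSCSI bus
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

:: After the reboot, list the MPIO-managed disks to confirm
mpclaim -s -d
```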
 
Syed Mutahir Ali (Technology Consultant) Commented:
No, you haven't - I was just wondering, as the N/W and B/W columns were appearing too close and the table formatting was getting messed up, so I wanted to get it clear in my head.

So:
"The first entry in the table refers to a teamed N/W which has 'Allow management operating system to share this network adapter' unchecked in Hyper-V Network Manager, thus making the NIC a V-Switch (there will be no V-NIC associated with it)."

This is the normal NIC (which is teamed at the host level) and is then used with a virtual network; it doesn't share traffic with the management box (the Hyper-V host).

Yes, that's what I understood so far, and it was right.
Thanks for clearing it up, though :-)
 
ptsmartinAuthor Commented:
Thanks, everyone. I'll write something up after the implementation is complete; I hope it's helpful to those going through this process in the future.
Question has a verified solution.
