gmbaxter (United Kingdom) asked:

Hyper-V 3 (Server 2012) Networking configuration

Hi experts,

I'm looking to deploy Hyper-V 3 on Server 2012 onto our blades which currently run another hypervisor.

I'm finding conflicting advice when searching for networking information, with some sources saying that 4 NICs per physical host are recommended. The thing is, we have blades with dual NICs, connected via dual pass-through modules into two top-of-rack switches.

The plan is to aggregate a port from each switch per blade, giving a resilient link with the aggregate having access to the production VLANs. Is this the way to do it? I understand that 4 NICs gives management, live migration (v-motion in VMware terms), plus dual data connectivity; however, we don't have 4 NICs.
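The aggregation described above maps to a Windows Server 2012 NIC team. A minimal sketch, assuming the adapters are named "Ethernet 1" and "Ethernet 2", the team name is "HostTeam", and the top-of-rack switch ports are configured for LACP (if they are not, `SwitchIndependent` mode avoids any switch-side configuration):

```powershell
# Team the two blade NICs into one resilient uplink.
# Adapter names, team name and LACP mode are assumptions for illustration.
New-NetLbfoTeam -Name "HostTeam" `
    -TeamMembers "Ethernet 1","Ethernet 2" `
    -TeamingMode Lacp `
    -LoadBalancingAlgorithm HyperVPort

# Verify both members joined the team
Get-NetLbfoTeamMember -Team "HostTeam"
```

`HyperVPort` load balancing distributes traffic per virtual switch port, which suits a host whose team carries mostly VM traffic.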

For reference, storage is provided via Fibre Channel.

Thanks.
ASKER CERTIFIED SOLUTION
msmamji (Pakistan)
gmbaxter (asker):
Thanks for this, I'm just waiting to implement it before awarding points.
I've implemented a NIC team; however, I would like to create a Hyper-V switch per VLAN, so I would end up with a switch for production, one for the DMZ and one for test. How is this achieved?

Thanks.
I am afraid you can't have more than one external virtual switch per team interface.
Details here.
A team can have many team interfaces, BUT the team must have only 1 team interface if you are going to use the team to connect an external virtual switch. The NIC team must be dedicated to the external virtual switch – no exceptions; Microsoft and the NIC team don't care what your budget or boss's demands are.
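In practice this means binding a single external switch to the team and doing the production/DMZ/test separation per virtual NIC rather than per switch. A sketch, assuming the team is called "HostTeam" and the VM names and VLAN IDs are placeholders:

```powershell
# One external virtual switch bound to the whole team (names assumed).
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "HostTeam" -AllowManagementOS $false

# Instead of a switch per VLAN, tag each VM's virtual NIC with its VLAN.
Set-VMNetworkAdapterVlan -VMName "ProdVM01" -Access -VlanId 10
Set-VMNetworkAdapterVlan -VMName "DmzVM01"  -Access -VlanId 20
Set-VMNetworkAdapterVlan -VMName "TestVM01" -Access -VlanId 30
```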
So I can't actually separate out the cluster and live migration networks? They all just reside within the teamed network?
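Cluster and live-migration traffic can still be kept apart on the single team by creating host-side (management OS) virtual NICs on the external switch and isolating them with VLANs and bandwidth weights. A sketch of that converged approach, assuming an external switch named "ConvergedSwitch" already exists on the team and that all names, VLAN IDs and weights are placeholders:

```powershell
# Host-side virtual NICs for management, cluster/CSV and live migration
# (all names are assumptions; they ride the same teamed switch).
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster"       -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

# Separate the traffic types with VLANs...
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster"       -Access -VlanId 40
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 50

# ...and reserve minimum bandwidth so live migration can't starve the rest
# (requires the switch to have been created with -MinimumBandwidthMode Weight).
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 10
```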
Thanks for this. We've now gone into production with a 16 node Hyper-V cluster based on this and Aidan Finn's networking config.

VLAN config is clunky for us, but it works: we create the VM in SC VMM, then go to Failover Cluster Manager to assign the VLAN, then refresh the VM's status in VMM. VMware's networking setup is far superior.
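The round-trip through Failover Cluster Manager can also be scripted from the Hyper-V host, then followed by a VMM refresh. A sketch, with the VM name and VLAN ID assumed:

```powershell
# Assign the VLAN directly on the Hyper-V host (names are assumptions).
Set-VMNetworkAdapterVlan -VMName "NewVM01" -Access -VlanId 10

# Then, from the VMM PowerShell module, refresh the VM so VMM's view
# matches the host:
# Get-SCVirtualMachine -Name "NewVM01" | Read-SCVirtualMachine
```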