This follows on from http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Hyper-V/Q_28591820.html
4 x Hyper-V hosts, each with 4 x 1 Gbps NICs. One NIC on each host is dedicated to the management subnet; the other three are teamed using LACP, with the team assigned to a virtual switch.
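For reference, each team and switch was set up along these lines (team/NIC names are placeholders, and the load-balancing algorithm shown is an assumption on my part):

# Team three of the four NICs with LACP (names are placeholders)
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC2","NIC3","NIC4" `
    -TeamingMode LACP -LoadBalancingAlgorithm TransportPorts

# Bind the virtual switch to the team; management traffic stays on NIC1
New-VMSwitch -Name "VMSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $false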
Although we initially saw marked performance improvements after disabling VMQ, things began to slow down again as the system came under more load while apps and users were migrated from the old system.
I have since found a Broadcom driver update for Windows Server 2012 which is supposed to fix the VMQ issues, so I've installed it and re-enabled VMQ on all NICs. There have been reports of some performance improvement since doing this.
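For what it's worth, this is roughly how I've been checking and re-enabling VMQ after the driver update (NIC names are placeholders):

# Check current VMQ state on all physical NICs
Get-NetAdapterVmq | Format-Table Name, Enabled, BaseProcessorNumber, MaxProcessors

# Re-enable VMQ on the team members (it was disabled earlier as a workaround)
Enable-NetAdapterVmq -Name "NIC2","NIC3","NIC4"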
However, the most noticeable bottleneck is still between VMs on the same host using the same virtual switch.
I should point out that we're still in the testing/migration stage, so the NIC teams are currently degraded: one or two members have been removed from each team in the management OS on each host and are being used to provide connections from the old system. On one of the hosts the team is down to a single member.
So I understand this untidy configuration may be causing problems at the moment, but speeds are still significantly slower than I'd expect from even a single 1 Gbps NIC.
I've been doing some research on VMQ, specifically these sources...
and I'm unclear on whether I should be explicitly assigning processor cores to team members, or whether that's a pointless exercise because I'm using LACP on the switch. Or should I move to switch-independent teaming with Hyper-V Port load balancing and then assign processors to NICs?
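From what I've read, the explicit assignment those sources describe would look something like the following. The core numbers are just placeholders; I realise the right values depend on the NUMA layout, the number of logical processors, and whether hyper-threading is on (in which case only even-numbered logical processors are used for VMQ):

# Give each team member its own non-overlapping block of logical processors
Set-NetAdapterVmq -Name "NIC2" -BaseProcessorNumber 2  -MaxProcessors 2
Set-NetAdapterVmq -Name "NIC3" -BaseProcessorNumber 6  -MaxProcessors 2
Set-NetAdapterVmq -Name "NIC4" -BaseProcessorNumber 10 -MaxProcessors 2

# Then check which queues were actually allocated, and on which processors
Get-NetAdapterVmqQueue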
Also, I don't understand why any of this would affect traffic between VMs on the same virtual switch. Does that traffic still go through the physical network card? I thought it was handled entirely within Hyper-V.
Any suggestions/advice appreciated.