devon-lad

asked on

Poor network throughput in Hyper-V guests

Here's the situation...

3 x Windows Server 2012 R2 Hyper-V host machines.

Each host has 4 x Gb NICs connected as follows:

Host A
nic1 - host management subnet
nic2 - vm subnet1
nic3 & 4 - vm subnet2 (teamed)

Host B
nic1 - host management subnet
nic2, 3, 4 - vm subnet2 (teamed)

Host C
nic1 - host management subnet
nic2, 3, 4 - vm subnet3 (teamed)

The teamed NICs are aggregated on a layer 3 switch using LACP, and set up in Windows as an LACP team with Dynamic load balancing.
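
For reference, the teams were created roughly like this on each host (the team, NIC and switch names below are just placeholders, not the exact ones I used):

# host B example - team three NICs with LACP and Dynamic load balancing
New-NetLbfoTeam -Name "Team-Subnet2" -TeamMembers "NIC2","NIC3","NIC4" -TeamingMode LACP -LoadBalancingAlgorithm Dynamic
# bind an external Hyper-V switch to the team interface, not shared with the management OS
New-VMSwitch -Name "vSwitch-Subnet2" -NetAdapterName "Team-Subnet2" -AllowManagementOS $false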

The hosts and switch are not in production - so there is no/negligible background data transfer taking place.

If I copy data between the parent OS (POSE) on any two hosts, transfer speed is around 1Gbps - which is what I would expect, as the single NICs connected to the management subnet would be used.

If I copy data between a VM on host B and a VM on host C, I would expect speeds of >1Gbps given that LACP with dynamic load balancing is being used across three teamed Gb NICs.  However, the transfer speed is very erratic and jumps up and down between 0 and 24Mbps.  Pinging between the same VMs produces equally erratic results - 1ms, 180ms, 5ms, etc.

I get similar results when copying between VMs on host A and host B that are on the same subnet.

If I copy data between VMs on the same host using the same subnet (which, as far as I'm aware, should never actually reach the physical switch and so should be very fast), the speed is around 150Mbps.

Initially this appeared to be something to do with the NIC teaming.  However, if I copy data from a VM on host A connected to subnet 1 via a single NIC, I still only get the slow/erratic speeds.  That said, this particular VM is multi-homed, with subnet 2 on the teamed NIC - so it could still be related to teaming.

All VMs are gen1 using synthetic NICs.
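
In case it helps, this is roughly how I'm checking the adapter type and switch binding on each host (IsLegacy should come back False for synthetic adapters):

# list every vNIC, whether it's a legacy or synthetic adapter, and which vSwitch it's connected to
Get-VM | Get-VMNetworkAdapter | Select-Object VMName, IsLegacy, SwitchName, Status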

Any idea what is going on?
ASKER CERTIFIED SOLUTION
Cliff Galiher
rindi
You don't tell us anything about the guests. What OS are they? Are the Integration Services installed? Newer Microsoft OSes have them included, but older ones don't. Besides, it is always a good idea to install them manually, even if they are included with the OS. OS updates can also help. With non-Microsoft OSes they won't be installed by default, so there you must do that manually anyway.
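
For what it's worth, you can check the integration services state from the host side with something like this (assuming the Hyper-V PowerShell module is available):

Get-VM | Select-Object Name, IntegrationServicesVersion, IntegrationServicesState
Get-VM | Get-VMIntegrationService | Select-Object VMName, Name, Enabled, PrimaryStatusDescription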
devon-lad

ASKER

Cliff...
Disk configuration - it's all on a fibre channel SAN; the slowest component is 6Gbps.  If I do a copy from one host to itself there are no speed issues - only when using the VMs.
Switch - it's an HP 1910 - would you see these as falling under the category of "LACP implementations leave a lot to be desired"?
NICs - yes, these are Broadcom ones, and I had wondered about VMQ.  Surely a VMQ issue would only be related to VMs and wouldn't affect copies between physical hosts?  I will try disabling it to see if there's any effect.
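
For anyone following along, this is roughly what I plan to run on each host (the adapter name pattern is just an example - adjust it to match the actual Broadcom ports):

# check which physical NICs currently have VMQ enabled
Get-NetAdapterVmq
# disable VMQ on the relevant adapters (may need a disable/enable of the adapter, or a reboot, to fully take effect)
Disable-NetAdapterVmq -Name "NIC*"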

rindi - guests are all Win 2012 R2 with integration services installed and all updates.
Cliff - after disabling VMQ on all NICs there is a marked improvement in transfer speeds.  Still slightly erratic - but getting near 1Gbps most of the time.  The LACP teamed NICs don't appear to be getting anything over this though.
Ah, hang on - I seem to remember that no single transfer process will ever get more than the maximum speed of a single NIC.  The only way to get more is to have more than one process transferring data.  Is that right?
That depends on the LACP implementation (again). Most LACP switches load balance based on a hash of packet headers, which makes the balancing per-stream/per-flow, so getting above the speed of a single NIC requires multiple flows. Higher-end switches have smarter algorithms, so better throughput is possible even with a single stream. But if you are seeing an upper limit of 1Gb, then you are probably dealing with a basic LACP balancer.
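
If you want to confirm that on your setup, a quick test with something like iperf3 on two of the VMs (it has to be installed separately on Windows; the address below is a placeholder) will show the difference between one flow and several parallel flows:

# on the receiving VM
iperf3 -s
# on the sending VM: single TCP flow, then 4 parallel flows
iperf3 -c 192.168.2.10
iperf3 -c 192.168.2.10 -P 4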