A-p-u asked:

Hyper-V networking slow

Dell PowerEdge T420, Windows Server 2008 R2 with the Hyper-V role installed, and a single 2008 R2 virtual machine set up within the Hyper-V environment. The two NICs on the T420 are connected to the same Gigabit switch, one for the physical machine's use and the other for the virtual network.

The only thing being done in the VM is Remote Desktop Services, with end users running a medical records program that accesses a database on a completely separate server. The physical server runs only Hyper-V, a domain controller, and DNS.

With no activity, pinging the guest from the physical machine (or vice versa), or pinging the separate database server from the guest, we get under 1 ms response times. However, if there is any activity in the guest OS, ping times vary drastically, between 1 ms and 200 ms.

I have the various offloads disabled in the virtual network adapter's properties, but that does not seem to make any difference. Looking at Resource Monitor on the virtual machine, there is generally under 5% of the CPU and 10% of the memory in use, and network traffic while the application is running is under 100 Kbps (usually under 50 Kbps).
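For reference, on 2008 R2 the offload-related settings can also be toggled globally from an elevated prompt; this is a sketch of the equivalent commands, not necessarily the exact combination changed here:

    # Disable TCP Chimney offload, receive-side scaling, and IP task offload,
    # then show the resulting global TCP settings.
    netsh int tcp set global chimney=disabled
    netsh int tcp set global rss=disabled
    netsh int ip set global taskoffload=disabled
    netsh int tcp show global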

On the physical server, the CPU and network traffic are basically the same as on the virtual machine, with 80% of the memory in use (since it is all allocated to Hyper-V).

Any thoughts on how to solve the slow networking?

Thanks in advance!
Andrej Pirman:

This seems like a driver problem, or a compatibility issue with the network card. Read here; it links to solutions, too:
http://social.technet.microsoft.com/Forums/en/winserverhyperv/thread/29c669db-30fe-4196-9b95-a9d5e48ac318?prof=required
It doesn't look like a network problem to me, with one question: did you disable the "Allow management operating system to share this network adapter" option on the virtual NIC?

What is the storage setup? RAID, disks, volumes, where is the VHD file located, is it fixed or dynamic?

What is the total RAM of the physical server and how much is the RAM reserved for the VM and the Hyper-V host?

What is the size of the domain? How many users do you have?
A-p-u (Asker):

Driver and firmware are updated with the latest from Dell's website (Broadcom NetXtreme Gigabit adapters). All the offloads are disabled in the virtual machine, and I've tried with them both enabled and disabled on the physical machine with no change.

"Allow management operating system to share this network adapter" is disabled on the one NIC used by the VM.

The physical machine has a RAID 5 array across three 500 GB SATA drives: one 80 GB system partition, with the rest a data partition that holds only the virtual machine files. There is 32 GB of RAM in the server.

The virtual machine has a single disk (dynamic; currently using 10 GB out of the 80 GB allocated to it) and is allocated 24 GB of RAM.
ASKER CERTIFIED SOLUTION

A-p-u (Asker):

Disabling the "Virtual Machine Queues" setting in the Broadcom adapter's driver properties (Advanced tab) resolved the problem. The setting defaulted to "Enabled" on these NICs.
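For reference, on 2008 R2 (which predates the NetAdapter PowerShell cmdlets) the same change can be scripted through the registry. This is only a sketch; it assumes the Broadcom driver exposes the standardized "*VMQ" keyword, which is worth verifying in the adapter's Advanced tab first:

    # Network adapter device class key; subkeys 0000, 0001, ... are the NICs.
    $class = 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}'
    Get-ChildItem $class -ErrorAction SilentlyContinue |
        Where-Object { $null -ne (Get-ItemProperty $_.PSPath -ErrorAction SilentlyContinue).'*VMQ' } |
        ForEach-Object {
            # For the standardized *VMQ keyword, 0 = disabled, 1 = enabled.
            Set-ItemProperty $_.PSPath -Name '*VMQ' -Value '0'
        }
    # Disable and re-enable the adapter (or reboot) so the driver rereads it.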
Thanks for sharing.

However, I am a little surprised by the solution. I have several Dell R710 servers running the Hyper-V role on Windows Server 2008 R2 with Broadcom and Intel adapters, and I do not have problems with this option enabled (it is on by default; I didn't know of its existence until today). According to this document, it actually should help increase throughput: http://www.dell.com/downloads/global/power/ps1q10-20100101-Chaudhary.pdf

Did you have the latest NIC drivers? Did you run any other tests of real network throughput? For example, SQLIO, a free I/O benchmark utility from Microsoft, simulates database requests with different block sizes.
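If you do want numbers, a typical SQLIO run looks something like this (the flags follow the tool's readme; the UNC path is a placeholder pointing at a share on the VM so the test traverses the network):

    # 60 seconds of random 8 KB reads, 2 threads, 8 outstanding I/Os,
    # with latency statistics (-LS).
    .\sqlio.exe -kR -t2 -s60 -o8 -b8 -frandom -LS \\vm-name\share\testfile.dat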

Thanks again
A-p-u (Asker):

Based on the description of the feature, having it enabled should definitely help. But from a practical standpoint, disabling it immediately resolves the problem and re-enabling it recreates the problem.

I did install the latest drivers and firmware from Dell's site, without any change, prior to trying this. A different Dell server purchased around the same time (a PowerEdge T620) has Intel NICs, had this setting disabled from the factory, and had no problems with a similar single Hyper-V VM running on it. (We did not try enabling it to see whether that caused a performance problem on that server.)

No scientific benchmarking; just the real-world performance of the client's medical records program, and file copies in Windows Explorer (~2 MB/s with the setting enabled, ~10 MB/s with it disabled).
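For a rough number without extra tools, a copy can be timed from PowerShell; the paths and the 100 MB test-file size here are placeholders:

    # Time a copy of a known-size file and compute approximate MB/s.
    $t = Measure-Command { Copy-Item 'C:\Temp\test-100MB.bin' '\\vm-name\share\' }
    '{0:N1} MB/s' -f (100 / $t.TotalSeconds)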

(P.S. One potentially significant correction to the original post: the server's two NICs are connected to the same 100 Mbps switch, not a Gigabit switch. The Dell VMQ white paper, and some other documentation about it on Microsoft's site, only reference VMQ with Gigabit and 10 GbE NICs, so it might only work properly with Gigabit or faster links.)
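On 2008 R2, where the newer Get-NetAdapter cmdlet is not available, the negotiated link speed can be checked through WMI; a quick sketch:

    # List connected adapters with their negotiated speed in bits per second
    # (NetConnectionStatus 2 = connected).
    Get-WmiObject Win32_NetworkAdapter -Filter 'NetConnectionStatus=2' |
        Select-Object Name, Speed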
A-p-u (Asker):

Making this change in the driver's settings resolved the problem entirely.

Looking at a different Dell server bought around the same time, but which has Intel NICs, shows that the default setting there is already "Disabled."

Many sites reference turning off Large Send Offload or TCP Checksum Offload, but I haven't seen any reference to this "Virtual Machine Queues" setting before.
"Virtual Machine Queues" worked for me, I also have a Dell server, same NIC, also defaulted to Enable.  

This was driving me NUTS because the data transfer rates to the guest system were so slow they were measured in kb/s. I am in the process of migrating a 2003 physical machine to a 2008 R2 VM; I need to move approximately 300 GB of data, and at 130 kb/s it was not going to happen any time soon.
Thank you!! This saved me on my network setup that I was doing for a customer!
I realize it's two years later, but thank you so much for this info. I just installed a Dell T420 with Broadcom NICs set up just as A-p-u described (one NIC for the host, one for the VMs) and P2V'd a Server 2003 R2 machine. Performance to the VM was terrible. I left a 45000-byte ping running from another machine to the VM before I made the change. The times before the change ranged from 2 ms to 220 ms. After the change (which broke the connection for about 30 seconds), times went down to 1-2 ms. Thanks again!!
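For anyone repeating that test, the built-in Windows ping covers it; the address below is a placeholder for the VM:

    # -t pings until interrupted; -l sets the payload size in bytes.
    ping -t -l 45000 192.168.1.50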
P.S. For those of you reading this later: the setting is in Network and Sharing Center > Change adapter settings > right-click the adapter you want to change > Properties > Configure > Advanced tab.
I had the same problem on a T420 with 2012 R2; disabling Virtual Machine Queues fixed it for me.
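On 2012 R2 the same change can be made without the GUI, using the built-in NetAdapter cmdlets (the adapter name here is a placeholder):

    Get-NetAdapterVmq                   # show per-adapter VMQ state
    Disable-NetAdapterVmq -Name 'NIC1'  # disable VMQ on the chosen adapter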