Damian Gardner

asked on

Poor file copy speed from VMware Guests

We have a VMware vSphere system running on HP ProLiant servers, and are having trouble with slow file-copy speeds across the Cisco Catalyst 3850 switch they're connected to. Many people have looked at this and it's still eluding everyone, to the point that we're now desperate for a breakthrough. We've had Cisco check the Catalyst 3850 switch, HP check the ProLiant DL360 G4p servers, and VMware is now looking at the ESX 5.1 vSphere system for the second or third time in two months. Every time we think we've found the problem, the fix doesn't work. So now we're appealing to the open Experts forum to see if anyone has anything new to suggest. We've checked many things and don't have space to list everything here, but maybe somebody will think of something we haven't yet.

We're gauging performance in a few ways. One is a LAN Speed Test utility that builds a 50 MB file in memory, transfers it from the source machine to a destination UNC path, and measures throughput. We've also been using iperf, as well as doing a simple file copy from one VM to another and watching the MB/s rate. Here are the latest results from those tests, from just this morning:


“LACOTS3” VM – Host, datastore VNX_DS_1
“LACOFRX” VM – Host, datastore VNX_DS_2
“LACOAXT” VM – Host, datastore VNX_DS_1


• 900 MB file copy: ~70 MB/s
• LAN Speed Test: 50 MB file gave a write speed of 224 Mbps / read speed of 339 Mbps
• iperf: 10-second test, 12.6 MB transferred at 10.4 Mbps


• 900 MB file copy: ~16 MB/s
• LAN Speed Test: 50 MB file gave a write speed of 304 Mbps / read speed of 196 Mbps
• iperf: 10-second test, 152 MB transferred at 127 Mbps

One thing that seems strange: there's an inverse relationship between the iperf results and the file-copy rates.
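Part of the apparent inversion may just be units: the file copies are reported in MB/s while iperf reports Mbps. A tiny hypothetical helper (names mine, not from the thread) puts the numbers on one scale:

```shell
# Hypothetical helper: convert a file-copy rate in MB/s to Mbps so it can be
# compared directly with iperf output (1 byte = 8 bits; decimal rounding ignored).
mb_to_mbps() {
  echo $(( $1 * 8 ))
}
mb_to_mbps 70   # first VM's ~70 MB/s copy -> 560 Mbps
mb_to_mbps 16   # second VM's ~16 MB/s copy -> 128 Mbps
```

On that common scale, the first VM copies at ~560 Mbps yet only manages 10.4 Mbps in iperf, while the second copies at ~128 Mbps but hits 127 Mbps in iperf, which is the inverse relationship noted above.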

If anyone has any ideas about what could be causing the poor performance, we're all ears. I realize you may need more information before making a suggestion, and I'm happy to clarify further.

Thanks for your help.
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)

Are these DL360 G4s?

That seems an old ProLiant server for ESXi 5.1?

Are these local datastores on the Servers ?

If so, do you have a BBWC (battery-backed write cache) module installed on the Smart Array controller, configured as 75% write / 25% read using the ACU?

A few things to check: are you using the VMXNET3 interface in ALL the VMs, and not the legacy E1000 interface, which should only be used for OS installation? (Ensure VMware Tools is installed in each guest, since it provides the VMXNET3 driver.)
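One way to confirm the vNIC type outside the GUI: the adapter model is recorded in the VM's .vmx file as `ethernetN.virtualDev`. This sketch simulates that with a sample fragment; on a real host you would grep the actual file under /vmfs/volumes/ (the path and VM name would be yours, not these).

```shell
# Simulated .vmx fragment; a real one lives on the datastore, e.g.
# /vmfs/volumes/<datastore>/<vmname>/<vmname>.vmx
cat > /tmp/sample.vmx <<'EOF'
ethernet0.virtualDev = "vmxnet3"
ethernet0.present = "TRUE"
EOF

# Show which virtual NIC model each adapter uses:
grep -i 'ethernet[0-9]*\.virtualDev' /tmp/sample.vmx
# An E1000 adapter would appear as virtualDev = "e1000" instead.
```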
Yeah... ESX 4 isn't even supported on that server, never mind 5.
I'm surprised you're getting anything from VMware, since it's not supported.
@Seth, the model must be wrong - a typo!

4.x does not even run on it, and 5.x certainly will not!
Damian Gardner


Sorry, gentlemen - I misspoke on the ProLiant model; they're actually the G5 series. But that is STILL old, I know. The compatibility matrix shows it's supported only through ESX 5.0 U3, and we're running 5.1. VMware has been helping us on this and has not said anything about the old servers, other than that we needed to update all of the drivers and firmware early on in the case. To answer some of your questions: the datastores are currently on an EMC VNX 5200 SAN for both the Windows test machines and the Linux machines. I don't know what a BBWC module is, so I'm guessing we do not have one. VMware Tools is running everywhere except the Linux machines. VMXNET3 is being used on some of the machines; when I test those, they do no better than the E1000 machines.

I don't believe I mentioned that we did testing with the iperf and dd utilities using two Linux machines - both on the same host, and across the switch to another host. The results were impressive. When iperf ran between two Linux VMs on the same host, it clocked 1.8 Gb/s. That dropped to 980 Mb/s when going over the switch, through a 1 Gb port. We then used dd to test the VNX bandwidth, and it clocked a read speed of 225 MB/s, which is pretty good. It is only when testing under Windows that performance is slow.
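For reference, a reduced-size sketch of the Linux-side tests described above (the real runs used much larger files and targeted the VNX-backed datastore, not /tmp; sizes here are shrunk so the commands run anywhere):

```shell
# Sequential write test with dd (10 x 1 MiB blocks; real tests would use GBs):
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=10 2>/dev/null
stat -c %s /tmp/ddtest.bin    # confirms 10485760 bytes were written
rm -f /tmp/ddtest.bin

# iperf (v2 syntax) between two VMs: run `iperf -s` on the receiver, then on
# the sender run a 10-second TCP test:
#   iperf -c <receiver-ip> -t 10
```

Timing the dd run (or adding `conv=fsync` so the write actually reaches the storage before dd exits) is what yields a MB/s figure comparable to the ones quoted above.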
BBWC (battery-backed write cache) is for the Smart Array controller, i.e. local disks.

I am surprised VMware Support are offering you support, as your hardware is not certified for use with ESXi 5.1 - either HP did not certify it, or it failed certification. This means your server may seem to work, or it may not.

This would suggest a driver issue in the OS?
Ok.  How do I make sure the right driver is being used by the OS then?
If you use VMware Tools in Linux and the VMXNET3 interface, do the VMs still suffer poor performance?
Windows 2008 R2 is the OS. We have not tried VMware Tools in Linux, and those adapters are E1000 right now. The VMware engineer set them up that way and wanted to see what performance we got outside of Windows.
Have you looked at Receive Side Scaling (RSS) in Windows 2008 R2 on the network interface?
No. Is that a parameter on the NIC?
VMware Support have not been very helpful - this is one of the known issues recorded in their Knowledge Base for 2008 R2.

see here

Poor network performance or high network latency on Windows virtual machines (2008925)

The E1000 network interface should not be used after installation; replace it with VMXNET3.

VMXNET3 resource considerations on a Windows virtual machine that has vSphere DirectPath I/O with vMotion enabled (2061598)

Have you also looked at the TCP Checksum Offload feature?
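The Windows-side settings discussed here can be inspected and changed from an elevated prompt on the 2008 R2 guests with netsh (checksum offload itself is a per-adapter property under Device Manager > NIC > Advanced, alongside RSS and Jumbo Packet). A sketch of the relevant commands:

```shell
# Show the current global TCP settings (includes Receive-Side Scaling state):
netsh int tcp show global

# Enable Receive Side Scaling globally:
netsh int tcp set global rss=enabled

# TCP Chimney offload is another offload commonly disabled when chasing
# guest network performance problems:
netsh int tcp set global chimney=disabled
```

These are global switches; the per-NIC offload checkboxes in the adapter's Advanced properties still need to be reviewed separately.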
Interesting. I'm checking the Windows VMs, and Receive Side Scaling is not enabled. I'll try enabling it and test. Standby.
OK, yeah - these are already VMXNET3. I'll read the rest of it. Thanks, and standby.
No change after the Receive Side Scaling change, by the way.
It should be enabled by default. There are lots of other parameters to check and change.
Right - I just read that it's the default; I'm surprised it was disabled. I also just saw there's a Jumbo Packet parameter, and it's set to 1500! I'm changing it to 9000...
Be careful with jumbo frames, because they need to be enabled on every component of your network!

And that includes the physical network switches!
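Before trusting jumbo frames end to end, a don't-fragment ping is a quick check. The maximum ICMP payload for a 9000-byte MTU is the MTU minus the 20-byte IP header and 8-byte ICMP header (the VM name below is just one from this thread, used as an example):

```shell
# Largest don't-fragment payload that fits in a 9000-byte MTU:
echo $(( 9000 - 20 - 8 ))    # 8972

# From a Windows guest:
#   ping -f -l 8972 LACOFRX
# From the ESXi shell, against a vmkernel port IP:
#   vmkping -d -s 8972 <destination-ip>
```

If the 8972-byte ping fails while a 1472-byte one succeeds, something in the path (vSwitch, physical switch port, or SAN interface) is still at MTU 1500.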
You nailed it, Andrew. After adjusting the NIC parameters - and especially disabling the TCP stack offload - performance is much improved. What baffles me is how VMware support didn't check this straight away, instead wasting weeks of our time looking at performance logs and spinning their wheels. Unbelievable, really. Anyway, thank you very much - I'm glad I reached out! Take care.

Thanks for your kind words....

Glad I'm better than Cisco, HP and VMware...

Ah, because I'm brilliant, and VMware, well.......
welcome :)