Damian Gardner asked:

Poor file copy speed from VMware Guests

We have a VMware vSphere system running on HP ProLiant servers, and we're having trouble with slow file copy speeds across the Cisco 3850 switch they're connected to.  Many people have looked at this and it's still eluding everyone, to the point that we're now desperate for a breakthrough.  We've had Cisco check the Catalyst 3850 switch, HP check the ProLiant DL360 G4p servers, and we're now having VMware look at the ESXi 5.1 vSphere system for the second or third time in two months.  Every time we think we've found the problem, the fix doesn't help.  So now we're appealing to the open Experts forum to see if anyone has anything new to suggest.  We've checked many things and don't have space to list everything here, but maybe somebody will think of something we haven't yet.

We're using a variety of things to gauge the performance.  One is a LAN Speedtest utility that builds a 50 MB file in memory, transfers it from the source machine to a destination UNC path, and measures throughput.  We've also been using iperf, as well as doing a simple file copy from one VM to another and watching the MB/s rate.  Here are the latest results from those tests, from just this morning:
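(For reference, here's roughly what the LAN Speedtest step boils down to, as a minimal Python sketch - the UNC path is just a placeholder for the destination VM's share:)

```python
# Minimal sketch of the LAN Speedtest methodology: build ~50 MB in memory,
# write it to a UNC share on the destination VM, read it back, report Mbps.
# The UNC path is a placeholder; the read-back may be partly served from the
# local SMB client cache, so treat the read number as optimistic.
import os
import time

DEST = r"\\LACOFRX\testshare\speedtest.bin"   # placeholder UNC path
SIZE_MB = 50
payload = os.urandom(SIZE_MB * 1024 * 1024)

start = time.perf_counter()
with open(DEST, "wb") as f:
    f.write(payload)
    f.flush()
    os.fsync(f.fileno())                      # force it out to the share
write_secs = time.perf_counter() - start

start = time.perf_counter()
with open(DEST, "rb") as f:
    f.read()
read_secs = time.perf_counter() - start

to_mbps = lambda secs: SIZE_MB * 8 / secs     # megabits per second
print(f"write: {to_mbps(write_secs):.0f} Mbps, read: {to_mbps(read_secs):.0f} Mbps")
os.remove(DEST)
```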

DETAILS ON GUESTS USED IN TESTING:

“LACOTS3” VM – Host 192.168.1.17, datastore VNX_DS_1
“LACOFRX” VM – Host 192.168.1.17, datastore VNX_DS_2
“LACOAXT” VM – Host 192.168.1.28, datastore VNX_DS_1


2 VM GUESTS ON SAME HOST (LACOTS3 TO LACOFRX)

•      900 MB file copy: ~70 MB/s
•      LAN Speedtest: 50 MB file resulted in a write speed of 224 Mbps / read speed of 339 Mbps
•      iperf: 10-second test, 12.6 MB transferred at 10.4 Mbps

2 VM GUESTS ACROSS 2 HOSTS (LACOTS3 TO LACOAXT)

•      900 MB file copy: ~16 MB/s
•      LAN Speedtest: 50 MB file resulted in a write speed of 304 Mbps / read speed of 196 Mbps
•      iperf: 10-second test, 152 MB transferred at 127 Mbps

One thing that seems strange is that there's an inverse relationship between the iperf results and the file copy rates.
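To keep the units straight when comparing these numbers (the copy dialog reports MB/s, iperf reports Mbps), here's the quick conversion - e.g. the 70 MB/s same-host copy works out to roughly 560 Mbps even though iperf on that same pair showed only 10.4 Mbps, while the cross-host copy at 16 MB/s is about 128 Mbps, right where iperf landed:

```python
# Megabytes per second (file copy) to megabits per second (iperf).
def mbytes_to_mbits(mb_per_sec):
    return mb_per_sec * 8

print(mbytes_to_mbits(70))   # 560 Mbps - same-host file copy
print(mbytes_to_mbits(16))   # 128 Mbps - cross-host file copy
```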

If anyone has any ideas about what could be causing the poor performance, we're all ears.  I realize you may need more information before making a suggestion, and I'm happy to clarify further.

Thanks for your help.
Damian
VMware · Virtualization · Windows Server 2008

Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)

Are these DL360 G4s?

That seems like an old ProLiant server for ESXi 5.1?

Are these local datastores on the servers?

If so, do you have a BBWC module installed on the Smart Array controller, configured as 75% write / 25% read using the ACU?

A few things to check: are you using the VMXNET3 interface in ALL the VMs, and not the legacy E1000 interface, which should only be used for OS installation? (Ensure VMware Tools is installed, since it provides the VMXNET3 driver!)
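If you want to audit that quickly across all the VMs rather than clicking through each one, something along these lines should do it with the pyVmomi Python bindings - a rough sketch only; the vCenter address and credentials are placeholders:

```python
# Rough pyVmomi sketch: print the virtual NIC class (E1000 vs VMXNET3) for
# every VM. vCenter address and credentials below are placeholders; newer
# pyVmomi releases may also need an SSL context for self-signed certificates.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator", pwd="***")
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:          # skip VMs without a populated config
            continue
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualEthernetCard):
                # prints e.g. ...VirtualE1000 or ...VirtualVmxnet3
                print(vm.name, type(dev).__name__)
finally:
    Disconnect(si)
```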
Seth Simmons

Yeah... ESX 4 isn't even supported on that server, never mind 5.
I'm surprised you're getting anything from VMware, since it's not supported.
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)

@Seth, the model must be wrong - a typo!

4.x does not even run on it, and 5.x certainly will not!
Damian Gardner

ASKER
Sorry gentlemen - I misspoke on the ProLiant model - they're actually the G5 series.  But that is STILL old, I know.  The compatibility matrix shows it's supported only through ESX 5.0 U3; we're running 5.1, however.  VMware has been helping us on this and hasn't said anything about the old servers, other than that we needed to update all of the drivers and firmware early on in the case.  To answer some of your questions: the datastores are currently on an EMC VNX 5200 SAN for both the Windows test machines and the Linux machines.  I don't know what a BBWC module is, and I'm guessing we do not have one.  VMware Tools is running everywhere except the Linux machines.  VMXNET3 is being used on some of the machines; when I test those, they do no better than the E1000 machines.

I don't believe I mentioned that we also did testing with iperf and dd between two Linux machines - both on the same host, and across the switch to another host.  The results were impressive.  When iperf was run between two Linux VMs on the same host, it clocked 1.8 Gb/s.  That dropped to 980 Mb/s when going over the switch, on a 1 Gb port.  dd was then used to test the VNX bandwidth, and it clocked a read speed of 225 Mb/s, which is pretty good.  It is only when testing in the Windows OS that the performance is slow.
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)

BBWC (battery-backed write cache) is for the Smart Array controller, i.e. local disks.

I am surprised VMware Support are offering you support, as your hardware is not certified for use with ESXi 5.1.  HP either did not certify it or it failed certification, which means your server may or may not work reliably.

This would suggest a driver issue in the OS?
Damian Gardner

ASKER
Ok.  How do I make sure the right driver is being used by the OS then?
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)

If you use VMware Tools in Linux and the VMXNET3 interface, do the VMs suffer poor performance?
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)

What Windows OS are the VMs running?
Damian Gardner

ASKER
Windows 2008 R2 is the OS.  We have not tried VMware Tools in Linux, and those adapters are E1000 right now.  The VMware engineer set them up that way and wanted to see what kind of performance we got without using Windows.
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)

Have you looked at Receive Side Scaling (RSS) on the network interface in Windows 2008 R2?
Damian Gardner

ASKER
No.  Is that a parameter on the NIC?
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)

VMware Support have not been very helpful; this is one of the known issues recorded in their Knowledge Base for 2008 R2.

See here:

Poor network performance or high network latency on Windows virtual machines (2008925)

The E1000 network interface should not be used after installation; it should be replaced with VMXNET3.

VMXNET3 resource considerations on a Windows virtual machine that has vSphere DirectPath I/O with vMotion enabled (2061598)

Have you also looked at the TCP Checksum Offload feature?
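Inside the 2008 R2 guests you can see the relevant global TCP settings with netsh; here's a small wrapper as a sketch (run it in the guest as Administrator - the commented 'set' lines are examples to try one at a time, and the per-adapter checksum offload settings themselves live under the NIC's Advanced properties):

```python
# Show the Windows global TCP settings relevant here (Receive-Side Scaling
# state, Chimney Offload state, task offload). Run inside the 2008 R2 guest
# as Administrator.
import subprocess

def run(cmd):
    print(">", cmd)
    print(subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout)

run("netsh int tcp show global")

# Example toggles to test one at a time (revert if they make no difference):
# run("netsh int tcp set global rss=enabled")
# run("netsh int tcp set global chimney=disabled")
# run("netsh int ip set global taskoffload=disabled")
```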
Damian Gardner

ASKER
Interesting.  I'm checking the Windows VMs, and Receive Side Scaling is not enabled.  I'll try enabling it and test.  Stand by.
ASKER CERTIFIED SOLUTION
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)

Damian Gardner

ASKER
OK, yeah - these are already VMXNET3.  I'll read the rest of it.  Thanks, and stand by.
Damian Gardner

ASKER
No change after the Receive Side Scaling change, by the way.
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)

It should be enabled by default.  There are lots of other parameters to check and change.
Damian Gardner

ASKER
Right - I just read that it's the default.  I'm surprised it was disabled.  I also just saw there's a Jumbo Packet parameter, and it's set to 1500!  I'm changing it to 9000...
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)

Be careful with jumbo frames, because they need to be enabled on every component of your network!

And that includes the physical network switches!
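A quick way to confirm the whole path really passes jumbo frames is a don't-fragment ping sized for a 9000-byte MTU (8972 bytes of payload plus 28 bytes of IP/ICMP headers) - a small sketch, with the target name as a placeholder:

```python
# Verify jumbo frames end to end with a don't-fragment ping sized for a
# 9000-byte MTU (8972 payload + 28 bytes of headers). If this fails while a
# normal ping works, something in the path is still at 1500.
import subprocess

TARGET = "LACOAXT"   # placeholder - a VM or host on the far side of the switch
result = subprocess.run(["ping", "-f", "-l", "8972", "-n", "2", TARGET],
                        capture_output=True, text=True)   # Windows ping flags
print(result.stdout)
if "needs to be fragmented" in result.stdout:
    print("Jumbo frames are NOT passing end to end - check the vSwitches, "
          "vmkernel/physical NICs and the 3850 ports.")
```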
Damian Gardner

ASKER
You nailed it, Andrew.  After adjusting the NIC parameters, and especially disabling the TCP offload, performance is much improved.  What baffles me is how VMware support did not check this stuff straight away, and instead wasted weeks of our time looking at performance logs and such, just spinning their wheels.  Unbelievable, really.  Anyway - thank you very much, and I'm glad I reached out!  Take care.

Damian
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)

Thanks for your kind words...

Glad I'm better than Cisco, HP and VMware...

Ah, because I'm brilliant, and VMware, well...
Damian Gardner

ASKER
welcome :)