Poor file copy speed from VMware Guests

We have a VMware vSphere system running on HP ProLiant servers, and we're having trouble with slow file copy speeds across the Cisco Catalyst 3850 switch they're connected to. Many people have looked at this and it's still eluding everyone, to the point where we're desperate for a breakthrough. We've had Cisco check the Catalyst 3850 switch, HP check the ProLiant DL360 G4p servers, and VMware is now looking at the ESXi 5.1 vSphere system for the second or third time in two months. Every time we think we've found the problem, the fix doesn't work. So now we're appealing to the open experts forum here to see if anyone has anything new to suggest. We've checked many things - too many to list here - but maybe somebody will think of something we haven't yet.

We're using a variety of tools to gauge performance. One is a LAN Speed Test utility that builds a 50 MB file in memory, transfers it from the source machine to a destination UNC path, and measures throughput. We've also been using iperf, as well as doing a simple file copy from one VM to another and watching the MB/s rate. Here are the latest results from those tests, from just this morning:

DETAILS ON GUESTS USED IN TESTING:

“LACOTS3” VM – Host 192.168.1.17, datastore VNX_DS_1
“LACOFRX” VM – Host 192.168.1.17, datastore VNX_DS_2
“LACOAXT” VM – Host 192.168.1.28, datastore VNX_DS_1


2 VM GUESTS ON SAME HOST (LACOTS3 TO LACOFRX)

•      900 MB file copy: ~70 MB/s
•      LAN Speed Test: 50 MB file; write speed 224 Mbps, read speed 339 Mbps
•      iperf: 10-second test, 12.6 MB transferred at 10.4 Mbps

2 VM GUESTS ACROSS 2 HOSTS (LACOTS3 TO LACOAXT)

•      900 MB file copy: ~16 MB/s
•      LAN Speed Test: 50 MB file; write speed 304 Mbps, read speed 196 Mbps
•      iperf: 10-second test, 152 MB transferred at 127 Mbps

One thing that seems strange is the inverse relationship between the iperf results and the file copy rates: on the same host, the file copy runs at ~70 MB/s (roughly 560 Mbps) while iperf manages only 10.4 Mbps, yet across hosts the copy drops to ~16 MB/s (roughly 128 Mbps), which almost exactly matches the 127 Mbps iperf figure.
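For reference, here is a minimal iperf-style sketch (Python standard library only; the port number and 50 MB payload are arbitrary choices, not values from the tests above) that pushes data over a raw TCP socket and reports Mbps - useful for separating raw network throughput from file copy and SMB overhead:

# tcp_bench.py - minimal iperf-style TCP throughput check (Python stdlib only).
# Hypothetical helper; port 5001 and the 50 MB total are assumptions.
import socket, sys, time

PORT = 5001
CHUNK = b"x" * (1024 * 1024)       # 1 MiB per send
TOTAL_MB = 50                      # roughly the LAN Speed Test file size

def serve():
    # Receiver: accept one connection and drain it.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    received = 0
    while True:
        data = conn.recv(65536)
        if not data:
            break
        received += len(data)
    conn.close()
    print("received %.1f MB from %s" % (received / 1e6, addr[0]))

def send(host):
    # Sender: push TOTAL_MB megabytes and time it (rough figure only;
    # close() can return before the final bytes are acknowledged).
    sock = socket.create_connection((host, PORT))
    start = time.time()
    for _ in range(TOTAL_MB):
        sock.sendall(CHUNK)
    sock.close()
    elapsed = time.time() - start
    print("~%.0f Mbps (%d MB in %.2fs)" % (TOTAL_MB * 8 / elapsed, TOTAL_MB, elapsed))

if __name__ == "__main__":
    send(sys.argv[1]) if len(sys.argv) > 1 else serve()

Run it with no arguments on the receiving VM, then with the receiver's IP address on the sending VM.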

If anyone has any ideas about what could be causing the poor performance, we're all ears. I realize you may need more information before making a suggestion, and I'm happy to clarify.

Thanks for your help.
Damian
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Are these DL360 G4s?

That seems like an old ProLiant server for ESXi 5.1.

Are these local datastores on the servers?

If so, do you have a BBWC module installed on the Smart Array controller, configured as 75% write / 25% read using the ACU (Array Configuration Utility)?

A few things to check: are you using the VMXNET3 interface in ALL the VMs, and not the legacy E1000 interface, which should only be used for the OS install? (Make sure VMware Tools is installed, since it provides the VMXNET3 driver.)
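For reference, a quick way to confirm from inside a Windows guest which virtual adapter it is actually presenting - a rough sketch assuming Python is available on the guest; the match strings are assumptions based on the usual adapter display names (VMXNET3 shows up as a "vmxnet3" adapter, the E1000 as an Intel PRO/1000):

# check_nic.py - list the network adapters inside a Windows guest to spot
# the legacy E1000 vs the paravirtual VMXNET3. Uses wmic, which ships with
# Windows Server 2008 R2.
import subprocess

out = subprocess.check_output(["wmic", "nic", "get", "Name"]).decode("ascii", "replace")
for line in out.splitlines():
    name = line.strip()
    if not name or name == "Name":
        continue
    if "vmxnet3" in name.lower():
        print("%-55s -> OK, paravirtual VMXNET3" % name)
    elif "PRO/1000" in name or "E1000" in name.upper():
        print("%-55s -> WARNING, legacy E1000 emulation" % name)
    else:
        print("%-55s -> other/unknown adapter" % name)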
Seth Simmons, Sr. Systems Administrator, commented:
Yeah... ESX 4 isn't even supported on that server, never mind 5.
I'm surprised you're getting anything out of VMware, since it's not supported.
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
@Seth, the model must be wrong - a typo!

4.x does not even run on it, and 5.x certainly will not!

Damian_Gardner (Author) commented:
Sorry, gentlemen - I misspoke on the ProLiant model - they're actually the G5 series. That is STILL old, I know; the compatibility matrix shows it's supported only through ESX 5.0 U3, and we're running 5.1. VMware has been helping us on this and has not said anything about the old servers, other than that we needed to update all of the drivers and firmware early on in the case. To answer some of your questions: the datastores are currently on an EMC VNX 5200 SAN for both the Windows test machines and the Linux machines. I don't know what a BBWC module is, so I'm guessing we don't have one. VMware Tools is running everywhere except on the Linux machines. VMXNET3 is being used on some of the machines; when I test those, they do no better than the E1000 machines.

I don't believe I mentioned that we also did testing with the iperf and dd utilities on two Linux machines - both on the same host, and across the switch to another host. The results were impressive. iperf between two Linux VMs on the same host clocked 1.8 Gb/s; that went down to 980 Mb/s going over the switch, on a 1 Gb port. We then used dd to test the VNX bandwidth, and it clocked a read speed of 225 Mb/s, which is pretty good. It's only when testing in the Windows OS that performance is slow.
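For the Windows guests, where dd isn't available, a rough Python equivalent of the sequential dd test - a sketch only; the file path and 1 GB size are assumptions, and the read figure will be flattered by the OS cache unless the file is much larger than guest RAM:

# disk_bench.py - crude dd-style sequential write/read test for a Windows guest.
# The path is hypothetical; point it at a drive on the datastore under test,
# and make sure the directory exists first.
import os, time

PATH = r"C:\temp\ddtest.bin"
SIZE_MB = 1024                    # 1 GB, to roughly match the Linux dd runs
CHUNK = b"\0" * (1024 * 1024)

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())          # force the data to disk so the cache doesn't lie
print("write: %.0f MB/s" % (SIZE_MB / (time.time() - start)))

start = time.time()
with open(PATH, "rb") as f:
    while f.read(1024 * 1024):
        pass
print("read:  %.0f MB/s" % (SIZE_MB / (time.time() - start)))

os.remove(PATH)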
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
BBWC (battery-backed write cache) is a module for the Smart Array controller, i.e. for local disks - not relevant to your VNX datastores.

I am surprised VMware Support are offering you support, as your hardware is not certified for use with ESXi 5.1 - HP either did not certify it or it failed certification - which means your server may or may not work correctly.

Given that the Linux VMs perform well and the Windows VMs don't, this would suggest a driver issue in the guest OS?
Damian_Gardner (Author) commented:
OK. How do I make sure the right driver is being used by the OS, then?
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
If you use VMware Tools in Linux and the VMXNET3 interface, do the VMs still suffer poor performance?
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Which Windows OS are the VMs running?
Damian_Gardner (Author) commented:
Windows 2008 R2 is the OS. We have not tried VMware Tools in Linux, and those adapters are E1000 right now - the VMware engineer set them up that way and wanted to see what kind of performance we got outside of Windows.
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Have you looked at Receive Side Scaling (RSS) on the network interface in Windows 2008 R2?
Damian_Gardner (Author) commented:
No. Is that a parameter on the NIC?
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
VMware Support have not been very helpful; this is one of the known issues recorded in their Knowledge Base for 2008 R2.

See here:

Poor network performance or high network latency on Windows virtual machines (2008925)

The E1000 network interface should not be used after installation; it should be replaced with VMXNET3.

VMXNET3 resource considerations on a Windows virtual machine that has vSphere DirectPath I/O with vMotion enabled (2061598)

Have you also looked at the TCP Checksum Offload feature?
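For reference, the global TCP settings in question (RSS, TCP Chimney Offload, autotuning) can be dumped in one go on 2008 R2. The netsh commands below are standard; the Python wrapper is just a convenience sketch:

# tcp_settings.py - dump the Windows 2008 R2 global TCP settings relevant here.
# Run from an elevated prompt if you want to change anything afterwards.
import subprocess

print(subprocess.check_output(["netsh", "int", "tcp", "show", "global"]).decode("ascii", "replace"))

# To experiment, change ONE setting per test pass (hypothetical sequence):
#   netsh int tcp set global rss=disabled
#   netsh int tcp set global chimney=disabled
# Per-NIC checksum offload lives in the adapter's Advanced properties
# (Device Manager), not in netsh.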
Damian_Gardner (Author) commented:
Interesting. I'm checking on the Windows VMs, and Receive Side Scaling is not enabled. I'll try enabling it and test. Standby.
Damian_Gardner (Author) commented:
OK, yeah - these are already VMXNET3. I'll read the rest of it. Thanks, and standby.
Damian_Gardner (Author) commented:
No change after the Receive Side Scaling change, by the way.
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
It should be enabled by default. There are lots of other parameters to check and change.
Damian_Gardner (Author) commented:
Right - I just read that it's enabled by default; I'm surprised it was disabled. I also just saw there's a Jumbo Packet parameter, and it's set to 1500! I'm changing it to 9000...
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Be careful with jumbo frames, because they need to be enabled on every component of your network path!

And that includes the physical network switches!
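A quick way to verify jumbo frames end to end before trusting them: send a don't-fragment ping sized just under the jumbo MTU (8972 bytes of payload plus 28 bytes of IP/ICMP headers makes a 9000-byte packet). The ping flags are standard on Windows; the Python wrapper is a convenience sketch:

# jumbo_check.py - verify jumbo frames actually pass end to end before
# relying on MTU 9000. On Windows, ping -f sets don't-fragment and -l sets
# the payload size; 8972 + 28 bytes of IP/ICMP headers = 9000.
import subprocess, sys

target = sys.argv[1]   # IP of the peer VM or ESXi host
rc = subprocess.call(["ping", "-f", "-l", "8972", "-n", "2", target])
print("jumbo frames OK" if rc == 0 else "blocked - something in the path is still at MTU 1500")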
Damian_Gardner (Author) commented:
You nailed it, Andrew. After adjusting the NIC parameters - and disabling the TCP stack offload especially - performance is much improved. What baffles me is how VMware support did not check this straight away, and instead wasted weeks of our time looking at performance logs, just spinning their wheels. Unbelievable, really. Anyway - thank you very much, and I'm glad I reached out! Take care.

Damian
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Thanks for your kind words...

Glad I'm better than Cisco, HP, and VMware...

Ah, because I'm brilliant, and VMware, well.......
Damian_Gardner (Author) commented:
Welcome :)