VMware Write Latency per Virtual Machine Disk

Hello. I've recently migrated 24 VMs to a Windows Storage Server 2008 R2 (RAID 10) NFS datastore. Previously, I had them spread over 3 ESXi 5.0 servers. Those three servers are still providing resources (RAM, CPU, NIC) to the VMs, but the datastores have been moved from local storage on them to the NFS store. I've been monitoring the IOPS and latency on the new datastore, and I'm wondering if there is a way to lower the write latency for the VMs? They're hovering around 50ms (screenshot attached). That figure was a lot lower when they were split up, and I'm concerned about server response time for my clients. I would also like to know what latency is considered "high" for VMs? Any help would be greatly appreciated! Thank you.
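As a rough frame of reference (these thresholds are a common rule of thumb I'm assuming here, not official VMware figures, and they vary by workload), sustained VM disk latency under ~15ms is generally healthy, 15-30ms is worth investigating, and anything consistently above 30ms is likely to be felt by users. A minimal sketch of that classification:

```python
# Rule-of-thumb classification of sustained VM disk latency (ms).
# Thresholds are illustrative assumptions, not official VMware guidance.
def classify_latency(ms):
    if ms < 15:
        return "healthy"
    elif ms < 30:
        return "investigate"
    else:
        return "likely user-visible"

for sample in (8, 22, 50):
    print(sample, "ms ->", classify_latency(sample))
```

By that yardstick, a sustained 50ms write latency would land well into the "likely user-visible" range.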
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
Have you tested with Jumbo Frames being enabled or disabled?

Did you know EE also has VMware articles? Check out my EE Articles:

HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client

How many disks make up the RAID 10, and what disk technology, SATA?
jmchristyAuthor Commented:
We have not tested with Jumbo Frames enabled, but that was in our notes to try. Our switch does support it; is there anything we need to check on the NAS's network cards before enabling it? We are using Intel Quad Port ET gigabit NICs.

We have an 8-disk RAID 10 of 450GB 15K SAS drives on an H700 RAID controller.
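For context, a back-of-the-envelope capacity estimate for that array (the ~180 IOPS per 15K spindle figure is an assumed ballpark, not a measured value, and RAID 10's write penalty of 2 comes from mirroring every write):

```python
# Rough IOPS estimate for an 8-disk RAID 10 of 15K SAS drives.
# iops_per_disk is an assumed ballpark for a 15K spindle.
spindles = 8
iops_per_disk = 180
write_penalty = 2  # RAID 10 mirrors every write

raw_iops = spindles * iops_per_disk

def effective_iops(read_pct):
    """Effective IOPS for a given read percentage (0-100)."""
    r = read_pct / 100
    return raw_iops / (r + (1 - r) * write_penalty)

print(raw_iops)                   # raw backend IOPS
print(round(effective_iops(70)))  # effective IOPS at a 70% read mix
```

If 24 VMs' combined demand approaches that effective number, write latency will climb regardless of the network protocol in front of the array.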
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
Check that Jumbo Frames are enabled on ALL of them: the NAS, the NICs, and the network switch.
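Once enabled end to end, the usual way to validate jumbo frames from an ESXi host is `vmkping -d -s 8972 <nas-ip>`: the `-d` flag forbids fragmentation, and 8972 is the largest ICMP payload that fits in a 9000-byte MTU after the 20-byte IPv4 header and 8-byte ICMP header. A quick sketch of that arithmetic:

```python
# Largest ICMP echo payload that fits in a given MTU without fragmentation:
# MTU minus 20-byte IPv4 header minus 8-byte ICMP header.
IPV4_HEADER = 20
ICMP_HEADER = 8

def max_ping_payload(mtu):
    return mtu - IPV4_HEADER - ICMP_HEADER

print(max_ping_payload(9000))  # payload for vmkping -d -s <n> at MTU 9000
print(max_ping_payload(1500))  # same calculation at a standard MTU
```

If the don't-fragment ping fails at that size, at least one hop in the path is still running a standard MTU.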

jmchristyAuthor Commented:
We actually just experienced some issues today under heavier workload.

We checked the virtual machines using Performance Monitor, and the average disk queue was spiking to 100% every now and then, sometimes for a few seconds at a time. I'm assuming this indicates that it can't get data quickly enough from the NFS storage device. When we had these same virtual machines on local storage, there were no issues.

It's a 2003 terminal server with 16GB of RAM and 8 vCPUs servicing only 20 people, so I imagine the bottleneck would be the storage. I'm hoping enabling Jumbo Frames helps! Any other suggestions if we still have issues after enabling Jumbo Frames?
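As a rough sanity check on those PerfMon readings (the per-spindle threshold below is a common rule of thumb, not a hard limit), an average disk queue sustained much above ~2 outstanding I/Os per spindle usually points at a storage bottleneck rather than CPU or RAM:

```python
# Flag disk-queue samples exceeding a rule-of-thumb threshold of
# ~2 outstanding I/Os per spindle (illustrative assumption, not a hard limit).
spindles = 8
threshold = 2 * spindles

# Hypothetical PerfMon "Avg. Disk Queue Length" samples
samples = [4, 18, 35, 120, 9]
bottlenecked = [s for s in samples if s > threshold]
print(bottlenecked)
```

With an 8-spindle array behind the datastore, queue lengths in the 20-120 range would sit well past that threshold.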
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
Looks like NFS cannot deliver the performance you require; local disks will certainly always be faster. As you are using Storage Server, I would test an iSCSI Target and see if you get better performance using iSCSI compared to NFS.
jmchristyAuthor Commented:
Yes, we are using Storage Server. I was going to try iSCSI but apparently using a Microsoft iSCSI target and ESXi 5.x hosts isn't supported by VMware. I've enabled jumbo frames and moved the terminal servers back to local storage. I'm going to test it by moving one over at a time.
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
Even though it's not supported, I would test it for performance. You could also try StarWind iSCSI SAN software, which installs on a Windows OS.
jmchristyAuthor Commented:
I've requested that this question be closed as follows:

Accepted answer: 0 points for jmchristy's comment #37703179

for the following reason:

Answer was correct.
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
Could you explain which answer was correct?
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
I've suggested enabling Jumbo Frames, and provided an EE Article to help you enable Jumbo Frames. Could you expand on "Answer was correct"?

jmchristyAuthor Commented:
The terminal servers were moved to local storage, and Jumbo Frames have been enabled on the NFS store and ESXi hosts. The performance issues do not appear to be occurring on the terminal servers now. I will test by moving one terminal server back to NFS storage. If I am able to move all TSs back to NFS, Jumbo Frames would be the solution.
jmchristyAuthor Commented:
NFS storage turned out not to be the right solution for us. Using our chart above as the indicator, we checked the VMs with PerfMon, looking at the average disk queue and current disk queue counters for the physical disk. Most of our VMs were pegged, registering between 20 and 120 for disk queue... sometimes higher.

Enabling Jumbo Frames didn't seem to make a difference, so over the past week we switched to iSCSI storage. We simply uninstalled the NFS Service from our Windows 2008 Storage Server and installed StarWind.

We created 3 separate LUNs, mapped our ESXi hosts to them, and moved all of our VMs to that storage target. Since then, our storage performance view looks MUCH BETTER. All of our VMs are well under the 25ms mark on that same chart posted in our original question.

I'm not sure what our issue was with the NFS storage, because all I've seen on the net is that "NFS storage is great," yadda yadda yadda. Perhaps it was simply an issue with the Windows NFS Service on 2008 R2, and choosing a different NFS solution may have fixed it.

Our organization is small, under 125 users and only 25 VMs. I was honestly shocked at how poor the NFS performance was compared to iSCSI.
jmchristyAuthor Commented:
I came across this blog where someone wrote an excellent comparison of NFS vs. iSCSI.

This is the exact conclusion we came to, just on a larger scale.

Who are these people cheering for NFS storage? It's FAR INFERIOR to iSCSI.
