Solved

VMware Write Latency per Virtual Machine Disk

Posted on 2012-03-09
Last Modified: 2012-03-20
Hello. I've recently migrated 24 VMs to a Windows Storage Server 2008 R2 (RAID 10) NFS datastore. Previously, I had them spread over 3 ESXi 5.0 servers. Those three servers are still providing resources (RAM, CPU, NIC) to the VMs, but the datastores have been moved from local storage on them to the NFS store. I've been monitoring the IOPS and latency on the new datastore, and I'm wondering if there is a way to lower the write latency for the VMs? They're hovering around 50ms (screenshot attached). Obviously, latency was a lot lower when they were split up, and I'm concerned about server response time to my clients. I would also like to know what latency is considered "high" for VMs? Any help would be greatly appreciated! Thank you.
Question by:jmchristy
13 Comments
 
LVL 117

Expert Comment

by:Andrew Hancock (VMware vExpert / EE MVE)
Have you tested with Jumbo Frames enabled or disabled?


Did you know EE also has VMware articles? Check out my EE Article:

HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client

How many disks make up the RAID 10, and what disk technology, SATA?
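For reference, enabling Jumbo Frames on an ESXi 5.0 host comes down to raising the MTU on both the vSwitch carrying the NFS traffic and its VMkernel port. A minimal sketch - the vSwitch and vmk names here are placeholders, so substitute your own:

```shell
# List the vSwitches and VMkernel interfaces to find the ones carrying NFS traffic
esxcli network vswitch standard list
esxcli network ip interface list

# Raise the MTU to 9000 on the vSwitch (vSwitch1 is a placeholder name)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Raise the MTU on the VMkernel port used for NFS as well (vmk1 is a placeholder)
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
```

The switch ports and the NAS NICs must be set to the same MTU, or frames get dropped or fragmented along the way.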
 

Author Comment

by:jmchristy
We have not tested with Jumbo Frames enabled, but that was in our notes to try.  Our switch does support it; is there anything we need to check on the network cards on the NAS before enabling it?  We are using Intel Quad Port ET gigabit NICs.

We have an 8 disk RAID 10, 450GB 15K SAS drives on an H700 RAID controller.
 

Expert Comment

by:Andrew Hancock (VMware vExpert / EE MVE)
Check that Jumbo Frames are enabled on the NAS, the NICs, and the network switch - ALL of them.
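One way to confirm Jumbo Frames work end to end is a don't-fragment ping at jumbo size: 8972 bytes of payload, which is 9000 minus 28 bytes of IP and ICMP headers. A sketch, with a placeholder NAS IP:

```shell
# From the ESXi host: don't-fragment ping at jumbo payload size (9000 - 28 header bytes).
# 10.0.0.50 is a placeholder for the NAS IP address.
vmkping -d -s 8972 10.0.0.50

# From the Windows Storage Server side, the equivalent would be:
#   ping -f -l 8972 <esxi-vmkernel-ip>
```

If the 8972-byte ping fails while a 1472-byte ping succeeds, some hop in the path is still at MTU 1500.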
 

Author Comment

by:jmchristy
We actually just experienced some issues today under a heavier workload.

We checked the virtual machines using Performance Monitor, and the average disk queue length was spiking to around 100 every now and then, sometimes for a few seconds.  I'm assuming this indicates that they can't get data quickly enough from the NFS storage device.  When we had these same virtual machines on local storage, there were no issues.

It's a 2003 terminal server with 16GB of RAM and 8 vCPUs, only servicing 20 people, so I imagine the bottleneck would be the storage.  I'm hoping enabling Jumbo Frames helps!  Any other suggestions if we still have issues after enabling Jumbo Frames?
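For anyone wanting to log the same counters from the command line rather than the Perf Mon GUI, typeperf can sample them on a schedule (the interval and count values below are arbitrary). As a rough rule of thumb, a sustained average queue length well above ~2 per spindle suggests the disks can't keep up:

```shell
:: Sample the PhysicalDisk queue counters every 2 seconds, 30 times,
:: writing to a CSV that can be compared before/after a change.
typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\PhysicalDisk(_Total)\Current Disk Queue Length" -si 2 -sc 30 -o diskqueue.csv
```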
 

Expert Comment

by:Andrew Hancock (VMware vExpert / EE MVE)
Looks like NFS cannot deliver the performance you require; local disks will certainly always be faster. As you are using Storage Server, I would test an iSCSI Target and see if you get better performance using iSCSI compared to NFS.
 

Author Comment

by:jmchristy
Yes, we are using Storage Server. I was going to try iSCSI, but apparently using a Microsoft iSCSI target with ESXi 5.x hosts isn't supported by VMware. I've enabled Jumbo Frames and moved the terminal servers back to local storage. I'm going to test it by moving one over at a time.
 

Expert Comment

by:Andrew Hancock (VMware vExpert / EE MVE)
Even though it's not supported, I would test it for performance. You could also try StarWind iSCSI SAN software, which installs on a Windows OS.
 

Author Comment

by:jmchristy
I've requested that this question be closed as follows:

Accepted answer: 0 points for jmchristy's comment #37703179

for the following reason:

Answer was correct.
 

Expert Comment

by:Andrew Hancock (VMware vExpert / EE MVE)
Could you explain which answer was correct?
 

Accepted Solution

by:
Andrew Hancock (VMware vExpert / EE MVE) earned 500 total points
I suggested enabling Jumbo Frames and provided an EE Article to help you enable them. Could you expand on "Answer was correct"?
 

Author Closing Comment

by:jmchristy
The terminal servers were moved to local storage, and Jumbo Frames has been enabled on the NFS store and the ESXi hosts. The performance issues do not appear to be occurring on the terminal servers now. I will test by moving one terminal server back to NFS storage. If I am able to move all TSs back to NFS, Jumbo Frames would be the solution.
 

Author Comment

by:jmchristy
NFS storage turned out not to be the right solution for us.  With our chart above as the indicator, we checked the VMs using Perf Mon, looking at the average disk queue and current disk queue counters for the physical disk.  Most of our VMs were pegged, registering between 20 and 120 for disk queue length... sometimes higher.

Enabling Jumbo Frames didn't seem to make a difference - so over the past week we switched to iSCSI storage.  We simply uninstalled the NFS service from our Windows 2008 Storage Server and installed StarWind.

We created 3 separate LUNs, mapped our ESXi hosts to them, and moved all of our VMs to that storage target.  Since then, our storage performance view looks MUCH BETTER.  All of our VMs are well under the 25ms mark on that same chart posted in our original question.
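For anyone following the same path, mapping ESXi 5.x hosts to an iSCSI target looks roughly like this from the command line. The adapter name and target IP below are placeholders, not values from this setup:

```shell
# Enable the software iSCSI initiator on the ESXi host
esxcli iscsi software set --enabled=true

# Point dynamic discovery at the iSCSI target
# (vmhba33 and 10.0.0.50 are placeholders - check "esxcli iscsi adapter list")
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.50:3260

# Rescan so the new LUNs appear, then create VMFS datastores on them
esxcli storage core adapter rescan --adapter=vmhba33
```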

I'm not sure what our issue was with the NFS storage, because all I've seen on the net is that "NFS storage is great", yadda yadda yadda.  Perhaps it was simply an issue with the Windows NFS service on 2008 R2, and choosing a different NFS solution may have fixed it.

Our organization is small - under 125 users and only 25 VMs.  I was honestly shocked at how poor the NFS performance was compared to iSCSI.
 

Author Comment

by:jmchristy
I came across this blog where someone wrote an excellent comparison of NFS vs. iSCSI.

This is the exact conclusion we came to, just on a larger scale.

Who are these people homering for NFS storage?  It's FAR inferior to iSCSI!

http://blogs.jakeandjessica.net/post/2012/03/11/StarWind-iSCSI-vs-Microsoft-iSCSI-Part-1.asp
