johnnyt29 asked:

ESXi 5.0 VM Performance problems - suspect RAID controller or hard drives

Am using VMware ESXi 5.0 with a recently purchased Z68-based motherboard, an Intel i7 CPU (hyperthreading enabled), 16GB DDR3 RAM and an LSI MegaRAID 9265-8i RAID controller.

Plenty of processing horsepower, yet with only 2 VMs running - a 1 vCPU Win 7 32-bit with 1.5GB RAM and a 1 vCPU Windows Home Server 2011 with 3GB RAM - system response time seems remarkably slow.

The vSphere client is reporting that the VMs are using only 400-600 MHz out of the ~11,000 MHz available. Adding another core (2 vCPUs) doesn't make any noticeable difference. This suggests to me a lot of waiting for disk I/O.

Looking at disk performance stats after a server start-up and boot cycle for both VMs, the max read rate is 7,650 KBps and the average read rate is 4,440 KBps. Latency is 20 ms on average and goes up to 60 ms.

The LSI MegaRAID 9265-8i controller I'm using supports SATA at 6Gb/s and has 1GB of cache. Although my hard drives are not the fastest, they are SATA3, and I would have expected average latency closer to 12 ms and perhaps more throughput. Certainly I would have expected better overall performance out of my ESXi server.
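For reference, here's the back-of-envelope math behind my 12 ms expectation - a minimal sketch assuming typical 7,200 RPM mechanics (the seek time and transfer size are assumed figures, not measured from my drives):

```python
# Rough per-IO service-time model for one 7,200 RPM SATA drive.
# avg_seek_ms and transfer_ms are assumptions, not measurements.

avg_seek_ms = 8.5                    # typical 7,200 RPM desktop drive (assumed)
rotational_ms = 60_000 / 7200 / 2    # half a revolution on average ~= 4.17 ms
transfer_ms = 0.5                    # ~32 KB at ~60 MB/s sustained (assumed)

service_time_ms = avg_seek_ms + rotational_ms + transfer_ms
iops_per_drive = 1000 / service_time_ms

print(f"Per-IO service time: {service_time_ms:.1f} ms")       # ~13 ms
print(f"Random-read IOPS per drive: {iops_per_drive:.0f}")    # ~76

# RAID 1 can service reads from either mirror, so random reads
# roughly double; writes stay at single-drive speed.
print(f"RAID 1 random-read IOPS (2 drives): {2 * iops_per_drive:.0f}")
```

By that math, ~150 random-read IOPS is roughly the ceiling for a two-drive SATA mirror, and 12-13 ms is about the best per-IO latency the mechanics allow; anything above that is queuing.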

Problem is, I'm not sure what I should be getting, or where to look for performance problems other than the disk subsystem.

Ideally I would like stats others are getting from the same or a similar class of controller, plus advice on where to look for more speed and/or what could be causing the performance bottleneck(s).
ASKER CERTIFIED SOLUTION by PSGITech (member-only content)
Andrew Hancock (VMware vExpert PRO / EE Fellow):

How many disks?

SATA is slow, and when the I/O is virtualised it is even slower.
johnnyt29 (Asker):

Am using RAID 1 with two 1TB drives for now. When I need more space I was figuring I would go to RAID 5, although I read it's a little slower.
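For my own accounting, a quick sketch of why RAID 5 small writes lag RAID 1, using the standard write-penalty bookkeeping (the per-drive IOPS figure is an assumption):

```python
# Effective random-write IOPS using the standard RAID write-penalty
# accounting. drive_iops is an assumed figure for a 7,200 RPM SATA drive.

drive_iops = 80

def effective_write_iops(n_drives: int, write_penalty: int) -> float:
    """Aggregate IOPS divided by back-end IOs generated per host write."""
    return n_drives * drive_iops / write_penalty

# RAID 1: each write hits both mirrors -> penalty 2.
# RAID 5: a small write reads data + parity, then writes both -> penalty 4.
print(f"RAID 1, 2 drives: {effective_write_iops(2, 2):.0f} write IOPS")  # ~80
print(f"RAID 5, 3 drives: {effective_write_iops(3, 4):.0f} write IOPS")  # ~60
```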

Although I did pay the premium for a good RAID controller because I think it's a key piece, this is for a home lab and budget is a big consideration - SAS drives cost quite a bit more.

I do have about 8 VMs configured, a few of which have 4GB RAM, so I want to keep RAM for those. I realize overprovisioning is possible and I could add some RAM to them, but the stuff running on those machines doesn't need that much RAM. Also, with VMware's RAM reclamation algorithms, I assume I lose some guest disk caching anyway, so more RAM won't help that way(?)

The particular controller I have has a feature coming out soon called CacheCade, which will allow me to add an SSD as a cache between the drives and the OS, so that should help. But I want to be sure I'm not looking at a hardware or config issue before I count on that for speed improvements. Plus, the feature is not out yet, isn't that cheap (I may not get it), and I don't plan to go with it the first day it's out, since I do have a day job and it isn't configuring RAID arrays and VMware servers...
SOLUTION (member-only content)
johnnyt29 (Asker):

...meant to say I have two 2TB drives...

I guess I'll look at the cost of getting (likely smaller) SAS drives and reassigning the ones I have now to backups and infrequently accessed data. I might even stripe instead of mirror and rely on regular backups for data protection. Anything I should beware of if I do that?
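To put a number on the striping risk, a quick sketch - the per-drive annual failure rate here is an assumed figure, not from my drives' datasheets:

```python
# RAID 0 loses everything if any one drive fails.
# afr (annual failure rate per drive) is an assumption.

afr = 0.03

def raid0_annual_loss(n_drives: int, afr: float) -> float:
    """P(at least one of n independent drives fails within a year)."""
    return 1 - (1 - afr) ** n_drives

for n in (1, 2, 4):
    print(f"{n} drive(s) striped: {raid0_annual_loss(n, afr):.1%} data-loss risk/year")
# 1 drive: 3.0%, 2 drives: 5.9%, 4 drives: 11.5%
```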

Didn't really get an answer on the read rates and latency I should be getting (assuming those are good indicators to use). I'd be particularly interested in those specs for striped 7,200 RPM SAS drives, if anyone has them, because I'm not sure I want to pay the 12x premium per GB (over SATA) for 15k SAS.
You will not get those statistics in VMs because the I/O is virtualised.

The figures I got are from the vSphere client, under the Performance tab.
Can some folks post their SAS- or SATA-based ESXi disk read and write rates and latency (avg and max) from the vSphere client or vCenter? Ideally with the host under some load, e.g. with 1-2 VMs booting up, or better yet with the host itself restarting.

My avg latency can be a reasonably low (I think) 5 ms or less once the system has been under light load for a while, but I'm seeing disk latency reach 150 ms when I boot up the host and 2 VMs. I have nothing to compare it to.
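For what it's worth, the boot-storm spike is at least consistent with simple queueing math (Little's law), using the assumed ~150-160 IOPS ceiling from my earlier sketch:

```python
# Little's law: IOs in flight = throughput (IOPS) x latency (seconds).
# iops_ceiling is the assumed random-read ceiling for a 2-drive SATA mirror.

iops_ceiling = 160

def outstanding_ios(iops: float, latency_ms: float) -> float:
    return iops * latency_ms / 1000

print(f"5 ms avg latency   -> ~{outstanding_ios(iops_ceiling, 5):.1f} IOs in flight")
print(f"150 ms avg latency -> ~{outstanding_ios(iops_ceiling, 150):.0f} IOs in flight")
# A host plus 2 VMs booting can easily keep 20+ random reads queued,
# so 150 ms spikes look plausible on two SATA spindles.
```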