I have a lab environment with a Dell server that has 2 CPUs, 10 GB of RAM, and ESX 3.5 installed. I have some users connecting and doing some testing, nothing big, just Microsoft Office and a few other simple apps. I have between 8 and 10 VMs running all the time; most of the time they are idle, and at peak maybe 2 to 4 of them have connected users (so up to 4 VMs actively in use and another 4 to 6 running but idle).
The performance of those machines is sometimes pretty bad, and even managing the ESX server through VirtualCenter is really slow.
I was wondering whether CPU over-utilization might be the cause of the issue.
In the current situation we are running 10 virtual machines on a host that has only 2 physical processors.
Because of this, only 2 virtual machines can execute on a physical processor at any given instant.
The other 8 virtual machines have to wait until a processor is free.
Does that make sense?
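To put rough numbers on that reasoning, here is a small sketch (my own arithmetic, not from any VMware tool) of the vCPU-to-pCPU overcommit implied by the setup described above:

```python
# Rough overcommit arithmetic for the setup above (a sketch, not a
# measurement): 10 single-vCPU VMs scheduled onto 2 physical processors.
total_vcpus = 10      # 10 VMs x 1 vCPU each
physical_cpus = 2     # as reported for the host

overcommit_ratio = total_vcpus / physical_cpus
print(f"overcommit ratio: {overcommit_ratio:.0f}:1")   # 5:1

# If all 10 vCPUs demanded CPU at once, each could get at most this
# fraction of a physical processor from the scheduler:
fair_share = physical_cpus / total_vcpus
print(f"worst-case share per vCPU: {fair_share:.0%}")  # 20%
```

In practice most of the idle VMs demand almost no CPU, so the effective contention is lower than 5:1, but when several VMs run work at the same time they do have to queue for the 2 processors.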
I have run "esxtop" and attached the results as a screenshot. I think the performance looks pretty sluggish; in particular I am concerned about the %wait time (milliseconds) for accessing the CPU.
The names of the VMs have been removed; basically each line represents a VM that is powered on.
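For anyone reading the screenshot, here is a small sketch of how I understand the ready-time percentage maps to actual wait time. It assumes esxtop's default 5-second refresh interval; the 20% figure is illustrative, not a value from my screenshot:

```python
# Sketch: convert an esxtop ready-time percentage into milliseconds of
# CPU-ready wait per sampling interval. Assumes the default 5-second
# esxtop refresh; the example percentage is illustrative only.
def ready_ms(rdy_percent: float, interval_s: float = 5.0) -> float:
    """Milliseconds a vCPU spent ready-but-waiting during one interval."""
    return rdy_percent / 100.0 * interval_s * 1000.0

# e.g. a VM showing 20% ready time over a 5 s sample:
print(ready_ms(20.0))  # 1000.0 ms spent waiting for a CPU in that interval
```

A rule of thumb I have seen quoted is that sustained ready time above roughly 10% per vCPU starts to hurt, but I would welcome corrections on that.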
Other information about the server:
What type of CPUs do you have (single, dual or quad core, what speed)? -> Single CPU - Processor, 80556K, Xeon Woodcrest, 5130, LGA771, Burn 2
The server is a Dell PowerEdge 1950 / Service tag: 2H7QQC1
What type of disks do you have (SAS, SATA)? -> The VMs are on local storage and the disk is Serial Attached SCSI, 3, 10K, 3.5
Do you have a RAID configuration (RAID1, 1+0, 5)? -> Yes
Does the RAID controller have battery-backed write cache? -> No
How many vCPUs did you assign to the VMs? -> 1 vCPU per machine