silterra (Malaysia) asked:

Poor Exchange 2003 performance on ESX 4.1

I have recently moved 900 mailboxes from this physical server:
Dell PE1955 blade, 2 x dual-core Intel Xeon 5050 (2x2MB cache), 3.00GHz

to this VM:
Dell PE R710, 4 x vCPU, Intel Xeon X5560 (8MB L3, 2x4MB), 2.8GHz

CPU utilization on the physical server with >1000 mailboxes was 20-40% (during peak hours).
CPU utilization on the VM with 900 mailboxes is now 100% (during peak hours).
I have stopped the AV entirely, and it is still 100% (during peak hours).
Storage IOPS/network/latency/etc. were all healthy.

Does anyone know how to fix this?


Exchange version:
2003 STD SP2 (on physical and VM servers)

OS version:
Windows 2003 ENT SP2 on physical server
Windows 2003 STD R2 SP2 on VM server

VMware version:
ESX 4.1
Can you post any of the performance-monitoring data from vSphere for this virtual machine? Task Manager inside the VM can give a skewed view of what's really going on. A few other points to consider:

1. How many vCPUs have you assigned to the VM?
2. How many other VMs are on the R710?
3. If resource pools exist, what is your resource pool structure?
1. 4 vCPUs assigned to the VM.
2. As far as I can see, the host averages <50% CPU utilization; please see the attached images.
3. 8 hosts, all <50% CPU utilization, DRS enabled, fully automated; resources are not the problem.
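One thing worth checking in a case like this: low host CPU with a pegged multi-vCPU guest often points to CPU ready time (the VM's vCPUs waiting to be co-scheduled). vSphere exposes a "CPU Ready" summation counter in milliseconds per sample interval; converting it to a %RDY figure is a small calculation. A minimal sketch, assuming the standard 20-second real-time sample interval (the function name and the per-vCPU normalization are illustrative, not vSphere API calls):

```python
def cpu_ready_percent(ready_ms, interval_s=20, num_vcpu=1):
    """Convert the vSphere 'CPU Ready' summation value (ms of ready
    time accumulated per sample interval) into a percentage,
    averaged across the VM's vCPUs."""
    interval_ms = interval_s * 1000
    return ready_ms / (interval_ms * num_vcpu) * 100

# Example: 4000 ms of ready time in a 20 s sample on a 4-vCPU VM
print(cpu_ready_percent(4000, num_vcpu=4))  # -> 5.0
```

A commonly cited guideline is that sustained values above roughly 5% per vCPU merit investigation; on a busy cluster, a 4-vCPU guest can accumulate significant ready time even when aggregate host CPU looks healthy.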
yelbaglf replied:
Things to look at...

1) Ensure that your total number of vCPUs assigned to all the virtual machines is equal to or less than the total number of cores on the ESX host.
2) What type of disks are you using? SATA II or SCSI? Fibre Channel or iSCSI?
3) Is Exchange on its own datastore/LUN?  What about the page file, log files, and database?
4) What type of SCSI controller are you using for your virtual disk?
5) Disk read/write performance?
6) What does the network utilization look like during this time?
1. The host server has 8 physical cores; does that mean I can only have 8 VMs with 1 vCPU each? That doesn't sound right to me...
2. iSCSI Dell EqualLogic PS6000X, 16 x 600GB 10k SAS.
3. Exchange shares the datastore/LUN with many other VMs, but that is not a problem on the EqualLogic PS6000X, as all disks are utilized and auto-managed by the storage firmware.
4. Dell EqualLogic PS6000X.
5. R/W 21%/79%, I/O load medium, IOPS R/W 1200/1500, latency R/W 11ms/<1ms; in general the storage is healthy.
6. Please see the attached picture.
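For context, those IOPS figures can be sanity-checked against what the spindles themselves can deliver once a RAID write penalty is factored in. A rough back-of-envelope sketch — the ~140 IOPS per 10k spindle figure and the RAID penalty of 4 are generic assumptions, not EqualLogic-specific values, and controller caching can push real-world numbers well above this floor:

```python
def usable_iops(spindles, iops_per_spindle, read_frac, write_penalty):
    """Rough front-end IOPS estimate for a RAID set: raw spindle IOPS
    divided by the blended read/write cost, where each write costs
    `write_penalty` back-end operations."""
    raw = spindles * iops_per_spindle
    write_frac = 1 - read_frac
    return raw / (read_frac + write_frac * write_penalty)

# 16 x 10k SAS (~140 IOPS each), 21% reads, assumed write penalty of 4
print(round(usable_iops(16, 140, 0.21, 4)))  # ~665
```

With a write-heavy 21/79 mix, the spindle-limited estimate lands well under the reported ~2,700 IOPS, so the array cache is likely doing a lot of work; a sustained write burst could still make latency spike.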
If you put it in place and you have issues, you can always change it back. Options for this, if things go south:

1) Copy your startup config to your running config (you'll want to test the changes in the running config first, so your startup config stays unchanged until you write the changes).
2) Remove the service policy from the interface, make the changes, and add it back.

I apologize for that last post; please disregard it, as I was posting in the wrong window. :-)
Thank you for the paravirtual SCSI controller suggestion; I will need to study how to get that done first.
In the meantime, I have created a resource pool for the Exchange VM, reserving the equivalent of 4 vCPUs of CPU resource, and it seems to have improved things a lot. I'm monitoring its performance for a few days now.
Thank you.
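Sizing that kind of reservation is straightforward arithmetic: a full reservation for N vCPUs is N times the core clock, expressed in MHz (the unit vSphere uses for a pool's CPU reservation). A minimal sketch, using the 2.8 GHz X5560 clock from the specs above (the function name is illustrative):

```python
def cpu_reservation_mhz(num_vcpu, core_ghz):
    """MHz to reserve so the VM's vCPUs are fully backed
    by physical clock cycles."""
    return round(num_vcpu * core_ghz * 1000)

# 4 vCPUs on 2.8 GHz cores
print(cpu_reservation_mhz(4, 2.8))  # -> 11200
```

Note that a full reservation guarantees the cycles but also withholds them from other workloads on the host, so it treats the symptom (scheduling contention) rather than the cause.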
You are most welcome!  I think you'll be most pleased with it!
I have resolved it myself.