• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 2798

VMware CPU Allocation Compared to Physical Cores?

I am spec'ing out a new VMware environment running vSphere 5 (all new hardware). I am trying to figure out how many VMs I can run per physical host and what sort of growth room I will have down the road. Accounting for RAM and SAN storage space is simple and straightforward. What I am a little confused about is processor allocation. I am looking at the following server configuration:

Dell PowerEdge R710
96 GB RAM (12x8GB 1333MHz)
2x Intel Xeon E5620 (quad-core 2.4GHz, 1066MHz) (8 cores)

There are two CPUs, each quad-core, so that is 8 physical cores. Now say I want to run 11 VMs on this host, with the following vCPU counts in the servers:

8 servers with 1 vCPU
2 servers with 2 vCPUs
1 server with 4 vCPUs

These servers total 16 vCPUs, while physically I only have 8 cores in the host. Obviously, if some of these systems hit maximum CPU utilization there will be a bottleneck (CPU ready/wait time), but I don't expect that with the types of systems I'm running. Is this a bad or unsupported setup? Why wouldn't I want to do this?
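
To spell out the math (a quick Python sketch; the VM mix and core counts are just the figures above):

# vCPU-to-physical-core overcommit for the proposed host.
vm_groups = [
    (8, 1),  # 8 servers with 1 vCPU
    (2, 2),  # 2 servers with 2 vCPUs
    (1, 4),  # 1 server with 4 vCPUs
]
physical_cores = 2 * 4  # two quad-core Xeon E5620s

total_vcpus = sum(count * vcpus for count, vcpus in vm_groups)
print(f"Total vCPUs: {total_vcpus}")                              # 16
print(f"Overcommit ratio: {total_vcpus / physical_cores:.1f}:1")  # 2.0:1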

In our current virtual environment, which we are upgrading from (running the Virtual Iron platform), some of the virtual hosts have CPUs overallocated at almost 3 to 1 vCPUs versus physical cores. Right now *knock on wood* we almost never see CPU bottlenecks, but I don't want to continue bad practices.

Thank you!
Asked by: AIC-Admin
3 Solutions
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
We have seen we can run 5 to 6 VMs per core.

CPU cores are not often the bottleneck, but MEMORY is. Check which edition of vSphere 5.0 you require because of the vRAM entitlement per host license.
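
As a rough sanity check on that (a minimal sketch; the per-license entitlement figures are illustrative assumptions, so confirm them against VMware's vSphere 5 licensing documentation):

# Hypothetical vSphere 5 vRAM pool check. Entitlement figures below are
# illustrative assumptions; verify against VMware's published licensing.
VRAM_PER_LICENSE_GB = {
    "Standard": 32,
    "Enterprise": 64,
    "Enterprise Plus": 96,
}

sockets = 2              # one vSphere license per physical CPU socket
configured_vram_gb = 44  # total vRAM across powered-on VMs (assumed figure)

for edition, entitlement in VRAM_PER_LICENSE_GB.items():
    pool = sockets * entitlement  # pooled vRAM from this host's licenses
    status = "fits" if configured_vram_gb <= pool else "exceeds the pool"
    print(f"{edition}: {configured_vram_gb} GB configured vs {pool} GB pool -> {status}")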
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
11 VMs on a dual-processor quad-core host is no problem; we have had 56 VMs on servers of this spec, running SQL, IIS, web clusters, DCs, and Exchange.

11 VMs is nothing.
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Also note: only add additional vCPUs if the applications and OS can actually use additional processors. vSMP can sometimes slow down a virtual machine because of the vSMP scheduler. My advice: check, add 1 vCPU at a time, and reduce it if performance is no better.

 
Lee W, MVP, Technology and Business Process Advisor, commented:
I'd rephrase the above to note that everyone has a different environment, and yours may require a different config. That said, most typical environments won't find the CPU to be the major bottleneck. Further, you should be able to prioritize VMs to ensure the most critical ones get the most access to CPU time.
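
To illustrate that prioritization (a minimal sketch of how proportional CPU shares split contended CPU time; the VM names and share values are hypothetical, and real ESXi scheduling involves more than this):

# Proportional-share split of CPU time among contending VMs.
# ESXi's defaults are roughly Low=500, Normal=1000, High=2000 shares per vCPU.
vm_shares = {
    "critical-sql": 2000,   # hypothetical "High" VM
    "web-frontend": 1000,   # hypothetical "Normal" VM
    "test-box": 500,        # hypothetical "Low" VM
}

total = sum(vm_shares.values())
for vm, shares in vm_shares.items():
    print(f"{vm}: {shares / total:.0%} of contended CPU time")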
 
coolsport00 commented:
Official VMware design guidance references two figures for VMs per core: 3-5 VMs per core on a dual-core CPU, or 6-8 VMs per core on a quad-core. So don't size CPU the way you size RAM. You do spec out physical host RAM on a 1-to-1 basis, meaning if you allocate 4GB RAM for a VM, that is exactly 4GB taken off of the physical host RAM. Virtual machine CPUs (vCPUs) are allocated differently - at about the VMs-per-core ratios I listed above. I personally run a greater ratio than what VMware specs... probably about 8 or more per core.
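
Putting those per-core ranges against the proposed host (a quick sketch using the ranges above; actual capacity depends entirely on workload):

# Rough capacity estimate from the VMs-per-core guidance above.
physical_cores = 8       # 2x quad-core Xeon E5620
vms_per_core = (6, 8)    # quad-core guidance cited above

low = vms_per_core[0] * physical_cores
high = vms_per_core[1] * physical_cores
print(f"Estimated capacity: {low} to {high} VMs per host")  # 48 to 64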

One of many benefits of VMware virtualization is overcommitment. Most of the time, VMs on ESX/i don't utilize resources (RAM or CPU) continuously, so resources are transferred to the VMs that need them while other VMs sit idle. Two great resources I *highly* recommend looking at are VMware's Resource Management Guide and the CPU Scheduler whitepaper:
Res Mgmt Guide: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_resource_mgmt.pdf
CPU Scheduler whitepaper: http://www.vmware.com/files/pdf/techpaper/VMW_vSphere41_cpu_schedule_ESX.pdf

I shared the version 4.1 documents because I personally recommend going with 4.1 for the time being over vSphere 5. IMO, you have better options for utilizing memory resources.

Hope that helps...let us know if you have more questions.

Regards,
~coolsport00

 
jrhelgeson commented:
I never recommend allocating a single core to a box; I always provide dual cores or better. I've had too many problems where we've maxed out a single core and the machine just bogged down. Dual cores let you max out one core on a process while the second stays available to keep things responsive.

If it is a server, I usually just give it 4 cores and be done with it. If it needs more processor resources, I'll give it more - but you really lose nothing by giving a system 'too much proc', as it is all used on an as-needed basis across all VMs.

Keep in mind that your two quad-core processors also support Hyper-Threading, which gives you 16 logical cores.
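
In scheduling terms (a quick sketch; the 25% uplift is an illustrative assumption, since Hyper-Threading adds scheduler slots but nowhere near doubles real throughput):

# Logical cores from Hyper-Threading vs. effective throughput.
physical_cores = 2 * 4               # 2x quad-core Xeon E5620
logical_cores = physical_cores * 2   # HT exposes 2 threads per core

ht_uplift = 0.25                     # assumed gain; varies widely by workload
effective_cores = physical_cores * (1 + ht_uplift)

print(f"Logical cores (scheduler slots): {logical_cores}")    # 16
print(f"Effective core-equivalents: {effective_cores:.0f}")   # ~10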
 
AIC-Admin (Author) commented:
Thank you everyone for all of the assistance! It's really looking like physical CPUs/cores versus vCPUs is not a concern for what I am looking to run. I believe my next area of trouble is iSCSI versus FC. Thanks!