Best way to set up CPUs and sockets for VMware


I have a VMware ESXi 5.5 system going into production shortly. The hardware is a Dell PE R720 with 96 GB RAM and dual 8-core Xeon CPUs with hyperthreading, which I believe gives me a theoretical 32 vCPUs available for the VMs. I am running an Exchange 2010 VM, a SQL Server 2012 VM, a DC, a Windows 2008 file server, and a couple of other VMs on this server, so it's pretty well loaded.

My question is: what is the most efficient way, within the individual VMs, to configure cores and sockets so that no single VM monopolizes the available vCPUs? As an example, say I give the SQL Server VM 8 CPUs, and in the VM settings I configure that as 2 sockets with 4 cores per socket. To what extent does that tie up available virtual CPUs, cores, or sockets? Is it better to specify more sockets or fewer? And is there a way to explicitly tell the VMs to "share" rather than "grab"?

A related concern is whether specifying 8 CPUs for the SQL Server VM could, depending on the configuration, lead to a situation where the VM says something like "I need 8 vCPUs to myself and only 6 are available, so I'm going to pause all operations until a full 8 become available."

Apologies for the somewhat simplistic examples, but hopefully I'm communicating this with some degree of clarity. Ultimately I'm hoping there are a few guidelines for setting up the VMs to use the available vCPUs as efficiently as possible.

Many thanks in advance for your thoughts.
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Always allocate sockets (vCPUs) unless you have a specific license restriction based on cores.

Whether you allocate sockets or cores, performance is the same. Do not over-allocate vCPUs: start small, with a single vCPU, and add vCPUs only as needed. Very few VMs in our experience require more than two!

vSMP (virtual SMP) can hurt virtual machine performance when you add too many vCPUs to a virtual machine that cannot use them effectively. Examples of servers that can use vSMP correctly: SQL Server, Exchange Server.

It's true that many VMware administrators think adding lots of processors will increase performance. Wrong! (And because they can, they just go silly.) There is sometimes confusion between cores and processors, but what we are adding here is additional processors in the virtual machine.

So 4 vCPUs makes the VM a 4-way SMP (quad-processor) server. If you have an Enterprise Plus license you can add 8 (and only if you have the correct OS license will the OS recognise them all).

If the applications can take advantage of them (e.g. Exchange, SQL Server), adding additional processors can, and may, increase performance.

So the usual rule of thumb is: try 1 vCPU, then try 2 vCPUs, and knock back to 1 vCPU if performance suffers. Only use vSMP if the VM can take advantage of it.

Example: a VM with 4 vCPUs allocated.

My simple layman's explanation of the scheduler:

Because you have assigned 4 vCPUs to this VM, the VMware scheduler has to wait until 4 cores are free and available. To do this, it pauses the first cores to become free until the 4th is available, and during that timeframe the paused cores are not available for other work. This is a simplistic view, but the bottom line is that adding more vCPUs to a VM may not give you the performance benefit you expect unless the VM and its applications are optimised for additional vCPUs.
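The "wait for N free cores" idea above can be sketched as a toy model. To be clear: this is a deliberate simplification for illustration only; modern ESXi actually uses "relaxed" co-scheduling rather than a hard all-or-nothing pause, and the core counts below are made-up example numbers.

```python
# Toy model of strict co-scheduling, as described above.
# NOT the real ESXi algorithm (which is "relaxed" co-scheduling);
# purely illustrative.

def can_schedule(vm_vcpus, free_cores):
    """A strictly co-scheduled VM runs only when enough cores are free at once."""
    return free_cores >= vm_vcpus

# Example: 16 physical cores, 10 busy, so 6 are free.
free = 6
print(can_schedule(4, free))  # True: a 4-vCPU VM can be dispatched
print(can_schedule(8, free))  # False: an 8-vCPU VM must wait for 8 free cores
```

This is why a fat 8-vCPU VM on a busy host can spend time waiting (high "CPU ready") even when some cores are idle.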

See also VMware's document on the CPU scheduler, and this related EE question.
Hyperthreading does not give you an extra full CPU core. If you exceed 2 vCPUs per physical core (hyperthreaded or not), you will see slowdowns.

Deep in Intel's documentation you will see that hyperthreading is not a 50:50 split; it is more like 80:20, depending on the load on each side. You can allocate 32 vCPUs, but you must support vNUMA to get past 8 and keep performance, and total performance will only be roughly equal to that of 16 vCPUs.
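The arithmetic behind that last sentence can be made explicit. This is a back-of-envelope model only, using the rough 80:20 figure quoted above (it is not an exact Intel specification, and real throughput varies by workload):

```python
# Back-of-envelope hyperthreading model for the host described in the
# question: 2 sockets x 8 cores with HT. Illustrative numbers only.

def logical_cpus(sockets, cores_per_socket, ht=True):
    """Logical CPUs the hypervisor can schedule on."""
    return sockets * cores_per_socket * (2 if ht else 1)

def effective_core_throughput(sockets, cores_per_socket):
    # HT does not double capacity: a busy core's two threads share it
    # unevenly (roughly 80:20 under load), summing to about one core.
    return sockets * cores_per_socket

print(logical_cpus(2, 8))               # 32 schedulable logical CPUs
print(effective_core_throughput(2, 8))  # but only ~16 cores of real throughput
```

So "32 vCPUs available" is a scheduling ceiling, not 32 cores' worth of compute.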

You should keep 1 core per socket in the VM; then you can hot-plug any number of CPUs later.
Since vCPUs are allocated to a specific VM, there is no way for other VMs to "grab" them and lock that VM out.
The same applies to the SQL Server configuration.
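One point worth making explicit: the sockets-versus-cores choice only changes the topology presented to the guest; the scheduler cares about the total vCPU count. A trivial sketch (example topologies only):

```python
# The guest-visible topology is sockets x cores-per-socket; the total
# vCPU count is what the ESXi scheduler works with.

def vcpu_count(sockets, cores_per_socket):
    return sockets * cores_per_socket

# Two ways to build the asker's 8-CPU SQL Server VM:
print(vcpu_count(8, 1))  # 8 vCPUs: eight 1-core sockets (hot-plug friendly)
print(vcpu_count(2, 4))  # 8 vCPUs: two 4-core sockets (same scheduler load)
```

Both configurations place the same load on the scheduler; the 1-core-per-socket layout is preferred above mainly for hot-plug flexibility and licensing reasons.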

The maximum number of vCPUs per core is 20-64, depending on the ESXi version. Once you exceed that, a new VM will not start (and well before that point, the running VMs will be slow as hell).
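That admission limit can be sketched as a simple check. The 25:1 ratio below is an assumed example value for illustration; check the configuration maximums for your specific ESXi version, since the comment above cites a 20-64 range:

```python
# Illustrative admission check for the per-core vCPU limit described
# above. The ratio is an assumed example, not a spec for any ESXi version.

MAX_VCPUS_PER_CORE = 25  # assumed example value

def can_power_on(new_vm_vcpus, running_vcpus, physical_cores,
                 limit=MAX_VCPUS_PER_CORE):
    """A new VM powers on only if total vCPUs stay within cores * limit."""
    return running_vcpus + new_vm_vcpus <= physical_cores * limit

# 16 physical cores at an assumed 25:1 limit -> ceiling of 400 vCPUs.
print(can_power_on(8, 390, 16))   # True:  398 <= 400
print(can_power_on(16, 390, 16))  # False: 406 >  400
```

In practice you would hit severe CPU-ready contention long before this hard limit.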

I suspect you are mixing up the terms vCPU and CPU.
jkirman (Principal, Author) commented:
Thanks much to you both for your explanations and the extensive reference links. I will research what you've provided, but I get the basics of what you're both saying: start small, test, and understand how the scheduler really works. Cheers and many thanks again.
Question has a verified solution.
