I have a VMware ESXi 5.5 virtual system going into production shortly. The hardware is a Dell PE R720 with 96 GB RAM and dual 8-core Xeon CPUs with hyperthreading, so I believe that gives me a theoretical 32 vCPUs available for the VMs. I am running an Exchange 2010 VM, a SQL 2012 VM, a DC, a Win 2008 file server, and a couple of other VMs on this server, so it's pretty well loaded.

My question is: what is the most efficient way, within the individual VMs, to configure cores and sockets so that a given VM doesn't monopolize the available vCPUs? As an example, say I give the SQL Server VM 8 vCPUs, and within the VM settings I configure that as 2 sockets with 4 cores per socket. To what extent does that tie up the available virtual CPUs, cores, or sockets? Is it better to specify more sockets or fewer? Additionally, is there a way to specifically tell the VMs to "share" rather than "grab"?

A similar concern I have is whether specifying 8 vCPUs for the SQL Server VM may lead to a situation, depending on the configuration, where the VM running SQL Server says something like "I need 8 vCPUs to myself and only 6 are available, so I'm going to pause all operations until a full 8 become available."

Apologies for the somewhat banal examples, but hopefully I'm communicating this with some degree of clarity. Ultimately I'm hoping there are a few guidelines for setting up the VMs to make the best overall use of the available vCPUs.
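For context, here's the quick back-of-the-envelope math I've been doing on overcommitment (a minimal sketch; the per-VM vCPU counts other than the 8 for the SQL box are hypothetical placeholders, and the host figures are just my R720's dual 8-core CPUs with hyperthreading):

```python
# Rough vCPU-provisioning sanity check -- purely illustrative, not any
# VMware-documented formula.

def overcommit_ratio(vcpus_per_vm, physical_cores, threads_per_core=2):
    """Total provisioned vCPUs divided by logical processors on the host."""
    logical_cpus = physical_cores * threads_per_core
    return sum(vcpus_per_vm) / logical_cpus

# My host: dual 8-core Xeons with hyperthreading -> 16 cores, 32 threads.
# Hypothetical layout: SQL (8), Exchange (4), DC (2), file server (2), misc (2)
vms = [8, 4, 2, 2, 2]
ratio = overcommit_ratio(vms, physical_cores=16)
print(f"vCPU:pCPU overcommit ratio = {ratio:.2f}")  # 18 vCPUs / 32 threads
```

So even with the SQL VM at 8 vCPUs I'm well under 1:1 against logical processors, which is part of why I'm unsure whether the "pause until 8 are free" scenario is a real risk or not.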
Many thanks in advance for your thoughts.