I am spec'ing out a new VMware environment running vSphere 5 (all new hardware). I am trying to figure out how many VMs I can run per physical host and what sort of growth headroom I will have down the road. Accounting for RAM and SAN storage space is simple and straightforward. What I am a little confused about is processor allocation. I am looking at using servers with the following hardware configuration:
Dell PowerEdge R710
96 GB RAM (12x8GB 1333MHz)
2x Intel Xeon E5620 (Quad-Core 2.4GHz 1066Mhz) (8 Cores)
There are two CPUs, each quad-core, so that is 8 physical cores. Now say I want to run 11 VMs on this host, with vCPUs allocated as follows:
8 - Servers with 1 vCPU
2 - Servers with 2 vCPUs
1 - Server with 4 vCPUs
The above servers total 16 vCPUs, while physically I only have 8 cores in the host. Obviously if some of these systems hit max CPU utilization there will be a bottleneck (CPU ready/wait time), but I don't expect that with the types of systems running. Is this a bad or unsupported setup? Why wouldn't I want to do this?
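For reference, here is a minimal sketch of the overcommit arithmetic I'm describing, using the guest mix above (the variable names are just for illustration):

```python
# Sketch: compute the vCPU-to-physical-core overcommit ratio
# for the proposed guest mix on one host.

physical_cores = 2 * 4  # 2x quad-core Xeon E5620

# (number of guests, vCPUs per guest)
guest_mix = [(8, 1), (2, 2), (1, 4)]

total_vcpus = sum(count * vcpus for count, vcpus in guest_mix)
ratio = total_vcpus / physical_cores

print(f"{total_vcpus} vCPUs on {physical_cores} cores -> {ratio:.1f}:1 overcommit")
# 16 vCPUs on 8 cores -> 2.0:1 overcommit
```

So the proposed layout works out to a 2:1 overcommit ratio.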
In the current virtual environment we are upgrading from (running the Virtual Iron platform), some of the virtual hosts have CPUs overallocated at almost 3 to 1, vCPUs versus physical cores. Right now *knock on wood* we almost never see CPU bottlenecks, but I don't want to continue bad practices.