Hey all, not sure if this is the best place for this but figured I would start here.
We currently have 2 ESX hosts connected to a SAN
Total # of hosts - 2
Total # of cores - 16 (2 x Intel X5550 per host, so 8 cores per host)
Total # of VMs - 20 (10 on each host)
Total # of vCPUs - 44 (total across both hosts; some VMs have 1 vCPU, some 2, some 4, and some 8)
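To put a number on the current consolidation level, here's the quick math I'm working from (just the figures above, ignoring Hyper-Threading):

```python
# Rough vCPU-to-physical-core ratio for the current environment.
# Numbers come from the host/VM counts above; Hyper-Threading is ignored.
total_cores = 2 * 8   # 2 hosts x 8 cores (2 x Intel X5550 per host)
total_vcpus = 44      # across both hosts

ratio = total_vcpus / total_cores
print(f"Current vCPU:pCore ratio = {ratio:.2f}:1")  # 2.75:1
```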
We are seeing some performance issues, which could be due to CPU but could also be the drives (7200 RPM drives in RAID 6)
CPU usage sits at 20-35% for about 80% of the day and 50-60% for the other 20%
We also have 4 physical servers with a total of 42 cores (all roughly 5 years old); their performance issues are likely a combination of CPU and disk
New environment plans
My question is about the new environment. I am planning to purchase 1-2 new hosts, each with 2 x Intel E5-2697 v3 CPUs (14 cores each, so 28 cores per host, which I believe equates to 56 threads with Hyper-Threading; latest generation). Each server will also have 256GB of DDR4 memory.
I am reading that the latest generation of CPUs is excellent and handles CPU load much better
That being said... I don't know how to calculate the mapping of cores to vCPUs. How does that work? How many vCPUs equal 1 core/thread?
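From what I've read, a vCPU is a scheduling unit rather than a reserved core, so there's no fixed conversion; commonly cited rules of thumb range from 1:1 for latency-sensitive workloads up to roughly 4:1 or more for light general-purpose VMs. As a sketch (assuming both new hosts and today's 44 vCPUs, ignoring Hyper-Threading):

```python
# Projected vCPU:pCore ratio on the proposed new hosts.
# Assumption: a vCPU is a scheduler entity, not a dedicated core, so the
# ratio is just a sizing guideline, not a hard limit.
cores_per_host = 2 * 14                # 2 x E5-2697 v3, 14 cores each
hosts = 2
total_cores = cores_per_host * hosts   # 56 physical cores (112 HT threads)

current_vcpus = 44
ratio = current_vcpus / total_cores
print(f"With both new hosts: {ratio:.2f} vCPUs per physical core")  # 0.79
```

At under 1 vCPU per core, the existing VM load alone would be very lightly consolidated on the new hardware.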
Secondly, I plan on converting the physical servers into VMs.
Will the CPUs I am purchasing be sufficient? I am thinking of splitting the VMs and converted physical servers 50/50 across the two new hosts, but I may instead put everything on one host and keep the second for failover using Veeam or Unitrends. Note that in either case, both hosts will be connected to the same switches at the same time.
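For the everything-on-one-host scenario, here's a rough sizing sketch. It assumes the 4 physical servers get P2V'd with vCPU counts matching their current 42 cores, which is almost certainly over-provisioned (actual sizing should follow measured utilization, not raw core counts):

```python
# Failover sizing sketch: can one new host carry everything?
# Assumption: P2V'd servers keep a 1:1 core-to-vCPU mapping (worst case);
# in practice they would likely be right-sized down from measured load.
cores_one_host = 28       # 2 x E5-2697 v3
existing_vcpus = 44       # current VMs
p2v_vcpus = 42            # old physical cores mapped 1:1 to vCPUs

total_vcpus = existing_vcpus + p2v_vcpus
ratio = total_vcpus / cores_one_host
print(f"Single-host ratio: {ratio:.2f}:1")  # ~3.07:1
```

Around 3:1 on a single host is within commonly cited guidelines for mixed workloads, though latency-sensitive VMs may want closer to 1:1.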