Capacity/CPU planning for new VMware hosts

Hey all, not sure if this is the best place for this but figured I would start here.

Current environment
We currently have 2 ESX hosts connected to a SAN
Total # of hosts - 2
Total # of cores - 16 (2 x Intel X5550 per host, so 8 cores per host)
Total # of VMs - 20 (10 on each host)
Total # of vCPUs - 44 (total for both hosts; some VMs have 1 vCPU, some 2, some 4, and some 8)
We are seeing some performance issues, which could be due to CPU but could also be the drives (7200 RPM drives in RAID 6)
CPU usage is at 20-35% for 80% of the day and 50-60% for the other 20% of the day

We also have 4 physical servers with a total of 42 cores (all ~5 years old); their performance issues are likely due to a combination of CPU and disks.

New environment plans
My question is about the new environment. I am planning to purchase 1-2 new hosts with 2 x Intel E5-2697 v3 CPUs each (28 cores per host, which I think equates to 56 threads; latest generation). Each server will also have 256GB of DDR4 memory.
I am reading that the latest generation of CPUs is awesome and handles CPU load much better.

That being said... I don't know how to map # of cores to vCPUs. How does that work? How many vCPUs equal one core/thread?

Secondly, I plan on converting the physical servers into VMs.
Will the CPUs that I purchase be sufficient? I am thinking of splitting the existing VMs and converted physical servers 50/50 across the two new hosts, but I may instead put everything on one host and keep the second for failover using Veeam or Unitrends. Note that in either case both hosts will be connected to the same switches at the same time.

Thanks
s ait Asked:

Mohammed Khawaja, Manager - Infrastructure: Information Technology, Commented:
The answer to your conversion question is: it all depends on your apps/environment. In most cases you should aim for one vCPU per core, but I have seen much higher ratios.

Don't put all your VMs on one host. Instead, configure HA and distribute your VMs across both hosts.

HA is not redundancy, but it will allow VMs to power on on a different host in the event of a single host failure. That way, even in a catastrophic failure, you would lose at most 50% of your VMs.

Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
SATA 7200 RPM disks, if that's what they are, do not perform well; you have not got many IOPS on the datastore, if the performance issue is disk I/O.
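
To put rough numbers on that, here is a back-of-the-envelope sketch. The ~75 IOPS per 7.2k spindle figure and the RAID write penalties (6 for RAID 6, 2 for RAID 10) are common rules of thumb; the six-disk count and the 70/30 read/write mix are illustrative assumptions, not numbers from this thread.

```python
# Rough datastore IOPS estimate for a spinning-disk RAID set.
# Assumptions (rules of thumb, not measurements): ~75 IOPS per 7.2k SATA
# spindle, write penalty of 6 for RAID 6 and 2 for RAID 10.

def usable_iops(disks: int, iops_per_disk: float, write_penalty: int,
                read_fraction: float) -> float:
    """Effective front-end IOPS once the RAID write penalty is applied."""
    raw = disks * iops_per_disk
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + write_penalty * write_fraction)

# Example: six 7.2k SATA disks, 70% read workload (illustrative only)
print(usable_iops(6, 75, 6, 0.70))   # RAID 6  -> ~180 IOPS
print(usable_iops(6, 75, 2, 0.70))   # RAID 10 -> ~346 IOPS
```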

CPU is not often the bottleneck; memory and disk I/O usually are.

We generally work on 5-6 VMs per core!
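
As a minimal worked example of that rule of thumb, using the core counts from the question above (the output is a ballpark only; the right number depends entirely on the workload):

```python
# Quick check of the 5-6 VMs-per-core rule of thumb against the hosts
# discussed in this thread (core counts taken from the question).
old_host_cores = 8     # 2 x Intel X5550 per host
new_host_cores = 28    # 2 x Intel E5-2697 v3 per host

for cores in (old_host_cores, new_host_cores):
    lo, hi = 5 * cores, 6 * cores
    print(f"{cores} cores -> roughly {lo}-{hi} VMs")
# 8 cores  -> roughly 40-48 VMs
# 28 cores -> roughly 140-168 VMs (the 20 existing VMs fit comfortably)
```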

See my EE article:

HOW TO:  Performance Monitor vSphere 4.x or 5.0

Download and install vCenter Operations Manager (vCOps) for a free 60-day trial. It's easy to install, as it's just two appliances that you import via OVF. It will import the data from the vCenter DB; give it two weeks, and then run the reports!

Hey presto, it will tell you the efficiency of your hosts!
s ait (Author) Commented:
Thanks Andrew... but is there an actual formula or statistic for how many vCPUs "come in" a core?

Also, I am reading that enabling Hyper-Threading slows things down a bit? Is that true? And if you don't enable HT, do you not get the extra threads (2 threads per core)?

In your example of 5-6 VMs per core, how many vCPUs do they have in total? Are any of them SQL or semi-heavy-use application servers?

The new servers will have a combination of SSDs in RAID 50 and 12Gb/s SAS 10k RPM drives in RAID 10... that's why I think ALL the VMs from both old hosts will fit on a single new host... thoughts?

We have Essentials Plus licensing. I understand that we can only get 8 vCPUs per VM; is there a limit on memory per VM?

Thanks

Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
It's a rule of thumb we've been using for over 17 years (an average!).

HT does not slow things down.

Do not be concerned with CPU; make sure you have enough RAM and disk IOPS.

Lots of different VMs: SQL, Exchange, Oracle, terminal servers, Citrix servers, VDI, workstations.

I would not use RAID 50, and I would not use SSDs. I would use 10k or 15k SAS disks, in RAID 10 or RAID 6.

There is no memory limit per VM.
s ait (Author) Commented:
Hi Andrew, why no SSDs or RAID 50? Aren't SSDs supposed to make things go insanely fast? And isn't RAID 6 super slow?

I also posted this topic on the VMware forums and someone there said that HT slows things down...
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
RAID 50 is no longer considered appropriate for enterprise installations.

It depends on how you use your SSDs. Most VMware admins who try to use them in RAID are surprised that they are slower than SAS in RAID.

As detailed here on EE.

Did they give any actual performance statistics as to how much it slows the host down?

We have Hyper-Threading active on all our installations.

It is also important to configure your servers not to use CPU power management!

Page 21:

"If the hardware and BIOS support hyper-threading, ESXi automatically makes use of it. For the best performance we recommend that you enable hyper-threading."

Source: Performance Best Practices for VMware vSphere® 5.5, VMware ESXi™ 5.5, vCenter™ Server 5.5
Dawid Fusek, Virtualization Expert, Sr. B&R, Storage Specialist, Commented:
dealstrike,

Andy gave you a lot of good tips and info, mate. I will add some more from my own long and hard Virtual Infrastructure (VI) implementation and administration career.

1. Regarding the vCPU-to-core ratio: sure, it depends, but in 80-90% of environments the ratio is 4-6:1 (for CPUs from the Xeon 5600 generation onward, min. 2GHz). The number of cores and HT (and a newer processor generation) matter more than frequency (GHz).
2. HT: in 95% of VI installations HT should always be enabled. It simply gives the CPU more threads with which to serve vCPUs, hence a better ratio. From my experience HT adds one "virtual core" per 4 physical cores to the vCPU-to-core ratio (i.e. about 25% extra "virtual cores"), so a 14-core CPU + HT means you can run (14 + 14*0.25)*5 = 87.5 vCPUs at a 5:1 ratio. Understand that this ratio is limited (and calculated) not by how heavily the VMs occupy the cores but by how quickly the CPU and its cores can serve access time to every vCPU, so it DEPENDS; as a guide, 4:1 suits machines averaging 12% CPU usage, 5:1 suits 10% usage, and 6:1 suits 7% usage. If your machines/apps use more CPU, just reduce the multiplier (see the capacity sketch after this list).
3. You also have to consider that a single VM will generally eat about 250MHz of a core when idle, doing nothing. With a very fast CPU and a very low vCore:core ratio it will eat less, sometimes as little as 100MHz per idle VM, but 250MHz is a normal value for a single idle VM on a normally loaded VMware ESXi host. If the host is overloaded, all VMs will eat more MHz at idle, up to the point of becoming unresponsive (I tested 150 vCores on my 2 x 6-core + HT 2GHz L5638 servers and it was too much for them; 100 vCPUs per server was the maximum acceptable in my performance and response tests).
4. RAID 6 on 7.2k SATA disks (if not fronted by an SSD cache) will, 95% of the time, be a performance problem for VI. You may recognize it as VMs that boot and run slowly, higher peak CPU usage on VMs, and, via SSH on the ESXi host with the esxtop tool (press "u"), high DAVG, GAVG and/or KAVG latencies. Even the fastest arrays can't help (without caching hot data on SSD). I have done hundreds of configs with many RAID levels on many arrays, and even storage appliances with ZFS, and RAID 5/6 is a very bad option for VI when used as OS boot storage and/or DB storage. You can use RAID 6 when you can cache it with enough SSD cache (min. 10-15% of the RAID 6 space used by VI) and if you are using 15k SAS/FC drives, but for 7.2k drives RAID 5/6 plus VI is a killer and will almost never work.
5. RAID 50, as Andy informed you, is not a good option either. I understand you like RAID 5/6, but it's really not a good option for VI environments; use it only for backups, file shares and less frequently accessed user data. Never use it for VM OS or DB storage on VI, especially not on disks slower than 15k SAS/FC.
6. RAID 10 or RAID 1 (mirror) is the best choice for 7.2k SATA drives for VI if you have to use them. Always use write cache (min. 512MB, 1-2GB max) and, if possible, SSD cache for reads. Be aware that read caching only really works once you reach a hit ratio of about 90%, meaning roughly 90% of your read data should fit in the cache. A single OS requires 5-6GB of cache for boot, and any DB and apps require additional read cache, so in summary for 30 VMs the minimal SSD cache is 30 x 5GB + DB + apps = 200-300GB to be efficient; for write cache, 1-2GB is enough in most cases (see the cache-sizing sketch after this list).
7. Will your old hosts (42 cores) fit on the 2 new hosts (28 cores/56 HT threads each, so 56 cores/112 HT threads total) under VI? Most probably yes (you haven't given us any CPU load figures), but don't see the new CPUs as a miracle: they are faster, of course, but not more than 3-4 times faster than the old ones. Redundancy with 2 hosts should also work, but since we don't know the CPU load on the physical hosts it's hard to estimate. So buy 2 new ESXi hosts and spread everything across both; if one fails, the remaining VMs will restart on the second host, and if they are not too heavily loaded it will run smoothly.
8. I know that some DCs offering cloud VPS sometimes push the vCore:core ratio up to 8:1 (with HT); they do that with lightly loaded VMs. That may give you some idea of where the limit is: I think 8:1 is the limit for today's CPUs.
9. When we talk about read cache for VI use, it helps to compare it to CPU cache. L1 is the disk's built-in cache (fast, small, important for general read/write operations; 8-64MB). L2 is the RAID/storage controller cache (larger than L1 but used mainly for write caching and RAID operations; typically only 20-25% serves reads, the rest is write cache; 512MB-8GB). L3 is the SSD cache (much, much larger than L2 but still very fast; used for read and sometimes write caching too; 32GB and up). What really speeds up reads in VI is the L3 cache, but as I mentioned above you need a cache of at least 10-15% of the used RAID storage to be really effective, so that roughly 87-90% of hot data resides in the L3 (SSD) cache; from that point on, performance starts to fly.
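
To make the arithmetic in points 2-3 concrete, here is a minimal capacity sketch. The 25% HT bonus, the usage-to-ratio mapping and the ~250MHz idle cost per VM are my rules of thumb above, not VMware-published figures; the 2.6GHz value is the E5-2697 v3 base clock.

```python
# Sketch of the vCPU-capacity arithmetic from points 2-3 above.
# The 25% HT bonus and the ratio table are rules of thumb, not
# VMware-published figures.

def vcpu_capacity(physical_cores: int, ht_enabled: bool,
                  ratio: float) -> float:
    """Approximate vCPUs a host can serve at a given vCPU:core ratio."""
    effective_cores = physical_cores * (1.25 if ht_enabled else 1.0)
    return effective_cores * ratio

# Ratio per the table above: 4:1 @ ~12% avg VM CPU, 5:1 @ ~10%, 6:1 @ ~7%
print(vcpu_capacity(14, True, 5))   # one 14-core CPU      -> 87.5 vCPUs
print(vcpu_capacity(28, True, 5))   # one 2-socket new host -> 175 vCPUs

# Idle overhead check: 20 VMs x ~250MHz = ~5GHz just sitting idle,
# out of 28 cores x 2.6GHz base clock = ~72.8GHz on one new host.
print(20 * 0.25, "GHz idle floor vs", round(28 * 2.6, 1), "GHz total")
```

And a companion sketch for the read-cache sizing in point 6. The ~5GB-per-VM boot figure and the 90% hit-ratio target are the rules of thumb above; db_app_cache_gb is an illustrative placeholder for whatever your databases and apps need, not a number from this thread.

```python
# Sketch of the SSD read-cache sizing from point 6 above.
# per_vm_gb (~5GB per VM for OS boot) is a rule of thumb;
# db_app_cache_gb is a hypothetical workload-dependent figure.

def ssd_read_cache_gb(vm_count: int, per_vm_gb: float = 5.0,
                      db_app_cache_gb: float = 50.0) -> float:
    """Minimum SSD read cache to approach a ~90% hit ratio."""
    return vm_count * per_vm_gb + db_app_cache_gb

print(ssd_read_cache_gb(30))            # 30 VMs -> 200GB floor
print(ssd_read_cache_gb(30, 6, 120))    # heavier DB mix -> 300GB
```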

regards
NTShad0w