Declaro

asked on

Hyper-V VM Virtual Processor Allocation

Hi all,

Just a quick question on a big subject, and I'd like some thoughts...

I have a server I plan to use as a basic Hyper-V host, and I don't know how many virtual processors to allocate across the VMs.

The host has two 8-core hyperthreaded Xeon CPUs and plenty of RAM. I would like to know roughly how many vCPUs that equates to.

I plan to run two Exchange servers in a DAG and would like recommendations on how many vCPUs to allocate to each of them. There will also be a number of Server 2016 and 2012 R2 machines running various small loads, and some Windows 7 and 10 clients.

I have read all sorts of conflicting answers and I'm unsure of what to do.

Thanks for reading and participating in this question.

Dave
ASKER CERTIFIED SOLUTION
Andrew Hancock (VMware vExpert PRO / EE Fellow / British Beekeeper)
Declaro

ASKER

Thanks for the link to that article, it’s excellent!

I still have a question on vCPUs though…

Max vCPUs per VM I understand: on an 8-physical-core CPU, max vCPUs = 7 per VM. But how many VMs could, or should, you provision at once?

Could I, on 1 x 8-core physical CPU, have:

2 x 7 vCPU VMs
4 x 4 vCPU VMs
6 x 2 vCPU VMs

Or is that over-provisioning too much? The above assumes RAM and disk are broadly in line with Philip's guidelines.

I know it's a bit like how long is a piece of string...

Thanks
Well, it is a piece of string... because all VMs are not equal, and you will of course be monitoring host CPU usage to keep a check on how much spare resource you have.

CPU is almost never the bottleneck; RAM is the bottleneck, because we also work off a rule of thumb of 8GB per core in the server. That's what we spec!

But we've been in this game a long time now, and we understand what our VMs do... and what the applications require.

So it's very difficult to gauge, but if we had a dual Xeon 8-core host, that's 16 physical cores, or 32 logical processors (vCPUs) with hyperthreading.

I think the dual and quad vCPU VMs are fine, but the 7 vCPU VMs... you would have to check performance...
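To make that arithmetic concrete, here is a minimal Python sketch (an illustration, not part of the original exchange) that tallies the host's logical processors and the vCPU:pCPU ratio of each mix proposed above:

```python
# Sketch: logical processor count and vCPU:pCPU ratios for the proposed mixes.
# The host spec comes from the thread; the ratio math is plain arithmetic.

sockets = 2           # two physical Xeon CPUs
cores_per_socket = 8  # 8 cores each
smt = 2               # hyperthreading doubles the logical processor count

logical_processors = sockets * cores_per_socket * smt
print(f"Logical processors on the host: {logical_processors}")  # 32

# The three candidate mixes on a single 8-core CPU:
mixes = {
    "2 x 7 vCPU": 2 * 7,
    "4 x 4 vCPU": 4 * 4,
    "6 x 2 vCPU": 6 * 2,
}
physical_cores = 8
for name, total_vcpus in mixes.items():
    ratio = total_vcpus / physical_cores
    print(f"{name}: {total_vcpus} vCPUs total -> {ratio:.1f}:1 vCPU:pCPU")
```

Even the widest mix stays under 2:1 against physical cores, so the concern is less the total ratio than the scheduler having to find 7 free logical processors at once for a single wide VM, as discussed further down the thread.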
Declaro

ASKER

Thanks Andrew.

I will be monitoring it quite closely as it's a test setup, but I want to have some guidelines to start...

Can I be cheeky and ask a question about the disk subsystem? It has 10 x 900GB 10K SAS. If possible, would I be better configuring it with 1 x RAID 1 for the host OS and 2 x RAID 10 for VMs, or 2 x RAID 6 for everything? Or indeed, is there a better way in your opinion?

Dave
Most Hyper-V implementations like to keep the OS and VMs separate.

So ideally you would install your OS on a RAID 1 (mirror). That then leaves 8 remaining disks, and you have a choice between storage and performance: RAID 10 will give you the best performance for some workloads, but RAID 6 will give you the best storage.

So it's a compromise between the two (RAID 10 or RAID 6).

More disks in the RAID means more spindles, more performance = more IOPS!
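To put rough numbers on the capacity-versus-performance trade-off, here is a hedged Python sketch; the ~140 IOPS per 10K SAS spindle and the RAID write penalties (2 for RAID 1/10, 6 for RAID 6) are common rules of thumb, not figures from this thread:

```python
# Sketch comparing usable capacity and crude random-write IOPS ceilings
# for the layouts discussed: RAID 1 (OS) + RAID 10 (VMs) vs RAID 6 (VMs).

disk_gb = 900
iops_per_spindle = 140  # assumption: typical 10K SAS figure

def usable_gb(level: str, disks: int) -> int:
    """Usable capacity for common RAID levels."""
    if level in ("raid1", "raid10"):
        return disk_gb * disks // 2   # mirroring halves raw capacity
    if level == "raid6":
        return disk_gb * (disks - 2)  # double parity costs two disks
    raise ValueError(level)

def write_iops(level: str, disks: int) -> float:
    """Crude random-write ceiling: raw IOPS divided by the write penalty."""
    penalty = {"raid1": 2, "raid10": 2, "raid6": 6}[level]
    return disks * iops_per_spindle / penalty

# Option A: RAID 1 (2 disks) for the OS + RAID 10 (8 disks) for VMs
print("RAID 1 OS:", usable_gb("raid1", 2), "GB")
print("RAID 10 VMs:", usable_gb("raid10", 8), "GB,",
      write_iops("raid10", 8), "write IOPS")   # 3600 GB, 560 IOPS

# Option B: RAID 6 across the same 8 disks instead
print("RAID 6 VMs:", usable_gb("raid6", 8), "GB,",
      round(write_iops("raid6", 8)), "write IOPS")  # 5400 GB, ~187 IOPS
```

The numbers show the compromise plainly: RAID 6 buys roughly 50% more usable space at around a third of the random-write throughput.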
Declaro

ASKER

Thanks for all your time and input, it's appreciated

Dave
I'm not sure you understood Philip's article. Hyper-V or VMware doesn't matter - you want to assign a MINIMUM number of vCPUs to each VM. This is because of how the hypervisor allocates time to the VMs and their CPU requests. A good article (albeit old and directed at VMware, but still applicable to Hyper-V) is here:
http://www.zdnet.com/article/virtual-cpus-the-overprovisioning-penalty-of-vcpu-to-pcpu-ratios/
It's fine to have 16 VMs with 2 vCPUs each (I personally RARELY provision fewer than 2 vCPUs, because I've seen too many instances of a Windows system having a process go nuts and eat a CPU's worth of processing power, leaving the system all but unusable).
But it's NOT fine to have 4 VMs with 8 vCPUs each. The scheduler has to wait for 8 idle CPUs (in an 8-CPU system, that's hard to get) before it can execute ONE clock cycle for a single 8 vCPU VM. But on an 8-CPU system it's fairly easy to find 2 idle CPUs at any given time, so the system often executes 2 threads MUCH faster than it can execute 8.
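The co-scheduling penalty described above can be illustrated with a toy probability model (a sketch assuming strict co-scheduling and independently busy CPUs, which real hypervisor schedulers relax):

```python
# Toy model, not from the ZDNet article: if each of 8 physical CPUs is
# independently busy with probability p, how likely is a VM to find enough
# idle CPUs to be co-scheduled at any given instant?

from math import comb

def prob_at_least_idle(k: int, n: int = 8, p_busy: float = 0.5) -> float:
    """P(at least k of n CPUs are idle), each idle with prob 1 - p_busy."""
    p_idle = 1 - p_busy
    return sum(
        comb(n, i) * p_idle**i * p_busy**(n - i)
        for i in range(k, n + 1)
    )

# At 50% average host utilisation:
print(f"2 idle CPUs available: {prob_at_least_idle(2):.1%}")   # ~96.5%
print(f"8 idle CPUs available: {prob_at_least_idle(8):.3%}")   # ~0.391%
```

Under those (deliberately crude) assumptions, a 2 vCPU VM can run almost any time it asks, while an 8 vCPU VM would be eligible less than 1% of the time - which is the intuition behind starting small.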
Declaro

ASKER

Hi,

Thanks for that, it makes sense. Is 2 vCPUs enough for an average server load, say a remote gateway and VPN access, a simple file server, or a DC? And would it be wise to allocate 4 vCPUs to a load like a small Exchange server?

Dave
@Andrew Hancock Thanks for the reference! :)

Our rule of thumb for deployment is to default to 2 vCPUs for most average-workload VMs.

So, for an RDS Broker/Gateway/Web VM we'd do 2 vCPUs with 2GB to 3GB of vRAM, depending on service loads.

For Exchange 2016 on Server 2016 there were some issues that I'm not sure have been resolved as of yet. For Exchange 2013 running on 2012 R2 we would start at 2 vCPUs and 8GB vRAM for anything up to around 25 mailboxes. For 25-50 mailboxes we'd bump that up to 10GB vRAM, keeping 2 vCPUs. For 75-100 mailboxes we'd go to 12GB to 16GB vRAM depending on mailbox sizes and total mailbox volume (GB/TB).

SQL and other database server services are very I/O-dependent as well as CPU-bound. We'd baseline the setup in its existing environment before putting together a virtualized version.

DCs get 1 vCPU or 2 vCPUs and 2GB vRAM as they are not doing much. File and Print VMs get 2 vCPUs and 2GB to 4GB vRAM.

Please take note: overcommitting the setup is not as important as knowing the disk subsystem load. That has been, and until solid-state storage becomes more common will remain, the primary bottleneck in any virtualization setup.
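For quick reference, the starting figures above can be condensed into a lookup table; this sketch is simply a restatement of those rules of thumb (my own summary, not any tooling from the thread), and the figures are starting points to monitor and adjust, not guarantees:

```python
# Encodes the per-workload starting allocations listed above.

SIZING = {
    # workload:                 (vCPUs, vRAM in GB)
    "rds broker/gateway/web":   (2, 3),   # 2-3GB depending on service load
    "exchange 2013 <=25 mbx":   (2, 8),
    "exchange 2013 25-50 mbx":  (2, 10),
    "exchange 2013 75-100 mbx": (2, 16),  # 12-16GB by mailbox volume
    "domain controller":        (2, 2),   # 1-2 vCPUs; DCs don't do much
    "file and print":           (2, 4),   # 2-4GB vRAM
}

def starting_allocation(workload: str) -> str:
    vcpus, ram_gb = SIZING[workload.lower()]
    return f"{workload}: start at {vcpus} vCPUs / {ram_gb}GB vRAM, then monitor"

print(starting_allocation("domain controller"))
print(starting_allocation("exchange 2013 25-50 mbx"))
```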
As Philip implies, START with 2 vCPUs and monitor. If you need more, add more. But most non-database systems will be fine with 2 vCPUs. And in general, disk is often a huge bottleneck if not configured appropriately. To me, disk is by far the most critical piece of physical hardware to get right in order to support the workload. Every virtual server adds load to what is typically the slowest overall component of a system.
Declaro

ASKER

The details you've given add context to Andrew's answer. Philip's article has given me a lot of information and a good starting point for further research.

Thank you both for your input.

Your time and patience are much appreciated.

Dave