Aamer M asked:

VDI Sizing

I need help with VDI sizing for a Citrix VDI solution.

We need to purchase hardware to host 300 concurrent sessions:

4 XenApp servers to host 200 users (task workers)

70 pooled Windows 10 virtual desktops (medium load)

30 dedicated Windows 10 desktops (high-performance users)

Please do not consider the resources required to host the infrastructure components; I need sizing only for the VDI workloads.
David Johnson, CD:

Light users: 1 vCPU, 1.5 GB RAM, 4-6 IOPS, 8 users/core
Power users: 2 vCPU, 3 GB RAM, 20 IOPS, 5 users/core

(Server RAM - Hypervisor Overhead) / Average RAM per Desktop = number of desktops per server (memory limit)
(Cores per Server - 1) * Average Users per Core = number of desktops per server (CPU limit)


Shared Desktop (Light): 4 vDisks, 200 users / 200 concurrent, no Personal vDisk = 22 vCPUs, 11 cores, 89 GB RAM, 400 IOPS needed
Pooled Desktop (Normal): 4 vDisks, 70 users / 70 concurrent, 100 GB Personal vDisk = 140 vCPUs, 9 cores, 140 GB RAM, 1,050 IOPS
Dedicated Desktop (Heavy): 30 concurrent users, 100 GB Personal vDisk = 0 vCPUs, 8 cores, 120 GB RAM, 1,200 IOPS
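
For a quick sanity check of those figures, here is a minimal Python sketch of the two per-server rules above applied to a single host; the 8 GB hypervisor overhead is an assumed placeholder, and the per-desktop numbers come from the light/power guidance above.

```python
# Desktops per server: the lower of the memory-bound and CPU-bound estimates.
# The 8 GB hypervisor overhead is an assumption, not a measured value.

def desktops_per_server(server_ram_gb, hypervisor_overhead_gb, ram_per_desktop_gb,
                        cores_per_server, users_per_core):
    by_ram = (server_ram_gb - hypervisor_overhead_gb) / ram_per_desktop_gb
    by_cpu = (cores_per_server - 1) * users_per_core   # reserve one core for the host
    return int(min(by_ram, by_cpu))                     # the tighter limit wins

# 32-core host with 128 GB RAM:
print(desktops_per_server(128, 8, 1.5, 32, 8))   # light users -> 80 (RAM-bound)
print(desktops_per_server(128, 8, 3.0, 32, 5))   # power users -> 40 (RAM-bound)
```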

Planning Guide Info  https://www.citrix.com/blogs/2012/04/18/calculate-your-hardware-and-storage-needs/
Planning Guide XLSX  https://www.citrix.com/static/successaccelerator/hardware_storage_calculator.xlsx
Planning Guide PDF https://support.citrix.com/article/CTX127277
Aamer M (Asker):

We will have 4 hosts, of which one will be the maintenance host.

Each server has 32 cores and 128 GB RAM, so the three production hosts give a total of 96 cores and 384 GB RAM.

We have an MSA storage array (600 GB x 20), so 18,000 GB raw storage.

The workloads to run on the three hosts are 33 servers and 100 Windows 10 desktops.

We are considering an average of 2 vCPUs and 12 GB RAM per infrastructure server, plus the 100 Windows 10 desktops.

I know the memory is low: it is 384 GB in total now, and it should be 384 GB per server. I will raise this, but I want to confirm the other parameters.
Aamer M (Asker):

Sorry, correction: we have an MSA storage array (900 GB x 20), so 18,000 GB raw storage.
50 users/host is the recommended value. With 100 GB per power and extreme user, plus the other overhead, you are already at approximately 12 TB, which leaves only 6 TB for the remainder.
20 x 0.9 TB = 18 TB raw; that gives 17.1 TB usable with RAID 5 (not recommended) or 16.2 TB usable with RAID 6 (better).
IOPS will also be a killer: best case with spinning rust (15K SAS) is about 2,000 IOPS, while SSD gives you about 105,263 IOPS (better), both on RAID 5.
What are these drives: 10K, 15K, or SSD? I don't see this working well with anything other than SSD.
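
To reproduce those capacity figures and get a feel for the raw spindle budget, a rough Python sketch; the ~175 IOPS per 15K SAS spindle is an assumed ballpark, not an MSA measurement.

```python
# Usable capacity under RAID 5 / RAID 6 and raw spindle IOPS for 20 x 900 GB 15K SAS.
disks = 20
disk_size_tb = 0.9
iops_per_15k_disk = 175                          # assumed per-spindle ballpark

raw_tb           = disks * disk_size_tb          # 18.0 TB raw
raid5_usable_tb  = (disks - 1) * disk_size_tb    # one disk of parity -> 17.1 TB
raid6_usable_tb  = (disks - 2) * disk_size_tb    # two disks of parity -> 16.2 TB
raw_spindle_iops = disks * iops_per_15k_disk     # ~3,500 IOPS before any RAID write penalty

print(raw_tb, raid5_usable_tb, raid6_usable_tb, raw_spindle_iops)
```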

What do your high performance users need? More RAM? More CPU? Lower storage latency? GPU graphics? What numbers are you trying to hit?
ASKER CERTIFIED SOLUTION
David Johnson, CD (solution content available to Experts Exchange members only)
Aamer M (Asker):

The disks we have in the MSA storage are HPE MSA 900GB 12G SAS 15K SFF ENT HDD.

Based on the hardware specs we have already purchased, what additional resources would be required to optimize the solution?

Just a quick reminder:

We have 4 HP ProLiant DL380 Gen10 servers with 192 GB RAM each that will be part of the cluster.
We will be hosting 32 server virtual machines for infrastructure services such as AD, Exchange, SCOM, and SCCM, which would take approximately 350 GB of RAM and 9 TB of storage.

On top of this we are planning to host 4 XenApp servers with around 200 simultaneous user sessions. Each XenApp server is assigned 4 vCores and 32 GB RAM.

Then we will have 70 normal users hosted on pooled VDI.

Lastly, we have 30 high-performance users hosted on dedicated desktops; you can consider these 30 as normal users for sizing as well.

Based on these workloads, do you think the available resources are not enough?

If so, what additional resources should we add to make it a workable solution?

I appreciate your response. This is the sizing proposed to us by the vendor.
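
As a rough sanity check on memory alone (assuming one of the four hosts is kept free for maintenance, as mentioned earlier), here is a back-of-envelope Python sketch; the 2 GB pooled and 3 GB dedicated per-desktop figures are assumptions for illustration, not vendor-validated numbers.

```python
# Compare stated RAM demand against three active 192 GB hosts.
hosts_active    = 3
ram_per_host_gb = 192

infra_ram_gb     = 350        # 32 infrastructure VMs, as stated above
xenapp_ram_gb    = 4 * 32     # 4 XenApp servers at 32 GB each
pooled_ram_gb    = 70 * 2     # 70 pooled desktops, assumed 2 GB each
dedicated_ram_gb = 30 * 3     # 30 dedicated desktops, assumed 3 GB each

demand_gb = infra_ram_gb + xenapp_ram_gb + pooled_ram_gb + dedicated_ram_gb
supply_gb = hosts_active * ram_per_host_gb

print(f"RAM demand ~{demand_gb} GB vs supply {supply_gb} GB")   # ~708 GB vs 576 GB
```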
Aamer M (Asker):

Just an additional important note.

This environment will be used for training purposes, and there will be no need for Personal vDisks for users other than the 30 dedicated desktop users. Moreover, we will have a file server to which user profiles will be redirected so that users can store their work.

Considering 4 physical hosts, we are talking about running roughly 10 infrastructure VMs and 25 Windows 10 desktops per server. I am worried that after the solution is deployed, the load on the servers will degrade performance or, worse, the solution may not work at all.

I need to identify the shortcomings before we start implementing the solution and make management aware of the risks.

Can someone help me fill in the VDI calculator for sizing? Every time I fill it in, I get different values.
Aamer M (Asker):

Any comments, please?
I see this failing due to the storage performance of the MSA. As I understand the HPE product line, the MSA is entry-level storage, and you will get about 2,300 IOPS maximum using RAID 10 with a 50% read/write ratio and about 8 TB usable.

Step up to HPE Nimble or HPE 3PAR with significant SSD, or VMware vSAN on all-flash, and you can make this work.
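
To show where an estimate like that ~2,300 IOPS figure can come from, here is a small Python sketch using the standard RAID write-penalty formula; the per-spindle IOPS value is an assumed ballpark and the penalties are generic RAID factors, not MSA-specific measurements.

```python
# Front-end IOPS after the RAID write penalty for a given read/write mix.
def effective_iops(disks, iops_per_disk, write_penalty, read_ratio):
    backend = disks * iops_per_disk
    return backend / (read_ratio + (1 - read_ratio) * write_penalty)

disks = 20
iops_15k = 175                                      # assumed per-spindle figure

print(effective_iops(disks, iops_15k, 2, 0.5))      # RAID 10 (penalty 2) -> ~2,333
print(effective_iops(disks, iops_15k, 4, 0.5))      # RAID 5  (penalty 4) -> ~1,400
print(effective_iops(disks, iops_15k, 6, 0.5))      # RAID 6  (penalty 6) -> ~1,000
```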
Aamer M (Asker):

Well, I need to present some numbers and calculations to prove to management that the storage could be a bottleneck. Can we use some kind of calculator to simulate the load and then compare it with the performance of the storage or the hardware? That would be really helpful.
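
One simple way to put numbers in front of management is to compare the steady-state IOPS demand from the sizing table earlier in the thread with the estimated front-end capability of the MSA; a minimal sketch, reusing the ~2,300 IOPS RAID 10 estimate quoted above and ignoring boot and logon storms, which will push demand several times higher.

```python
# Steady-state IOPS demand (from the sizing table above) vs estimated array capability.
shared_iops    = 400      # 200 XenApp task-worker sessions
pooled_iops    = 1050     # 70 pooled desktops
dedicated_iops = 1200     # 30 dedicated desktops

required_iops = shared_iops + pooled_iops + dedicated_iops    # 2,650
array_iops    = 2300      # estimated MSA front-end IOPS (RAID 10, 50/50 read/write mix)

shortfall = required_iops - array_iops
print(f"Demand ~{required_iops} IOPS vs array ~{array_iops} IOPS")
print(f"Short by ~{shortfall} IOPS" if shortfall > 0 else "Within array capability")
```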