Remote Desktop Services setup guide for physical and/or virtual deployment. We've been building RDS environments in both all-in-one and TS/RD Farm mode since the Terminal Services days, and then Remote Desktop Services with RD Gateway starting in Server 2008. What follows are some of the key takeaways. Enjoy!
This article stems from a question on this forum. The following requirements were the starting point:
* User Count: 10
* On-Premises: Thin Client Requested
* User Usage: Browsing, Zoom, Office Apps
I suggest an Intel NUC with i3 processor, Windows 10 Pro for an OS, and the appropriate Group Policy structure to manage them down to thin client status.
The cost will be as good or better than a dedicated thin client with all of the Windows goodness for driver compatibility and overall management experience.
Browsers are resource killers in an RDS Session Host shared environment. Microsoft's Chromium-based Edge is probably one of the better ones for management and per-user resource usage.
Bandwidth and video quality will be an issue. The Internet pipe needs to be a consideration here even with 10 users.
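To put a rough number on that Internet pipe, here is a minimal back-of-the-napkin sketch. The per-user bandwidth figures and the video concurrency ratio are assumptions for illustration only, not measured values; plug in your own observations.

```python
# Rough bandwidth sizing sketch for a small RDS deployment.
# All per-user figures below are ASSUMPTIONS for illustration:
#   - light RDP traffic (Office apps, general browsing)
#   - RDP sessions carrying video (e.g. Zoom) cost far more
USERS = 10
OFFICE_MBPS_PER_USER = 0.5   # assumed: light RDP session
VIDEO_MBPS_PER_USER = 4.0    # assumed: RDP session with video playback
VIDEO_CONCURRENCY = 0.3      # assumed: ~3 of 10 users on video at once

def estimated_pipe_mbps(users=USERS):
    """Estimate peak Internet bandwidth needed for the farm."""
    video_users = round(users * VIDEO_CONCURRENCY)
    office_users = users - video_users
    return office_users * OFFICE_MBPS_PER_USER + video_users * VIDEO_MBPS_PER_USER

print(f"Estimated peak: {estimated_pipe_mbps():.1f} Mbps")
```

Even with these modest assumptions a 10-user shop lands in the mid-teens of Mbps at peak, which is why the pipe needs to be a consideration.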
RD Physical Server or Virtualization Host
We virtualize. We don't do standalone physical deployments and have not done so in years, though as a caveat we don't work with high-transaction SQL workloads where bare metal is preferred.
You could do an all in one RDS Role setup:
* Broker, Gateway, Web, and Session Host
While this may seem like a good idea, it's not best practice to do so.
Plus, if something hangs and requires a reboot, you lose your RD Gateway for at least the reboot time (physical host BIOS POST times are huge on today's servers, so keep this in mind if going physical), plus the delay before the RD Gateway service starts.
We set up two virtual machines as follows:
* RD Broker, Gateway, Web
* RD Session Host(s)
Either way, a single Collection is the place to start with your Session Host(s).
RD Single Sign-On and RSS
Another benefit to using the Intel NUC is the ability to set up RD Single Sign-On and publish the RemoteApp RSS Feeds in Active Directory via Group Policy. This makes the entire user experience while in the office seamless.
The RSS Feed is also available via the Internet and is device agnostic, though using RemoteApps on iDevices can sometimes "feel" like mousing through Jello.
Do _not_ publish any port to the Internet for an RD Listener. Period. Full stop. None. Nada.
Remote Desktop Gateway is the only way to properly, and securely, publish a Remote Desktop Services setup.
DUO is an excellent way to secure access via multi-factor authentication. There are others out there, but DUO is our preference.
For resources, considering the various environments we support:
* A minimum of 8 vCores (8 pCores if physical)
* A minimum of 8GB vRAM/pRAM per user
** So, 96GB minimum whether physical or virtual
* A smallish SATA SSD RAID 1 should more than satisfy storage, IOPS, and throughput needs in that environment.
** Intel SSD DC-S4610 (D3-4610 new name) in 1.9TB or larger if needed
** Host OS can go on a 128GB Intel SSD RAID 1 pair
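The RAM math above can be sketched as follows. The 16GB overhead allowance for the host OS and the Broker/Gateway/Web VM is an assumption chosen to illustrate how the 96GB minimum for 10 users comes together; size your own overhead to taste.

```python
# Minimal RAM sizing sketch for the figures above.
# ASSUMPTION: 16 GB of overhead covers the host OS plus the
# Broker/Gateway/Web VM; the article's per-user figure is 8 GB.
PER_USER_GB = 8
OVERHEAD_GB = 16  # assumed overhead: host OS + infrastructure VM headroom

def minimum_ram_gb(users):
    """Total host RAM needed for a given user count."""
    return users * PER_USER_GB + OVERHEAD_GB

print(minimum_ram_gb(10))  # 10 users -> 96 GB minimum
```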
With the above in mind, we would actually start off with four virtual processors (vCPUs) and 4GB of virtual RAM assigned to the RD Session Host(s). We would then tune both settings to the actual user usage patterns over a week or two.
Virtualized Session Host NOTE: Assigning more virtual CPUs/RAM does not necessarily translate to increased Session Host performance!
Virtualizing leaves the option to add more virtual servers if needed now or in the future.
For example, if we need to set up a dedicated Collection for a resource-hungry app that kills users' Session Host desktop experience, we can set up a new virtual machine, prep it with the needed app, publish it at the Broker, and then push it back to the required users via Group Policy.
Disk latency is a key metric to user experience. The following is an overview chart as far as what happens as disk latency goes up:
* 0ms to 25ms: Awesome user experience
* 26ms to 35ms: Great user experience
* 36ms to 50ms: User experience will lose parity with local machine experience. Some possible complaints.
* 51ms to 75ms: User experience is definitely impacted. Expect complaints/tickets.
* 76ms+: Time to investigate where the bottleneck is.
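The chart above maps directly to a simple threshold lookup; a sketch like this could sit in a monitoring script that samples disk latency and flags the current tier:

```python
# Map a measured disk latency (ms) to the experience tiers above.
# Thresholds are taken straight from the chart in this article.
TIERS = [
    (25, "Awesome user experience"),
    (35, "Great user experience"),
    (50, "Losing parity with local machine experience; some complaints possible"),
    (75, "Definitely impacted; expect complaints/tickets"),
]

def experience(latency_ms):
    """Return the user-experience tier for a given disk latency."""
    for ceiling, label in TIERS:
        if latency_ms <= ceiling:
            return label
    return "Time to investigate where the bottleneck is"

print(experience(20))
print(experience(80))
```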
In a virtual setting, keeping an eye on CPU consumption is important. Disk latency may be well below the 25ms mark, but having four virtual CPUs (vCPUs) saturated at most points throughout the day will bring about user complaints/tickets.
RD Session Host tuning is a very experiential thing. Over time, folks managing RD standalone and farm environments will pick up on what apps cause the most grief and what virtual machine setup will bring about the best user experience.
Even in a small setting, breaking up the RD Roles is a good idea. In fact, as already mentioned, it is a best practice to keep them separate.
* RD Broker, Gateway, and Web (with NPS)
* Session Host 1, 2, 3, etc.
** Custom Collection configuration breaking up Session Host(s) to apps
Some benefits to configuring in RD Farm mode:
* Reboot the Session Host without losing RD Gateway
** Leaves users' connections to other on-premises endpoints intact
** Allows users to reconnect that much quicker since there's no waiting on the RD Gateway service's delayed start
* We can scale out by adding more Session Hosts as user count goes up
** Never underestimate the impact that the setup will have on management
** Once folks see for themselves the benefits of an RDS setup they start asking for more.
* We can contain resource hog apps in their own Collection/Session Host(s)
** Helps to keep user experience up
* We can tune our virtual machine setup to the actual user environment in play
And finally, the elephant in the room. ;)
Remote Desktop Services CALs come in User and Device flavours. Device CALs are great for shift work environments where two or three, or more, users may use the same terminal to access RDS resources.
An RD License Server can be set up on a dedicated virtual machine, on the Broker/Gateway/Web VM, or on a domain controller, though the latter is not our preference. We generally place it on the Broker.
RD High Availability
In our larger RD Farm settings we tend to be deploying on a Hyper-V cluster. When we do, we set up node affinity for the RD Session Hosts in the farm so that a host failure does not impact all users. This requires a bit of planning as far as Session Host allocation goes, as users' sessions will try to reconnect to the now reduced Session Host count in the Collection.
If the host that the Broker/Gateway/Web VM is running on goes down, the VM will spool up on a different cluster host and be back online within a few minutes, keeping in mind that the RD Gateway service has a bit of a delay before it kicks in.
Of note, we have not had the requirement to deploy a highly available RD Farm where each component in the farm itself is highly available. This requires a lot of additional server resources that are beyond the scope of this article.