• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 12508

Terminal Server (Remote Desktop) hardware sizing

Hello, I need to size a Remote Desktop server for a client. I haven't done this before, so I'm unsure what the recommended server sizing is for processors, spindles, RAM, NICs, etc. What are the key metrics?
So, here are the requirements:
Server 2012R2 (running as a backup AD server and DNS)
Currently 30 Users in Active Directory, 25 will be TS users
Client OS: Vista, Win7, Win8
Users run Office applications, webbrowsing, and low overhead LOB apps.
May switch to thin clients or retask older machines
2TB user data (growing)

Thank you very much for your thoughts and recommendations.
Johannes Banck
3 Solutions
Michael MachieFull-time technical multi-taskerCommented:
I would never recommend running the RDS role (the new name for Terminal Services is Remote Desktop Services) on your DC, whether primary or secondary. A dedicated server is recommended.

For 25-30 people, a minimum of 24GB RAM (more is always better); 32GB preferred.
- Single NIC should suffice but if you notice lag then add an additional one.
- You will need a Server 2012 license as well as an RDS license for that server.
- You will also need an individual RDS CAL for EACH user that will connect, regardless of whether or not you have an AD User CAL.
- Each user will require a license for the software they use, e.g. Office.
- Dual quad- or six- core processors (once again, the more the merrier).
- Storage: go with an array if you can, budget can sometimes dictate what you can get.
- Server hardware: I recommend HP DL360p Gen8. Fantastic piece of equipment that will keep you in budget. If going with local storage you can do (5) 900GB SAS drives to give you 3.3TB RAID5.
- If using an array you will need to purchase a SCSI controller for it.
Hope this helps a bit
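As a quick sanity check on the storage suggestion above, RAID 5 keeps one drive's worth of parity, so usable capacity is (n - 1) × drive size. A minimal sketch (the five 900GB drives are the figures from this comment):

```python
def raid5_usable_tb(drives: int, drive_gb: float) -> float:
    """RAID 5 sacrifices one drive to parity: usable = (n - 1) * size."""
    if drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (drives - 1) * drive_gb / 1000  # decimal TB

# Five 900 GB SAS drives, as suggested above:
print(raid5_usable_tb(5, 900))  # → 3.6 decimal TB (~3.3 TiB formatted)
```

The 3.6 decimal TB works out to roughly the 3.3TB quoted above once reported in binary (TiB) units.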
Jaroslav MrazCTOCommented:

It depends on the apps you will run and the actual number of people connected at the same time.

Office usually takes up to 200MB of RAM per user; a typical accounting app takes 30-50MB. A simple approach is to open Task Manager on your own desktop and see how much RAM each application is consuming.

Disk space is up to you, but you need to keep about 25% free for optimal performance.

One NIC is enough. On a TS server you can have one connection policy per NIC, but usually you need just one.
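The per-user memory arithmetic above can be sketched as a quick estimate. The per-app figures are the rough numbers from this comment, and the 12GB OS reserve is a figure mentioned later in the thread, not a measurement:

```python
def ts_ram_gb(users: int, per_user_mb: float, os_reserve_gb: float = 12) -> float:
    """Estimate terminal server RAM: per-user working sets plus an OS reserve."""
    return users * per_user_mb / 1024 + os_reserve_gb

# 25 users each running Office (~200 MB) plus an accounting app (~50 MB):
estimate = ts_ram_gb(25, 200 + 50)
print(round(estimate, 1))  # roughly 18.1 GB before headroom
```

That lands comfortably under the 24-32GB recommended earlier in the thread, which leaves headroom for spikes and growth.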
Bear in mind that 16GB sticks of RAM are the best value per gigabyte at only about £160 each. Although you have 4 DIMM channels per Xeon E5-2600 CPU, you don't need to populate every channel.

I would second Machienet's suggestion of a ProLiant, but if you're going for a rackmount model I'd choose the DL380 rather than the DL360 so you have more disk bays if needed. Alternatively, a tower may be better suited, especially if you want to do good old tape backups.

You have come to the right place with a valid question.

This kind of question is hard to answer, and sometimes impossible to answer definitively.

From experience, I would suggest:


   HP DL380p G8 E5-2640

   RAM: 32GB (64GB recommended)

   HD: (read below)

   NICs: 4 × 1GbE


Server 2012 Standard: 1

User CALs: 1 per client (OEM CALs come in packs of 5)

RDS CALs: 1 per client (these you enter on the server running the licensing role)

Certificates: 1 (up to 5 names allowed; should match ISP DNS)

Public IPs: at least 1 (a DNS entry for each contracted).


I have run Server 2012 on a very basic server with only 2GB of DDR2 and it works, but that is not your case. For Server 2012 to run properly with the needed roles, you will need about 12GB just for the OS, leaving enough for the users.

For the HDs on that machine, based on your description: get a system drive large enough to hold the sessions, since you mentioned the files are stored elsewhere already, so I'm guessing users will have a mapped drive to that location. As always, keep the system on one volume and data on another; two 500 or 600GB drives will do to start.

The machine comes with 4 NICs that you can team as you wish: 2 on one IP with the other 2 as backup in case of failure, or all four together for 4Gb of bandwidth. For this type of solution (terminal server) I wouldn't use a heavy RAID level; I would aim for more throughput and a better backup solution for users' session data.

Also, this machine's controller comes with 1GB of cache as standard, which will improve performance.

Now, all this might be standard for some and not quite enough for others. Budget is important, yes, but so is growth capacity for the future. Check out both machines: the one I suggested and the one Machienet suggested.

Good Luck
Johannes BanckCTOAuthor Commented:

Thank you for your thoughts on this.
I was looking at the HP ML350p Gen 8 with two of the low-end processors, each with 16GB RAM (32GB total)
P420i controller (built-in) with 1GB SmartCache
2 × SAS 10K 146GB mirrored drives for the OS
3 × SAS 10K 300GB drives in RAID5 for data, which gives easy expandability
As configured, without OS, it's $7K.

Somehow, I am thinking it's overkill.
The company has been in business for a long time and currently has about 600GB of data. The users are not high-activity; how much load can there be with Office apps and email?

Maybe 1 processor could be enough, but with 32GB. I'm almost thinking that 16GB could be enough: 5GB (×2) for user apps and 6 to 10GB for the OS. I calculated 30 users × 200MB each (a high average).

I think I would rather put the money into disk I/O (the 420i controller with smart read/write cache).

What is your take?
Two CPUs is definitely overkill; people run about 20 virtual servers on a single CPU. Don't go for high clock speed: you pay a lot for the binning, and the lower-rated processors will still clock high for short bursts. Thread count matters, since the users are all waiting on disk I/O, so many cores are good.

Just a couple of disks for the OS will do, or is this your file server too? A pair of 1TB 7.2K disks would probably outperform three 10K 300GB drives for data, simply because the RAID 5 write overhead is eliminated, although admittedly the FBWC improves RAID 5 speed dramatically.
Johannes BanckCTOAuthor Commented:
Hello hecgomrec:
It is an interesting thought to go really thin (not even RAID5) on the server hard drives and instead rely on backup in case of catastrophic drive failure. To do this, one must be truly confident in the DR solution.
I am a Datto backup reseller and as such I am supremely confident in the reliability of the technology, but despite that I would still be reluctant to take safety out of the server.
Have you actually implemented such a thin solution?
What do you mean, "not even RAID 5"? RAID 1 or 10 is far more reliable and faster than RAID 5; it just doesn't give the highest capacity, so it was useful a while ago when disks maxed out at 36GB. With today's huge-capacity disks you should be looking at multiple mirrored disks, in case one fails while another has bad sectors.
Johannes BanckCTOAuthor Commented:
Hello andyalder:

Thank you for seconding my suspicion that the Disk IO is more important than CPU on TS and file servers.

I usually go with dual RAID1 setups for reduced risk, as you suggested, in cases where I have a firm handle on data growth. Where data growth is uncertain, RAID5 seems more elegant.

In my opinion, there is no speed advantage in RAID5 arrays with fewer than 5 disks.
RAID 5 certainly is an elegant algorithm, but I would say it has had its day.

Data growth is just as well covered by RAID 10 as by RAID 5, although admittedly you have to add two disks at a time to expand the array. RAID 1 and RAID 10 are identical on Smart Array controllers, so you can start with just 2 disks and add 2 more later, avoiding the RAID1-to-RAID5 migration that adding a single disk requires.
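The capacity trade-off being debated here (RAID 10 halves raw capacity but expands two disks at a time; RAID 5 loses only one disk to parity but pays a read-modify-write penalty) can be compared with a small sketch. The disk sizes below are illustrative, taken from the configurations discussed in this thread:

```python
def usable_gb(level: str, drives: int, drive_gb: int) -> int:
    """Usable capacity for common RAID levels (ignores formatting overhead)."""
    if level == "raid10":
        if drives < 4 or drives % 2:
            raise ValueError("RAID 10 needs an even number of drives, minimum 4")
        return drives // 2 * drive_gb  # half the drives hold mirror copies
    if level == "raid5":
        if drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drives - 1) * drive_gb  # one drive's worth of parity
    raise ValueError(f"unsupported level: {level}")

# Four 300 GB drives: RAID 10 yields 600 GB, RAID 5 yields 900 GB,
# but RAID 5 pays a parity update on every small write.
print(usable_gb("raid10", 4, 300), usable_gb("raid5", 4, 300))
```

The capacity gap is why RAID 5 looks attractive on small budgets, while the write penalty is why the comments above favor mirrors for a terminal server.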
To JBanck: yes, I don't waste resources on a server that won't hold mission-critical data, just like with user workstations. If their data lives on a well-configured, failure-tolerant server, why bother with such an expensive solution when I can simply restore or copy their station configurations?

Anyway, this only applies to a small environment. The bigger the environment, the bigger the chance of failure simply from higher access levels to the hardware, so plan accordingly.
Johannes BanckCTOAuthor Commented:
Thank you all. I now have a better handle on the hardware requirements for a TS server.
Regards, Johannes Banck