It is supposed to handle three terminal servers (about 8-10 users each; each terminal server has a different set of applications, but the users are all on OWA for email, Word, Excel, and various web-based applications not hosted by us, though those web apps do run in Java) and 10 single-user VMs that will be running Office 2016 Standard plus some applications delivered over the web (an Electronic Health Records system and billing software). From a processing perspective the CPU load would be fairly low; I was thinking it would really be the hard drive arrays that get taxed the most, which brings me to my question.
I was thinking of building it on this platform,
then pairing it with cards that each handle 8 drives (so two cards and 16 drives).
My idea is that the terminal servers will share a single 8-drive RAID 10. The individual machines will be on the second card, but I'm not sure whether to put them all on another 8-drive RAID 10 or split it in half (two 4-drive RAID 10 containers). Speaking with some local vendors, they all seem to lean toward larger single arrays unless a VM runs database-server-type applications. I know Outlook I/O can be somewhat intensive at times, but based on our current VMs there doesn't seem to be a problem with the current server, and I wanted to hear from someone who actually has this running on a production server they maintain.
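As a sanity check on the split, here's the rough back-of-the-envelope math I'm working from (the ~150 random IOPS per 7.2K SATA spindle figure and the 2 TB drive size are my assumptions, not measured numbers):

```python
# Rough RAID 10 capacity/IOPS estimate.
# Assumptions (mine, not vendor specs): ~150 random IOPS per spindle,
# RAID 10 write penalty of 2, reads served by all members.

def raid10_estimate(drives, iops_per_drive=150, drive_tb=2.0):
    usable_tb = drives / 2 * drive_tb          # half the spindles are mirrors
    read_iops = drives * iops_per_drive        # reads can hit every member
    write_iops = drives * iops_per_drive // 2  # each write lands on two drives
    return usable_tb, read_iops, write_iops

# One 8-drive array vs. one of two 4-drive arrays:
print(raid10_estimate(8))   # (8.0, 1200, 600)
print(raid10_estimate(4))   # (4.0, 600, 300)
```

By this math, two 4-drive containers don't buy any more total IOPS than one 8-drive array; they only isolate workloads from each other, which is roughly what the vendors were telling me.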
Second question is managing the arrays. I don't have any other servers with more than one RAID controller. Will two or three Adaptec cards simply show up as controller 1, 2, 3, etc.? Is there an upper limit to how many cards should go in a single server?
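To frame what I mean by "show as controller 2, 3, etc.": my understanding is that Adaptec's arcconf CLI addresses each card by a controller number, something like the following (I haven't run this on a multi-controller box myself, so the exact output may differ):

```shell
# List all Adaptec controllers in the box; each card gets its own ID.
arcconf list

# Query the logical devices (arrays) on each controller separately.
arcconf getconfig 1 ld
arcconf getconfig 2 ld
```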