gmbaxter asked:

Number of users per server

I'm migrating 2,250 users from five Apple Xserves onto a Windows Server 2008 R2 back end. The main reasons are: 1) AFP has been unreliable, and 2) the Xserve is now end-of-life.

I've specced the following server:

Dell R710

2 x Intel Xeon X5650 Processor (2.66GHz, 6C, 12M Cache, 6.40 GT/s QPI, 95W TDP, Turbo, HT), DDR3-1333MHz
1 Riser with 2 PCIe x8 + 2 PCIe x4 Slots
1 PE R710 Rack Bezel
1 16GB Memory for 2CPU (4x4GB Dual Rank LV RDIMMs) 1333MHz
2 300GB SAS 6Gbps 15k 3.5" HD Hot Plug
1 PERC H700 Integrated RAID Controller, 512MB Cache, For x6 Backplane
1 16X DVD-ROM Drive SATA
2 2M Rack Power Cord C13/C14 12A
1 High Output Redundant Power Supply (2 PSU) 870W, Performance BIOS Setting
1 Broadcom NetXtreme II 5709 Quad Port Gigabit Ethernet NIC PCIe x4
1 Embedded Broadcom GbE LOM with TOE and iSCSI Offload HW Key
1 iDRAC6 Express
1 Sliding Ready Rack Rails with Cable Management Arm
1 C3 MSS R1 for SAS6iR/PERC 6i/H200/H700, Exactly 2 Drives

Previously, users were spread across the five Xserves to limit the impact of the AFP process failing. I have 1,000 client computers, so that is roughly the maximum number of concurrent connections.

Storage is provided over 4 Gb/s Fibre Channel: a 7.2k SATA SAN in RAID 10 (10 spindles) will hold around 2,000 users, and the remaining ~250 users will sit on a 7.2k SATA SAN in RAID 5 (8 spindles).
I know the SAN is not the best; it was purchased without my input, so I have to work with it.

The plan is to team all four NICs into the core network.

Clients are around 80% Mac and 20% Windows. Macs use pure network home folders; Windows clients use home folders and roaming profiles with redirected Desktop, Documents, and so on.

This is an educational environment, so users move around between machines and log in up to eight times per day.

My initial thought was to get 2 of these servers and split the users between them.

Average IOPS across the five Xserves, added together, is ~1,400.
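As a rough sanity check on whether those arrays can absorb that load, here is a back-of-the-envelope estimate. This is a sketch only: the per-spindle IOPS figure, the read/write mix and the RAID write penalties are assumptions, not measured values, and controller cache will change the real numbers.

```python
# Rough effective-IOPS estimate for the two LUNs described above.
# Assumptions (not measured): ~75 IOPS per 7.2k SATA spindle, a 70/30 read/write
# mix, a write penalty of 2 for RAID 10 and 4 for RAID 5. Cache is ignored.
SPINDLE_IOPS = 75
READ_RATIO, WRITE_RATIO = 0.7, 0.3

def effective_iops(spindles: int, write_penalty: int) -> float:
    """Host-visible IOPS the array can sustain for the assumed read/write mix."""
    raw = spindles * SPINDLE_IOPS
    return raw / (READ_RATIO + WRITE_RATIO * write_penalty)

raid10 = effective_iops(10, write_penalty=2)  # 10-spindle RAID 10 LUN
raid5 = effective_iops(8, write_penalty=4)    # 8-spindle RAID 5 LUN

print(f"RAID 10 (10 spindles): ~{raid10:.0f} IOPS")
print(f"RAID 5  (8 spindles):  ~{raid5:.0f} IOPS")
print(f"Combined:              ~{raid10 + raid5:.0f} IOPS vs ~1400 observed average")
```

With those assumptions the combined figure comes out below the ~1,400 average seen today, which is why the write penalty and read/write mix are worth measuring before committing to the split.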

Any thoughts / suggestions?

Much appreciated.
Topics: Server Hardware, Windows Server 2008

ASKER CERTIFIED SOLUTION
Adam Brown

SOLUTION
brwwiggins

gmbaxter (ASKER)
I think it's something we will work towards. Mac network homes are more I/O-intensive than Windows homes, because the Library folder (think AppData) resides on the network, which means lots of I/O on preference files.

We plan to implement on two servers, evaluate performance, and then move to an active/active cluster after half a term, for example. The cluster would ideally be split across a 200 m fibre run, as we have two server rooms. I'm unsure of the practicalities of that, however, because of the heartbeat and so on.

gmbaxter (ASKER)
I might take a look at Microsoft's File Server Capacity Tool (FSCT) for working out the maximum concurrent users for my setup.

On to a related question, then:

I've got 2 × 300 GB 15k SAS drives, and the volumes will be:

C: 100 GB for the OS (2008 R2). Is this enough?
V: VSS (shadow copy storage), ~180 GB
E: external FC SAN storage, ~2.4 TB

My understanding of VSS is that it takes snapshots based on what has changed since the initial set of data, so if I copy over all my data and then enable VSS from E: to V:, I should be able to store changes only, to supplement regular backups. Correct?
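For reference, here is a minimal sketch of how that association could be set up, assuming the drive letters above. It is illustrative only: it simply wraps the standard vssadmin commands, must be run from an elevated prompt on the file server, and the /maxsize value should match whatever is actually reserved on V:.

```python
# Illustrative sketch: point the shadow copy (VSS) storage area for E: at V:,
# take an on-demand snapshot, and list the result. Assumes the C:/V:/E: layout
# described above and an elevated prompt on the file server.
import subprocess

def run(cmd):
    """Run a command and print whatever it returns."""
    print(">", " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)

# Associate the diff area (where changed blocks are kept) for E: with V:,
# capped at the ~180 GB set aside for it.
run(["vssadmin", "add", "shadowstorage", "/for=E:", "/on=V:", "/maxsize=180GB"])

# Take an on-demand snapshot of E:. Scheduled snapshots are normally configured
# from the Shadow Copies tab on the volume or a scheduled task running this command.
run(["vssadmin", "create", "shadow", "/for=E:"])

# Confirm the association and current usage.
run(["vssadmin", "list", "shadowstorage"])
```

The key point the sketch illustrates is that only changed blocks land on V:, so the snapshots supplement, rather than replace, the nightly backups.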

Thanks,
SOLUTION
brwwiggins

Dusty Thurman

I would also be careful to consider peak points. Rather than the average IOPS, for what you are describing it is probably more important to consider the peaks. Typically 8 a.m., lunchtime, and 5 p.m. tend to cause issues with roaming profiles and the like, because profiles are being retrieved or saved at those peak points. I would recommend benchmarking your loads at the times you know are peaks and then comparing that to the throughput of the new systems, as well as considering the suggestions others have made.
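For example, here is a toy estimate of what a login storm alone might add on top of steady-state traffic. Every figure below is an assumption for illustration and should be replaced with numbers measured on the existing Xserves at a known peak.

```python
# Toy model of a period-changeover login storm. All parameters are assumptions,
# not measurements; the point is that peak demand, not the daily average,
# is what the arrays have to absorb.
machines_logging_in = 500   # assumed: half of the 1,000 clients change over at once
window_seconds = 10 * 60    # assumed: logins spread across a 10-minute window
ios_per_login = 400         # assumed: small-file reads/writes per Mac network-home login

peak_iops = machines_logging_in * ios_per_login / window_seconds
print(f"Login storm alone: ~{peak_iops:.0f} IOPS during the changeover window")
# That load arrives in bursts on top of whatever logged-in users are already
# doing, so compare it against sustained array throughput, not the daily average.
```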
SOLUTION
aleghart

gmbaxter (ASKER)
Hi, yeah, it's battery-backed. I think the difference between Express and Enterprise was quite significant.

As the storage is external, I initially selected small drives but then decided on 300 GB to leave room for VSS.

We back up daily to tape and disk, so VSS would just be for periodic snapshots throughout a single day; no more than one day's worth is required, thanks to the nightly backup.

I hadn't thought about hot spares... I may well add two drives to the order. Thanks for that!
gmbaxter (ASKER)
Thanks for all of your help!

I did cluster the two together, but unfortunately the Mac operating system (10.5) cannot address a 2008 R2 cluster by DNS name, only by IP address. This meant it simply would not work with the cluster, which was a shame.

I plan to cluster them together once we have migrated away from 10.5 as an OS, though.

Thanks again.