Will this equipment support 22 virtual machines for Citrix XenDesktop 4?

Hello, I am designing a new VDI system for a client. The equipment needs to run 20 Windows 7 machines, 2 Windows Server 2008 machines (DC and Provisioning Server), and 1 Windows Server 2003 machine (DDC).

I would like all the VMs to have 2GB of memory. The equipment I have chosen:

2x HP DL360 G7 servers, each with 2x Intel Xeon E5630.
E5630 CPU specification: Intel QPI speed 5.86 GT/s, 12MB L3 cache, 2.53 GHz base frequency (2.8 GHz max turbo), 80 W, 4 cores, 8 threads.

Each server contains 12x 2GB PC3-10600R DIMMs, for a total of 24GB RAM per server.

I am thinking of using a NAS with iSCSI for the storage, perhaps a used HP server running FreeNAS (FreeNAS has a software-based iSCSI target).
Otherwise I would be looking at an HP StorageWorks X1000 Network Storage System.

1: Will these 2 HP servers be powerful enough to handle the 22 VMs, or should there be a 3rd server?
2: Will iSCSI over a 1Gb NIC handle 20 Windows 7 VMs?
3: Will a used server with FreeNAS be sufficient?
4: What are the advantages of the HP StorageWorks X1000 system, and would the X1000 be overkill?

Thanks in advance for any advice.
nappy_d commented:
I think you're definitely shortchanging RAM. Microsoft recommends a minimum of 1GB of RAM for 32-bit versions of the operating system and a minimum of 2GB for 64-bit versions. If you plan on taking advantage of the Windows XP Mode feature, you should bump those requirements up by an additional 1GB of RAM.

For Windows Server 2008, 2GB of RAM or greater.

I think you need to double the amount of RAM in each server. Also, go with 4GB or 8GB DIMMs; this will leave slots free for RAM expansion.

Consider the fact that businesses tend to grow. Better to give more now for growth than to have to go back to the client in six to twelve months for more money to expand.
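To put rough numbers on the RAM point (a sketch only; the per-VM allocations come from this thread, while the hypervisor overhead and host reservation figures are assumptions to adjust for your hypervisor):

```python
# Rough RAM sizing for the proposed VDI build. VM counts and 2GB-per-VM
# figures are from this thread; per-VM overhead and host reservation are
# assumed placeholder values.
vms = {
    "win7_desktop":   {"count": 20, "ram_gb": 2},
    "win2008_server": {"count": 2,  "ram_gb": 2},
    "win2003_ddc":    {"count": 1,  "ram_gb": 2},
}
overhead_per_vm_gb = 0.25   # assumed virtualization overhead per VM
host_reserved_gb = 2        # assumed reservation for the hypervisor itself

vm_count = sum(v["count"] for v in vms.values())
total_vm_ram = sum(v["count"] * v["ram_gb"] for v in vms.values())
total_needed = total_vm_ram + overhead_per_vm_gb * vm_count

per_host_normal = total_needed / 2 + host_reserved_gb   # both hosts up
per_host_failover = total_needed + host_reserved_gb     # one host carries all VMs

print(f"Total VM RAM: {total_vm_ram} GB for {vm_count} VMs")
print(f"Per host, both hosts up: {per_host_normal:.1f} GB")
print(f"Per host, failover case: {per_host_failover:.1f} GB")
```

Even in normal operation this lands above the 24GB per host in the proposed spec, and a single surviving host would need more than double that, which is the failover concern raised below.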

Also, I would suggest something like the Dell MD1000, which allows up to 4 servers to make concurrent connections to shared storage, instead of iSCSI.

Lastly, what are your plans for redundancy in case one of the hosts fails? This is where the RAM consideration needs to be increased: if server A fails and server B takes over your guest OSes, you would be out of RAM.

Your processors will be able to handle approximately 6-8 VMs per core (conservatively), so that's not a problem. I currently use HP G6 boxes. If your network switch supports it, also consider teaming your NICs and creating a trunk port for increased throughput to your VM environment.
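A quick sanity check of that 6-8 VMs per core rule of thumb, using the core counts from this thread:

```python
# CPU density check using the "6-8 VMs per core" rule of thumb stated
# above. Hardware figures are from this thread (2 hosts, 2x quad-core
# E5630 each); the per-core density range is the commenter's estimate.
hosts = 2
cpus_per_host = 2
cores_per_cpu = 4
vms_needed = 23

total_cores = hosts * cpus_per_host * cores_per_cpu
capacity_low = total_cores * 6
capacity_high = total_cores * 8
print(f"{total_cores} physical cores -> roughly {capacity_low}-{capacity_high} "
      f"light VMs; this build needs {vms_needed}")
```

By that estimate, CPU is comfortably oversized for 23 VMs; RAM and storage I/O are the binding constraints here.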
Well, it depends on what the machines will be doing. You are covered for RAM and most probably for CPU. The main problem would be I/O, especially if you are hoping to run the VM disks from the iSCSI storage; that would cause lots of I/O operations. Even the best iSCSI devices get around 100Mbps. It would look a bit better if it was a Fibre Channel attached SAN.
baycomp (author) commented:
What if the storage server has 2x 1Gb NICs with link aggregation? That would give a total of 2Gb; will this be enough?
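For reference, the raw bandwidth math (theoretical figures; real iSCSI throughput is lower after TCP and protocol overhead, and note that link aggregation balances per flow, so a single iSCSI session is still limited to one link's speed unless MPIO or multiple sessions are used):

```python
# Gigabit link math for the 2x 1Gb aggregation question. All figures are
# theoretical line rates, not measured iSCSI throughput.
link_gbps = 1
links = 2

mb_per_link = link_gbps * 1000 / 8   # ~125 MB/s theoretical per link
aggregate = links * mb_per_link      # only reachable with multiple flows/MPIO

print(f"Per link: {mb_per_link:.0f} MB/s theoretical")
print(f"Aggregate across {links} links (multiple flows): {aggregate:.0f} MB/s")
```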

oztrodamus commented:
"Even the best iSCSI devices get around 100Mbps" is completely inaccurate, to say the least. I just saw an Equallogic array push 1Gbps of throughput last week. Granted, it was a hybrid SSD/15k RPM SAS array, but the point stands. There is nothing wrong with using iSCSI. Low-end FC is highly expensive for what little it offers.

NAS for VDI? I'm not sure how that's going to work. NAS is terribly slow for this, given that it's a file-level protocol designed for file storage. At a minimum, I think you'll need some type of DAS with 15k SAS drives to get the required IOPS.
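As a back-of-envelope check on that IOPS requirement (the per-desktop IOPS range and per-drive figures below are common planning ballparks, not measurements from this thread, and they ignore boot storms, which are far worse):

```python
# Rough spindle count for 20 VDI desktops. Assumed: ~10-20 IOPS per
# Windows 7 desktop in steady state (a common VDI planning figure), and
# typical per-drive IOPS ballparks by spindle speed.
desktops = 20
iops_per_desktop = (10, 20)   # (low, high) steady state; boot storms are higher
drive_iops = {"7.2k SATA": 80, "10k SAS": 125, "15k SAS": 175}

need_low = desktops * iops_per_desktop[0]
need_high = desktops * iops_per_desktop[1]
print(f"Steady-state demand: {need_low}-{need_high} IOPS")
for drive, iops in drive_iops.items():
    spindles = -(-need_high // iops)   # ceiling division
    print(f"{drive}: ~{spindles} spindles to cover {need_high} IOPS")
```

This ignores RAID write penalties, which push the real spindle count higher; the RAID discussion further down covers that.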

Since it sounds like money is an issue I think you would be better off with thin clients connecting to a couple of virtual terminal servers.
baycomp (author) commented:
Money is not the biggest problem; I just don't want to overkill / overprice the job.
How would an HP StorageWorks P2000 G3 MSA FC/iSCSI Dual Combo Controller SFF Array (AW568A) go, using the iSCSI ports?
Looking at the quick specs for that unit, there isn't much difference in performance between FC and DAS, with DAS being slightly better. The iSCSI performance specs are terrible and look like an afterthought.

Now, whether you decide to go FC or DAS is up to you. I don't know anything about XenDesktop, but I don't see why the DAS option wouldn't work. It would certainly be the cheapest option and the easiest to administer over the long run. Just parcel out the drives based on the RAID type for projected I/O demand: VDI workstations get RAID10; servers and storage get RAID5.
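The reason RAID10 suits the write-heavy desktop workload can be shown with the standard RAID write-penalty math (a sketch; the drive count, per-drive IOPS, and 70% write mix below are assumed illustrative figures, though the penalties themselves, 2 for RAID10 and 4 for RAID5, are the textbook values):

```python
# Effective front-end IOPS after the RAID write penalty. Each write costs
# `penalty` back-end I/Os; each read costs one. Penalties: RAID10 = 2,
# RAID5 = 4 (standard values). Drive count, per-drive IOPS and write mix
# are assumed examples.
def effective_iops(spindles, per_drive_iops, write_pct, penalty):
    raw = spindles * per_drive_iops
    return raw / ((1 - write_pct) + write_pct * penalty)

# Example: 8x 15k SAS drives at ~175 IOPS each, 70% writes (a commonly
# cited steady-state VDI write mix).
for name, penalty in (("RAID10", 2), ("RAID5", 4)):
    result = effective_iops(8, 175, 0.7, penalty)
    print(f"{name}: ~{result:.0f} effective IOPS from 1400 raw")
```

With a write-heavy mix, the same spindles deliver noticeably more front-end IOPS under RAID10 than RAID5, which is why the desktops go on RAID10 and the less write-intensive server/storage volumes can live on RAID5.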


baycomp (author) commented:
Yes, I can see what you mean from the HP specifications; iSCSI is very slow. I don't know much about DAS; can you recommend one?
nappy_d commented:
Look at something like the Dell MD1000 or an HP EVA.

I would not recommend DAS, as it is not shared storage. What this means is that if one of your two hosts fails, all the machines running on that host become unreachable.

If you configure a failover solution using a SAN-type solution, you can fail guests over between virtual hosts.
baycomp (author) commented:
Thanks for your help, nappy_d. As DAS is only direct-attached storage, and iSCSI is too slow, would the HP StorageWorks X1000 Network Storage System with FC be OK? (P2000 G3 MSA Fibre Channel Dual Controller SFF Array System (AP846A).) See: http://h10010.www1.hp.com/wwpc/us/en/sm/WF06b/12169-304616-241493-241493-241493-4118559-4118563-4118565.html
baycomp (author) commented:
Sorry, not the X1000; I mean the HP StorageWorks P2000.
nappy_d commented:
That is definitely a good option for shared storage between hosts, and it's fast! When using an option like this, all your hosts require is two 146GB drives each. You could even use two 72GB drives and mirror them. This is just for the XenServer OS.
oztrodamus commented:
If shared storage is what you're truly after, you should also look at the Dell MD3000i or, better, a Dell Equallogic PS4000VX. The MD3000i would be the much cheaper option. You can add up to 2 additional MD1000 arrays to the MD3000i if you need the IOPS or storage space.


I suggested using DAS earlier, because it was never mentioned that failover was a requirement.

Also, there is absolutely nothing wrong with iSCSI as a technology. The only thing slow about iSCSI is a given vendor's implementation of it. There are Equallogic units that make some EMC Clariions look like paperweights.
nappy_d commented:
The Dells are great choices, but one thing I always try to avoid is mixing technologies in an end-to-end solution. By not mixing vendors (when possible), you limit or remove the finger-pointing that may occur over where the issue lies.

That said, if you go with Dell storage, buy Dell servers. The same can be said for HP products.
Qlemo commented:
This question has been classified as abandoned and is being closed as part of the Cleanup Program.  See my comment at the end of the question for more details.