Starting a VDI (VMware View) project: need some advice regarding hardware

Posted on 2009-04-21
Last Modified: 2012-05-06
Our company is considering going all-in on VDI for all our clients. We already use VMware for all our servers, and I hope our current hardware (with RAM and storage upgrades) can handle the clients.

Here are our specs:
1x HP BladeSystem c3000 with 3x BL460c G1
      each BL460c G1 has 2x gigabit LAN ports, 2x Intel quad-core 2.33 GHz, and (only) 4 GB RAM.
The blade enclosure is connected to a
1x HP AiO600 with 1.2 TB of hard disk space; this AiO is our primary (and only) storage, and it uses iSCSI.
All of this is connected through a single HP ProCurve 2810-24G (J9021A) (gigabit) switch.

We have a basic VMware Enterprise license (it was bundled with the blade).
I use VMotion, HA, VirtualCenter, and Update Manager without problems, and everything is working.

We are planning to run about 90 Windows XP VDI desktops on this system; I know we need more RAM and more storage space.
All clients should be able to run MS Office (any version), surf the internet, and run a very small homemade journal system.

Can this even be done? And what do I need to make it work?
Any pointers on software and hardware problems I will encounter would be greatly appreciated.

thanks for your time.

Question by:canadus
    Accepted Solution

Without going into the blade-versus-server discussion or understanding the capacity of your existing blade enclosure, you really need to start capturing performance stats.
    Where I would start:
    Look at the current utilisation of your ESX farm's CPU, memory, disk IO, and network throughput. At most, average these to 80% of their possible utilisation (you rarely get 100% utilisation without seeing significant performance degradation). What I mean by this is: if you have a 1Gb NIC, assume it's capable of 800Mb.
    Run Perfmon on 10 to 20 PCs in your environment for the same metrics: CPU, memory, disk IO, and network throughput.
    This is a very rough routine to determine where your shortfalls are and what needs to change.
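A quick sketch of that routine in Python: scale sampled per-desktop Perfmon averages up to 90 desktops and compare against farm capacity derated to 80%. Every per-desktop and array figure below is an illustrative assumption, not a measurement from this environment; the farm figures come from the specs in the question.

```python
# Rough headroom check: scale assumed per-desktop Perfmon averages
# to 90 desktops and compare against the 3-blade farm derated to 80%.

DESKTOPS = 90
DERATE = 0.80  # treat 80% of raw capacity as the usable ceiling

# Hypothetical per-desktop averages (placeholder values):
per_vm = {
    "cpu_mhz": 150,    # average MHz for a light XP desktop
    "ram_mb": 512,     # working set incl. XP + Office
    "disk_iops": 8,    # steady-state IOPS
    "net_mbps": 0.5,   # network throughput
}

# Raw capacity of the 3 blades (2x quad 2.33 GHz, 4 GB RAM, 2x 1Gb NIC each);
# the iSCSI array IOPS ceiling is an assumption:
farm_raw = {
    "cpu_mhz": 3 * 2 * 4 * 2330,
    "ram_mb": 3 * 4 * 1024,
    "disk_iops": 1000,
    "net_mbps": 3 * 2 * 1000,
}

for metric, demand_each in per_vm.items():
    demand = demand_each * DESKTOPS
    usable = farm_raw[metric] * DERATE
    verdict = "OK" if demand <= usable else "SHORTFALL"
    print(f"{metric:10s} demand={demand:>8.0f} usable={usable:>8.0f} {verdict}")
```

With these placeholder numbers, CPU comes out fine but RAM is the glaring shortfall, which matches the "only 4 GB per blade" concern in the question.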

Of course, then there's the entire discussion around redundancy. As you're obviously aware, you don't have redundancy at a number of core points of failure.
    I would think you would want at least another pair of 2800-series switches, at least another two paths to your storage array, and bucketloads of RAM (you need to avoid ballooning, as it creates IO, and over iSCSI you'll already be IO-limited).
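To put a rough number on "bucketloads of RAM", here is a back-of-envelope target for avoiding ballooning. The per-VM allocation, per-VM overhead, and per-host reserve are all illustrative assumptions:

```python
# Back-of-envelope RAM target to avoid memory overcommit / ballooning.
# All per-VM and overhead figures are placeholder assumptions.

VMS = 90
RAM_PER_VM_MB = 768        # assumed XP + Office allocation
OVERHEAD_PER_VM_MB = 64    # assumed per-VM virtualisation overhead
HOST_RESERVE_MB = 1024     # assumed Service Console / hypervisor reserve
HOSTS = 3

need_mb = VMS * (RAM_PER_VM_MB + OVERHEAD_PER_VM_MB) + HOSTS * HOST_RESERVE_MB
have_mb = HOSTS * 4 * 1024  # 3 blades x 4 GB today

print(f"needed ~{need_mb/1024:.0f} GB, have {have_mb/1024:.0f} GB, "
      f"short by ~{(need_mb - have_mb)/1024:.0f} GB")

# For HA you also want the load to fit on HOSTS-1 blades (N+1):
per_host_gb = need_mb / 1024 / (HOSTS - 1)
print(f"with one blade down, each survivor needs ~{per_host_gb:.0f} GB")
```

Even with modest assumptions, the gap between the RAM you'd want and the 12 GB currently in the farm is large, and HA failover makes it worse.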

If you want a personal opinion, I prefer servers to blades.
    I'd want 4x DL385 G5 or DL360 G6.
    Per server: 2x quad-core, 32GB RAM, P400/512 controllers, 2x 72GB SAS HDDs, 2x NC360T dual-port NICs, redundant power and fans.
    A 5th server (it can be an older server, but physical) for the VirtualCenter and VCB functions.
    Each server would address a minimum of 3 VLANs:
    1 VLAN for management and VMotion, using 2 NIC ports (1 port to each switch)
    1 VLAN for IP storage (use NFS if your SAN is capable), using 2 NIC ports (1 port to each switch)
    1 VLAN for guest session network traffic, using 2 NIC ports (1 port to each switch)

But before I even thought about my wish list, I'd get those performance stats together so I could talk to the bean counters about the budget to achieve the target.


    Author Comment

Thanks a lot for your answer; your time is appreciated.

    I have already started collecting performance data for the clients, and I am working on doing the same for the servers.
    I will be using blades, as I already have those (not that I don't like normal servers, mind you) :)

    I am hoping the servers will be able to host the 90 clients after getting some more RAM and maybe another LAN card.

    Do you guys have an idea of what problems I will encounter hosting the 90 VDIs?
    Again, they will be fairly small, with only Office and internet browsing.

    Thanks markzz, your post got me thinking :)


    Assisted Solution

    If you want this to work and not suck, I would buy at least 2-4 more blades with identical specs.

    If you are low on funds, look on Ebay for blades.

    Your current setup will probably run 90 XP Machines, but the experience will be painful at times.
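The "painful at times" verdict follows from simple per-blade density math, using the specs from the question (the only assumption here is spreading the VMs evenly across the three blades):

```python
# Per-blade density on the current hardware: 90 XP VMs across
# 3x BL460c G1 (2x quad-core, 4 GB RAM each).

VMS = 90
BLADES = 3
RAM_PER_BLADE_MB = 4 * 1024
CORES_PER_BLADE = 8  # 2x quad-core

vms_per_blade = VMS / BLADES
ram_per_vm = RAM_PER_BLADE_MB / vms_per_blade
vms_per_core = vms_per_blade / CORES_PER_BLADE

print(f"{vms_per_blade:.0f} VMs per blade -> ~{ram_per_vm:.0f} MB RAM each, "
      f"{vms_per_core:.1f} VMs per core")
```

Roughly 137 MB of physical RAM per XP desktop is far below a comfortable allocation, so heavy ballooning and swapping would be unavoidable without the RAM upgrade.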

    I also suggest getting one more network switch to place vMotion / iSCSI traffic on (use 2 vlans).

    I hope you are charging your clients enough to pay for your infrastructure upgrade. :)

    Author Closing Comment

Thanks for your help; you got me started on this project.
    It is still rolling, and I hope it will turn out well.
    Thanks, mates.
