SAN recommendations for small Hyper-V VDI deployment

Hi there

We are in the process of setting up a new VDI environment based on Hyper-V 2012 R2.  Not very big - 40-60 desktops maximum, using standard desktop apps (a bit of Google Earth and mapping, etc., so we cannot use RDS session hosts).  No huge I/O-generating apps.

I haven't bought a SAN for a number of years, so I am after some recommendations.  Typically the VMs would be about 25GB.  Budget: ideally as cost-effective as possible, but I have earmarked £20k ($30k) as a ceiling.  We currently have a Dell MD3200i for an RDS deployment and I have been relatively impressed by it, but I'm sure things have come on since then, so I would be looking at other options.
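To put rough numbers on the requirement, here is a back-of-envelope sizing sketch. The 25GB VM size and 40-60 desktop count come from the question above; the headroom factor and per-desktop IOPS figures are my own assumptions (light office workloads are often quoted in the 10-20 steady-state IOPS range, with boot/login storms several times higher), not figures from this thread:

```python
# Back-of-envelope VDI storage sizing.
# desktops and vm_size_gb come from the question; the rest are assumptions.
desktops = 60          # worst case from the question
vm_size_gb = 25        # typical VM size from the question
overhead = 1.3         # assumed 30% headroom for snapshots/patching/growth

capacity_gb = desktops * vm_size_gb * overhead

# Assumed per-desktop IOPS for light office workloads.
steady_iops = desktops * 15   # steady state
storm_iops = desktops * 50    # boot/login storm

print(f"Capacity needed: ~{capacity_gb:.0f} GB")
print(f"Steady-state IOPS: ~{steady_iops}, boot-storm IOPS: ~{storm_iops}")
```

On these assumptions the array only needs to serve around 2TB usable and a few thousand IOPS at peak, which is well within reach of a modest hybrid or all-flash box.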

Thanks in advance for any pointers.

Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
Do you need to purchase a SAN? Most organisations are moving away from SANs and investing in Direct Attached Storage, which provides lower latency and higher IOPS, especially for VDI deployments.

e.g. SanDisk Fusion-IO Cards (flash).

Or look at Tegile, Nimble, or Tintri storage, which are flash-based or hybrid arrays, rather than traditional slow spinning disks!

If your server has to communicate across the LAN to reach storage, there is added latency.

John TsioumprisSoftware & Systems EngineerCommented:
Whatever your decision is, try to get an actual demonstration of the product working... specs are nice, but real life is even better...
Philip ElderTechnical Architect - HA/Compute/StorageCommented:
We have a number of SMB/SME setups in this blog post: Our SBS Options with Standalone and Cluster Hardware Considerations.

Your best option would be number four: an asymmetric cluster with 2 nodes and 1 JBOD via the Intel Server System R1208JP4OC 1U E5-2600v2 single-socket series.

Why this setup? Because no Tier 1 vendor out there offers a single-socket E5-2600 series server. They'll be more than happy to sell you a dual-socket unit with a single processor in it, though.

We have _a lot_ of these asymmetric setups out there.
Bryant SchaperCommented:
I would personally say to stay with a SAN; direct-attached will have limitations.

Nimble, EMC, and Coraid make some great products.

We chose EMC because of their large ownership of VMware - no vendor finger-pointing. Obviously with Hyper-V that would not be the case.

Andrew, if your clients are opting for DAS, how are they handling HA? Just curious; it seems to be the biggest shortfall of not using shared storage.
Philip ElderTechnical Architect - HA/Compute/StorageCommented:

Please outline the limitations?

In our experience iSCSI is painful and adds significant latency even on a 10GbE setup.

DAS = SAS directly connected via four cables in the case of a two-node cluster. There are two cables and two SAS HBAs per host in our cluster setups.

Each cable carries four 6Gbps SAS lanes for a total of 24Gbps _per_ SAS cable. That's 96Gbps of virtually zero-latency I/O for a two-node cluster via four SAS cables.
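The bandwidth arithmetic above checks out; here is a minimal sketch of it (note this uses the raw 6Gbps line rate from the comment and ignores 8b/10b encoding overhead, which would reduce real-world throughput):

```python
# SAS bandwidth check for the two-node DAS cluster described above.
lanes_per_cable = 4
gbps_per_lane = 6       # raw 6Gbps SAS line rate (ignores 8b/10b overhead)
cables_per_node = 2     # one per SAS HBA
nodes = 2

gbps_per_cable = lanes_per_cable * gbps_per_lane   # 24 Gbps per cable
total_cables = cables_per_node * nodes             # 4 cables in the cluster
total_gbps = gbps_per_cable * total_cables         # 96 Gbps aggregate

print(f"{gbps_per_cable} Gbps per cable, {total_gbps} Gbps across the cluster")
```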

[Image: pair of SAS cables saturated]
The image above shows a single node with dual 6Gbps SAS HBAs, one Intel JBOD (JBOD2224S2DP) with dual expanders, and 24 SAS SSDs (HGST SSD400a), all in a Storage Spaces cluster configuration. That number - 377K IOPS - is the actual saturation limit of the pair of SAS cables at 6Gbps in our testing.

For simplicity in hardware layout and actual configuration, IMNSHO, nothing compares to SAS based DAS for clustered storage.
Bryant SchaperCommented:
I am not questioning performance; how do you handle fault tolerance, high availability, and failover?

This is certainly a design worth looking into.
Philip ElderTechnical Architect - HA/Compute/StorageCommented:
The setup is fault tolerant all the way through to the individual disk, which is the only single point of failure.

Two servers
Two HBAs per server
Two cables per server
Two Expanders in the JBOD
Each drive is dual ported

Disk arbitration is taken care of by Storage Spaces within a hyper-converged configuration. Both Scale-Out File Server and Hyper-V clusters run on the two nodes. SAS disks are mandatory in this configuration.
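The redundancy in the list above can be sanity-checked by enumerating paths from one host to a disk. This is a toy model under my own assumption of the wiring (each HBA/cable pair reaching one JBOD expander, and each dual-ported disk presenting one port per expander - the thread does not spell this out):

```python
# Toy model of one host's SAS paths to a dual-ported disk.
# Assumed wiring: HBA1 -> cable1 -> expanderA -> disk port A,
#                 HBA2 -> cable2 -> expanderB -> disk port B.
paths = [("HBA1", "cable1", "expanderA", "portA"),
         ("HBA2", "cable2", "expanderB", "portB")]

def surviving_paths(failed_component):
    """Paths still usable after a single component fails."""
    return [p for p in paths if failed_component not in p]

components = {c for p in paths for c in p}
# Any single HBA, cable, expander, or disk-port failure leaves one path up.
assert all(len(surviving_paths(c)) == 1 for c in components)
print("No single component failure severs the host from the disk")
```

Only a failure of the disk itself (both ports) would sever access from that host, which matches the comment above that the disk is the single point of failure.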
Bryant SchaperCommented:
Thanks, I was doing a bit of reading just now too - how well does it scale out?

Say I have 10 ESX hosts - what would the expectation be? I see a lot of setups that are dual servers, but not a small data centre's worth. I am loving the concept, however, especially with a lower price point than a small SAN has.
chris3879Author Commented:
Thanks everyone.  We ended up going with a Tegile T3100 in the end as we got a cracking deal on it and it demoed better than the Nimble box in a similar price range.