mhentrich asked:
Best External RAID Controller for large Virtualization Move

Experts,

Need some advice this time.  I'm trying to write a proposal for a massive virtualization move for our medium-size business.  We have about 100 desktops and 20 servers running here, and we've already virtualized a few production servers onto ESXi 4.1.

I'm looking at asking the company to purchase some very heavy-duty servers so we can virtualize pretty much everything.  They would look as follows:

Dell PowerEdge R815
128 GB RAM
4x 12-core AMD Opteron 6172 processors
SFP+ NIC with Transceivers

Then, we come to storage.  I'd like to go this route:
Habey DAS DS-1280 (we currently use one of these, and I like it)
12x 600 GB 6 Gbps 15k SAS drives
LSI SAS Switch

That brings me to the big Q's: external RAID cards.  Currently we're using the relatively cheap HighPoint 4322 to connect to our Habey DAS, and it works fairly well.  If we're going to do such a large-scale initiative, though, I'd like something more robust.  We're talking about hosting up to 100 desktop VMs on one host and 20 server VMs on another, all on a 10G backbone.

So, my Q's are:

- First, given the hardware specs I listed above, am I overextending by trying to host this many VMs on a given host?  This is the kind of thing I'd like to demo first, but can't, because it would be too expensive to round up the hardware for a solid test run.

- Second, if the host hardware is good to go, what external RAID card would best facilitate communication between the DAS and the host?

I appreciate your comments and opinions, with a caveat: I don't need to hear that you think I should use Citrix, Hyper-V, etc.  My questions are really hardware specific, not software.

Thanks!
Matt
David replied:

First, you are throwing your money away buying a SAS switch.  Second, you don't just decide how many spindles you need based on the number of systems; you decide it based on the load.  Are these servers doing database work, or are they compute servers, for example?

Run some numbers using perfmon or something, and see what average throughput and I/Os per second each of the servers requires.  That is a good starting point.
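
If you want a quick way to grab those numbers, here's a minimal sketch in Python using the third-party psutil library (my suggestion, not something required; perfmon/typeperf on Windows will give you the same counters).  It averages disk IOPS and throughput over a one-minute window:

# Minimal sketch: average disk IOPS and throughput over a sampling window.
# Assumes psutil is installed (pip install psutil); run it on each server
# during a busy period to get a baseline.
import time
import psutil

INTERVAL = 60  # seconds to sample

start = psutil.disk_io_counters()
time.sleep(INTERVAL)
end = psutil.disk_io_counters()

reads = end.read_count - start.read_count
writes = end.write_count - start.write_count
read_mb = (end.read_bytes - start.read_bytes) / 1e6
write_mb = (end.write_bytes - start.write_bytes) / 1e6

print("Avg read IOPS:  %.1f" % (reads / INTERVAL))
print("Avg write IOPS: %.1f" % (writes / INTERVAL))
print("Avg throughput: %.2f MB/s read, %.2f MB/s write"
      % (read_mb / INTERVAL, write_mb / INTERVAL))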
mhentrich (asker) replied:
Gotcha, thanks.  Out of curiosity, why would I be throwing money away on a SAS switch?  All the reviews I've read have been positive; my thinking is that if we start connecting our DASes and servers to a SAS switch, we'll give ourselves expandability later (we can just add another DAS to the switch if needed, then patch it through to the server or servers that need the storage).

Follow-up Q: I'm more concerned about the host with the 100 desktop VMs than the server host, because none of the servers in question are getting hammered very heavily.  We do have 2 SQL Servers that take a hit, but I'm not sure we'd be virtualizing those just yet.

So, does the concept of basing on load still apply to the host running the desktops?  We're talking about user PCs, mostly running Win 7 Pro, just doing daily office tasks.

Lastly, any thoughts on a good external RAID card?

Thanks!
Matt
I would consider using SSDs, such as Fusion-io drives, if you are going to be hosting 100 virtual desktops.
Hey, good to hear from you again.  I'd love to use SSDs, but we're talking about around 10 TB of data total, so we'd have to spend 50k just on the drives (out of budget).  My original plan has us at 24 15k SAS drives in two RAID 6s, which I'm hoping would give us enough speed.  Your thoughts?
Why would you be using 10 TB of data for desktop virtual machines?  If you were using VMware View and Linked Clones, you would save storage for virtual machines; that's why Linked Clones are good.

They are linked to a single parent VM, hence the reduced storage requirement!
Also remember that with Fusion-io drives you get IOPS you are NOT going to be able to provision in a SAN/DAS; there is also no requirement for switches or cables, and less power, air con, and electricity in your overall costs.
What you need to consider is the IOPS required for

1. 100 Desktops
2. 20 Servers

The profiles for the two are different.  Servers usually just sit there serving... (you do not often restart servers).

100 Desktops, with users logging on and off, restarting is a completely different IOPS workload than 20 servers. Be careful that your datastores can cope with this load.

Personally, I would avoid RAID 6; it's not good for write performance.  RAID 10 is a better selection.
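
To put rough numbers on that, here is a back-of-the-envelope sketch.  The per-spindle figure (~175 IOPS for a 15k SAS drive) and the 30/70 read/write mix are my assumptions, typical for VDI sizing; the write penalties (2 for RAID 10, 6 for RAID 6) are the standard ones:

# Effective IOPS for a 12-drive array under RAID 10 vs. RAID 6.
# Assumptions: ~175 IOPS per 15k SAS spindle, 30% read / 70% write mix.
SPINDLES = 12
IOPS_PER_DISK = 175
READ_RATIO = 0.3

def effective_iops(write_penalty):
    raw = SPINDLES * IOPS_PER_DISK  # 2100 raw backend IOPS
    # Each logical write costs write_penalty backend I/Os.
    return raw / (READ_RATIO + (1 - READ_RATIO) * write_penalty)

print("RAID 10: ~%d IOPS" % effective_iops(2))  # ~1235
print("RAID 6:  ~%d IOPS" % effective_iops(6))  # ~466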

Thanks for the input, Hanccocka.  I'll look into VMware View (unsure of the licensing costs).

Q's:
- I don't need 10 TB for the desktops; I need approximately 6 TB (60 GB per desktop, 100 desktops).  So, I misspoke earlier: the desktop host would be connected to 12x 600 GB 15k SAS drives (and the server host would be connected to the other DAS with the same specs).  So, while you've provided me some good info regarding Linked Clones and SSDs, I'm still searching for an answer to my original question of whether or not the 12x 15k 6 Gbps SAS drives could handle the load.

- I'm confused on your second post: if we can't build a SAN/DAS using SSDs, why would we want to use them for a large VM rollout?

- On your most recent post, you rightly stated that there would be different IOPS requirements on our two hosts (desktops and servers); however, I'm still looking for information as to whether the quoted servers could handle those IOPS requirements, generically speaking.

Thanks!
Matt
Note: Disregard question 2 Hanccocka, I misunderstood what you were saying!
Check out Fusion-io drives. (http://www.fusionio.com/)

I think you really need to look at why you need 60 GB per desktop!

They are not conventional SSDs; they are flash on PCI-E cards!

What you propose is fine for servers, but I think you'll not have the IOPS to support your 100 desktops.
Okay for your servers, although use RAID 10 with the 12x 600 GB 15k drives.

100 Desktops - No.
I know you do not want to discuss software.

But how are you going to provision 100 Desktops on VMware?

What Desktop OS ?

32 bit or 64 bit?

How much RAM per VM Desktop?
The desktop OS would be Win 7 32-bit for most, Win XP SP3 32-bit for others, with 1-2 GB per VM.  Is there a reason ESXi can't support 100 VMs, or are you saying the hardware won't support it?  The host would have 48 cores and 128 GB of RAM.
ASKER CERTIFIED SOLUTION from aleghart (content available to Experts Exchange members only)
Aleghart: interesting, could you clarify/expound on that?  Thanks!

Hanccocka: I'll be looking into your suggestions, but my first go-arounds are coming up a bit cost-prohibitive: we'd have to drop 25k just on the VMware View licenses, without starting in on the SSDs, which I can't find a listed price for (never good).
Okay, these are some real-world stats taken from our college-based deployment of VDI:

Dell R810, 128 GB, 24 cores: handles 80-100 concurrent VMs per host server.
Software deployed: Windows 7, Microsoft Office, to college students.

Dell R610, 48 GB, 8 cores: 20 desktops comfortably.
Software deployed: Windows XP, 1 vCPU, 1 GB RAM, Microsoft Office, to college students.

We are just upgrading and testing the above environment to Windows 7, 2 vCPU, 2 GB RAM, and the load has dropped to 10 desktops per server.

You need to decide whether to do a Thin Client or a VDI solution.
If they're only browsing the web, it's less of a problem.

Booting up one user with a 4 GB+ cached Outlook mailbox is a lot of strain already.  Multiply that by 100 and you really need to know your IOPS for different times of the day.  If everyone boots up and opens email from 8:00-8:15 AM, you need some serious help from SSDs, or you need to disable the ability to cache.

Add some SQL queries that shuffle hundreds of MB of data to the client.  It will all add up.
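
To illustrate that morning window (the per-desktop figures below are my assumptions, in the range commonly quoted for VDI sizing, not measurements from this thread):

# Rough boot-storm arithmetic: steady state vs. the 8:00-8:15 window.
DESKTOPS = 100
STEADY_IOPS = 8    # assumed per desktop, idle office work
BOOT_IOPS = 40     # assumed per desktop during boot/login

print("Steady state: ~%d IOPS" % (DESKTOPS * STEADY_IOPS))  # ~800
print("Boot storm:   ~%d IOPS" % (DESKTOPS * BOOT_IOPS))    # ~4000

Set that ~4,000 IOPS figure against the roughly 1,200 IOPS a 12-spindle 15k RAID 10 delivers (see the earlier sketch) and you can see why SSD keeps coming up.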
Hanccocka: Please expound; I'm unsure what you mean.  We'd be using thin clients to access the virtual desktop infrastructure, so I'm confused about the need to choose.

Aleghart: We don't use cached Exchange, so that's no biggie, but we have some other medium-range intensive items.  So why would a traditional terminal server environment be superior?
Memory is the bottleneck on your proposed Dell server, and you would be much better off with smaller servers to spread the load, rather than putting it all on one server.

So if you intend to have 100 desktops and 20 servers, you will need more memory.
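
The arithmetic is worth spelling out, using the figures already in this thread (ESXi page sharing and ballooning can claw some of it back, but this is the raw demand):

# RAM demand for the desktop host vs. the proposed 128 GB.
DESKTOP_VMS = 100
GB_PER_VM = 2        # the 1-2 GB figure quoted above, worst case
HOST_RAM_GB = 128

demand = DESKTOP_VMS * GB_PER_VM
print("Demand: %d GB vs. host: %d GB" % (demand, HOST_RAM_GB))
print("Shortfall: %d GB, before hypervisor overhead" % (demand - HOST_RAM_GB))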

You can use thin clients to access VDI or a thin-client server session (Citrix or Terminal Services)!

but you did not want to discuss software!
Hanccocka: True enough, I did not want to talk about software; however, I thought it was being implied that ESXi had some limitation that would cause it to not be able to handle 100 VMs.  The plan is to use RDP to access the VMs.

Out of curiosity, if I wanted to use SSDs, do they have to be Fusion-io?  If we did use these super SSDs, would the RAM still be enough of a bottleneck to cause problems?
>why would a traditional terminal server environment be superior?

Boots up once a month, if even that.  I was envisioning full OS startup cycles for that many machines every morning, versus starting or resuming a TS session...lots more work if it's a full OS.
No, but Fusion-io is the best; you could use OCZ RevoDrives, which are cheaper!

Storage and Memory are different.

Memory will be your bottleneck.

No, ESXi has NO limitation; HARDWARE does!

Umm....RDP to access 100 Desktop Virtual Machines?

How will you manage them all?

Hanccocka: The same way we always have, WSUS, etc.


Guys, thanks for your input!  I'll mull over everything you've thrown out here.  I'm going to give this post a few more days for additional input before I award points as well.

Thanks!
Matt
I think you should investigate VMware View.
The new HP thin clients we just received can connect via VMware View in addition to RDP.  Pretty good little no-frills admin box: wide-screen capable (and rotation), with multiple remote sessions.  You can keep a whole list of remote servers, plus have a basic web browser... all without having a hard drive.
Guys,

I've slept on everything thrown out here, and I'm starting to think that a remote desktop environment might be more ideal for us than VDI, given our cost constraints.  I hadn't really considered that as an option, but the more I think on it, the more I like it.

I'm also now trying to think along the lines of minimizing our storage requirements so we can afford faster storage.  So, new Q's, for anyone listening:

1. Pretend we have a server like the one described above (between 32 and 48 cores, 128 GB RAM), but with a 1.2 TB 2nd-gen SSD in place of the DAS connection previously described.  How would that play with a 100-user remote desktop environment?  Specifically, I'm looking at:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820227724

Of course it's listed as having very fast read/write and high IOPS, but I'm unsure if they'd be enough.

2. If not, would there be any advantage to getting, say, two of these:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820227524

and configuring them in a RAID 0?  Also, how does one configure SSDs in a RAID 0?  Given that they are PCI-E, I can't imagine how one would make use of a controller card.

Thanks!
Matt
If you do not do virtual desktops, your storage solution is fine, but use RAID 10.

What you've got to realize with a 100-user desktop environment is that you WILL NOT get ALL 100 users concurrently on one single virtual server; you will need 4 or 5 terminal servers, plus 100 Remote Desktop CALs.
Good deal.  Any thoughts on what hardware requirements I'd have if I were setting up 5 terminal servers for 100 users?  I've been reading that Microsoft suggests 6 users per core given a medium workload, so if I'm only hosting 20 users per server, does that mean I only need 4 cores per server?  That seems light to me.
Your environment is different; it's virtual, so I would start with dual-vCPU virtual machines and 8 GB per virtual machine, for all your users.

But you should be able to gauge workload and performance once you've built and tested the first terminal server.
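
A minimal sizing sketch pulling those figures together (the 6-users-per-core guideline is the one quoted above; everything else is a starting point to validate against that first test server):

# Terminal-server sizing from the numbers discussed in this thread.
import math

USERS = 100
SERVERS = 5                 # 4-5 terminal servers suggested above
USERS_PER_CORE = 6          # Microsoft's medium-workload guideline
RAM_PER_TS_GB = 8

users_per_ts = USERS // SERVERS                          # 20
vcpus_per_ts = math.ceil(users_per_ts / USERS_PER_CORE)  # 4

print("Per TS VM: %d users -> %d vCPUs, %d GB RAM"
      % (users_per_ts, vcpus_per_ts, RAM_PER_TS_GB))
print("Host total: %d vCPUs, %d GB RAM"
      % (SERVERS * vcpus_per_ts, SERVERS * RAM_PER_TS_GB))

That lines up with your 4-cores-per-server math; the 2-vCPU figure is just a conservative baseline to grow from once you've load-tested.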
Hanccocka: Let me just confirm I understand your thinking: build one beefed-up host server, connect it to a DAS with 12x 15k SAS drives in a RAID 10, run 5 virtual servers on it (2 vCPUs, 8 GB RAM each), and have 20 or so users connect to each virtual terminal server.  That, in your opinion, would be sound, yes?

If so, any thoughts on a good controller card to connect the DAS to the host?  Do you think it would still be beneficial to pursue a second-gen SSD in place of the DAS, if the storage requirements are small enough?

Thanks again!
Matt
SOLUTION (content available to Experts Exchange members only)
I consider my questions answered!  Thank you guys so much for your input; I know it got long-winded :P