stevieko asked:
Virtual Server transition hardware, raid ssd recommendations

Hi,
We want to transition our company's 5 MS servers onto a heftier virtual server. I started studying vSphere a while back, but wanted advice on which route to take versus ESXi.
I know that if we pack everything onto one machine, we'd better have at least one backup, so perhaps two monster ESXi or vSphere servers? We don't want to hold back on performance when there are so many options available these days, so there will be some expansion backplane units using a good RAID card.
We've been with Areca for some years now. Never any problems with them; my only complaint is the CPU allocation. When you perform actions on a RAID volume, you're generally logged in to the web interface, and they leave no CPU resources for the web interface, so you're lucky if it lets you log in. Initializing/reconstructing volumes is also painfully slow in RAID 6 (40+ hours at least on the ARC-1680i and ARC-1880). Otherwise a high performer.
The other advantage of the virtual server is utilizing a 10Gb or 40Gb network backbone.

Once this transition is complete, we'll need to upgrade our domain controller from 2003 to something more recent. Not really in scope for this question, but just FYI.

I was thinking of building something around a hefty Supermicro motherboard with lots of ECC RAM. If not Areca, maybe LSI for the RAID controller. Stability is priority number one (pretty typical, I guess). I'd really like to know what people are using for SSD RAID layouts these days. Our capacity needs are not huge: I'd like to have the OSs on SSD, and we may only need a terabyte or two for all servers. For backup volumes we have Seagate enterprise conventional drives and could continue to expand with those. We currently use Macrium Reflect Server Edition (I think it's great).

So, the main question is how to mirror the server in the event of a server crash. We'll have Cisco managed VLANs on 10Gbit links. Depending on which route I take, ESXi or vSphere, I will go through (or finish) the CBT Nuggets or TrainSignal series for it.

Looking forward to the advice. Thanks in advance.
SOLUTION by gmbaxter (available to members only)
ASKER CERTIFIED SOLUTION by Andrew Hancock (VMware vExpert PRO / EE Fellow / British Beekeeper) (available to members only)
stevieko (ASKER):
That is super. Very insightful write-ups you made there, Hanccocka.
I hadn't come across the ioDrives before. Are they as stable as they are fast? Do you use two of these in a RAID 1 config?
I take it you use these for the OSs and let the RAID controller loose on data storage and backup volumes? Possibly share the backplane between both servers if on a budget?
A few minutes of downtime is just fine if it comes down to that. It's not a super mission-critical environment. There are no phones running through this either...yet. Possibly CUCM later.

I've got to match the last server I built a decade ago that is being replaced (the DC). It's never been down and never lost a byte of data. Oh, the pressure is on... haha
Good thing on the ESXi: I started a TrainSignal course on it and will now finish it.

Does the later version of ESXi support GPT/EFI and LUNs greater than 2TB now? I read somewhere that version 4 was pretty limited there.

I was looking at Supermicro motherboards that have 10Gbit NICs and IPMI on board. What great value if those NICs are any good, as they run over $300 on their own.
(Also assuming they're on the HCL.)
Do you build your own ESXi monsters? I intend to do so, but may change course (thanks nonetheless, gmbaxter).
If not, is there any particular unit you are fond of for this role?


Much obliged!
SOLUTION
There is no need to use them in a RAID 1 configuration; they are lightning fast, and there is nothing on the market as fast!

We actually use them for VDI solutions, for VMs that require massive IOPS, or for clients that do not want the complexity of a SAN, or do not have the money, the storage engineers, or the air con to run one!

If a few minutes of downtime are okay, split the "massive" server into two and use VMware HA.
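
Roughly, in PowerCLI, that two-host HA setup looks something like the sketch below (the vCenter, datacenter, cluster and host names are placeholders, not anything from your environment):

  # Sketch only: build a two-host cluster with VMware HA enabled
  Connect-VIServer -Server vcenter.example.local

  $dc      = Get-Datacenter -Name "HQ"
  $cluster = New-Cluster -Name "Prod" -Location $dc -HAEnabled

  # Add both ESXi hosts so HA can restart VMs on the surviving host after a failure
  Add-VMHost -Name "esx01.example.local" -Location $cluster -User root -Password 'xxxx' -Force
  Add-VMHost -Name "esx02.example.local" -Location $cluster -User root -Password 'xxxx' -Force

  # Admission control reserves enough spare capacity for the failover to actually fit
  Set-Cluster -Cluster $cluster -HAAdmissionControlEnabled:$true -Confirm:$false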

Max virtual disk (VMDK) size is still 2TB, but raw LUNs can now be supported up to 64TB.
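
As an illustration, presenting a large LUN to a VM as a physical-mode RDM in PowerCLI is roughly the following (the VM name and naa.* device ID are made-up placeholders):

  # Sketch only: attach a raw LUN (can exceed 2TB on ESXi 5.x) as a physical-mode RDM
  $vm  = Get-VM -Name "sql01"
  $lun = Get-ScsiLun -VMHost (Get-VMHost -VM $vm) -LunType disk |
         Where-Object { $_.CanonicalName -eq "naa.600508b4000971fa0000a00001230000" }

  New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName $lun.ConsoleDeviceName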

We do not build our own servers because of VMware support; we purchase HP and Dell.

In-house, for labs, research and development, we build our own solutions.
Has anyone had a bad experience with ioDrive cards?
Can you explain what you mean by not building your own server because of VMware support?
The HCL has individual components in its support list, doesn't it?

Thanks
A bunch of HCL components does not make a server on the HCL!

If you build a whitebox, have a VMware Support contract, and experience an issue with VMware ESXi on that whitebox, VMware Support first checks whether your server is supported on the HCL, i.e. whether it has been tested and verified for use with ESXi.

Your whitebox server will not have been tested and verified, because it does not exist on the HCL, and therefore VMware Support will only be able to advise and will not be able to help you fully.

It's a risk, and we see many EE questions here asking why something does not work on whitebox servers.

If this is a production system, we would not recommend this approach; whiteboxes are suitable for labs, research and development, but have no place in production environments for business.
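
If you do want to sanity-check a box against the HCL yourself, the details VMware Support looks at (make/model, ESXi build, controllers and drivers) can be pulled with PowerCLI along these lines (the host name is a placeholder):

  # Sketch only: gather the hardware details to compare against the VMware HCL
  $esx = Get-VMHost -Name "esx01.example.local"

  # Server make/model plus ESXi version and build
  $esx | Select-Object Name, Manufacturer, Model, ProcessorType, Version, Build

  # Storage/network controllers and the drivers actually in use
  Get-VMHostHba -VMHost $esx | Select-Object Device, Type, Model, Driver, Status
  Get-VMHostNetworkAdapter -VMHost $esx -Physical | Select-Object Name, Mac, BitRatePerSec
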
Yikes..
So do HP or Dell put the above ioDrive in, build the SANs, etc., and warranty it all?
Then VMware adds their layer of support based on the above warranty?

Or are you saying it's OK to add devices like the ioDrive after purchase of the OEM machine, as long as these are on the HCL? Then VMware will review the setup and determine the support scope.

And one other recommendation if possible: a 10-gigabit SAN suggestion on the same scale of impressiveness as the ioDrive? :D

How ESXi-ting.
If you're worried about $300 for a 10Gb NIC, you'd best look up the price of the HP 673642-B21 before you get excited about the ioDrive!
I never said I was worried.
The HP part you mentioned is about 5K.
The ioDrive above that I was looking at is about 10K
...we need at least two.

There's a difference between NEEDING something and letting money fly by for nothing.
The ioDrive component is on the HCL, but if it is used in a non-HCL server, VMware has its hands tied when it comes to supporting it.
So... it's OK to add the ioDrive in myself?
Or does the OEM need to, for VMware support/warranty purposes?
ioDrives are okay, but see my point about the risks of non-verified, unsupported servers.

That is, if you are bothered about taking out VMware Support and Subscription.
I hear ya.
I'm going through the TrainSignal training right at this moment, as a matter of fact. I will start understanding the needs soon. (I aborted the CBT Nuggets version of this; I couldn't stand the way the guy taught that one. No offense to him.)

Not sure how much support we'll need for ESXi. Hoping not much, really.
It is basically only going to be 2 ESXi host servers with HA and about 100 users; one server is a domain controller. The other 2 needed servers are a terminal server and a SQL server, each on another instance of MS Server 2003 Standard. And as mentioned, we'll upgrade the DC to Server 2012 once this vSphere setup is complete. So in this case I was thinking of the 320GB or 640GB SLC ioDrive models for these.
Haven't decided about central storage (mainly for users' documents and backups). I do have the existing server with a good RAID card connected to a backplane full of enterprise SCSI drives. I could drop in a 10Gb network card and install Openfiler, but I may actually open another question just about that.
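
For reference, pointing the ESXi software iSCSI initiator at an Openfiler target over that 10Gb card would look roughly like this in PowerCLI (the host name and target IP are made up):

  # Sketch only: enable software iSCSI and add the Openfiler box as a dynamic discovery target
  $esx = Get-VMHost -Name "esx01.example.local"
  Get-VMHostStorage -VMHost $esx | Set-VMHostStorage -SoftwareIScsiEnabled $true

  $hba = Get-VMHostHba -VMHost $esx -Type IScsi | Where-Object { $_.Model -like "*Software*" }
  New-IScsiHbaTarget -IScsiHba $hba -Address "192.168.10.50" -Port 3260 -Type Send

  # Rescan so any LUNs exported by Openfiler show up as candidate datastores
  Get-VMHostStorage -VMHost $esx -RescanAllHba | Out-Null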

Any potential hiccups I'd like to uncover now, find a solution, and outline this whole thing. I think I'm convinced about buying OEM servers with the ioDrives so far. I may even go blades while I'm thinking about it; if we end up needing a large surveillance system or something bandwidth/CPU hungry, I could pick up another blade or two. Not so important right now, just a thought.

I'm trying to outline licensing and costs for ESXi. Good lord, there are so many modules, plugins, yada yada.
You often need support when it all goes wrong, and VMware will not be able to help if you use non-certified, untested server hardware. It's a risk you will have to accept.
OK. I'm going to build a test lab. I picked up a Fusion-io ioDrive Duo for 330 bucks (one bad 320GB module of the two; otherwise it should be great for a lab).
Found an article on installing this card in ESXi 5.1.

Also picked up 2 HP DL160 G6 servers for about $500.
Everything was listed on the HCL. Though the Fusion-io card is not HP-branded, I'll give it a try for S&G. :)
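
For what it's worth, installing a third-party driver bundle like the Fusion-io one generally boils down to an "esxcli software vib install" on the host; through PowerCLI's Get-EsxCli (the V2 interface, which needs a newer PowerCLI) it looks roughly like this (the bundle path and host name are placeholders):

  # Sketch only: install a Fusion-io offline bundle already copied to a datastore, then reboot
  $esx    = Get-VMHost -Name "lab-esx01.example.local"
  $esxcli = Get-EsxCli -VMHost $esx -V2

  $esxcli.software.vib.install.Invoke(@{
      depot = "/vmfs/volumes/datastore1/fio-iomemory-offline-bundle.zip"
  })

  # The driver loads on the next boot; check the ioDrive shows up afterwards
  Set-VMHost -VMHost $esx -State Maintenance
  Restart-VMHost -VMHost $esx -Confirm:$false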

Thanks. There's enough here for me to get started. I'll mark this one complete.