eugene20022002 (South Africa)

asked on

Advice - VMware Hardware

Hi

I have about 60 servers that I need to virtualise, ranging from Exchange 2010 to many SQL 2008 R2 servers, etc.

The current disk space usage is 21.6 TB.

What I would like is hardware advice.

Preferably HP hardware, because we are changing all our equipment to HP.

What would be good VMware-certified servers? Make and model, plus any add-ons required, like extra 4-port cards and fibre cards to connect to the SAN, etc.

With regards to storage hardware:
I was thinking of 2 SANs - 1 with fast disks and 1 with cheap SATA disks.
The fast SAN will be for OS LUNs and servers requiring fast disk access.

The slow SAN will be for large servers not requiring that fast access.

What SANs would you experts recommend? Please specify add-ons required, cards, configuration advice, etc.
I'm looking for a complete solution.

Thanks.
Randy_Bojangles (United Kingdom)

The HP servers bit is pretty easy - anything that you buy new now (G7) is on the VMware HCL as are their associated options

http://www.vmware.com/resources/compatibility/search.php

No point in trying to spec exactly at the moment, as you need to do some capacity planning investigation with that number of servers to virtualise.

A lot depends on whether you want to scale up (big boxes such as the DL500 or DL700 series) or scale out by using more "lesser" boxes such as the DL300 or entry-level DL500 series.

Then you need to consider Intel or AMD and many other things

If you want an HP SAN then you're looking at either a P4000 (formerly LeftHand) iSCSI solution or an EVA (FC) solution, depending on price, replication, capacity, speed, etc.

eugene20022002 (ASKER)

Which is better in your experts' opinion - AMD or Intel? Which is more reliable, or does it not make much of a difference either way?

Do you know of any good tools that will do capacity planning? Like a box I put down that gathers information from all my servers for the purpose of virtualisation?

Intel and AMD both have pros and cons. Reliability is IMHO about the same, and with a fully redundant VM cluster you're less concerned about failure anyway.

Lots of DB people seem to say that AMD is better for complex calculations, but I'm a VM tech so I can't really comment on the performance of specific apps with specific CPUs.

Capacity Planner is a VMware tool http://www.vmware.com/products/capacity-planner/overview.html and you should try and find a local partner to look at it for you

I could do you a "rule of thumb" type guess, but for that size of environment you may want to spend the money on doing it right?

I would appreciate a "rule of thumb".
I would have loved to get a partner in to do a proper analysis, but unfortunately all my efforts to have that done have been denied and it's been left to me to do it myself.

I made a list of all servers with their max and average CPU, max and average memory, and disk size and usage, etc.
I could take out all the private info and attach my list here if that would help - it would be greatly appreciated.
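(As an aside, something along the lines of the sketch below could be run on each server to collect those figures automatically rather than by hand. It's a minimal illustration using Python's psutil library; the sample count, interval, drive letter and CSV layout are all assumptions, and the VMware Capacity Planner tool mentioned above is still the proper way to do this.)

# capacity_snapshot.py - rough per-server stats collector (illustrative sketch only,
# not VMware Capacity Planner). Run it on each candidate server and merge the CSVs.
import csv
import platform
import time

import psutil  # third-party library: pip install psutil

SAMPLES = 60           # assumption: one sample a minute for an hour
INTERVAL_SECONDS = 60

cpu_readings, mem_readings = [], []
for _ in range(SAMPLES):
    cpu_readings.append(psutil.cpu_percent(interval=1))   # CPU % over a 1-second window
    mem_readings.append(psutil.virtual_memory().percent)  # RAM % in use
    time.sleep(INTERVAL_SECONDS - 1)

disk = psutil.disk_usage("C:\\" if platform.system() == "Windows" else "/")

with open("capacity_snapshot.csv", "a", newline="") as f:
    csv.writer(f).writerow([
        platform.node(),                                   # server name
        max(cpu_readings),                                 # max CPU %
        round(sum(cpu_readings) / len(cpu_readings), 1),   # average CPU %
        max(mem_readings),                                 # max RAM %
        round(psutil.virtual_memory().total / 2**30, 1),   # installed RAM (GB)
        round(disk.total / 2**30, 1),                      # disk size (GB)
        round(disk.used / 2**30, 1),                       # disk used (GB)
    ])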
Have you considered blades?

Blades? In what sense? You mean server blades?

I do mean server blades - it would just cut down on a lot of cabling, etc., if you're starting from scratch.

I would not go down the blade route, as effectively ESX/vSphere has replaced most of the advantages of blades.

I have done an article on building an ESX whitebox, which is a self-built ESX server.
For instance, I bought an Intel server board with dual Xeon 1366 sockets for £200 and a quad-core Xeon with Hyper-Threading for £200, so in ESX it's an 8-core CPU with 16 GB RAM. The board will take 6-core Xeons, which would mean a 12-core CPU in ESX, and it can take two CPUs for a 24-core ESX server; it will take 144 GB RAM as well. You can also create virtual ESX servers inside of ESX, so you can create a cluster for HA, DRS and vMotion.

vSphere most definitely still has a place on blades - if you need more than 2 or 3 hosts, then blades save you a fortune on interconnects, cabling, etc.

They're not for everyone and do require an investment up front, but if you're starting from scratch, as the poster indicates he is, then they are a sensible possible option.

A whitebox VM setup is fine in the lab for testing, etc., but for production?
With "many SQL 2008 R2" servers?
With 20 TB+ of storage?

This is not the environment to be doing things on the cheap!

Again, ESX inside ESX is fine for testing (though of course with vSphere 5 due this summer it will be ESXi only, as ESX won't be supported), but it is not supported in production and would run like a dog anyway even if it was!

The poster needs some serious hardware for a production setup - not a lab environment.

I agree with Randy.
I'm talking production specs here. I'm looking at serious servers - not really a blade system, but more like the HP G7 range with something like 96 GB RAM per host, etc.

So I was looking for a complete suggestion, like:
try 5 of X server,
you will need these cards and add-ons,
then this storage for that amount of space, etc.

When it comes to SANs I'm not that clued up; I haven't really worked on them much.
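(For what it's worth, the back-of-envelope maths behind that kind of "X hosts of Y spec" answer usually looks something like the rough sketch below. Every figure in it is an assumption for illustration, not a measurement from this environment, so the real totals from the capacity data would need to be swapped in.)

# Rough vSphere host-count estimate - all workload figures below are illustrative assumptions.
import math

TOTAL_WORKLOAD_RAM_GB = 400   # assumed sum of average RAM in use across the ~60 servers
TOTAL_WORKLOAD_VCPUS = 150    # assumed total vCPUs to allocate
HOST_RAM_GB = 96              # e.g. a G7 host with 96 GB RAM
HOST_CORES = 12               # e.g. dual 6-core Xeons per host
VCPU_PER_CORE = 4             # common starting overcommit ratio for mixed workloads
RAM_HEADROOM = 0.8            # keep roughly 20% of host RAM free

hosts_for_ram = math.ceil(TOTAL_WORKLOAD_RAM_GB / (HOST_RAM_GB * RAM_HEADROOM))
hosts_for_cpu = math.ceil(TOTAL_WORKLOAD_VCPUS / (HOST_CORES * VCPU_PER_CORE))

hosts_needed = max(hosts_for_ram, hosts_for_cpu) + 1   # +1 host so HA can absorb a failure
print(f"RAM-bound: {hosts_for_ram} hosts, CPU-bound: {hosts_for_cpu} hosts, "
      f"buy {hosts_needed} with N+1 HA")

The pattern is the point rather than the numbers: size RAM and CPU separately, take the worse of the two, then add a spare host so HA has somewhere to restart VMs.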
Why use a blade then? Don't DRS and HA mean the blade is not viable?

VMware ESX always uses a SAN - iSCSI for instance - so the storage is not inside the ESX server. I use Openfiler for my SAN; I could get a NetApp filer, but not for my test lab.

My Xeon whitebox is a current Intel server costing £3000-£15000, so why is it not for production? The only thing missing is a redundant PSU; I could make that happen, but not for my test lab.

Why is virtual ESX inside an ESX server only for testing? It just means you can do a cluster. As a physical cluster the servers need to be exactly the same, so using virtual ESX servers will work.

HA guards against a physical failure of the box - if it's a blade then it can still fail.
DRS is simply a load balancing mechanism (it isn't simple - it's very clever - but that's its job in a nutshell), so it has nothing to do with what type of hardware you have.

vSphere (note again that ESX is dead as of this summer) needs shared storage to do the clever stuff (it actually doesn't HAVE to be a SAN, as NFS can be used for low-level jobs), but you can have a shared storage blade or attach the blades to a SAN fabric - and it's easier to cable than doing it with individual physical hosts.

Your Xeon whitebox you say is a £15000 box, and in the post before you said it was £400 for the board and chip - that's a lot of cash on a case and RAM? The big issue is that HP/Dell/IBM/whoever are supported on the HCL - a whitebox isn't. If you're looking at a production environment, then being unsupported on the hardware platform you have chosen is pointless unless money is really tight (which is why it works in a lab, which isn't production).

ESX inside ESX will run like a dog, as I said - it's fighting for resources with itself.

If you have it in your config and the host box goes pop, then you've lost the lot until you fix the hardware.

With a proper cluster (which you need in a production environment), separate physical servers (DLs, blades, whatever) mean that a failure of one piece of hardware will allow HA to kick in and use the other host(s) in the cluster.

And to say that the physical servers have to be exactly the same is to miss the whole point of virtualisation - they emphatically do not.

I'm not saying that blades are necessarily the answer - most of my VM solutions don't use them as it happens; I just wanted to propose them as an option if it's a greenfield site - big savings on power, cooling, cabling sprawl, etc.

ASKER CERTIFIED SOLUTION
Randy_Bojangles (United Kingdom)

This solution is only available to Experts Exchange members.

Thanks Randy, that was what I was looking for, so that I can get some quotes.
Sweet
Glad to be of help - if I was in SA I'd do you one myself :-)

I was just stating that I feel a blade setup is far too expensive these days, as they put into blades a lot that is done in software like vCenter.

If you're buying a lot of physical boxes then blades are often cheaper - it's just that fewer and fewer people need a lot of physical boxes now due to VM.

Progress eh :-)