
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 464

Hardware for Server Virtualization

We are looking at putting in VMware and have looked at the Dell PE R710 for the hosts, but now we are considering the Cisco UCS B-Series Blade Servers. Any thoughts on which way you'd go, pros/cons, and any recommendations from those with experience?
0
Asked by: bergquistcompany
3 Solutions
 
Prashant Shrivastava (Solutions Architect) Commented:
It depends on how much you are willing to spend. Blades are only worth considering when you plan to deploy more than 14 servers; below that, the comparative cost is too high.

The only real advantage you get from blades is rack space.
0
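To put rough numbers behind the break-even point Prashantmona describes, here is a quick Python sketch; every price in it is a made-up placeholder, not a Dell or Cisco quote:

    # Back-of-the-envelope break-even for blades vs. standalone rack servers.
    # All prices are hypothetical placeholders, not vendor quotes.
    RACK_SERVER_COST = 6500   # assumed price of one R710-class rack server
    BLADE_COST = 5000         # assumed price of a single blade (the blade alone is often cheaper)
    CHASSIS_COST = 22000      # assumed price of the chassis, power, and interconnects

    def costs(n_servers):
        rack = n_servers * RACK_SERVER_COST
        blade = CHASSIS_COST + n_servers * BLADE_COST
        return rack, blade

    for n in (2, 4, 8, 14, 20):
        rack, blade = costs(n)
        cheaper = "blades" if blade < rack else "rack servers"
        print(f"{n:>2} servers: rack ${rack:,}, blades ${blade:,} -> {cheaper} cheaper")

With these placeholder figures the lines cross somewhere past 14 servers; plug in your actual quotes to see where your own break-even lands.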
 
ddawson100 Commented:
A bit of a general question, so I'll toss out a general answer. Both are on VMware's HCL. Start with the configuration specs: types of processors, amount and type of RAM. Then look at the vendor. Whose products can you and your IT staff support best? If you don't already use Cisco products, or don't already use Dell products, steer away from that vendor for this project.

There are great advantages to both. Both offer fantastic support and lots of processor, RAM, and storage options. Cisco devices sit closer to your switching fabric, of course, and Dell is a pure hardware player, so it has lots of options there. If you're going with the higher-end bundles of ESX/i you'll have virtual Cisco switches available, but since you're asking I imagine you don't have a commitment to the Nexus line yet. If you're starting small or don't anticipate going too far (budget-wise, scaling, etc.) and you don't have a Cisco infrastructure in place, it will definitely be simpler to go with Dell hardware. If you are scaling large you can stay with Dell as well, obviously, but the entry level isn't going to be a Cisco platform.

In general, you're going with virtualization to make the hardware disappear. There are a lot of factors besides hardware specs that you'll want to consider. What's your network like now? IT staff size and skill sets? Are you going to use DAS or SAN? What percentage of the servers are Dell now? What percentage of the switching environment is Cisco?
0
 
Luciano Patrão (ICT Senior Infrastructure Engineer) Commented:
Hi

As Prashantmona said above, it depends. It depends on your budget, and also on what you plan to do.

Many companies use Cisco UCS B-Series for vCloud services. Using EMC UIM and vCloud Director (which work very well together) it is a very good choice.

But it is very expensive, and only needed if you need several VMware hosts.

If you plan to build only a cluster with 2 or 4 hosts, the Dell R710 works just fine, and the budget will be much lower compared with the Cisco blade option.

Jail
0
 
Luciano Patrão (ICT Senior Infrastructure Engineer) Commented:
Hi

Also, this question should be in the VMware area, not Exchange.

@ddawson100: both the Dell R710 and Cisco UCS are supported by VMware. Cisco UCS is one of the components that VMware, EMC, and Cisco use to build the VCE Vblock product:

http://www.vce.com/vblock/

http://virtualgeek.typepad.com/virtual_geek/2010/09/vcloud-director-is-here-with-emc-uim-right-behind-it.html

Of course, if bergquistcompany has the budget, they could also buy one of these. But this is... big money :)

Jail
0
 
kevinhsieh Commented:
I agree with the other experts. Every time I looked at blade servers they never made sense, because I never needed enough servers to make blades work out. They are not cost effective if you only need a few servers.
The pricing on the 16 GB DIMMs from Dell has come down to the point where they don't carry a large per-GB premium over the 8 GB DIMMs. You can pretty cost-effectively go to 192 GB of RAM on the R710.

My experience is that you don't need high-end processors for normal virtualization workloads like SQL, Exchange, IIS, file servers, etc. You need RAM and low-latency storage to run a bunch of VMs.
0
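As a rough illustration of the RAM-driven sizing kevinhsieh describes, here is a quick Python sketch; the per-VM and overhead figures are assumptions, not measurements:

    # RAM-driven consolidation estimate; per-VM RAM is an assumed average.
    HOST_RAM_GB = 192            # an R710 filled with 16 GB DIMMs, as mentioned above
    HYPERVISOR_OVERHEAD_GB = 8   # assumed reservation for the hypervisor itself
    AVG_VM_RAM_GB = 8            # assumed average RAM per consolidated server

    usable_gb = HOST_RAM_GB - HYPERVISOR_OVERHEAD_GB
    vms_per_host = usable_gb // AVG_VM_RAM_GB
    print(f"~{vms_per_host} VMs per host at {AVG_VM_RAM_GB} GB each "
          f"({usable_gb} of {HOST_RAM_GB} GB usable)")

With those assumed averages a single maxed-out host comfortably covers the whole 10-15 VM workload on RAM alone.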
 
ddawson100 Commented:
Virtualization is really just abstracted CPU, RAM, and storage, with an OS/hypervisor to allocate and prioritize access for guest OSs. The hardware you end up with depends on cost, which is really just a way of saying: what resources do we have, and what best fits our goals?

If you're just getting started or won't be setting up a local "cloud" solution, you won't have any regrets committing to a Dell-based solution. Give them a call. I bet they'd be more than happy to do some consulting to develop a one-year, two-year, or longer approach. You don't *need* high-end storage, by the way. DAS or the MD series are fantastic for starting out. Even if funds are unlimited, make sure you start with a good idea of where you'll be in two to three years, and buy and implement in stages.
0
 
bergquistcompany (Author) Commented:
We are a Dell shop at this point and are looking at the Dell PE R710, but somebody told us that we should really go with the Cisco UCS B-Series Blade Servers, and I'm trying to determine whether that can be justified to management. Understandably, the blades use less power, etc. I understand those benefits, but would it be better to have all your hosts in one chassis, and is there any key feature that would make you choose Cisco when we're used to Dell?

Just looking for opinions from those running VMware who have gone through hardware selection.
Thanks for all the feedback so far; I've taken lots of notes for the proposal.
0
 
Luciano Patrão (ICT Senior Infrastructure Engineer) Commented:
Hi

You need to tell us what your company plans to do with this VMware infrastructure.

How many hosts? How many VMs will be created? How big does the storage need to be?

What type of environment will this VMware infrastructure be used for?

And do you have a good budget?

Without all that information, it is just guessing.

A VMware cluster of 2-4 Dell R710 hosts is very good and reliable.

Jail
0
 
bergquistcompany (Author) Commented:
For consolidating Exchange, SQL, and mainly standalone server consolidation.
3 hosts; VMs will be created on an ongoing basis as we remove physical servers, but roughly 10-15.
We are looking at a 10 TB SAN for storage.
0
 
Luciano Patrão (ICT Senior Infrastructure Engineer) Commented:
Hi

So if you want to buy Cisco UCS only for this, in my opinion it is a waste of money.

With 1 VMware host you can easily hold 10-15 VMs (depending on the RAM you have in each host).

So with 3 hosts you could hold at least 45 VMs, and they will work just fine with 20 or more VMs on each host if you have the right amount of RAM.

So why spend money on blades that you will not use?

Jail
0
 
kevinhsieh Commented:
I agree with the other experts. No blade server system makes sense for you, because you won't have enough physical hosts to justify it. Cisco seems to have some interesting technology, but I don't see it as compelling in your case, and why switch to a vendor's product that you are not familiar with?

Beware the VMware tax that forces you into more expensive licensing options as you increase the RAM in your cluster. Hyper-V may be a better option for you.
0
 
bergquistcompany (Author) Commented:
Agreed! We just want to make sure going into it that we are doing what is best for growth, etc.
We are familiar with Dell, and from what I understand, if we use Fault Tolerance we can only have 4 FT VMs per host. If that is correct, we could end up adding hosts, which feeds the argument for Cisco: newer technology, power savings, and lower cost to grow. But obviously new technology also means more administrative complexity, so I'm not sure that is a compelling reason to propose one over the other. Are power savings and lower cost to add really enough, and is 4 FT VMs per host truly the maximum in your experience?
0
 
Luciano Patrão (ICT Senior Infrastructure Engineer) Commented:
Hi

Even though this is not directly related to the question: with FT you can only have 4 FT-enabled VMs on each host.

Check here:

http://communities.vmware.com/blogs/vmroyale/2009/05/18/vmware-fault-tolerance-requirements-and-limitations

Also, FT-enabled VMs can only use 1 vCPU.

Jail
0
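To make that limit concrete for the 3-host cluster being discussed, a tiny Python sketch (host count taken from earlier in the thread):

    # FT capacity across the cluster: 4 FT-protected VMs per host,
    # and each FT VM is limited to 1 vCPU.
    HOSTS = 3
    FT_VMS_PER_HOST = 4
    print(f"At most {HOSTS * FT_VMS_PER_HOST} FT-protected (1 vCPU) VMs "
          f"across {HOSTS} hosts; the rest would rely on regular HA instead.")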
 
bergquistcompany (Author) Commented:
So another point might be: with 3 standalone hosts at 4 FT VMs each, versus a blade that could house 10-15 VMs but still run only 4 FT VMs per host, the blade would really be underutilized for the money.
0
 
kevinhsieh Commented:
I am not very familiar with VMware's FT feature. My understanding is that it is rarely used because of its limitations (1 vCPU), cost, some additional complexity, and the fact that regular HA, available through the cluster, really meets most customers' needs. I have not seen a reference to a limit on the number of guests on a host when FT is in use.

I can buy an R710 with dual Xeon 5620 processors, 144 GB RAM, redundant power supplies, iDRAC 6 Enterprise, and rack rails for about $6000 (or less) before taxes. It sounds like two servers would more than cover you from a memory and CPU standpoint, based on the fact that you are virtualizing 15 servers. Can you get a blade solution with at least 2 blades, the chassis, and 288 GB RAM for anywhere close to $12,000? Actually, it is more appropriate to ask whether you can get a solution with 2 or more nodes and 144 GB of total RAM available in n-1 nodes. If the answer is yes, then blades can make sense. If blades will cost you thousands of dollars extra to get started, how much electricity do you need to save, and for how long, to make it worthwhile?
0
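That last question - how much electricity blades would have to save to pay back the extra up-front cost - is easy to sanity-check. Every figure in this Python sketch is an assumption for illustration only:

    # Payback period for a hypothetical blade premium vs. two rack servers.
    RACK_SOLUTION_COST = 12000     # two ~$6000 R710s, as quoted above
    BLADE_SOLUTION_COST = 20000    # assumed price of a comparable 2-blade + chassis setup
    extra_upfront = BLADE_SOLUTION_COST - RACK_SOLUTION_COST

    WATTS_SAVED = 300              # assumed average power saving of the blade setup
    PRICE_PER_KWH = 0.12           # assumed electricity price in $/kWh
    yearly_savings = WATTS_SAVED / 1000 * 24 * 365 * PRICE_PER_KWH

    print(f"extra up-front: ${extra_upfront:,}; power savings: ${yearly_savings:,.0f}/year; "
          f"payback: ~{extra_upfront / yearly_savings:.0f} years")

With these made-up numbers the payback runs into decades, which is the point of the question.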
 
Luciano Patrão (ICT Senior Infrastructure Engineer) Commented:
Hi

@kevinhsieh you are very right. For that environment it is a waste of money.

It is better to spend that money on a highly available infrastructure (switches, hosts, storage, a DR site, etc.) than to waste it on a Cisco blade setup that will only be about 20% used in the near future.

Jail

0
 
ddawson100 Commented:
You're right to stick with Dell, then.

For my money, this is the approach I'd take: 10-15 guests should be spread over 2 machines so you always have one for failover. I would calculate the full amount of RAM and CPU you'll want, then buy 3 hosts, any one of which should comfortably hold half the capacity needed. Then get your SAN with RAID 1 for DBs (including Exchange) and RAID 5 for file services and other slower-access roles (DCs, print servers, etc.). I'd go with the EqualLogic line, but I usually do the MD3xxx line with perfectly fine results. Check out the VMware vSphere Essentials Plus kit to get started. It's perfect for up to three hosts, and you get the killer feature, vMotion. You also get HA and DRS.
0
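A quick way to check the "any one host should comfortably hold half" rule is an n-1 headroom calculation; the totals in this Python sketch are placeholders standing in for your own inventory:

    # n-1 headroom check: all guests must still fit after one host fails.
    HOSTS = 3
    HOST_RAM_GB = 72            # assumed per-host RAM
    TOTAL_VM_RAM_GB = 120       # assumed total RAM allocated across the 10-15 guests

    surviving_capacity_gb = (HOSTS - 1) * HOST_RAM_GB
    print(f"capacity with one host down: {surviving_capacity_gb} GB, "
          f"allocated: {TOTAL_VM_RAM_GB} GB, "
          f"fits: {TOTAL_VM_RAM_GB <= surviving_capacity_gb}")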
 
kevinhsieh Commented:
@ddawson100, EqualLogic does a single RAID level per member. I would start out with a 2-node cluster where everything can fit on n-1 nodes instead of a 3-node cluster where everything fits on n-1 nodes, if the 2-node cluster is cheaper (it probably is). That said, the vRAM entitlement for vSphere Essentials Plus is 32 GB per license, with a 192 GB max for the cluster (I guess you can buy more than 32 GB per node), so the VMware vRAM tax is going to be an issue either way.
0
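For reference, the Essentials Plus vRAM math mentioned above works out like this (the VM sizes in the sketch are assumed):

    # vRAM pool for a vSphere Essentials Plus kit: 6 CPU licenses x 32 GB vRAM each.
    VRAM_PER_LICENSE_GB = 32
    LICENSES_IN_KIT = 6          # 3 hosts x 2 CPUs covered by the kit
    pool_gb = VRAM_PER_LICENSE_GB * LICENSES_IN_KIT   # 192 GB, the cluster max mentioned above

    allocated_gb = 15 * 8        # e.g. 15 VMs at an assumed 8 GB of configured RAM each
    print(f"vRAM pool: {pool_gb} GB, allocated: {allocated_gb} GB, "
          f"within entitlement: {allocated_gb <= pool_gb}")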
 
bergquistcompany (Author) Commented:
So this may be the selling point: if FT isn't used because of the 1 vCPU limitation, and with HA we can put 10-15 VMs on a host, then the blade would be underutilized.

Is it the case that people are not using FT? So we are looking at the Dell R710 with 72 GB RAM x3.
Sound OK?
0
 
kevinhsieh Commented:
When buying RAM, go with at least 8 GB DIMMs so you have free slots to add RAM later. I also think that one 8 GB DIMM is cheaper than two 4 GB DIMMs. A 16 GB DIMM is still more expensive than two 8 GB DIMMs, but may be worth it.

Consider this:
With all those VMs you probably want to use Windows Datacenter to license the VMs. If you go with dual-processor boxes you need 6 Windows Datacenter processor licenses to cover all VMs on all hosts, but if you go with single-processor hosts you only need 3 Windows Datacenter processor licenses, saving around $10-14K as I recall (depending on SA). If you go with single-processor hosts, you really want to use the 16 GB DIMMs to get the memory density you want per host. SQL, Exchange, and the others typically aren't too CPU-intensive unless you are constantly pushing SQL hard all day, so you should be able to run all of your VMs on two single-processor hosts, and probably even just one single-processor host if it has enough RAM.
0
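The Datacenter licensing saving described above is straightforward to tally; the per-license price in this Python sketch is an assumed ballpark, not a quote:

    # Windows Server Datacenter (per-processor licensing era): one license per physical CPU.
    DATACENTER_PRICE_PER_CPU = 4000   # assumed ballpark $ per processor license
    HOSTS = 3

    for cpus_per_host in (1, 2):
        licenses = HOSTS * cpus_per_host
        total = licenses * DATACENTER_PRICE_PER_CPU
        print(f"{cpus_per_host} CPU per host: {licenses} licenses, ~${total:,}")

With that placeholder price the difference is about $12K, in line with the $10-14K range mentioned above.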
 
bergquistcompany (Author) Commented:
Excellent information, thank you!

To clarify, I want to confirm: in general, in your VM environments you are not using FT and wouldn't look at it because of its 1 vCPU requirement? And can you explain further why that is a limitation?
0
 
Luciano Patrão (ICT Senior Infrastructure Engineer) Commented:
Hi

@bergquistcompany FT is not the question, so you should focus on the question itself. If you have any other questions, you should open a new question. Those are the EE rules.

But let me try to explain:
FT has this limitation because VMware cannot yet support 2 vCPUs and still be sure that the replica VM is exactly like the original.

They have tested with more than 1 vCPU and it works (in a test lab), but it is not 100% correct. So FT will only work with 1 vCPU until they are sure it is 100% identical on both sides.

I think this relates to how memory blocks are replicated to the FT copy.

http://communities.vmware.com/blogs/vmroyale/2009/05/18/vmware-fault-tolerance-requirements-and-limitations

http://jeremywaldrop.wordpress.com/2009/08/04/vmware-fault-tolerance-requirements-and-limitations/

Hope this can help

Jail
0
